News

Deep multi-task learning for a geographically-regularized semantic segmentation of aerial images

July 5, 2018

An article by Michele Volpi and Devis Tuia, "Deep multi-task learning for a geographically-regularized semantic segmentation of aerial images", has been published in the ISPRS Journal of Photogrammetry and Remote Sensing, Volume 144, October 2018, Pages 48-60.

doi:10.1016/j.isprsjprs.2018.06.007

Abstract
When approaching the semantic segmentation of overhead imagery in the decimeter spatial resolution range, successful strategies usually combine powerful methods to learn the visual appearance of the semantic classes (e.g. convolutional neural networks) with strategies for spatial regularization (e.g. graphical models such as conditional random fields).
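For readers less familiar with this combination: in its generic form, the CNN supplies per-pixel class likelihoods that become the unary terms of a CRF energy, while pairwise terms encode spatial smoothness. The expression below is the standard generic form of such an energy, not the specific model proposed in the paper:

E(y) = \sum_i \psi_u(y_i) + \sum_{(i,j) \in \mathcal{E}} \psi_p(y_i, y_j), \qquad \psi_u(y_i) = -\log p_{\mathrm{CNN}}(y_i \mid x),

where \mathcal{E} is the set of neighbouring pixel (or region) pairs and the labelling y that minimizes E(y) is the spatially regularized segmentation.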

In this paper, we propose a method to learn evidence in the form of semantic class likelihoods, semantic boundaries across classes and shallow-to-deep visual features, each one modeled by a multi-task convolutional neural network architecture. We combine this bottom-up information with top-down spatial regularization encoded by a conditional random field model optimizing the label space across a hierarchy of segments with constraints related to structural, spatial and data-dependent pairwise relationships between regions.
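As an illustration only (not the authors' implementation; all layer sizes, channel counts and head definitions below are assumptions), a multi-task architecture of this kind can be sketched as a shared encoder with separate heads for class likelihoods and semantic boundaries, both reusing the same shallow-to-deep features:

# Illustrative sketch in PyTorch: one shared encoder, two task-specific heads
# (class scores and boundary scores), plus the shared features themselves.
import torch
import torch.nn as nn

class MultiTaskSegNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=6):
        super().__init__()
        # Shared encoder producing shallow-to-deep visual features.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Head 1: per-pixel semantic class scores (class likelihoods after softmax).
        self.class_head = nn.Conv2d(64, num_classes, 1)
        # Head 2: per-pixel semantic boundary score (boundary vs. no boundary).
        self.boundary_head = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        feats = self.encoder(x)                  # shared visual features
        class_logits = self.class_head(feats)    # bottom-up evidence for unaries
        boundary_logits = self.boundary_head(feats)
        return class_logits, boundary_logits, feats

if __name__ == "__main__":
    net = MultiTaskSegNet()
    tile = torch.randn(1, 3, 128, 128)           # dummy aerial image tile
    cls, bnd, feats = net(tile)
    print(cls.shape, bnd.shape, feats.shape)     # per-pixel multi-task outputs

In such a setup, the class scores would typically feed the unary terms of the downstream CRF, while boundary and feature outputs could inform its pairwise, data-dependent relations between regions; the specific way these outputs are combined across the segment hierarchy is described in the paper itself.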

Our results show that this strategy provides better regularization than a series of strong baselines reflecting state-of-the-art technologies. The proposed strategy offers a flexible and principled framework to include several sources of visual and structural information, while allowing for different degrees of spatial regularization accounting for priors about the expected output structures.

Keywords: Semantic segmentation; Semantic boundary detection; Convolutional neural networks; Conditional random fields; Multi-task learning; Decimeter resolution; Aerial imagery