Multitemporal Very High Resolution From Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest

Published on
August 30, 2017

An article by Lichao Mou, Xiaoxiang Zhu, Maria Vakalopoulou, Konstantinos Karantzalos, Nikos Paragios, Bertrand Le Saux, Gabriele Moser, and Devis Tuia, "Multitemporal Very High Resolution From Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest," has been published in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 8, pp. 3435-3447, Aug. 2017.


In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and a video captured by the Iris camera on-board the International Space Station. The problems addressed and the techniques proposed by the participants in the Contest spanned a broad range of topics and mixed ideas and methodologies from remote sensing, video processing, and computer vision. In particular, the winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data. The second-place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection. The methodological key ideas of both approaches and the main results of the corresponding experimental validation are discussed in this paper.
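The contest methods described above are considerably more involved, but a classical per-pixel baseline for multitemporal change detection is change vector analysis: compute the spectral difference between two co-registered acquisitions and threshold its magnitude. The sketch below is purely illustrative (the function names and the mean-plus-k-sigma thresholding rule are our own choices, not anything from the paper):

```python
import numpy as np

def change_vector_magnitude(img_t1, img_t2):
    """Per-pixel magnitude of the spectral change vector between two
    co-registered multispectral images of shape (H, W, bands)."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    return np.sqrt((diff ** 2).sum(axis=-1))

def detect_changes(img_t1, img_t2, k=2.0):
    """Flag pixels whose change magnitude exceeds mean + k * std.

    A simple global threshold; real pipelines would estimate it more
    carefully (e.g., Otsu or an EM fit of changed/unchanged classes).
    """
    mag = change_vector_magnitude(img_t1, img_t2)
    threshold = mag.mean() + k * mag.std()
    return mag > threshold

# Toy example: a synthetic 4-band image pair where one corner changes.
rng = np.random.default_rng(0)
t1 = rng.normal(100.0, 2.0, size=(64, 64, 4))
t2 = t1 + rng.normal(0.0, 1.0, size=t1.shape)  # noise-only differences
t2[:8, :8, :] += 50.0                          # simulated change patch
mask = detect_changes(t1, t2)                  # True in the changed corner
```

This baseline assumes the image pair is already co-registered; handling registration jointly with detection is precisely what the second-place random field approach addresses.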

Keywords: Cameras; Data integration; Earth; Image resolution; Iris; Remote sensing; Sensors; Change detection; convolutional neural networks (CNN); deep learning; image analysis and data fusion; multimodal; multiresolution; multisource; random fields; tracking; video from space