An article by Wolfgang Gross, Devis Tuia, Uwe Soergel, and Wolfgang Middelmann, "Nonlinear Feature Normalization for Hyperspectral Domain Adaptation and Mitigation of Nonlinear Effects," has been published in IEEE Transactions on Geoscience and Remote Sensing, Volume 57, Issue 8, pp. 5975–5990.
Domain adaptation in remote sensing aims at the automatic transfer of knowledge between a set of multitemporal and multisource images. This process is often impaired by nonlinear effects in the data, e.g., varying illumination conditions, different viewing angles, and geometry-dependent reflection. In this paper, we introduce Nonlinear Feature Normalization (NFN), a fast and robust way to align the spectral characteristics of multiple hyperspectral data sets. NFN employs labeled training spectra for the different classes in an image to describe the corresponding underlying low-dimensional manifold structure. A linear basis for data representation is defined by arbitrary class reference vectors, and the image is aligned to the new basis in the same space. As a result, samples of the same class are pulled closer together and samples of different classes are pushed apart. NFN transforms the data in its original domain, preserving physical interpretability. We use the continuous invertibility of NFN to derive the NFN Alignment (NFNalign) transformation, which can be used for domain adaptation by transforming one data set to the domain of a chosen reference. The evaluation is performed on multiple hyperspectral data sets as well as our new benchmark for multitemporal hyperspectral data. In a first step, we show that the NFN transformation successfully mitigates nonlinear effects by comparing classification results of the linear Spectral Angle Mapper on original and transformed data. Finally, we demonstrate successful domain adaptation with NFNalign by applying it to the task of hyperspectral data preprocessing. The evaluation shows that our approach for the alignment of multitemporal data produces high spectral similarity and successfully enables knowledge transfer, e.g., of classifier models and training data.
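The Spectral Angle Mapper used in the evaluation above compares spectra by the angle between them, which makes it invariant to overall brightness scaling. A minimal NumPy sketch of SAM classification (function names and toy spectra are illustrative, not from the paper):

```python
import numpy as np

def spectral_angle(x, y):
    # Angle (in radians) between two spectra, independent of their magnitudes.
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards against rounding

def sam_classify(pixel, references):
    # Assign the class whose reference spectrum subtends the smallest angle.
    angles = [spectral_angle(pixel, r) for r in references]
    return int(np.argmin(angles))

# Toy example: two class reference spectra and a pixel resembling the second.
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0])]
pixel = np.array([0.1, 0.9, 1.1])
print(sam_classify(pixel, refs))  # prints 1
```

Because SAM is a purely linear, angle-based measure, its accuracy on the original versus the NFN-transformed data serves as a direct probe of how well nonlinear effects have been mitigated.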