News

Chameleon AI programme classifies objects in satellite images faster

Published on
January 16, 2024

A programme can train neural networks, using just a handful of images, to rapidly characterise new objects in satellite and drone data, such as ocean debris, deforestation zones, urban areas and more. In a new study, an international team of researchers has developed such an application, called METEOR. It can help environmental scientists obtain sufficiently large datasets for their research needs faster than before.

Images taken by drones and satellites give scientists a wealth of information. These snapshots provide crucial insight into the changes taking place on the Earth’s surface, such as in animal populations, vegetation, debris floating on the ocean surface and glacier coverage. In addition, experts can train neural networks to sort through the images at dizzying speed and spot and classify individual objects.

“However, none of the AI programmes currently available can immediately switch from recognising one type of object to another – like from debris to a tree or building,” says Prof. Devis Tuia, head of the Environmental Computational Science and Earth Observation Laboratory at the École Polytechnique Fédérale de Lausanne (EPFL). “Today, programmers have to train algorithms on each new object type by feeding them vast amounts of field data.” That is what Tuia, together with Marc Rußwurm of Wageningen University & Research and colleagues at the Massachusetts Institute of Technology, Yale University and the Jülich Research Center, has set out to change with METEOR – a chameleon-like application that can train algorithms to recognise new objects after being shown just a handful of images.

Just four or five high-quality images are all that’s needed to retrain the system for a new task

When it comes to classifying images, neural networks can do in the blink of an eye what humans would need hours to accomplish. These networks are trained on data that have been annotated manually – the more data fed into a neural network, the more accurate its results will be. For instance, trees and buildings can look very different depending on the region they’re found in. That means a neural network’s algorithms need to be shown many different images of these objects taken under many different conditions to be able to recognise them reliably.

“The problem in environmental science is that it’s often impossible to obtain a big enough dataset to train AI programmes for our research needs,” says Marc Rußwurm, previously a postdoc at EPFL and today an assistant professor at Wageningen University & Research. “That’s especially true if we want to study phenomena specific to a given region, like the extinction of an indigenous tree species, or if we want to identify objects that are statistically small in number but widely dispersed, like ocean debris.”

Another challenge in training neural networks on aerial and satellite images is the wide range of possible image resolutions and spectral bands, which depend on the type of device used (i.e., drone or satellite). To get around this problem, METEOR was designed to be adaptable and capable of meta-learning – it essentially takes shortcuts based on tasks it has already solved successfully in other contexts. “We’ve developed algorithms and methods that enable neural networks to generalise the results of earlier deployments and apply that adaptation strategy to new situations,” says Rußwurm. Thanks to this novel approach, METEOR needs only four or five good images of an object to deliver sufficiently reliable results.
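The core idea – starting from weights learned on earlier tasks and adapting them with only a handful of labelled examples – can be sketched in a few lines. The example below is an illustrative, toy gradient-descent adaptation of a tiny linear classifier in plain Python/NumPy; it is not the authors' actual METEOR implementation, and all names and numbers in it are hypothetical.

```python
import numpy as np

def adapt_few_shot(w_init, X, y, lr=0.5, steps=200):
    """Adapt 'pretrained' weights w_init to a new task using only a
    handful of labelled examples (X, y), via gradient descent on a
    logistic-regression loss. Purely illustrative of the few-shot
    adaptation principle, not the METEOR algorithm itself."""
    w = w_init.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / len(y)      # gradient of the log-loss
        w -= lr * grad
    return w

# Hypothetical weights carried over from an earlier task, plus just
# five labelled samples of a new object type (e.g. debris vs. water).
rng = np.random.default_rng(0)
w_init = rng.normal(size=3)
X = rng.normal(size=(5, 3))                # five "images" as features
y = (X[:, 0] > 0).astype(float)            # toy labels

w_new = adapt_few_shot(w_init, X, y)
preds = (1.0 / (1.0 + np.exp(-X @ w_new)) > 0.5).astype(float)
print("accuracy on the 5 adaptation samples:", (preds == y).mean())
```

The point of the sketch is only that the starting weights already encode prior experience, so a few labelled samples and a few gradient steps suffice to specialise the model, rather than retraining from scratch on a large annotated dataset.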

Example of two training tasks. Source: Rußwurm, M., Wang, S., Kellenberger, B., Roscher R., and Tuia D. Meta-learning to address diverse Earth observation problems across resolutions (2024).

Taking advantage of resolution differences

To test their application, the developers modified a neural network that had been trained to classify various types of land cover around the world based on images of distinct regions. They made it able to carry out five recognition tasks – measuring vegetation coverage in Australia, identifying deforestation zones in Brazil’s tropical forest, pinpointing the changes in Beirut after the 2020 explosion, spotting ocean debris, and classifying urban areas into different types of land use (industrial, commercial, and high-, medium- and low-density residential districts) – using in each case a small number of high-resolution drone images or RGB satellite images, depending on the problem. “We found that when we adapt with METEOR for these tasks using only a small dataset, our results were comparable to those from AI programmes that had been trained for longer periods and with much more data,” says Rußwurm.

The researchers will now train the underlying AI on a multitude of tasks so that it can further perfect its chameleon powers, enabling it to adapt even more easily to countless recognition tasks. They would also like to pair their application with a user interface that lets users select the high-quality images suggested by the neural-network programme. “Since the programme will be shown only a few images, the relevance of those images is really important,” says Rußwurm.