dr. GW (Gert) Kootstra

Associate Professor

The world’s demand for agricultural products is growing rapidly, requiring an estimated 70% increase in agricultural productivity over the next 30 years. At the same time, there is a strong need for more sustainable agriculture to lower the impact on the environment, and the sector suffers from a lack of skilled labour. Agricultural robotics and precision farming are part of the solution to these challenges.

Challenges for agricultural robotics

There are two main challenges for robots to perceive and operate in agri-food environments: (1) the variation in the appearance of objects, environmental properties, cultivation systems and tasks, and (2) incomplete information due to occlusions, sensor noise and uncertainty.


To tackle these challenges, the research in my group targets the following topics:

  1. Robust perception. Although deep neural networks have greatly improved the performance of machine-vision solutions, even state-of-the-art methods cannot deal well enough with the variation present in agricultural environments. To further improve performance, we study the generalisability of neural networks and develop methods that deal better with this variation:
    1. Uncertainty estimation, to quantify the reliability of the predictions made by the machine-vision system.
    2. Active continuous learning, to let the machine-vision system autonomously collect images that it is uncertain about and then call in a human annotator to create new training data.
    3. Unsupervised/self-supervised learning, to make use of the vast amounts of unlabelled data to pre-train machine-vision models, improving generalisation when only limited labelled training data is available.
  2. Active perception. To deal with occlusions, robots need to actively perceive the environment, gathering information from different viewpoints. We develop and study different methods:
    1. Next-best-view planning, to allow robots to analyse which parts of the scene are unobserved and decide on a new viewpoint that optimises the information gain.
    2. Multi-view perception, combining information from multiple viewpoints to improve the accuracy of object detection and scene reconstruction.
    3. Multi-object tracking, to associate new observations with the existing model/reconstruction of the world, allowing efficient fusion of information from multiple viewpoints.
    4. Robotic self-supervised learning, to allow robots to gather their own training data, removing the need for manual annotation and allowing systems to continuously learn, improving performance and adapting to new environments.
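To make the idea of uncertainty estimation concrete: one common approach (a hypothetical sketch, not necessarily the method used in the group) is to run several stochastic forward passes of a network, e.g. with dropout enabled, and measure the entropy of the averaged class probabilities. High entropy flags predictions the system should not trust, which is also the signal active learning can use to select images for annotation.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean class distribution over T stochastic passes.

    probs: (T, C) array of softmax outputs from T dropout-enabled
    forward passes of the same image. Higher entropy = less reliable.
    """
    mean = probs.mean(axis=0)                      # average over passes
    return float(-(mean * np.log(mean + 1e-12)).sum())

# Toy softmax outputs (hypothetical numbers): a prediction all passes
# agree on, versus one where the passes disagree between two classes.
confident = np.array([[0.95, 0.05]] * 10)
uncertain = np.array([[0.90, 0.10], [0.20, 0.80]] * 5)

assert predictive_entropy(uncertain) > predictive_entropy(confident)
```

In a real pipeline the per-pass probabilities would come from a trained detection or segmentation network rather than hand-written arrays; the entropy score itself is unchanged.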
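The core of next-best-view planning can be illustrated with a deliberately simplified sketch (all names and numbers below are illustrative, not the group's implementation): represent the scene as a set of voxels, track which voxels have already been observed, and greedily pick the candidate viewpoint that would reveal the most unseen voxels, i.e. the one with the highest information gain.

```python
def next_best_view(candidate_views, observed):
    """Pick the candidate view with the highest information gain,
    measured here simply as the number of not-yet-observed voxels
    that the view would make visible.

    candidate_views: dict view_id -> set of voxel ids visible from it.
    observed: set of voxel ids already seen from earlier viewpoints.
    """
    return max(candidate_views,
               key=lambda v: len(candidate_views[v] - observed))

# Hypothetical example: three candidate viewpoints around a plant.
views = {"front": {1, 2, 3}, "side": {3, 4, 5, 6}, "top": {1, 6}}
seen = {1, 2, 3}

best = next_best_view(views, seen)  # "side" reveals voxels 4, 5 and 6
```

Real systems typically score views with probabilistic occupancy maps and account for motion cost, but the greedy structure (score candidates, move to the best, update the map, repeat) is the same.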
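Multi-object tracking hinges on data association: deciding which new detection corresponds to which already-tracked object, so their measurements can be fused. A minimal sketch under strong simplifying assumptions (2-D positions, greedy nearest-neighbour matching with a distance gate; real trackers often use motion models and optimal assignment instead):

```python
import math

def associate(tracks, detections, max_dist=1.0):
    """Greedily match detections to existing tracks by distance.

    tracks, detections: dicts id -> (x, y) position.
    Pairs are considered closest-first; a pair is accepted only if
    both members are still unmatched and within max_dist.
    Returns a dict track_id -> detection_id.
    """
    pairs = sorted(
        (math.dist(tp, dp), t, d)
        for t, tp in tracks.items()
        for d, dp in detections.items())
    matches, used_t, used_d = {}, set(), set()
    for dist, t, d in pairs:
        if dist <= max_dist and t not in used_t and d not in used_d:
            matches[t] = d
            used_t.add(t)
            used_d.add(d)
    return matches

# Hypothetical example: two tracked fruits and two new detections.
tracks = {"fruit_a": (0.0, 0.0), "fruit_b": (5.0, 5.0)}
detections = {"det_x": (0.2, 0.1), "det_y": (5.1, 4.9)}

matches = associate(tracks, detections)  # fruit_a->det_x, fruit_b->det_y
```

Once detections are associated, each new viewpoint's observation can be fused into the corresponding object's reconstruction instead of spawning a duplicate object.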