Comparing UAV-Based Image Resolution to Deep-Learning Weed-Detection Performance

Organised by Laboratory of Geo-information Science and Remote Sensing

Wed 29 May 2019 13:00 to 13:30

Venue Lumen, gebouwnummer 100
Room 2

By Sebastian Paolini van Helfteren

Environmental degradation caused by conventional chemical weed management is a widespread issue on agricultural land. Precision agriculture aims at sustainable agricultural production by reducing inputs and applying them precisely in space and time. Deep-learning-based object-detection systems, such as YOLOv3, can prove valuable in future agricultural systems: they can detect weeds and thereby contribute to sustainable agriculture by ensuring inputs are applied only where they are needed. Such a detection system can be mounted on an Unmanned Aerial Vehicle (UAV), offering a fast and mobile option for weed detection and control. However, these aerial weed-detection systems are in their infancy: their practical feasibility, and the relationship between image resolution and system performance, are unknown. This research uses YOLOv3 to detect weeds in images taken from a UAV at different resolutions. Multiple datasets are created using k-fold cross-validation. The research shows that, theoretically, such a system would economically benefit farmers and that higher resolution is significantly associated with better performance. This did not hold for all runs of the cross-validation, however, indicating a more complex relationship between field characteristics and performance. Compared to other, more optimised weed-detection systems, results were in line with related work. This research explains how the practical aspect of flight altitude affects model performance (i.e. it relates image resolution to model performance) and further examines other aspects that influence performance, namely object viewing angle and the underlying cause of false-negative errors. It also explores how performance differs for orthomosaic images, bringing this research closer to the discipline of remote sensing.
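The k-fold cross-validation mentioned above partitions the image set into k folds and trains k times, each time holding out a different fold for validation. A minimal sketch of that split logic, as a generic illustration and not the code used in the thesis:

```python
def k_fold_splits(items, k):
    """Yield (train, validation) partitions for k-fold cross-validation.

    Each of the k rounds holds out one fold for validation and
    concatenates the remaining k-1 folds for training.
    """
    folds = [items[i::k] for i in range(k)]  # round-robin assignment to folds
    for i in range(k):
        validation = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, validation

# Example with 10 hypothetical image IDs and k = 5:
splits = list(k_fold_splits(list(range(10)), 5))
```

Averaging a model's score over the k validation folds, as done in this research, gives a less optimistic estimate of performance than evaluating on a single held-out set.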
The results demonstrate the potential of future UAV-based weed management, with an average F1-score, precision, recall and mAP of 0.83, 0.95, 0.77 and 0.75 respectively at the original image resolution. This research can serve as a stepping stone for future work that aims to bring theory into practice. To deploy a deep-learning-based UAV weed-detection system in practice, lightweight GPUs that can run such models in real time are currently the limiting factor.
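For readers less familiar with the reported metrics: precision, recall and F1 follow directly from the true-positive, false-positive and false-negative detection counts. The counts below are hypothetical and chosen only to illustrate the formulas, not taken from the thesis data:

```python
def detection_metrics(tp, fp, fn):
    """Compute precision, recall and F1 from detection counts.

    precision = TP / (TP + FP)   -- fraction of detections that are real weeds
    recall    = TP / (TP + FN)   -- fraction of real weeds that are detected
    F1        = harmonic mean of precision and recall
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for illustration only:
precision, recall, f1 = detection_metrics(tp=77, fp=4, fn=23)
```

Note that an F1-score averaged over cross-validation folds need not equal the F1 computed from the fold-averaged precision and recall, since the harmonic mean is not linear.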

Keywords: UAV, deep-learning, YOLOv3, weed detection, image resolution, viewing angle, orthomosaic, performance, precision agriculture