In cooperation with Harvard University, a framework has been developed for the motion control of harvesting robots. By mounting a camera on the robot's end-effector, visual information can be used whilst the robot is moving towards the target. Additionally, a three-dimensional reconstruction of the scene can be made.
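As a rough illustration of how images from an end-effector camera can yield a 3D reconstruction, the sketch below triangulates one scene point (e.g. a fruit centre) observed from two known camera poses. This is a minimal linear (DLT) triangulation under an assumed calibrated pinhole model; all names and numbers are illustrative, not from the project's actual software.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 camera projection matrices (intrinsics * [R | t]),
            e.g. from two poses of the end-effector camera.
    x1, x2: (u, v) pixel coordinates of the same point in each view.
    Returns the estimated 3D point in the world frame.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X; stack them and take the SVD null vector.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise
```

Scanning the plant from several poses and triangulating each detected fruit in this way gives the kind of scene reconstruction the text refers to; a real pipeline would add feature matching and noise-robust multi-view estimation.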
During the past four years, a sweet-pepper harvesting robot was developed in the European project CROPS. However, the implemented approach turned out not to be optimal for recognising all fruit: it analysed only a single perspective, after which path planning guided the robot blindly to the target. A single viewpoint is not sufficient to obtain enough information about the crop.
The new framework implements an alternative approach that should solve this problem. First, the plant is scanned from multiple perspectives. The robot is then moved towards the fruit incrementally, using images from the end-effector camera at each step to correct its pose.
The framework will be tested and used in the next series of harvesting robots, aiming at faster and more accurate harvesting cycles.