EventRec

Automated guided vehicles (AGVs) have transformed the intralogistics industry over the last decade as their adoption has grown. The current generation of AGVs operates at speeds of 0.1 m/s to 2.0 m/s. To meet future requirements, research is under way on a new generation of high-speed vehicles that can reach speeds of more than 10 m/s. The key challenge for operating these vehicles in practice is perception of the environment: current navigation and orientation methods were developed with accuracy in mind rather than speed of movement. There is therefore a pressing need to enable fast-moving AGVs to perceive their environment accurately.
Project Goal
The goal of the project is to address the challenges faced by high-speed robots operating in indoor logistics environments. To function reliably, these robots require robust perception systems capable of accurately recognizing both static and dynamic objects. Precise 6D pose estimation of objects relative to the robot is crucial for tasks such as grasping and obstacle avoidance. This project aims to enable Automated Guided Vehicles (AGVs) to operate safely at high speeds within warehouse settings, enhancing efficiency and reliability in such environments.
For instance, when a mobile robot needs to pick up an object, it usually has to slow down or stop completely and wait for an automated system to place the load on it. This process is typically slower than the speed at which a human can pick up a load. Event cameras, which report per-pixel brightness changes asynchronously with very low latency, would allow robots to perceive the scene faster and more accurately, so that the mobile robot would no longer be limited by the speed of existing handover systems.
Implementation
The project scenarios will be implemented and evaluated within the research laboratory environment. A Motion Capture system will be deployed to track cameras, robots, and objects within a warehouse setting. A stereo event-camera system, together with an RGB camera, will be mounted on an AGV to perceive the environment. Deep learning and computer vision methods will be employed to develop models for motion segmentation, object detection, and tracking. The development will be carried out using C++, Python, and the Robot Operating System (ROS) framework.
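To illustrate the kind of processing the perception pipeline above involves, the sketch below shows one common way to turn an asynchronous event stream into a coarse motion mask: accumulating events into an exponentially decayed time surface and thresholding recent activity. This is a minimal, hypothetical illustration in plain Python/NumPy; the function names, parameters (such as the decay constant `tau`), and event format `(x, y, t, polarity)` are assumptions for the sketch, not the project's actual code.

```python
import numpy as np

def time_surface(events, width, height, t_now, tau=0.05):
    """Build an exponentially decayed time surface from events.

    Each event is (x, y, t, polarity). Pixels hit by recent events
    get values near 1; older activity decays toward 0 with time
    constant tau (seconds). Parameters are illustrative assumptions.
    """
    surface = np.zeros((height, width), dtype=np.float32)
    for x, y, t, _polarity in events:
        # Keep the strongest (most recent) contribution per pixel.
        surface[y, x] = max(surface[y, x], np.exp(-(t_now - t) / tau))
    return surface

def motion_mask(surface, threshold=0.5):
    """Binary mask of pixels with recent event activity."""
    return surface > threshold

# Example: a few synthetic events on an 8x8 sensor, evaluated at t_now = 1.0 s.
events = [(2, 3, 0.99, 1),  # very recent -> strong activity
          (2, 4, 0.98, 0),  # recent      -> above threshold
          (6, 1, 0.50, 1)]  # old         -> decayed away
surface = time_surface(events, width=8, height=8, t_now=1.0)
mask = motion_mask(surface)
```

In a real system this accumulation would run on the stereo event-camera stream (e.g. inside a ROS node) and feed the downstream motion segmentation and tracking models; the pure-Python loop here is only for clarity.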
Funding
The project is funded by the Industrielle Gemeinschaftsforschung (IGF, Industrial Collective Research programme).
