EventRec
Enabling high-speed perception for fast-moving AGVs in an indoor logistics setting
Automated guided vehicles (AGVs) have transformed the intralogistics industry over the last decade as their adoption has grown. The current generation of AGVs operates at speeds of 0.1 m/s to 2.0 m/s. To meet future requirements, research is underway on a new generation of high-speed vehicles capable of exceeding 10 m/s. The key challenge for operating such vehicles in practice is perception of the environment: current navigation and localization methods were developed with accuracy rather than speed of motion in mind. There is therefore a pressing need to enable fast-moving AGVs to perceive their environment accurately.
Project Goal: The goal of the project is to address the challenges faced by high-speed robots operating in indoor logistics environments. To function reliably, these robots require robust perception systems capable of accurately recognizing both static and dynamic objects. Precise 6D pose estimation of objects relative to the robot is crucial for tasks such as grasping and obstacle avoidance. This project aims to enable AGVs to operate safely at high speeds within warehouse settings, enhancing efficiency and reliability in such environments.
For instance, when a mobile robot needs to pick up an object, it usually has to slow down or stop completely and wait for an automated system to place the load on it. This process is typically slower than a human picking up the same load. An event camera, which reports per-pixel brightness changes with very low latency instead of full frames, would allow the robot to perceive its surroundings fast enough to work more quickly and accurately, so the mobile robot would no longer need to be slower than existing systems.
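To make the event-camera idea concrete, the sketch below accumulates a stream of `(x, y, timestamp, polarity)` events into a signed count image, one common way to turn asynchronous events into a frame a downstream vision model can consume. This is a minimal illustration, not the project's actual pipeline; the function name, the tuple layout, and the fixed time window are assumptions for the example.

```python
import numpy as np

def events_to_frame(events, width, height, window_us):
    """Accumulate (x, y, t, polarity) events from the last `window_us`
    microseconds into a signed per-pixel count image.

    Positive-polarity events (brightness increase) add +1,
    negative-polarity events subtract 1. Illustrative only.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    t_end = events[-1][2]  # timestamp of the most recent event
    for x, y, t, polarity in events:
        if t_end - t <= window_us:
            frame[y, x] += 1 if polarity else -1
    return frame

# Three synthetic events: two of opposite polarity at the same pixel
# cancel out; a single positive event leaves a +1 at its pixel.
events = [(1, 2, 10, True), (1, 2, 20, False), (3, 0, 25, True)]
frame = events_to_frame(events, width=4, height=4, window_us=100)
```

Because each pixel fires independently and only on change, a moving object produces a dense trail of events even at speeds where a conventional camera would suffer motion blur.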
Implementation: The project scenarios will be simulated within a virtual reality environment in our research laboratory. A motion capture system will be deployed to track cameras, robots, and objects within a warehouse setting. A stereo event camera system, together with an RGB camera, will be mounted on an AGV to perceive the environment. Deep learning and computer vision algorithms will be employed to develop models for motion segmentation and 6D pose estimation. The development will be carried out using C++, ROS, and Python.
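A 6D pose, as mentioned above, combines a 3D rotation and a 3D translation; a common representation is a 4×4 homogeneous transform that maps points from one frame (e.g. an object) into another (e.g. the robot). The sketch below is a minimal illustration of that convention, not code from the project; the function names and frame labels are assumptions.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Pack a 3x3 rotation matrix R and 3-vector translation t
    into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_point(T, p):
    """Map a 3D point p through the homogeneous transform T."""
    p_h = np.append(p, 1.0)      # homogeneous coordinates
    return (T @ p_h)[:3]

# Example: object frame rotated 90 degrees about z relative to the
# robot, and offset by 1 m along the robot's x axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T_robot_object = pose_to_matrix(R, np.array([1.0, 0.0, 0.0]))
p_robot = transform_point(T_robot_object, np.array([1.0, 0.0, 0.0]))
```

A pose estimation model would output `R` and `t` (or an equivalent parameterization such as a quaternion); composing such transforms lets the robot express a grasp point, detected in the camera frame, in its own base frame.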
Funded by: The project is funded by the Industrielle Gemeinschaftsforschung (IGF, Industrial Collective Research) programme.
Industry Partners: Kion, Jungheinrich, MotionMiners, Node Robotics, Framos, Safelog, SmartFork
Contact Persons: M.Sc. Shrutarv Awasthi, M.Sc. Anas Gouda