
Event-camera-equipped drones get millisecond reaction times
The findings, published in the journal Science Robotics under the title "Dynamic Obstacle Avoidance for Quadrotors with Event Cameras", report an overall detection and reaction latency of only 3.5 milliseconds to initiate an obstacle-avoidance manoeuvre, low enough for the drone to dodge even fast-moving obstacles such as a basketball thrown at it.
Event cameras are bio-inspired sensors that measure per-pixel brightness changes asynchronously, capturing a stream of events at microsecond resolution instead of producing full image frames as conventional cameras do.
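As a rough illustration (not code from the study), such an event stream can be represented as a list of timestamped per-pixel change records rather than dense frames; the field names and sample values below are made up.

```python
# Minimal sketch of an event-camera data stream (illustrative only).
from typing import NamedTuple, List

class Event(NamedTuple):
    x: int          # pixel column
    y: int          # pixel row
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 brightness increase, -1 decrease

# A frame camera delivers dense images at fixed intervals (e.g. every 33 ms);
# an event camera delivers a sparse, asynchronous list of per-pixel changes,
# each stamped with microsecond precision.
event_stream: List[Event] = [
    Event(x=120, y=64, t_us=1_000_003, polarity=+1),
    Event(x=121, y=64, t_us=1_000_011, polarity=+1),
    Event(x=37,  y=90, t_us=1_000_042, polarity=-1),
]
```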

[Figure: events collected during an arbitrary 10 ms time window (left), and the same events after ego-motion compensation, back-projected onto the image plane (right).]
But standard vision algorithms that analyse changes across full image frames cannot be applied to a stream of asynchronous events, so the researchers had to develop novel algorithms that exploit the temporal information in the event stream to distinguish static from dynamic objects. Their moving-obstacle detection algorithm collects events over a short sliding time window and then compensates for the drone's own motion (ego-motion) during that interval. Tightly integrated with data from the drone's inertial measurement unit, this ego-motion compensation removes the events generated by the static parts of the environment even while the sensor itself is moving, leaving the dynamic objects easy to identify.
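To make the idea more concrete, here is a minimal sketch of event ego-motion compensation, assuming a pure-rotation motion model taken from gyroscope rates and a simple pinhole camera; the intrinsics, the scoring rule and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Assumed camera intrinsics (illustrative values, not from the paper).
K = np.array([[320.0,   0.0, 160.0],
              [  0.0, 320.0, 120.0],
              [  0.0,   0.0,   1.0]])
K_INV = np.linalg.inv(K)

def skew(w):
    """Skew-symmetric matrix of a 3-vector, used for small-angle rotations."""
    return np.array([[    0, -w[2],  w[1]],
                     [ w[2],     0, -w[0]],
                     [-w[1],  w[0],     0]])

def compensate_ego_motion(events, gyro_rate, t_ref):
    """Warp each event (x, y, t) to the reference time t_ref, undoing the
    camera's own rotation (rate from the IMU gyroscope, assumed constant over
    the short window). Events caused by the static scene then collapse onto
    the pixels they would occupy at t_ref."""
    warped = []
    for x, y, t in events:
        dt = t_ref - t                                # seconds
        R = np.eye(3) + skew(gyro_rate) * dt          # small-angle rotation
        p = K @ R @ K_INV @ np.array([x, y, 1.0])     # rotate the ray, reproject
        warped.append((p[0] / p[2], p[1] / p[2], t))
    return warped

def dynamic_pixels(warped, t0, t1, shape=(240, 320), deviation=0.3):
    """Illustrative scoring rule (not the authors' exact criterion): pixels hit
    by static-scene events accumulate timestamps spread over the whole window,
    so their mean normalised timestamp sits near 0.5; pixels struck only while
    a moving object crossed them deviate from that and are flagged as dynamic."""
    count = np.zeros(shape)
    t_sum = np.zeros(shape)
    for x, y, t in warped:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            count[yi, xi] += 1
            t_sum[yi, xi] += (t - t0) / (t1 - t0)     # normalise to [0, 1]
    mean_t = np.divide(t_sum, count, out=np.full(shape, 0.5), where=count > 0)
    return np.abs(mean_t - 0.5) > deviation
```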
With an overall latency of only 3.5 ms, the algorithm is fast enough to run in real time on a small onboard computer. Object detection was successfully tested both indoors and outdoors, as demonstrated in a video in which a quadcopter executes various collision-avoidance manoeuvres in front of a ball.
More than 90% of the time, the drone avoided the ball, which was thrown from a distance of 3 metres at 10 m/s. When the drone "knew" the size of the object in advance, a single camera was enough; when it had to face objects of varying size, two cameras were used to provide stereoscopic vision.
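The reason a single camera suffices for a known object size, while unknown sizes call for stereo, follows from the pinhole camera model; the sketch below uses an assumed focal length and baseline purely for illustration.

```python
# Illustrative sketch (assumed pinhole model, not the paper's implementation).
FOCAL_PX = 320.0      # assumed focal length in pixels
BASELINE_M = 0.10     # assumed stereo baseline in metres

def depth_from_known_size(apparent_width_px, true_width_m):
    """Monocular case: if the object's real size is known, its width in
    pixels gives the distance directly (similar triangles)."""
    return FOCAL_PX * true_width_m / apparent_width_px

def depth_from_stereo(disparity_px):
    """Stereo case: with two cameras, the horizontal shift (disparity) of the
    object between the views gives the distance without knowing its size."""
    return FOCAL_PX * BASELINE_M / disparity_px

# Example: a 24 cm ball appearing 32 px wide is about 2.4 m away;
# the same estimate from stereo requires a disparity of about 13.3 px.
print(depth_from_known_size(32, 0.24))   # ~2.4
print(depth_from_stereo(13.3))           # ~2.4
```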

According to Davide Scaramuzza, who leads the Robotics and Perception Group at the University of Zurich as well as the NCCR Robotics Search and Rescue Grand Challenge, which funded the research, these results show that event cameras can increase the speed at which drones navigate by up to a factor of ten.
"One day drones will be used for a large variety of applications, such as delivery of goods, transportation of people, aerial filmography and, of course, search and rescue," he says. "But enabling robots to perceive and make decisions faster can also be a game changer for other domains where reliably detecting incoming obstacles plays a crucial role, such as automotive, goods delivery, transportation, mining, and remote inspection with robots."
"Our ultimate goal is to one day make autonomous drones navigate as well as human drone pilots. Currently, in all search and rescue applications where drones are involved, a human is actually in control. If autonomous drones could navigate as reliably as human pilots, we would be able to use them for missions that fall beyond the line of sight or beyond the reach of the remote control," added Davide Falanga, PhD student and first author of the article.
NCCR Robotics – www.nccr-robotics.ch
