Relying on nothing more than low-cost cameras and an inertial measurement unit (IMU), the demo reconstructed the scene in front of the swinging arm, a small Lego house complete with furniture and Lego figures, and displayed the 3D results as seen from the drone's perspective.
Whilst Parrot already offers 3D mapping commercially through post-flight image-processing software, the novelty here is that the topological reconstruction is done on-board and in real time. The 3D mapping is synchronized with the IMU data so that the environment is always presented relative to the drone's position.
Watching the demo, one could appreciate how the precision of the reconstructed 3D landscape increased as the drone accumulated views and extended its field of view. The 3D-mapped environment could arguably be streamed to the drone operator, but more likely, and more usefully, it could be used automatically by a self-aware drone to avoid obstacles and navigate complex, uncharted 3D mazes.
Now, nVidia didn't want to reveal exactly what Parrot is aiming at, but claimed a mapping resolution down to 1cm within a 5m range while processing 1280x720 video at 30 frames per second (the depth being extracted from black-and-white frames). A trade-off can then be made between acquisition speed and resolution, which could mean faster, coarser 3D mapping for large-obstacle collision-avoidance scenarios in the automotive sector, using only cheap stereo cameras for ADAS.
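For a sense of whether the quoted figures (1cm resolution within a 5m range) are plausible, basic stereo geometry gives a quick sanity check. The sketch below uses the standard pinhole stereo model; the focal length, baseline, and sub-pixel matching step are hypothetical assumptions chosen for illustration, not Parrot's published specifications.

```python
# Pinhole stereo geometry sanity check (illustrative only, not Parrot's pipeline).
# Assumed values: f_px and baseline_m are hypothetical, picked to show how
# depth resolution scales with range; they are not published Parrot specs.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo model: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_resolution(f_px, baseline_m, depth_m, disparity_step_px=1.0):
    """Smallest resolvable depth change at a given depth:
    dZ ~ Z^2 / (f * B) * delta_d (differentiating Z = f*B/d)."""
    return depth_m ** 2 / (f_px * baseline_m) * disparity_step_px

f_px = 700.0       # hypothetical focal length in pixels for a 1280-wide sensor
baseline_m = 0.20  # hypothetical stereo baseline in metres

# Disparity observed for a point 5 m away, and the depth recovered from it:
d = f_px * baseline_m / 5.0
print(f"disparity at 5 m: {d:.1f} px")                                 # 28.0 px
print(f"recovered depth: {depth_from_disparity(f_px, baseline_m, d):.2f} m")

# With sub-pixel matching (e.g. 1/16 px steps), the depth step at 5 m:
dz = depth_resolution(f_px, baseline_m, 5.0, disparity_step_px=1 / 16)
print(f"depth resolution at 5 m: {dz * 100:.1f} cm")                   # ~1.1 cm
```

Under these assumed parameters, centimetre-level resolution at 5m is achievable with sub-pixel disparity matching, and the quadratic growth of dZ with depth is exactly why coarser, faster mapping suits longer-range automotive scenarios.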
Other speculative use cases for this application could include generic 3D mapping as an extension of Google's StreetView, automated indoor and outdoor architectural exploration, and/or ready-to-3D-print landmark data acquisition.
Visit Parrot at www.parrot.com
Visit nVidia at www.nvidia.com