Nvidia opens AI robotics research lab in Seattle

January 16, 2019 | By Rich Pell
Graphics processing unit (GPU) maker Nvidia (Santa Clara, CA) has announced that it is opening a new robotics research lab in Seattle, WA, aimed at enabling the next generation of robots.

The lab's charter, says the company, is to drive breakthrough robotics research toward a next generation of robots that can perform complex manipulation tasks, work safely alongside humans, and transform industries such as manufacturing, logistics, healthcare, and more.

"In the past, robotics research has focused on small, independent projects rather than fully integrated systems," says Dieter Fox, senior director of robotics research at NVIDIA and professor in the UW Paul G. Allen School of Computer Science and Engineering, which is located near the new lab. "We're bringing together a collaborative, interdisciplinary team of experts in robot control and perception, computer vision, human-robot interaction, and deep learning."

About 50 research scientists, faculty visitors, and student interns will perform foundational research in these areas, says the company. To ensure the research stays relevant to real-world robotics problems, the lab will ground its work in large-scale, realistic scenarios for interactive manipulation.

The first such challenge scenario is a real-life kitchen in which a mobile 'kitchen manipulator' solves a variety of tasks, from retrieving objects from cabinets to learning how to clean the dining table to helping a person cook a meal. Demonstrated at an open house on January 11, the manipulator detects and tracks objects, keeps track of the state of doors and drawers in the kitchen, and opens and closes them to access objects for manipulation. These approaches can be applied in arbitrary environments, requiring only 3D models of the relevant objects and cabinets.

The robot uses deep learning to detect specific objects based solely on training in simulation, without requiring any manual data labeling. It is powered by the company's highly parallelized GPU processing, which enables it to keep track of its environment in real time, using sensor feedback for accurate manipulation and to adapt quickly to changes in the environment.

The robot uses the NVIDIA Jetson platform for navigation.

