The charter of the lab, says the company, is to drive breakthrough robotics research that enables the next generation of robots to perform complex manipulation tasks, work safely alongside humans, and transform industries such as manufacturing, logistics, and healthcare.

“In the past, robotics research has focused on small, independent projects rather than fully integrated systems,” says Dieter Fox, senior director of robotics research at NVIDIA and professor in the UW Paul G. Allen School of Computer Science and Engineering, which is located near the new lab. “We’re bringing together a collaborative, interdisciplinary team of experts in robot control and perception, computer vision, human-robot interaction, and deep learning.”

About 50 research scientists, faculty visitors, and student interns will perform foundational research in these areas, says the company. To keep the research relevant to real-world robotics problems, the lab will develop and evaluate its work in large-scale, realistic scenarios for interactive manipulation.

The first such challenge scenario is a real-life kitchen in which a mobile ‘kitchen manipulator’ solves a variety of tasks, from retrieving objects from cabinets to learning how to clean the dining table to helping a person cook a meal. Demonstrated at an open house on January 11, the manipulator detects and tracks objects, monitors the state of doors and drawers in the kitchen, and opens and closes them to reach objects for manipulation. These approaches carry over to arbitrary environments, requiring only 3D models of the relevant objects and cabinets.

The robot uses deep learning to detect specific objects, trained solely in simulation and requiring no manual data labeling. The company’s highly parallelized GPU processing lets the robot track its environment in real time, using sensor feedback for accurate manipulation and for quick adaptation to changes in its surroundings.

The robot uses the NVIDIA Jetson platform for navigation and performs real-time inference for processing and manipulation on NVIDIA TITAN GPUs. The deep learning-based perception system was trained using the cuDNN-accelerated PyTorch deep learning framework.
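The article does not publish the training code, but the pattern the two paragraphs above describe (a detector trained entirely on simulator-rendered images, with labels taken for free from the simulator’s ground truth, and cuDNN acceleration enabled in PyTorch) might look roughly like the following sketch. The function `sample_synthetic_batch`, the tiny network, and all hyperparameters are illustrative placeholders, not NVIDIA’s pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a simulator: in the real pipeline, images
# would be rendered scenes and labels would be read directly from the
# simulator's ground truth, with no manual annotation step.
def sample_synthetic_batch(batch=8, num_classes=4):
    images = torch.randn(batch, 3, 64, 64)            # rendered RGB frames
    labels = torch.randint(0, num_classes, (batch,))  # free ground-truth labels
    return images, labels

device = "cuda" if torch.cuda.is_available() else "cpu"
torch.backends.cudnn.benchmark = True  # let cuDNN pick fast conv kernels

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4),
).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):  # one short training run on synthetic data
    images, labels = sample_synthetic_batch()
    images, labels = images.to(device), labels.to(device)
    loss = loss_fn(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```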

The system is unique, says the company, because it integrates a suite of cutting-edge technologies developed by the lab researchers:

  • Dense Articulated Real-Time Tracking (DART): DART uses depth cameras to keep track of a robot’s environment. It is a general framework for tracking rigid objects, such as coffee mugs and cereal boxes, and articulated objects often encountered in indoor environments, like furniture and tools, as well as human and robot bodies including hands and manipulators.
  • Pose-CNN: 6D Object Pose Estimation: Estimating the 6D pose of known objects (their 3D position and 3D orientation) is a crucial capability for robots that pick up and move objects in an environment. The problem is challenging because of changing lighting conditions and scenes made complex by clutter and occlusions between objects. Pose-CNN is a deep neural network trained to estimate object poses from regular camera images; a minimal sketch of a pose-regressing network appears after this list.
  • Riemannian Motion Policies (RMPs) for Reactive Manipulator Control: RMPs are a new mathematical framework that consistently combines a library of simple actions into complex behavior. They let the researchers efficiently program fast, reactive controllers that use the detection and tracking information from Pose-CNN and DART to safely interact with objects and humans in dynamic environments; the combination rule itself is sketched after this list.
  • Physics-based Photorealistic Simulation: NVIDIA’s Isaac Sim tool enables the generation of realistic simulation environments that model the visual properties of objects as well as the forces and contacts between objects and manipulators. A simulated version of the kitchen is used to test the manipulation system and train the object detection network underlying Pose-CNN. Once simulation models of objects and the environment are available, training and testing can be done far more efficiently, saving development time.
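Purely as an illustration of the shape of a 6D pose network like Pose-CNN, here is a minimal, hypothetical PyTorch module that maps one RGB image to a class score, a 3D translation, and a unit quaternion (together, a 6D pose). The architecture and names are placeholders; the published Pose-CNN additionally works per pixel, voting for object centers and handling multiple instances, which this sketch omits.

```python
import torch
import torch.nn as nn

class PoseNetSketch(nn.Module):
    """Hypothetical sketch: a tiny CNN that, per image, predicts an
    object class score, a 3D translation, and a unit quaternion."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(64, num_classes)  # which object
        self.trans_head = nn.Linear(64, 3)          # x, y, z translation
        self.rot_head = nn.Linear(64, 4)            # quaternion rotation

    def forward(self, img):
        feat = self.backbone(img)
        quat = self.rot_head(feat)
        quat = quat / (quat.norm(dim=-1, keepdim=True) + 1e-8)  # unit norm
        return self.cls_head(feat), self.trans_head(feat), quat

model = PoseNetSketch()
scores, t, q = model(torch.randn(2, 3, 128, 128))  # batch of 2 RGB images
```

Separating the class, translation, and rotation heads mirrors the general design choice in 6D pose estimation of decoupling where an object is from how it is oriented.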
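The core RMP combination rule is simple enough to sketch: each “leaf” policy lives in its own task space and contributes a desired acceleration f together with a Riemannian metric A that says where that policy matters; pulling every leaf back through its task-map Jacobian J and solving the metric-weighted least-squares problem yields one configuration-space command. Below is a minimal NumPy toy (a 2D point robot, a goal attractor, and an obstacle repulsor); the gains, the activation radius, and the omission of curvature terms are illustrative simplifications, not the lab’s controllers.

```python
import numpy as np

def resolve(leaves):
    """Metric-weighted combination of leaf RMPs.

    Each leaf is (J, f, A): task-map Jacobian, desired task-space
    acceleration, and Riemannian metric. The combined configuration-
    space acceleration is qdd = (sum J^T A J)^+ (sum J^T A f).
    (Curvature terms are dropped to keep the sketch short.)
    """
    M = sum(J.T @ A @ J for J, f, A in leaves)
    b = sum(J.T @ A @ f for J, f, A in leaves)
    return np.linalg.pinv(M) @ b

goal = np.array([2.0, 0.0])
obst, radius = np.array([1.0, 0.1]), 0.3     # obstacle center and radius

def attract(q, qd):
    # identity task map: critically damped pull toward the goal
    return np.eye(2), 4.0 * (goal - q) - 4.0 * qd, np.eye(2)

def repel(q, qd):
    # 1D task map: signed distance to the obstacle surface
    diff = q - obst
    dist = np.linalg.norm(diff) - radius
    J = (diff / np.linalg.norm(diff)).reshape(1, 2)
    w = max(0.0, 1.0 - dist / 0.5) ** 2          # active within 0.5 m only
    f = np.array([10.0 * w / max(dist, 1e-3)])   # push away from the surface
    A = np.array([[w]])                          # metric grows near obstacle
    return J, f, A

q, qd, dt = np.zeros(2), np.zeros(2), 0.01
for _ in range(2000):                            # simple Euler rollout
    qdd = resolve([attract(q, qd), repel(q, qd)])
    qd += dt * qdd
    q += dt * qd
print(q)  # with these toy gains, the point detours around the obstacle toward the goal
```

The metric does the arbitration: far from the obstacle its weight w is zero and the attractor acts alone, while near the surface w grows and the repulsor dominates. That is the sense in which a library of simple leaves composes consistently into one reactive behavior.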

“We really feel that the time is right to develop the next generation of robots,” says Fox. “By pulling together recent advances in perception, control, learning, and simulation, we can help the research community solve some of the world’s greatest challenges.”

Nvidia

Related articles:
Nvidia robotics platform heralds ‘new era’ of autonomous machines
Nvidia module enables AI-powered autonomous machines
AI breakthrough renders interactive 3D environments
AI technique helps robots learn by observing humans
