Humanoid robot is testbed for vision developers
The Joyce robot allows developers to try out new AI applications for high-end vision processing and sensor fusion, says Immervision of Montreal, Canada.
The development kit is available to developers, universities and technology companies worldwide, who can add sensors, software and AI algorithms to enhance the robot's perception and understanding of its environment and tackle computer vision challenges.
Joyce comes equipped with three ultra-wide-angle panomorph cameras calibrated to give 2D hemispheric, 3D stereoscopic hemispheric or full 360 x 360 spherical capture and viewing of the environment. It uses data-in-picture technology so that each video frame carries metadata from a wide array of sensors. This provides contextual information to AI and neural networks, computer vision and SLAM algorithms to help increase visual perception.
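To make the idea concrete, the following minimal Python sketch shows what a per-frame record pairing panomorph pixels with sensor metadata could look like. The class, field names and layout are illustrative assumptions only, not Immervision's actual data-in-picture format.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SensorFrame:
    """One video frame bundled with the sensor readings captured at the same instant.

    Hypothetical structure for illustration; the real data-in-picture
    format is Immervision's own and is not described in detail here.
    """
    pixels: np.ndarray                                  # H x W x 3 image from a panomorph camera
    timestamp_ns: int                                   # capture time in nanoseconds
    imu: dict = field(default_factory=dict)             # e.g. {"accel": (ax, ay, az), "gyro": (gx, gy, gz)}
    extra_sensors: dict = field(default_factory=dict)   # microphones, ultrasound, MEMS readings, etc.

# Because the sensor context travels with the frame, a SLAM or neural-network
# pipeline can consume vision and metadata together, frame by frame.
def to_model_input(frame: SensorFrame) -> tuple[np.ndarray, dict]:
    return frame.pixels.astype(np.float32) / 255.0, {"t": frame.timestamp_ns, **frame.imu}
```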
There is, however, only one version of Joyce. Immervision is encouraging members of the computer vision community to upgrade the robot with their own technologies in a series of international challenges using the development kit; the winning technology is then added to the robot.
“We are excited to launch the industry’s first collaborative computer vision humanoid robot, designed for intelligent vision with 360° spherical field-of-view achieved thanks to the most advanced freeform panomorph lens design, data-in-picture technology for fusing data from multiple sensors with vision in real-time, image processing algorithms, APIs and a programming interface from which to add other sensors and software like MEMS, microphones, ultrasound, and additional machine learning and AI algorithms,” said Pascale Nini, President and CEO of Immervision. “We cannot wait to collaborate with the brightest minds in computer vision and AI to explore bleeding-edge solutions to industry-specific challenges.”
Potential use cases include enhancing smart home devices such as vacuum cleaners, lighting systems and home appliances, or improving optics for driver safety in advanced driver assistance systems (ADAS). The robot could also be used to improve medical diagnostics, helping to identify cancer tumours or other conditions from a CT scan.
The data-in-picture protocol is a key differentiator. Developed by Immervision, it embeds data in the pixels of each frame, so higher resolution video at higher frame rates carries more information. This avoids the need to synchronise separate metadata files with particular frames, potentially simplifying the use of the images in machine learning frameworks once the data has been separated.
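As an illustration of why embedded metadata removes the synchronisation step, here is a hedged Python sketch that assumes the metadata is serialised as JSON and packed into a reserved strip of pixel rows at the bottom of each frame; the real encoding is Immervision's own and may differ entirely.

```python
import json
import numpy as np

# Assumption for this sketch: the last METADATA_ROWS rows of each frame hold
# zero-padded JSON bytes rather than visual content.
METADATA_ROWS = 2

def split_frame(raw: np.ndarray) -> tuple[np.ndarray, dict]:
    """Return (image pixels, per-frame metadata) from a single raw frame."""
    image = raw[:-METADATA_ROWS]                 # the visual content
    payload = raw[-METADATA_ROWS:].tobytes()     # the embedded sensor data
    meta = json.loads(payload.rstrip(b"\x00").decode("utf-8"))
    return image, meta

# Each frame is self-describing, so there is no sidecar file to keep in sync:
#   image, meta = split_frame(next_frame)
#   model.predict(image, context=meta)   # hypothetical model call
```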
The robot will travel with the various development kits, and the visual output will be streamed live at joyce.vision, although the form factor of Joyce has yet to be announced.
Related robot articles
- AR COMPETITION TO DESIGN A MARS ROBOT
- SWISS FIRMS TEAM ON FOUR LEGGED AUTONOMOUS ROBOT
- DANISH COVID-19 CLEANING SYSTEMS GET AUTONOMOUS BOOST