
Real-time 3D SLAM reaches millimetre-level accuracy
The company’s embedded solution is based on more than 11 years of academic research carried out at the I3S/CNRS (Centre National de la Recherche Scientifique) laboratory by the company’s CTO, Maxime Meilland, together with Andrew Comport, Pixmap’s Chief Scientific Officer and a researcher at INRIA Sophia Antipolis, where he pioneered work on dense localisation and mapping algorithms.
The two founded the company in early 2015 with robotics expert Benoit Morisset (the startup’s CEO), shortly after their paper “On unifying key-frame and voxel-based dense visual SLAM at large scales” received the best scientific publication award at IROS 2013 (International Conference on Intelligent Robots and Systems).
The work detailed in that paper stemmed from the French DGA’s Fraudo project (FRanchissement AUtomatique D’Obstacles, or automatic obstacle crossing), which required dense localisation and mapping techniques for robots to traverse uneven ground and surfaces autonomously.
Available under licensing agreements, the algorithms unify the two approaches commonly used to build dense models, volumetric 3D modelling with voxel grids and image-based key-frame representations, into a compact, low-memory-bandwidth solution supporting refresh rates of up to 2 kHz.
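Pixmap has not published its implementation, but the basic idea of unifying the two representations can be sketched as a map that keeps both photorealistic key-frames and a sparse metric voxel grid, updated from the same RGB-D frames. The class and parameter names below (HybridDenseMap, voxel_size, the pinhole camera model) are illustrative assumptions, not the company’s code.

```python
# Hypothetical sketch of a hybrid dense map that keeps image-based key-frames
# alongside a sparse voxel grid. This is not Pixmap's code; the class names,
# voxel size and pinhole camera model are assumptions for illustration.
import numpy as np

class KeyFrame:
    """One RGB-D view with its camera pose (4x4 world-from-camera transform)."""
    def __init__(self, pose, rgb, depth):
        self.pose = pose          # 4x4 homogeneous transform
        self.rgb = rgb            # HxWx3 colour image
        self.depth = depth        # HxW depth map in metres

class HybridDenseMap:
    """Key-frames preserve photorealistic detail; the sparse voxel grid keeps
    a single, globally consistent metric model usable for planning."""
    def __init__(self, voxel_size=0.01):
        self.voxel_size = voxel_size      # 1 cm voxels (assumed)
        self.keyframes = []               # list of KeyFrame objects
        self.voxels = {}                  # sparse grid: (i, j, k) -> hit count

    def integrate(self, pose, rgb, depth, K):
        """Fuse one RGB-D frame: store it as a key-frame and mark the voxels
        containing the observed surface points."""
        self.keyframes.append(KeyFrame(pose, rgb, depth))
        h, w = depth.shape
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        valid = depth > 0
        # Back-project every valid pixel into the camera frame ...
        x = (us - cx) / fx * depth
        y = (vs - cy) / fy * depth
        pts = np.stack([x[valid], y[valid], depth[valid], np.ones(valid.sum())])
        # ... then transform the points into the world frame.
        world = (pose @ pts)[:3].T
        for p in world:
            key = tuple((p / self.voxel_size).astype(int))
            self.voxels[key] = self.voxels.get(key, 0) + 1

# Example: integrate one synthetic frame from an assumed 640x480 camera.
K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])
m = HybridDenseMap()
m.integrate(np.eye(4), np.zeros((480, 640, 3)), np.full((480, 640), 2.0), K)
print(len(m.keyframes), len(m.voxels))
```

The intended trade-off, as described by the company, is that key-frames keep full image detail cheaply while the voxel grid provides one consistent metric model; the unified formulation is meant to get both without the memory cost of a dense global grid.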

Reality Capture enables robots and drones to robustly map their environment in 3D in a photorealistic manner and to know their position in the world with millimetric accuracy. Because the 3D maps are metrically accurate and provide a photorealistic rendering of the environment, they can be used not only by the robot for path optimisation and collision avoidance, but also by remote operators who wish to visually revisit the sites mapped by a robot.
The company typically integrates its real-time 3D localisation and mapping technology on each customer's specific hardware (RGB-D sensors, CPU/GPU, architecture), according to the specifications they require. Only once the integration is complete does Pixmap collect royalties on sales of the products embedding its algorithms.
The Reality Capture technology is applicable to any type of robot, whatever its shape, size or mode of locomotion. “Today’s robots are nearly blind, reduced to living in a 2D world, and executing basic functions. With the launch of our Reality Capture, robots can open their eyes for the first time and see the world in 3D. Pixmap’s Reality Capture gives robotics developers the technology to create a new universe of exciting applications for their robots and drones”, said Benoit Morisset in a company statement.
The company makes its Reality Capture technology available to robotics developers and programmers in the form of a software development kit (SDK) called PX2M (short for PixMap&Motion), which works with low-cost sensors. PX2M provides a multi-sensor fusion mechanism that takes into account any additional localisation information available, such as odometry, IMU or GPS data. The SDK can be fully embedded, relying on on-board processing, but for lighter systems such as drones it can also run in the cloud and connect remotely to the robots.
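PX2M’s interface is not documented in public, so the snippet below only illustrates the general flavour of such multi-sensor fusion: a minimal constant-velocity filter that propagates the pose with odometry/IMU velocity and corrects it with position fixes of different quality (a millimetre-level visual estimate versus a metre-level GPS fix). The class name PoseFusion and all noise values are assumptions for the sketch, not part of the SDK.

```python
# Hypothetical illustration of multi-sensor fusion (not PX2M's actual API):
# a small Kalman-style filter that blends accurate visual-SLAM position fixes
# with noisier GPS fixes, predicted forward with IMU/odometry velocity.
import numpy as np

class PoseFusion:
    def __init__(self):
        self.x = np.zeros(3)              # estimated position (m)
        self.P = np.eye(3)                # position covariance

    def predict(self, velocity, dt, q=0.01):
        """Propagate the estimate with odometry/IMU velocity."""
        self.x = self.x + velocity * dt
        self.P = self.P + np.eye(3) * q * dt   # uncertainty grows over time

    def update(self, z, r):
        """Fuse a position measurement z with (isotropic) variance r."""
        K = self.P @ np.linalg.inv(self.P + np.eye(3) * r)   # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(3) - K) @ self.P

fusion = PoseFusion()
fusion.predict(velocity=np.array([0.5, 0.0, 0.0]), dt=0.02)   # 50 Hz odometry
fusion.update(z=np.array([0.011, 0.001, 0.0]), r=1e-6)        # mm-level SLAM fix
fusion.update(z=np.array([0.3, -0.2, 0.1]), r=4.0)            # metre-level GPS fix
print(fusion.x)   # dominated by the accurate visual-SLAM measurement
```

With noise settings like these, the accurate visual estimate dominates the fused pose, while GPS and odometry mainly keep the estimate bounded when vision degrades, which is the usual rationale for combining these sensors.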
The startup’s objective is to adapt its technology to different segments of the robotics market, including personal robots, service robots and drones, before moving into the virtual reality and augmented reality markets, either to generate 3D content or to bring millimetric localisation capability to VR and AR headsets.
Visit Pixmap at www.pixmap3d.com
Related articles:
Light centimetre-level SLAM for drones
Rambus prototypes 2x2mm lens-less eye-tracker for headmount displays
UK software developer brings 3D computer vision to embedded modules
Hand-mounted depth camera improves robots’ situational awareness
