
Robots clean up after Covid-19

Feature articles
By Nick Flaherty

Robots offer transformational value, but until now have been relegated to tightly controlled operating environments. Brain Corp in San Diego has created AI software that system developers use to build autonomous machines that can navigate safely and efficiently in public indoor spaces such as retail stores, airports, hospitals, and more.

Demand for robot cleaning systems has grown as a result of the Covid-19 pandemic. The company raised $36m (€30m) in April to expand its AI technology globally, bringing its total investment to over $160m (€135m). The robots are trained once by operators to follow a route.

“The world of fully autonomous robots has only been around at scale for the last five years. We started with neuromorphic computing research and we looked at how the brain processes vision and how the brain learns and that gave us a technology that could be applied to robotics to solve navigation in complex environments,” said Phil Duffy, vice president of Innovation at Brain Corp. 

“We have over 14,000 robots out in the industry, which we believe is the largest fleet operating in public environments such as retail and airports,” he said. “The retail and cleaning industries adopted robots earlier than anyone else, so they were semi-prepared for Covid-19 – we have seen a 133 percent increase in usage during daytime hours as a result. That leaves staff to sanitise the areas that robots cannot.”

The key is the data showing where the robot has travelled; customers set their own compliance levels to demonstrate that areas have been cleaned.
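As an illustration of that compliance idea, the sketch below checks a robot's coverage grid against a customer-set threshold. The grid representation, threshold and report format are assumptions for illustration, not Brain Corp's actual reporting API.

```python
# Hypothetical sketch of a coverage-compliance check: compare the
# cells the robot actually visited against the cleanable floor area.
# Grid layout, threshold and report format are illustrative only.
import numpy as np

def coverage_report(visited, cleanable, threshold=0.95):
    """visited/cleanable: boolean occupancy grids of the store floor."""
    covered = np.logical_and(visited, cleanable).sum()
    total = cleanable.sum()
    ratio = covered / total if total else 0.0
    return {"coverage": float(ratio), "compliant": bool(ratio >= threshold)}

# Example: a 100x100-cell floor where the robot missed one corner.
cleanable = np.ones((100, 100), dtype=bool)
visited = cleanable.copy()
visited[90:, 90:] = False                    # 1% of the floor missed
print(coverage_report(visited, cleanable))   # coverage 0.99, compliant
```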

“One of the reasons we went into the retail space is that it's complex. Cleaning, inventory delivery and scanning – if we can solve retail then we can go into any area, and it's very scalable. When we look at robots today, from autonomous robots that navigate, such as material handling or cleaning, through to mechanical arms for industrial automation – there's a huge opportunity for robots,” he said.

Training robots for autonomy


The technique uses the operators to train the robots, showing them a route to travel once so that it can be repeated. “The flow of how the machines are used is integrated into the process of the store itself, so we use the operator as the domain expert.”
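A minimal sketch of this teach-and-repeat idea is below: poses are recorded during the taught run and replayed as waypoints on later runs. The pose format, spacing heuristic and class names are assumptions, not BrainOS internals.

```python
# Hypothetical teach-and-repeat sketch: record poses while an operator
# drives the taught run, then replay them as waypoints on repeat runs.
# Pose format, spacing and class names are assumptions, not BrainOS.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # metres
    y: float        # metres
    heading: float  # radians

class TaughtRoute:
    def __init__(self):
        self.waypoints = []

    def record(self, pose, min_spacing=0.5):
        """Store a waypoint only once the robot has moved far enough."""
        if not self.waypoints or math.dist(
                (pose.x, pose.y),
                (self.waypoints[-1].x, self.waypoints[-1].y)) >= min_spacing:
            self.waypoints.append(pose)

    def replay(self):
        """Yield waypoints for the controller to track on later runs."""
        yield from self.waypoints
```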

“What allowed us to do that was the machine learning environment – that approach of solving individual problems was the way we went to market,” he said. “We have travelled over 4bn sq m over 3.1m autonomous hours and 4.2m km – it's from that coverage of unique footage that we see the edge cases, from the network effect of large robotic fleets: the larger the data set, the more edge cases you solve.”

One example is a supermarket that used infrared heaters: when these were switched on, they washed out the sensors. Another is ghost pixels. The robotic platform is visual, and if it sees what is obviously a blockage but one that is not connected to anything, the chances are it's a ghost pixel – a reflection – especially if it is consistently in one area. This would usually mean the machine stops because it saw something, as simple filters for reflections don't work.
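One hedged way to express that heuristic in code: an occupied cell with no neighbours ("not connected to anything") that keeps reappearing in the same spot is flagged as a likely reflection. The grid format, persistence counting and thresholds are assumptions for illustration, not Brain Corp's detection logic.

```python
# Hypothetical ghost-pixel heuristic: a single occupied cell with no
# neighbours that keeps reappearing at the same spot is flagged as a
# likely reflection rather than a real obstacle.
import numpy as np
from scipy.ndimage import label

def flag_ghosts(grid, persistence, min_hits=20):
    """grid: boolean obstacle grid for the current frame.
    persistence: per-cell count of frames the cell has been occupied."""
    labels, n = label(grid)            # connected obstacle blobs
    ghosts = np.zeros_like(grid)
    for i in range(1, n + 1):
        blob = labels == i
        # an isolated one-cell blob seen repeatedly -> probable ghost
        if blob.sum() == 1 and persistence[blob].max() >= min_hits:
            ghosts |= blob
    return ghosts
```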

Then there are the really odd cases where it takes human eyeballs to solve the problem. At one store in the mid-west of the US, a robot kept stopping for no apparent reason. It turned out that the doors of the store were opening and rubbish was dancing in the air, which the system saw as an obstacle. These edge cases – reflections from different materials, objects sticking out from a shelf, different shopping carts – help to improve the AI framework.

This is being used for cleaning to keep stores clear of Covid-19. Some robot systems use UV-C light for this, but the systems supported by Brain Corp are geared to chemical disinfectants.

“The problem with UV-C is the dwell time is very long, so even if it is high power it has to sit there for a long time, and it doesn’t clean everything. Chemical disinfectant starts to sanitize the moment it drops, and it fits with a mobile robot. Driving through a retail store at any speed means the dwell time isn’t there.”
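A back-of-envelope check of the dwell-time argument is below. The lamp footprint, driving speed and required exposure are placeholder numbers chosen for illustration, not measured figures.

```python
# Back-of-envelope check of the dwell-time argument. The lamp
# footprint, driving speed and required UV-C dose are placeholder
# numbers, not measured figures.
lamp_footprint_m = 0.5          # length of the UV field along travel
speed_m_per_s = 1.0             # typical scrubber driving speed
required_dwell_s = 60.0         # assumed exposure needed per point

exposure_s = lamp_footprint_m / speed_m_per_s
print(f"exposure per point at speed: {exposure_s:.1f}s "
      f"(needed: {required_dwell_s:.0f}s)")
# 0.5s of exposure versus tens of seconds needed: a moving robot
# cannot deliver the dose, whereas a chemical disinfectant keeps
# working after the robot has driven past.
```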

The AI platform, BrainOS, is designed to run on off-the-shelf hardware and multi-sensor arrays using cameras and lidar. The controllers are a mixture of Intel-based boards and Qualcomm’s ARM-based Snapdragon chips.

For the sensors, the platform uses a single lidar, or a double lidar from German supplier SICK on large systems that need a range of 10m. “We can automate off a single camera, but we do a full safety analysis – size, weight, speed, the environment,” said Duffy. “We use the lidar for mapping, SLAM and navigation, and also people detection. Then there is a slanted lidar to create a ‘virtual bumper’, as well as three time-of-flight (ToF) cameras. All these sensors are combined in mapping and navigation, with data that overlaps to map the space through training once to create the routes.”
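A minimal sketch of how these overlapping sensors might be fused into a single stop decision is below; the sensor names, path-corridor filtering and stop distance are assumptions for illustration, not the certified safety logic.

```python
# Hypothetical fusion sketch: combine the main lidar, the slanted
# "virtual bumper" lidar and the ToF cameras into one stop decision.
# Sensor names and the stop distance are assumptions for illustration.
def obstacle_in_path(main_lidar_ranges, bumper_lidar_ranges,
                     tof_depths, stop_distance=0.7):
    """Each argument is an iterable of range readings (metres) for
    beams or pixels that fall inside the planned path corridor."""
    sensors = (main_lidar_ranges, bumper_lidar_ranges, tof_depths)
    # Stop if ANY sensor reports a return inside the threshold:
    # overlapping coverage means one sensor catches what another misses.
    return any(min(readings, default=float("inf")) < stop_distance
               for readings in sensors)
```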

“We have the ability to pair a phone to the machine to select a route – if it completes, has an issue or gets blocked, it sends a text with a picture to the operator. It can locally reroute, and it is able to go off and plan its way back to the route using a rule-based approach. We are working on the ability to update the map, for example if there is a regular blockage. We are now developing area fill – drive out a perimeter and the robot calculates the best path to fill the area to clean.”
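The area-fill idea can be sketched as a boustrophedon (lawnmower) sweep; a hedged version for a rectangular perimeter is below. Real store floors are not rectangles, and the function name and tool width are illustrative assumptions, not Brain Corp's planner.

```python
# Hypothetical "area fill" sketch: given a taught rectangular
# perimeter, generate a boustrophedon (lawnmower) path covering the
# interior. Real floors are not rectangles; this only illustrates
# computing a fill path from a perimeter.
def boustrophedon(x0, y0, x1, y1, tool_width):
    """Yield (x, y) waypoints sweeping strips tool_width apart."""
    y, left_to_right = y0, True
    while y <= y1:
        if left_to_right:
            yield (x0, y)
            yield (x1, y)
        else:
            yield (x1, y)
            yield (x0, y)
        left_to_right = not left_to_right
        y += tool_width

# A 10m x 5m area with a 0.8m-wide cleaning head.
path = list(boustrophedon(0.0, 0.0, 10.0, 5.0, tool_width=0.8))
```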

“The Brain Compute Module is a kit for OEMs that includes everything down to the wiring harness,” he said. “We work to integrate that into the machines and we run the pilot manufacturing line, calibration and end-of-line systems, then move that to the OEM for manufacturing, and we set up their manufacturing process and their quality systems.”

This is based on Ubuntu Linux with the BrainOS security layer, then a full stack with navigation primitives, then a set of capabilities for the user interface (UI) and cloud connections via various networks, including 5G.

“We built our own hardware abstraction layer (HAL) and we take a platform approach to firmware for the sensors and motor drivers so we are hardware agnostic,” he said.   
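A hedged sketch of what such a hardware abstraction layer might look like is below; the interfaces and method names are assumptions for illustration, not the actual BrainOS HAL.

```python
# Hypothetical HAL sketch: driver interfaces that let the same
# navigation stack run on different OEM machines. Interface and
# method names are illustrative, not the actual BrainOS HAL.
from abc import ABC, abstractmethod

class MotorDriver(ABC):
    @abstractmethod
    def set_velocity(self, linear_m_s, angular_rad_s):
        """Command the drive motors; each OEM port implements this."""

    @abstractmethod
    def emergency_stop(self):
        """Cut motion immediately, independent of higher-level logic."""

class LidarDriver(ABC):
    @abstractmethod
    def read_scan(self):
        """Return one sweep of range readings in metres."""

# The navigation stack above the HAL only sees these interfaces and
# never touches vendor-specific firmware directly.
```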

“Latency is critical to the safety certification, so that is all handled with on-board processing,” he said. “Even with 5G, certain operations may be off-board, but not safety-critical ones. There is the low-level deterministic logic for safety perception, which is black and white – the machine is either on or off, and it stops without having to process high-level logic. At a higher level it cross-checks the sensors, handles the movement detection and so on – that’s what allows us to avoid the latency in the processing.”
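That two-tier split might look like the sketch below: a deterministic low-level check that stops the machine without any high-level processing, with the richer cross-checking layered above it. Function names and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of the two-tier split described above: a
# deterministic low-level check stops the machine without any
# high-level processing; richer cross-checking runs only above it.
# Function names and the stop range are illustrative assumptions.
def low_level_safety(bumper_triggered, min_range_m, stop_range_m=0.3):
    # Black and white: no models, no inference, constant-time decision.
    return bumper_triggered or min_range_m < stop_range_m

def control_step(sensors, planner):
    if low_level_safety(sensors.bumper, sensors.min_range):
        return (0.0, 0.0)          # hard stop, on-board, low latency
    # Only when the deterministic layer passes does the higher-level
    # logic (sensor cross-checks, movement detection, replanning) run.
    return planner.next_velocity(sensors)
```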

Then there is the opportunity for integrating the frameworks with smart buildings.

“The sensor systems are built in, so there is an ability for robots to perform these functions, with the focus on retail: inventory scanning, adding scanning towers, pricing, looking at lights-out operation, checking occupancy levels in the office, temperature control, WiFi and LTE coverage – that’s the next wave we see,” said Duffy.

www.braincorp.com
