
Machine learning to remove space debris

Technology News | By Nick Flaherty



Researchers are using machine learning algorithms trained on simulations of space debris as part of a European mission to remove a piece of junk from orbit.

With more than 34,000 pieces of junk orbiting the Earth, their removal is becoming a matter of safety. Earlier this month an old Soviet Parus navigation satellite and a Chinese ChangZheng-4c rocket were involved in a near miss, and in September the International Space Station conducted a manoeuvre to avoid a possible collision with an unknown piece of space debris.

A project led by ClearSpace, a spin-off from the Swiss research institute EPFL in Lausanne, will recover the now obsolete Vespa upper part, a payload adapter orbiting 660km above the Earth that was once part of the European Space Agency’s Vega rocket. The ClearSpace-1 mission, set for 2025, aims to ensure that the adapter re-enters the atmosphere and burns up in a controlled way.

One of the first challenges is to enable the robotic arms of the capture satellite to approach the Vespa from the correct angle. It will use a camera to control the grasping of the Vespa before pulling it back into the atmosphere.

“A central focus is to develop deep learning algorithms to reliably estimate the 6D pose (3 rotations and 3 translations) of the target from video sequences, even though images taken in space are difficult: they can be over- or under-exposed, with many mirror-like surfaces,” said Mathieu Salzmann of EPFL’s Computer Vision Laboratory, led by Professor Pascal Fua in the School of Computer and Communication Sciences.
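The article does not describe the network itself, but the basic idea can be sketched. The following is a minimal, hypothetical PyTorch example of a CNN that regresses a rotation (as a unit quaternion) and a translation, i.e. a 6D pose, from a single camera frame; the class name, layer sizes and output heads are illustrative, not EPFL’s actual architecture.

    # Hypothetical sketch of 6D pose regression from one camera frame.
    # A small CNN backbone maps the image to a unit quaternion (3 rotations)
    # plus a 3-vector translation.
    import torch
    import torch.nn as nn

    class PoseRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(          # stand-in feature extractor
                nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.rot_head = nn.Linear(64, 4)        # quaternion, normalised below
            self.trans_head = nn.Linear(64, 3)      # x, y, z translation

        def forward(self, frame):
            feats = self.backbone(frame)
            quat = nn.functional.normalize(self.rot_head(feats), dim=-1)
            trans = self.trans_head(feats)
            return quat, trans

    model = PoseRegressor()
    frame = torch.randn(1, 3, 256, 256)             # one synthetic camera frame
    quat, trans = model(frame)                      # estimated 6D pose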

EPFL’s Realistic Graphics Lab is simulating what this piece of space junk looks like, providing the ‘training material’ that helps Salzmann’s deep learning algorithms improve over time.

“We are producing a database of synthetic images of the target object, including both the Earth backdrop reconstructed from hyperspectral satellite imagery, and a detailed 3D model of the Vespa upper stage. These synthetic images are based on measurements of real-world material samples of aluminium and carbon fibre panels, acquired using our lab’s goniophotometer,” said Prof Wenzel Jakob, head of the Realistic Graphics Lab. “This is a large robotic device that spins around a test swatch to simultaneously illuminate and observe it from many different directions, providing us with a wealth of information about the material’s appearance.”
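A hedged sketch of what building such a database might look like: random 6D poses of the target are sampled and passed to a renderer, and each rendered image is stored with its pose label. Here random_pose and render_vespa are hypothetical placeholders; the lab’s actual pipeline uses a physically based renderer driven by the measured aluminium and carbon fibre material data.

    # Hypothetical sketch of generating labelled synthetic training data.
    import numpy as np

    rng = np.random.default_rng(0)

    def random_pose():
        # Uniform random rotation (normalised 4D Gaussian -> unit quaternion)
        # plus a translation in metres in front of the camera.
        q = rng.normal(size=4)
        q /= np.linalg.norm(q)
        t = rng.uniform([-5.0, -5.0, 10.0], [5.0, 5.0, 100.0])
        return q, t

    def render_vespa(quat, trans):
        # Placeholder for the physically based render of the Vespa 3D model
        # with measured BRDFs and a hyperspectral Earth backdrop.
        return np.zeros((256, 256, 3), dtype=np.float32)

    dataset = []
    for _ in range(10_000):
        q, t = random_pose()
        image = render_vespa(q, t)
        dataset.append((image, np.concatenate([q, t])))  # image + 6D pose label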

Once the mission starts, researchers will be able to capture real images from beyond the atmosphere and fine-tune the algorithms to make sure that they work in situ.
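In deep learning terms this is a fine-tuning step. A minimal sketch, reusing the hypothetical PoseRegressor above and assuming a small learning rate so the few real frames adapt, rather than overwrite, the simulator-trained weights; real_orbit_frames is a stand-in for the in-orbit data.

    # Hypothetical fine-tuning on real in-orbit frames.
    import torch

    model = PoseRegressor()                         # sketch network from above
    # model.load_state_dict(torch.load("synthetic_pretrained.pt"))  # simulator weights

    # Placeholder for a handful of real frames with 6D pose labels.
    real_orbit_frames = [(torch.randn(1, 3, 256, 256), torch.randn(1, 7))]

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small LR

    for frame, pose_label in real_orbit_frames:
        quat, trans = model(frame)
        pred = torch.cat([quat, trans], dim=-1)
        loss = torch.nn.functional.mse_loss(pred, pose_label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()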

A third challenge will be the need for the inference model to work in space, in real-time and with limited computing power onboard the ClearSpace capture satellite. Dr. Miguel Peón, a senior post-doctoral collaborator with EPFL’s Embedded Systems Lab (ESL), is leading the work of transferring the deep learning algorithms to a dedicated hardware platform.

“Since motion in space is well behaved, the pose estimation algorithms can fill the gaps between recognitions spaced one second apart, alleviating the computational pressure. However, to ensure that they can autonomously cope with all the uncertainties in the mission, the algorithms are so complex that their implementation requires squeezing out all the performance from the platform resources,” said Professor David Atienza, head of ESL.
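The gap-filling idea can be illustrated: run the expensive network only once per second and interpolate the pose in between, which is reasonable because a free-tumbling object moves smoothly. Below is a hypothetical NumPy sketch, not the mission code, using spherical linear interpolation (SLERP) for the rotation and linear interpolation for the translation.

    # Hypothetical pose interpolation between 1 Hz network outputs.
    import numpy as np

    def slerp(q0, q1, u):
        # Spherical linear interpolation between unit quaternions at u in [0, 1].
        dot = np.dot(q0, q1)
        if dot < 0.0:                   # take the short path on the quaternion sphere
            q1, dot = -q1, -dot
        theta = np.arccos(min(dot, 1.0))
        if theta < 1e-6:
            return q0
        return (np.sin((1 - u) * theta) * q0 + np.sin(u * theta) * q1) / np.sin(theta)

    def fill_gap(pose0, pose1, n_steps):
        # Generate n_steps intermediate poses between two network recognitions.
        (q0, t0), (q1, t1) = pose0, pose1
        return [(slerp(q0, q1, u), (1 - u) * t0 + u * t1)
                for u in np.linspace(0.0, 1.0, n_steps, endpoint=False)]

    q0 = np.array([1.0, 0.0, 0.0, 0.0])             # pose at t = 0 s (from network)
    q1 = np.array([0.9239, 0.3827, 0.0, 0.0])       # pose at t = 1 s (45 deg about x)
    t0, t1 = np.zeros(3), np.array([0.0, 0.0, -0.5])
    intermediate = fill_gap((q0, t0), (q1, t1), 30)  # 30 poses between detections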
