
3D human sensing with Wi-Fi

Technology News
By Rich Pell


Researchers at Carnegie Mellon University say they have achieved a new level of human sensing with Wi-Fi signals, identifying people in a building with the help of a deep neural network. Their approach, the researchers say, produces images on par with those from RGB cameras, paving the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.

Their latest work builds on earlier research showing that Wi-Fi signals could be used to detect a person's presence in a building by mapping, over time, where the signals were blocked by the person's body. From these measurements, the researchers found they could create stick figures showing where a person was in a given building at any given time.

In their new effort, the researchers take this approach to a new level by introducing a neural network that fills in the bodies of the stick figures, providing much more lifelike images. And, say the researchers, it can do so on the fly, allowing real-time motion tracking of multiple people in a given area.

To do so, three Wi-Fi transmitters and three aligned receivers are placed at a scene, indoors in a room or outdoors at a chosen site, along with a computer for processing and display. The Wi-Fi equipment used in these experiments, say the researchers, cost just $30, far less than LiDAR or radar systems.
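
For a rough sense of what the raw input might look like, the sketch below illustrates one way to organize channel state information (CSI) from the 3x3 transmitter-receiver antenna pairs into the amplitude and phase arrays a pose network could consume. The array dimensions and the make_csi_frame helper are illustrative assumptions, not details taken from the paper.

import numpy as np

# Assumed dimensions for illustration only: 3 transmit antennas, 3 receive
# antennas, 30 OFDM subcarriers, and 100 CSI samples per frame.
NUM_TX, NUM_RX, NUM_SUBCARRIERS, NUM_SAMPLES = 3, 3, 30, 100

def make_csi_frame(raw_csi):
    """Split complex channel state information (CSI) into the amplitude
    and phase arrays a downstream pose network could consume.

    raw_csi: complex array of shape (NUM_TX, NUM_RX, NUM_SUBCARRIERS, NUM_SAMPLES)
    """
    amplitude = np.abs(raw_csi)
    # Unwrap the phase along the subcarrier axis to remove 2*pi jumps,
    # a common cleaning step in Wi-Fi sensing pipelines.
    phase = np.unwrap(np.angle(raw_csi), axis=2)
    return amplitude, phase

# Example with synthetic data standing in for a real capture.
raw = (np.random.randn(NUM_TX, NUM_RX, NUM_SUBCARRIERS, NUM_SAMPLES)
       + 1j * np.random.randn(NUM_TX, NUM_RX, NUM_SUBCARRIERS, NUM_SAMPLES))
amp, ph = make_csi_frame(raw)
print(amp.shape, ph.shape)  # (3, 3, 30, 100) each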

When the system is running, the Wi-Fi signals are picked up by the receivers, which send them to a GPU inside a computer for processing. The processing uses a neural network to map the amplitude and phase of the signals to coordinates on a virtual human body, a process known as dense human pose correspondence, or DensePose.
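
As a rough illustration of that mapping step, here is a minimal PyTorch sketch of a network that takes amplitude and phase tensors and predicts per-pixel body-part scores plus UV texture coordinates. The layer sizes, output resolution, and the WiFiDensePoseSketch class are assumptions for illustration; the authors' actual architecture is more elaborate.

import torch
import torch.nn as nn

class WiFiDensePoseSketch(nn.Module):
    """Toy stand-in for the described pipeline: encode amplitude/phase CSI
    into a spatial feature map, then decode per-pixel body-part scores and
    UV texture coordinates (the dense correspondence)."""

    def __init__(self, csi_features=2 * 3 * 3 * 30 * 100, parts=24):
        super().__init__()
        # Lift the flattened CSI vector into a coarse 2D feature map.
        self.encoder = nn.Sequential(
            nn.Linear(csi_features, 128 * 7 * 7),
            nn.ReLU(),
        )
        # Upsample to the output resolution with transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # 14x14 -> 28x28
            nn.ReLU(),
            nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1),   # 28x28 -> 56x56
            nn.ReLU(),
        )
        # Heads: one channel per body part plus background, and U, V per part.
        self.part_head = nn.Conv2d(32, parts + 1, kernel_size=1)
        self.uv_head = nn.Conv2d(32, 2 * parts, kernel_size=1)

    def forward(self, amplitude, phase):
        csi = torch.cat([amplitude.flatten(1), phase.flatten(1)], dim=1)
        feats = self.decoder(self.encoder(csi).view(-1, 128, 7, 7))
        return self.part_head(feats), self.uv_head(feats)

# Usage with random tensors shaped like the CSI frame from the previous sketch.
model = WiFiDensePoseSketch()
amp = torch.randn(1, 3, 3, 30, 100)
ph = torch.randn(1, 3, 3, 30, 100)
part_scores, uv = model(amp, ph)
print(part_scores.shape, uv.shape)  # (1, 25, 56, 56) and (1, 48, 56, 56)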

During the process, the virtual human body is broken down into 24 parts, and two-dimensional texture coordinates are mapped onto each part based on the Wi-Fi signals. The body parts are then put back together into a realistic human form, all in real time.
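
That reassembly step can be pictured as building an IUV image: one channel for the body-part index at each pixel, and two channels for the U and V texture coordinates of that part. The sketch below, which assumes the output shapes from the previous example, shows one plausible way to combine the per-part predictions; assemble_iuv is a hypothetical helper, not the authors' code.

import numpy as np

def assemble_iuv(part_scores, uv):
    """Combine per-part predictions into a single IUV image, the standard
    DensePose output format: channel 0 holds the body-part index
    (0 = background, 1..24 = parts), channels 1 and 2 hold the U and V
    texture coordinates of the winning part at each pixel.

    part_scores: (25, H, W) scores for background plus 24 parts
    uv:          (48, H, W) U and V maps, two channels per part
    """
    parts = part_scores.argmax(axis=0)              # (H, W) part index per pixel
    h, w = parts.shape
    iuv = np.zeros((3, h, w), dtype=np.float32)
    iuv[0] = parts
    for p in range(1, 25):
        mask = parts == p
        iuv[1][mask] = uv[2 * (p - 1)][mask]        # U of part p
        iuv[2][mask] = uv[2 * (p - 1) + 1][mask]    # V of part p
    return iuv

# Example, feeding in the detached outputs of the previous sketch:
# iuv = assemble_iuv(part_scores[0].detach().numpy(), uv[0].detach().numpy())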

The result is a virtual animation, shown on the computer display, that mimics the locations and actions of the people in the original scene. Their model, say the researchers, can estimate the dense pose of multiple subjects with performance comparable to image-based approaches, using Wi-Fi signals as the only input.

For more, see “DensePose From WiFi.”

