What’s more, they were able to multiplex the ultrasound beamforming to add haptic feedback within the volumetric display, so the object on display could be felt by a viewer reaching for it. Alternatively, driving the same transducers used for the haptic feedback, the researchers could encode most of the audible spectrum and steer audible sound towards the viewer. This “ultrasound demodulation” was achieved through upper-sideband amplitude modulation of the trap signals, which produces the audible sound.
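As a rough illustration of the idea, the following sketch shows what upper-sideband amplitude modulation of a 40 kHz carrier looks like in signal terms. The sample rate, audio tone, and duration are hypothetical choices for the demo; the paper's actual modulation operates on the acoustic trap parameters themselves, not on a digital waveform like this.

```python
import numpy as np
from scipy.signal import hilbert

fs = 192_000   # sample rate (Hz) -- assumed, just high enough for a 40 kHz carrier
fc = 40_000    # ultrasound carrier frequency (the arrays' 40 kHz drive)
fm = 2_000     # example audio tone to encode (hypothetical)
t = np.arange(0, 0.05, 1 / fs)

m = np.cos(2 * np.pi * fm * t)   # baseband audio signal
m_hat = np.imag(hilbert(m))      # Hilbert transform of the audio

# Upper-sideband AM: the modulation energy lands at fc + fm,
# while the lower sideband at fc - fm is suppressed.
usb = m * np.cos(2 * np.pi * fc * t) - m_hat * np.sin(2 * np.pi * fc * t)

# Inspect the spectrum: the peak should sit at fc + fm, not fc - fm.
spectrum = np.abs(np.fft.rfft(usb))
freqs = np.fft.rfftfreq(len(usb), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # ≈ 42000.0 Hz
```

Because all the energy sits above the carrier, nonlinear demodulation in air recovers the audio without a mirrored lower sideband wasting acoustic power.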
The Multimodal Acoustic Trap Display (MATD), as they describe it in the paper “A volumetric display for visual, tactile and audio presentation using acoustic trapping” published in Nature, was built using off-the-shelf components. It consists of two 16x16 arrays of ultrasound transducers, facing each other in a top-and-bottom configuration to define a working volume within which a lightweight expanded polystyrene particle (1mm in diameter) is driven by steering an ultrasound trap. High-intensity RGB LEDs complete the volumetric display, driven synchronously with the particle’s movements to create a “volumetric pixel” whose changing colour across the volume yields coloured objects through persistence of vision. For their demonstration, the researchers multiplexed the beam-forming capabilities of the ultrasound arrays (refreshed at 40kHz) to create on one hand the volumetric display and on the other hand the haptic effects (with duty cycles of 75% for levitation and 25% for tactile) or the audio content.
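The time-multiplexing scheme can be sketched as a simple frame scheduler. The repeating 3+1 block below is one hypothetical way to realise a 75%/25% split at a 40 kHz update rate; the paper does not specify this exact interleaving pattern.

```python
# Hypothetical frame scheduler: at a 40 kHz update rate, interleave
# levitation frames (75% duty cycle) with tactile frames (25%),
# here as repeating blocks of four updates (3 levitation + 1 tactile).
UPDATE_RATE_HZ = 40_000
PATTERN = ["levitation", "levitation", "levitation", "tactile"]

def frame_kind(n: int) -> str:
    """Return which content the n-th transducer update carries."""
    return PATTERN[n % len(PATTERN)]

# Over one second of updates, the duty cycles come out to 75% / 25%.
counts = {"levitation": 0, "tactile": 0}
for n in range(UPDATE_RATE_HZ):
    counts[frame_kind(n)] += 1
print(counts)  # {'levitation': 30000, 'tactile': 10000}
```

Interleaving at the full 40 kHz refresh keeps each non-levitation gap far shorter than the particle's mechanical response time, so the trap holds while the tactile (or audio) content is emitted.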