
Time-of-flight measurements open up user-interaction scenarios
STMicroelectronics recently unveiled an all-in-one module embedding both a wide-dynamic-range ambient light sensor and a ranging sensor, along with an infrared light emitter. The first member of ST’s FlightSense product family, the VL6180, uses direct time-of-flight (ToF) technology to measure the time the light takes to travel to the nearest object and reflect back to the sensor located right next to the emitter (see Figure 1). Because the speed of light in air is a well-known constant, the measured time can be reliably converted into distance regardless of the target object’s properties such as reflectance, unlike conventional amplitude-based optical proximity sensors.
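The time-to-distance conversion itself is simple: the light travels to the target and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name and constant are ours, not from the VL6180 datasheet):

```python
# Speed of light in air, m/s (slightly below the vacuum value).
C_AIR = 299_702_547.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Convert a measured round-trip time into a one-way distance in meters."""
    return C_AIR * round_trip_time_s / 2.0

# A target 10 cm away returns light after roughly 667 picoseconds:
t = 2 * 0.10 / C_AIR
print(f"{t * 1e12:.0f} ps -> {tof_distance_m(t) * 100:.1f} cm")
```

The picosecond-scale times involved are why a conventional photodiode is not fast enough, which motivates the SPAD detector described next.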

The fundamental sensing technology underlying this sensor is the single-photon avalanche diode (SPAD), which is integrated on a single chip along with everything except the light emitter. The SPAD must be reverse-biased beyond its breakdown voltage, which puts it in a very sensitive state called Geiger mode. When an incoming photon strikes the sensing area, it generates an electron-hole pair. The carriers are then accelerated by the high electric field and trigger a chain reaction, generating an avalanche current in a very short time. This very fast response, combined with extreme sensitivity, makes SPADs a perfect match for time-of-flight constraints and allows them to output two independent measurements: the amplitude of the light reflected back from a target, calculated by counting photons, and the distance of the target, based on the time-of-arrival of each detected photon.
Simply detecting a single photon that has traveled from the module to a target object and back to the SPAD detector is not enough to determine the distance, because the emitted pulse of light is not infinitely short. We use a very short optical pulse, essentially a stream of photons, whose arrival times follow a Poisson distribution. When the SPAD detector is triggered, it is not possible to know whether the detected event was due to a photon on the leading edge of the emitted pulse, or from the middle or end of the pulse. To complicate matters further, it is also not possible to know whether the event was caused by a photon emitted by the module or simply by a photon from background ambient lighting. To determine whether a photon is correlated with the emitter or is simply background noise, we need to repeat the optical pulse many times and build up a histogram that separates the signal from the noise.
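The histogram idea can be illustrated with a toy simulation (the bin count, pulse count, and detection probabilities below are invented for illustration, not VL6180 parameters): ambient photons arrive at uniformly random times and spread evenly across all bins, while photons correlated with the emitted pulse pile up in the bins near the true round-trip time, so the target appears as a peak above a flat noise floor.

```python
import random

random.seed(0)
BINS = 64          # time bins across the measurement window
N_PULSES = 5000    # number of repeated optical pulses
TRUE_BIN = 20      # bin corresponding to the target's round-trip time

hist = [0] * BINS
for _ in range(N_PULSES):
    # Ambient photon: uncorrelated, lands in a uniformly random bin.
    if random.random() < 0.3:
        hist[random.randrange(BINS)] += 1
    # Signal photon: correlated, lands at or next to the true bin.
    if random.random() < 0.2:
        hist[min(BINS - 1, max(0, TRUE_BIN + random.choice([-1, 0, 0, 1])))] += 1

# After many pulses the signal peak dominates the flat noise floor.
peak = max(range(BINS), key=lambda b: hist[b])
print("estimated bin:", peak)  # at or immediately next to TRUE_BIN
```

Averaging over thousands of pulses is what makes the measurement robust even when ambient light generates far more raw detections than the emitter does.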
The primary application intended for the VL6180 is as a replacement for existing proximity-detection technology, which is amplitude-based and cannot measure absolute distance. These proximity sensors are used in nearly all smartphones to detect the user’s head during a phone call, for example. Unfortunately, the amplitude of the reflected light varies not only with distance but also with the reflectance of the target, which can be as low as 3 percent for dark black hair. This leads to very ambiguous results, quite frustrating to some users: when the reflected light level is low, the amplitude-based proximity sensor may “think” the user’s head is far away when in fact it is very close but the user’s black hair is not reflecting enough light. As a result, the touchscreen is not disabled, and the user’s cheek may brush up against any number of buttons and functions (search for “face hang-up” and any smartphone brand on the Internet to find examples of frustrated users!). Smartphones equipped with the VL6180, on the other hand, will detect the user’s head irrespective of hair color, hats, eyeglass frames, and so on, and shut off the touchscreen to avoid unwanted touch interactions.
ST’s FlightSense time-of-flight technology makes it possible to measure the distance from the phone to a hand or other object, opening up new user-interaction scenarios that phone manufacturers and app developers can exploit. Even though the system is accurate (see Figure 2) and independent of the target object’s reflectance, the detector does need a certain number of photons in order to confirm the distance. If not enough photons are received back from the target, because it is too far away or its reflectance is too low, then no range will be reported. The net effect is that a high-reflectance target such as a human hand can be detected well beyond the 10cm spec (up to 25cm away), whereas worst-case low-reflectance targets such as black wool gloves top out around 10cm. One-dimensional (1D) gestures, for applications such as accurate volume control and the reliable automatic loudspeaker-mode switch demonstrated at Mobile World Congress 2013, can be implemented because of this robustness in detecting all kinds of targets and delivering absolute distance measurements.
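The reflectance-dependent maximum range can be sketched qualitatively with a simple photon-budget model: the returned photon rate scales with target reflectance and falls off roughly with the square of distance, so ranging stops once the rate drops below the confirmation threshold. The model, constant, and threshold below are our own illustrative assumptions, not datasheet figures, and the simple inverse-square form does not reproduce the exact 25cm/10cm numbers above.

```python
def max_range_cm(reflectance: float, k: float = 330.0, threshold: float = 1.0) -> float:
    """Distance (cm) at which the returned rate reflectance * k / d^2
    falls to the confirmation threshold. All parameters illustrative."""
    return (reflectance * k / threshold) ** 0.5

hand = max_range_cm(0.60)   # skin: fairly reflective
glove = max_range_cm(0.03)  # black wool: ~3% reflectance
print(f"hand ~{hand:.0f} cm, black glove ~{glove:.0f} cm")
```

The qualitative point survives any choice of constants: a reflective hand always ranges several times farther than a near-black target before the photon budget runs out.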
Gesture detection
With two independent outputs (amplitude and distance), it is now possible to remove the ambiguity between certain types of gestures. At its simplest, consider a single sensor and two gestures. When a user moves a hand sideways through the field of view of a conventional amplitude-based optical sensor, the signal varies from very low (as the hand begins to reflect light back to the sensor), to very high (as the hand passes over the middle of the sensor and light reflects back from the whole illuminated target), then back to very low as the hand exits the field of view. The exact same waveform is seen if a hand comes down vertically: low signal when it is far away, high signal when it gets close to the sensor, and back to low signal when the hand exits the field of view, either vertically or horizontally. The sameness of the sensor response makes these two gestures impossible to differentiate.
However, if we add distance data, the ambiguity disappears, as shown in Figure 3.
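A minimal sketch of the disambiguation logic (the function, threshold, and sample values are hypothetical): both gestures produce the same low-high-low amplitude waveform, but the distance channel tells them apart, because a sideways swipe keeps the hand at a roughly constant height while a vertical approach shows the distance dipping toward the sensor and rising again.

```python
def classify_gesture(distances_mm: list[int]) -> str:
    """Classify from distance samples taken while the amplitude was high.
    Threshold of 40 mm is an illustrative assumption."""
    spread = max(distances_mm) - min(distances_mm)
    # Mostly-flat distance -> the hand moved sideways at constant height.
    return "vertical tap" if spread > 40 else "sideways swipe"

print(classify_gesture([82, 80, 79, 81, 83]))    # flat distance profile
print(classify_gesture([150, 90, 40, 85, 145]))  # dips toward the sensor
```

An amplitude-only sensor sees identical waveforms in both cases; the distance samples are what make this two-line classifier possible.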

Building on this, we envisage a system with multiple ToF sensors spread around a screen or an interface, building a very low-resolution depth map of the scene in front of the device. A swipe and a flip could then be differentiated as shown in Figure 4. Even though both movements are in the same direction, a flip of the hand contains much more Z movement than a swipe; this difference cannot be detected by conventional optical sensors but is readily detected by a ToF sensor.
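With just two sensors, this multi-sensor idea can be sketched as a two-pixel depth map (the function, thresholds, and sample traces below are hypothetical): direction comes from which sensor sees the hand first, and swipe versus flip comes from how much the distance varies while the hand is in view.

```python
def classify(left_mm: list[int], right_mm: list[int], far: int = 200) -> str:
    """Two-sensor motion classifier. 'far' marks an empty field of view;
    the 60 mm Z-span threshold is an illustrative assumption."""
    # Direction: whichever sensor sees its closest sample first.
    first = "L->R" if left_mm.index(min(left_mm)) < right_mm.index(min(right_mm)) else "R->L"
    # Z movement: distance spread over all samples where the hand was in view.
    samples = [d for d in left_mm + right_mm if d < far]
    kind = "flip" if max(samples) - min(samples) > 60 else "swipe"
    return f"{kind} {first}"

# Swipe left-to-right at a roughly constant ~80 mm height:
print(classify([80, 78, 200, 200], [200, 200, 79, 81]))
# Flip in the same direction, with a large change in Z:
print(classify([70, 60, 200, 200], [200, 200, 150, 180]))
```

Scaling the same logic to more sensors yields a coarse depth map in which richer 3D gestures become separable while keeping the cost and power far below a camera-based solution.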


About the authors:
Marc Drader is principal technologist for imaging systems for the Imaging Division of STMicroelectronics.
Laurent Plaza is business development manager for the Imaging Division of STMicroelectronics.
