Power efficiency trade-offs key to wireless sensing success
As applications multiply for autonomous wireless sensor nodes, the trade-offs resulting from limited available power are coming sharply into focus. Until a breakthrough is made in battery design or energy scavenging, designers have some clear choices regarding sample rate, signal resolution and filtering, data storage and transmission, and even what’s to be sensed in order to hit their power and lifetime budgets. For wireless sensor nodes, the key is either to find more power or to be smarter about how to use it.
Some applications, such as the structural health monitoring of bridges, can tolerate a larger sensor node, with a battery sized for a five- to 10-year lifetime. Other applications require smaller form factors. Smart power usage, designed for the sensing application, is a key enabler of these smaller, cheaper nodes.
To maximize power efficiency, focus on when and what to sense. Identify the specific data that must be measured and configure your power parameters around those criteria. Strategically targeting the data capture will allow the sensor node to record only the data required. Try to avoid recording and then transmitting a lot of high-precision zeros.
In terms of when to sense, several schemes can be used depending on the application. A local sentry sensor—a low-power, low-performance sensor in each node—can watch for signals over a defined threshold. The sentry can trigger the high-performance, high-resolution sensors, which record and analyze data to capture the desired event, and then put the system back to sleep once the task is completed. The local sentry sensor then returns to signal-watch mode until the next event occurs, allowing the wireless sensor node system to maintain its measurement and analysis coverage while conserving power.
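A minimal C sketch of such a sentry loop follows, assuming hypothetical hardware hooks (sentry_read_mg, hires_capture, enter_sleep) that stand in for whatever the node's HAL actually provides; the threshold and burst size are illustrative, stubbed here so the sketch compiles and runs:

```c
#include <stdio.h>
#include <stdlib.h>

#define SENTRY_THRESHOLD_MG 50   /* wake when |accel| exceeds 50 mg (illustrative) */
#define CAPTURE_SAMPLES     256  /* size of the high-resolution burst (illustrative) */

/* Hypothetical hardware hooks, stubbed so the sketch is self-contained. */
static int  sentry_read_mg(void) { return rand() % 120 - 60; } /* low-power sentry sensor */
static void hires_capture(int n) { printf("captured %d hi-res samples\n", n); }
static void enter_sleep(void)    { /* drop MCU and radio into a low-power state */ }

int main(void) {
    /* Real firmware would loop forever; a few iterations suffice here. */
    for (int i = 0; i < 10; ++i) {
        int level = sentry_read_mg();          /* cheap poll by the sentry */
        if (abs(level) > SENTRY_THRESHOLD_MG)
            hires_capture(CAPTURE_SAMPLES);    /* boot the high-performance chain */
        enter_sleep();                         /* back to watch mode until the next event */
    }
    return 0;
}
```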
Low-power schemes
Many vendors have released tiny, low-power sensors suitable for sentry duty. Sensor fusion brings many sensor variables together in a single wireless node, allowing for redundancies such as accelerometers with different power/performance levels measuring along the same axis.
Another low-power scheme is to use a high-performance sensor in a simple threshold mode. A capacitance-based accelerometer can be used with a single op-amp charge detector. Motion of the sensor node will build up a charge to a certain threshold and can automatically trigger the high-performance electronics to boot up and capture at full resolution. Delay electronics can return the node to sleep mode once the event is complete.
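The same idea can be expressed as an interrupt-driven sketch: the charge detector's comparator output wakes the system, and a hold-off delay returns it to sleep. Names such as charge_detect_isr and HOLDOFF_MS are illustrative stand-ins, not a vendor API:

```c
#include <stdbool.h>
#include <stdio.h>

#define HOLDOFF_MS 500  /* "delay electronics" window before re-arming (illustrative) */

static volatile bool event_pending = false;

/* In hardware this would be wired to the charge detector's threshold output. */
static void charge_detect_isr(void) { event_pending = true; }

static void capture_full_resolution(void) { printf("full-resolution capture\n"); }
static void sleep_ms(int ms)              { (void)ms; /* low-power wait */ }

int main(void) {
    charge_detect_isr();                /* simulate one threshold crossing */
    while (event_pending) {
        event_pending = false;
        capture_full_resolution();      /* high-performance electronics boot and record */
        sleep_ms(HOLDOFF_MS);           /* hold off, then drop back into sleep mode */
    }
    return 0;
}
```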
Finally, a remote sentry scheme can enable power savings without significant performance trade-offs. In this “team” approach, the sensor nodes alternate running at full power. While the lead node is running, the other nodes are in sleep mode but check for alerts at specific intervals. When an alert occurs, all the nodes wake up, capture at full resolution, and then drop back into sleep mode to conserve power.
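One way the rotation might be structured, sketched in C; the node count, slot length and poll interval are illustrative parameters, not values from a real deployment:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_NODES    4
#define SLOT_SECONDS 60
#define POLL_SECONDS 5   /* sleeping nodes check for alerts at this interval */

static bool alert_broadcast = false;  /* set when any node detects an event */

static void run_full_power(int id) { printf("node %d leading at full power\n", id); }
static void light_sleep(int s)     { (void)s; /* low-power wait */ }
static void capture_all(int id)    { printf("node %d capturing at full resolution\n", id); }

int main(void) {
    int my_id = 2;                                 /* this node's slot in the rotation */
    for (int slot = 0; slot < NUM_NODES; ++slot) { /* one full rotation, for illustration */
        if (slot == my_id) {
            run_full_power(my_id);                 /* our turn as the lead node */
        } else {
            for (int t = 0; t < SLOT_SECONDS; t += POLL_SECONDS) {
                light_sleep(POLL_SECONDS);         /* mostly asleep... */
                if (alert_broadcast) {             /* ...but listening for the wake-up call */
                    capture_all(my_id);
                    break;
                }
            }
        }
    }
    return 0;
}
```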
In all of these schemes, fast system wakeup response times are an integral part of electronics design, particularly for applications requiring high performance, low power and long life cycles. The stability of calibrated sensor offsets and scale factors from turn-on to turn-on (ToT) becomes hugely important to the accuracy of data over long time scales. ToT stability requirements demand careful attention to sensor, packaging and system design.
Some sensor node apps, such as structural integrity monitoring, can tolerate a large battery form factor for extended deployment lifetime. Source: HP Labs.
Smarter sensor applications
Many current sensing applications do not require 24/7 high-resolution measurement and analysis. For example, structural health is typically monitored at long intervals; bridges are manually inspected every one to two years. A wireless sensor network to replace manual inspections does not need to provide updates every millisecond.
Given those parameters, designers can scale back to more reasonable intervals—once an hour, once a day, once a week—and add a sentry approach to extend the system’s power life while capturing the necessary data to alert on major unscheduled events, such as an earthquake or a ship collision.
Another power-saving area is to determine what data is most important to capture and transmit. Careful design at the outset lets nodes deliver narrow but high-res data captures, reducing the performance and power levels required to create good information. Two candidates for optimization are local analysis and qualitative sensing.
Determining your data processing approach can be challenging. For instance, in an unattended ground sensor application that is set up for security perimeter monitoring or simply to count passing cars, do you record the whole signal and spend the power to store and send the data to the cloud for analysis? Or do you execute local filtering and analysis to transmit “… car, car, truck, bike …” signals to the cloud without sending along the “zeros”?
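A sketch of the local-analysis option: classify each detection on the node and transmit a short label rather than the raw waveform. The energy feature and class boundaries below are invented purely for illustration:

```c
#include <stdio.h>

/* Classify a detection by its signal energy; the thresholds are hypothetical. */
static const char *classify(double energy) {
    if (energy < 1.0)  return "bike";
    if (energy < 10.0) return "car";
    return "truck";
}

static void radio_send(const char *label) { printf("tx: %s\n", label); }

int main(void) {
    /* Stand-ins for per-event signal energies computed on the node. */
    double detections[] = { 4.2, 0.6, 23.0, 5.1 };
    int n = sizeof detections / sizeof detections[0];

    for (int i = 0; i < n; ++i)
        radio_send(classify(detections[i]));  /* a few bytes instead of the raw signal */
    return 0;
}
```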
In the cloud example, storage resources and increased processing power, as well as the ability to correlate results from multiple sensors, can lead to greater awareness and archiving potential for long-term studies. However, one should not overlook the fact that each sensor node is itself a processing core.
Power spent in the distributed system to preprocess data saves power on transmission and on computation in the cloud. The cores available in the cloud, though individually more powerful, will likely be far fewer in number than those deployed in a large-scale wireless sensor application. Understanding latency and data usage can help you extract the maximum power lifetime from your sensor net deployment.
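A back-of-envelope calculation makes the trade concrete. Every figure below is an assumed round number (50 nJ per transmitted bit, 1 nJ per MCU operation), not a measurement:

```c
#include <stdio.h>

int main(void) {
    double e_tx_per_bit = 50e-9;    /* J per transmitted bit (assumed) */
    double e_op         = 1e-9;     /* J per MCU operation (assumed) */

    double raw_bits   = 256 * 16;   /* raw burst: 256 samples x 16 bits */
    double label_bits = 32;         /* a short "car"/"truck" style label */
    double filter_ops = 20000;      /* ops to classify the burst locally (assumed) */

    double send_raw   = raw_bits * e_tx_per_bit;
    double send_label = filter_ops * e_op + label_bits * e_tx_per_bit;

    printf("raw burst: %.1f uJ\nlocal classify + label: %.1f uJ\n",
           send_raw * 1e6, send_label * 1e6);
    return 0;
}
```

Under those assumptions, shipping the raw burst costs roughly ten times what classifying locally and sending a label does; the crossover point moves with the real radio and processor figures.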
Resolution requirements?
A final area for optimizing sensor power is to determine how much resolution you really need.
There will always be a trade-off between sensor performance and power consumption. In many cases, however, measuring something that is occurring is more valuable than being unaware, even if the data is qualitative. While we would all sense as many variables as possible in a node—as long as it didn’t affect cost, size and power budgets—the path to a truly all-purpose wireless sensor node may lie in being realistic about sensor performance requirements.
In Berkeley Systems’ After Dark screensaver, for example, the setting sliders were not marked in quantity but in quality. The Flying Toaster toast color was measured in 2-bit precision: light, medium, dark and burned. Now apply that to a humidity measurement example. Are millisecond resolution and 0.1 percent accuracy needed for all apps? In reality, a few bits with 1-minute resolution are likely enough for environmental awareness. Qualitatively, the measurement could be “underwater, Florida, California, dry.” That’s 2 bits and probably enough for many applications. Light sensing offers a similar scenario: Do you need to know the wavelength distribution, or simply whether the lighting intensity is extremely dark, dawn, noon or surface of the sun? Or consider temperature; 1 bit is enough to determine whether a food package is still in the refrigerated truck.
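That 2-bit humidity scale is easy to sketch in C; the bin edges are assumptions chosen only to make the idea concrete:

```c
#include <stdio.h>

static const char *BINS[4] = { "dry", "California", "Florida", "underwater" };

/* Map relative humidity (%) to a 2-bit code; the thresholds are assumptions. */
static unsigned quantize_rh(double rh) {
    if (rh < 20.0) return 0;
    if (rh < 50.0) return 1;
    if (rh < 90.0) return 2;
    return 3;
}

int main(void) {
    double readings[] = { 12.5, 43.0, 78.0, 99.5 };
    for (int i = 0; i < 4; ++i) {
        unsigned code = quantize_rh(readings[i]);
        printf("%5.1f%% RH -> code %u (%s)\n", readings[i], code, BINS[code]);
    }
    return 0;
}
```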
Even with low-performance sensors, low cost enables redundancy of deployment, and a higher quantity of sensors increases the quality of data. Since proximal sensors should all report similar readings, an anomalous data point can be dismissed as a malfunction when analysis determines it does not make sense. That significantly reduces false positives systemwide and improves the dependability of operations. Food shipments, for example, can be monitored with a simple GPS tag.
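A minimal sketch of that cross-check: compare each node's reading against the group median and flag the stragglers. The readings and tolerance are invented for illustration:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define TOLERANCE 3.0   /* max allowed deviation from the median (assumed units) */

static int cmp_double(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

int main(void) {
    double readings[] = { 21.1, 20.8, 21.4, 35.0, 21.0 };  /* one node misbehaving */
    int n = 5;

    double sorted[5];
    for (int i = 0; i < n; ++i) sorted[i] = readings[i];
    qsort(sorted, n, sizeof sorted[0], cmp_double);
    double median = sorted[n / 2];

    for (int i = 0; i < n; ++i) {
        int ok = fabs(readings[i] - median) <= TOLERANCE;
        printf("node %d: %.1f %s\n", i, readings[i], ok ? "ok" : "flagged");
    }
    return 0;
}
```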
Qualitative sensing opens the door to adding a combination of environmental sensors without affecting the power and cost of such a tag. Intelligent sensing allows sensor nodes to deliver high-resolution results while maximizing battery life, providing greater insight into what is being measured and enabling users to make real-time decisions based on that knowledge.
About the author
Peter Hartwell is a distinguished technologist and senior researcher for information and quantum systems at HP Laboratories, where he leads the MEMS team. He holds a bachelor’s degree in materials science and engineering from the University of Michigan and a PhD in electrical engineering from Cornell.