
Making fog computing sensors clearly reliable
While the terminology is new, the basic premise of fog computing is classic decentralization: some processing and storage functions are better performed locally than by sending data all the way from the sensor to the cloud and back again to an actuator. This cuts latency and reduces the amount of data that has to travel back and forth. Lower latency improves the user experience in consumer applications, while in industrial applications it can improve response times for critical system functions, saving money, or even lives.
This distributed approach improves security by reducing the amount of data transmitted from the edge to the cloud, which also lowers power consumption and network loading, enhancing overall quality of service (QoS). Fog computing also strives to pool local resources to make the most of what’s available at a given location, and adds data analytics, one of the fundamental elements of the IoT, to the mix.
The nuances of fog computing, in terms of the network architecture and protocols required to fully exploit its potential, are such that groups like the OpenFog Consortium have formed to define how it should best be done (Figure 1, above).
Figure 1. The OpenFog Consortium is looking to determine the best architectural and programming approaches to ensure optimum distribution of functionality and intelligence from sensors to the cloud, and back. (Source: OpenFog Consortium)
Members of the consortium to date include Cisco, Intel, ARM, Dell, Microsoft, Toshiba, RTI, and Princeton University, and it is eager to harmonize with other groups including the Industrial Internet Consortium (IIC), ETSI-MEC (Mobile Edge Computing), the Open Connectivity Foundation (OCF), and OpenNFV. The consortium has already put out a white paper that walks through its current thinking (you have to register to download it).
Reliable sensors for fog computing
As fog computing rolls in, the onus is on designers to figure out how much intelligence should sit at each node of the system for optimal performance. This implies that sensors will need to become more intelligent, with some level of built-in processing, storage, and communications capability. This has been coming for some time, but it seems to be reaching a tipping point, becoming a necessary option from sensor providers, though the usual cost, power, and footprint tradeoffs apply.
While MEMS sensors have been a boon to designers with regard to small size and functional integration, ongoing integration to meet the smart-sensor needs of fog computing naturally raises the question of reliability. To date, the integration of digital functions on MEMS sensors has enabled bidirectional communication, self-test, and the implementation of compensation algorithms (Figure 2 and Reference 1).
Figure 2. Increasing levels of digital integration, from basic analogue signal conditioning (A) through on-board MCUs (B) to local memory and ADCs (C), have helped make MEMS sensors more capable of implementing self-test and active compensation routines, but real-time reliability monitoring remains elusive.
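To make the compensation idea concrete, the sketch below shows the kind of first-order correction and pass/fail self-test an on-sensor MCU can run. The coefficients, thresholds, and function names are illustrative assumptions, not taken from any particular device.

```c
/*
 * Minimal sketch of an on-sensor compensation and self-test routine.
 * Coefficients and names are hypothetical; real devices use
 * factory-trimmed calibration values.
 */
#define TEMP_REF_C      25.0f   /* reference temperature for calibration  */
#define OFFSET_COEFF    0.02f   /* hypothetical offset drift per degree C */
#define GAIN_COEFF      0.001f  /* hypothetical gain drift per degree C   */

/* Correct a raw reading for first-order temperature drift. */
float compensate_reading(float raw, float temp_c)
{
    float dt     = temp_c - TEMP_REF_C;
    float offset = OFFSET_COEFF * dt;          /* offset term */
    float gain   = 1.0f + (GAIN_COEFF * dt);   /* gain term   */
    return (raw - offset) / gain;
}

/* Crude self-test: with a known stimulus applied, the compensated
 * output should fall inside an expected window. Returns 1 on pass. */
int self_test(float raw, float temp_c, float expected, float tolerance)
{
    float err = compensate_reading(raw, temp_c) - expected;
    return (err > -tolerance) && (err < tolerance);
}
```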
Such features are critical if MEMS sensors are to be trusted long term for monitoring electrical energy distribution, medical system functions, and industrial system status and processes. These applications are critical enough that researchers at the Universidad Veracruzana (Xalapa, Mexico) have looked into alternatives to reliability assurance methodologies that depend on generic failure rates for reliability prediction. These methods, as the researchers point out, lack realism in their ability to predict reliability across operational environments ranging from the Arctic to the tropics.
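For context, handbook-style prediction of this kind typically assumes a constant failure rate λ, so reliability decays exponentially, R(t) = e^(−λt), with MTBF = 1/λ. The snippet below illustrates that model with made-up numbers; the same λ is applied regardless of where the sensor actually operates, which is exactly the lack of realism the researchers point to.

```c
/*
 * Illustration of the constant-failure-rate (exponential) model that
 * handbook-style reliability prediction relies on. The failure rate is
 * a made-up example, not data from any real sensor.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double lambda = 2.0e-6;        /* hypothetical failures per hour   */
    double mtbf   = 1.0 / lambda;  /* MTBF = 1/lambda = 500,000 hours  */

    /* R(t) = exp(-lambda * t): probability of surviving to time t. */
    for (double t = 0.0; t <= 100000.0; t += 25000.0) {
        printf("t = %6.0f h  R(t) = %.4f\n", t, exp(-lambda * t));
    }
    printf("Predicted MTBF: %.0f hours\n", mtbf);
    return 0;
}
```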
As we tumble head first toward fog computing with ubiquitous smart sensors, ensuring the reliability of the data coming from these sensors becomes increasingly important. At the same time, the deployment of fog computing principles means that the communications infrastructure is being put in place to ensure better communication between nodes. These two factors make the university’s development of a real-time sensor failure analysis methodology even more interesting and applicable to the new sensing and networking paradigm.
In the proposed design, the team used a low-power 8-bit PIC18F4550 MCU, a 10-bit analogue-to-digital converter (ADC), a Texas Instruments INA333 instrumentation amplifier, and an HC-05 Bluetooth module to monitor sensor health, expressed as mean time between failures (MTBF), and communicate it to a smartphone (Figure 3). Failures could be something as simple as a dropped communications link.
Figure 3. The proposed methodology for real-time sensor monitoring removes the vagaries of sensor reliability prediction to make critical IoT MEMS sensor data more reliable over the long term.
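As an example of how simple such a failure can be to catch, a dropped link can be detected with a heartbeat timeout. The sketch below is an assumption about how that might be done, not firmware from the paper; the millis() tick and the timeout value are hypothetical.

```c
/*
 * Hypothetical link-drop detector: if no heartbeat arrives within the
 * timeout window, the link is considered down. millis() and the window
 * are assumptions, not details from the paper.
 */
#include <stdint.h>

#define LINK_TIMEOUT_MS  5000u           /* assumed heartbeat window       */

extern uint32_t millis(void);            /* free-running ms tick (assumed) */

static uint32_t last_heartbeat_ms;

/* Call whenever a packet or heartbeat arrives over the link. */
void link_heartbeat_received(void)
{
    last_heartbeat_ms = millis();
}

/* Poll periodically: returns 1 if the link has gone quiet too long. */
int link_is_down(void)
{
    return (millis() - last_heartbeat_ms) > LINK_TIMEOUT_MS;
}
```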
The key here is that the MTBF for each sensor is stored locally in non-volatile memory, and as the sensor ages its reliability is continually recalculated and updated.
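The paper does not publish its firmware, but a minimal sketch of the bookkeeping it describes, counting operating hours and failure events in non-volatile memory, recalculating MTBF as the device ages, and reporting it over the UART that the HC-05 sits on, might look like the following. The EEPROM and UART helpers and all names here are placeholders, not the authors' code.

```c
/*
 * Hypothetical sketch of the MTBF bookkeeping described above. Operating
 * time and failure counts live in non-volatile memory, MTBF is
 * recalculated as the sensor ages, and the result is sent out over the
 * UART connected to the HC-05 Bluetooth module. The EEPROM and UART
 * helpers are placeholders for the target's actual drivers.
 */
#include <stdint.h>
#include <stdio.h>

/* Placeholder low-level drivers (PIC EEPROM and UART access, assumed). */
extern uint32_t eeprom_read_u32(uint16_t addr);
extern void     eeprom_write_u32(uint16_t addr, uint32_t value);
extern void     uart_send_string(const char *s);

#define ADDR_RUN_HOURS  0x00   /* accumulated operating hours  */
#define ADDR_FAILURES   0x04   /* accumulated failure events   */

/* Record one more hour of operation (called from a timer tick). */
void log_operating_hour(void)
{
    eeprom_write_u32(ADDR_RUN_HOURS, eeprom_read_u32(ADDR_RUN_HOURS) + 1);
}

/* Record a failure event, e.g. when the link monitor above reports a drop. */
void log_failure_event(void)
{
    eeprom_write_u32(ADDR_FAILURES, eeprom_read_u32(ADDR_FAILURES) + 1);
}

/* Recalculate MTBF from the stored counters and report it over Bluetooth. */
void report_mtbf(void)
{
    uint32_t hours    = eeprom_read_u32(ADDR_RUN_HOURS);
    uint32_t failures = eeprom_read_u32(ADDR_FAILURES);
    char msg[48];

    if (failures == 0) {
        snprintf(msg, sizeof msg, "MTBF: no failures in %lu h\r\n",
                 (unsigned long)hours);
    } else {
        snprintf(msg, sizeof msg, "MTBF: %lu h\r\n",
                 (unsigned long)(hours / failures));
    }
    uart_send_string(msg);   /* HC-05 forwards this to the paired phone */
}
```

In practice the counters would be written far less often than every event to respect EEPROM endurance, but the principle is the same: the MTBF figure comes from the device's own operating history rather than a generic handbook value.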
Adding more smarts to sensors is good, but as we become more reliant upon those sensors, an improved awareness of sensor (and system) status provides the opportunity to ensure that the data we feed into our fog computing is itself reliable.
Reference
1. Analysis of the development of smart sensors based on MEMS devices and smart sensor platform proposal, Universidad Veracruzana (via IEEE Xplore)
