Simulating an IoT system for home automation
However, adding a bit of connectivity to those basic control systems would give you a much more interesting system solution.
Your customers have told you that it would be great to have a central dashboard showing the total power consumption of all AC units in their building. They would also like to tweak the set temperatures of individual hotel rooms, saving power with just slight adjustments.
Also, you see some new business models emerging where you could sell software updates to the AC units and the dashboard unit. Existing hardware could be updated with new smarter algorithms without needing a service technician to go out and replace flash ROMs or memory cards. Customers could subscribe to updates, and you could have the AC units report wear so that units can be serviced before they fail rather than after. The sky’s the limit!
The overall system design ends up being like this:
At the top, we have the electronics unit attached to the actual AC mechanics and the control panel manipulated by the hotel guest. The electronics unit contains not just the old AC controller board, but also an add-on wireless unit that turns the old AC controllers into network-connected IoT nodes.
A wireless unit is also found on the gateways or concentrators, boxes that tie the wireless network to a standard wired or wireless Ethernet network. The wireless network is typically a mesh network that self-organizes and makes sure that signals can get from any AC unit to its designated gateway, even if the unit is not in direct radio contact with the gateway.
The system rapidly becomes big. If we put a gateway on each floor of a building, we easily have ten or more of them, with each gateway in charge of ten or more AC units. The gateways in turn connect to a central server, where the actual business logic and value-added services are implemented. Finally, users interact with the system using a web browser on a laptop or tablet. In this way, you can control the temperature in a room from a tablet, but it sure is a long and winding path.
The very important question is, just how do you test the system and the software in a realistic environment?
The traditional AC control out on the edge is suddenly the easy part. The system complexity has increased many times over, as we have added several separate software stacks.
Each software stack might be based on different operating systems and contains third-party software from a number of sources. You need to test the network of systems, the hierarchy of wired and wireless networks, and how the gateways and servers deal with large numbers of AC units.
What you are looking for are the behaviors that emerge as the nodes are networked together and their software grows beyond the basic local function. As is well known from both philosophy and practical experience, emergent behaviors cannot really be predicted from the individual nodes.
Building a building-scale physical test system is a challenge in itself. Creating a full-size installation just for testing is neither practical nor realistic – buying a hotel and fitting test units in every room is simply not going to happen.
Instead, simulation has to be used to enable exploration and testing of software behavior as the system is scaled up. For automated testing and continuous integration of daily or hourly builds of the system software, simulation is absolutely necessary – there is simply no way to automate tests like these on a physical distributed system. So, how do we enable that?
The key is to use virtual platform technology to build models of the hardware in all the nodes, and actually run the real software stacks on the virtual platforms. In this way, we will be testing the final code, not some abstraction of it.
This is critical for agile development practices and continuous integration, since what you’ll ultimately want is to have tested true code that could in principle be shipped at any time. The software complexity and behavior comes from all levels of the software stack, and the best way to find out what is going on is to put it all together as early and as often as possible. Virtual platform technology enables just that, since you do not need hardware to run tests.
To make the tests interesting, you also need to simulate the AC units themselves, so that the control software has something to control and can report realistic power consumption numbers and room temperature changes back to the server. Sometimes such models are purpose-written just to drive software testing, but quite often physics models already exist.
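As a rough illustration of what such a purpose-written model can look like, here is a minimal sketch of a room with an on/off AC unit. All names and constants (thermal coefficients, power figures, temperatures) are invented for illustration and are not taken from any real product or physics library.

```python
class SimulatedRoom:
    """Toy thermal model of a hotel room with one AC unit.

    All constants (leak rate, cooling rate, power draw) are
    illustrative placeholders, not measured values.
    """

    def __init__(self, setpoint_c=22.0, outdoor_c=30.0):
        self.temp_c = outdoor_c      # room starts at outdoor temperature
        self.setpoint_c = setpoint_c
        self.outdoor_c = outdoor_c
        self.power_w = 0.0           # instantaneous AC power draw

    def step(self, dt_s=60.0):
        """Advance the model by dt_s seconds."""
        # Heat leaking in through the walls, proportional to the
        # indoor/outdoor temperature difference
        leak = 0.002 * (self.outdoor_c - self.temp_c)
        # Simple on/off control: cool at a fixed rate above setpoint
        if self.temp_c > self.setpoint_c:
            cooling = 0.02
            self.power_w = 1200.0    # compressor running
        else:
            cooling = 0.0
            self.power_w = 10.0      # standby electronics only
        self.temp_c += (leak - cooling) * dt_s

room = SimulatedRoom()
for _ in range(120):                 # two simulated hours
    room.step()
print(round(room.temp_c, 1), room.power_w)
```

Even a model this crude gives the control software something plausible to react to: the room temperature settles into an oscillation around the setpoint, and the power draw toggles between compressor-on and standby values that the server-side dashboard can aggregate.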
A simulation setup would look something like what is shown below. The innermost system is repeated 10 times for each instance of the outer system, and the outer system is repeated 10 times itself. All connected to one real server. Note that we run the real software stacks throughout the system, but on virtual platform models.
The setup shown above is just one possibility. Since a simulator is programmed, changing the setup is very easy.
The number of AC units per gateway can be changed with a few keystrokes and the topology of the wireless network can be varied to simulate building layouts that are square or elongated. For example, showing just a single wireless network:
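In script form, "a few keystrokes" might look like the following sketch. The function and parameter names here are hypothetical, standing in for whatever configuration API a real simulator provides.

```python
# Hypothetical configuration script for a simulated building; the
# function and field names are invented for illustration, not a
# real simulator API.

def build_system(num_gateways=10, acs_per_gateway=10, mesh="grid"):
    """Return a description of the simulated network topology."""
    system = {"server": "central-server", "gateways": []}
    for g in range(num_gateways):
        gateway = {
            "name": f"gw{g}",
            "mesh_layout": mesh,   # "grid" for square floors,
                                   # "chain" for elongated buildings
            "ac_units": [f"gw{g}-ac{a}" for a in range(acs_per_gateway)],
        }
        system["gateways"].append(gateway)
    return system

# A 10x10 hotel, then a long narrow building, changing only arguments
hotel = build_system()
warehouse = build_system(num_gateways=2, acs_per_gateway=40, mesh="chain")
print(len(hotel["gateways"]), len(warehouse["gateways"][0]["ac_units"]))  # → 10 40
```

The point is that the topology is data, not hardware: scaling from ten units to four hundred, or reshaping the mesh from square to elongated, is a parameter change rather than a rewiring job.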
In this way, the system functionality can be exercised under a wide variety of circumstances, without having to reconfigure a physical test system. It is possible to make networks fail entirely, have gateways crash, and check that redundancy kicks in as it is supposed to. Crazy values can be fed to the AC control units in order to check how they report exceptional conditions.
It is possible to test the robustness of over-the-network software updates to the AC units, for example, by simulating network connections that are unreliable or slow. The variety of tests that can be programmed into a simulator is boundless!
Obviously, we cannot rely entirely on the simulation to test the system. In the end, the system as built has to be tested in the physical world – as far as that is feasible.
Results from physical experiments should be used to update and tune the simulation, to make it more realistic and to add models for failures observed in the physical world. If simulation has shown that the system design and software work across system sizes and configurations, it is possible to get away with smaller physical test labs that focus on particular setups.
In the end, the simulator becomes an evolving test and design tool that supports development all the way to deployment of a system and beyond. After a system is deployed, simulation can be used to model a particular customer's setup, allowing testing and error analysis to be performed in the lab rather than on-site.
New generations of software can be tested and developed on the simulation of the existing system, and new generations of hardware can be made available early and in volume to speed the development of next-generation systems. Ultimately, using simulation technology is a powerful way to unlock new possibilities and dramatically transform the development process.
About the author:
Jakob Engblom is Product Line Manager, System Simulation at Wind River – www.windriver.com