
Biology, deceit & security in the Internet of Things

Feature articles
By eeNews Europe



Over the last several months, Jim Hogan and I have been kicking around ideas with regard to the issue of security in the Internet of Things (IoT). This all began with an off-hand comment I made at a Hogan panel at DAC. The idea is that some non-traditional approaches to security at the system-level — particularly based on biological analogies — should become relevant.

This topic surfaces from time to time (there was even a blog about it last year in EETimes), but rarely seems to go very deep. We thought it was time to drop down a level and to delve into why this should be relevant to the IoT and how the mechanics might work. We hope you find it interesting.

Alternative security strategies
Security techniques in the Internet have largely drawn inspiration from physical security — keys, firewalls, trusted zones, and more. However, there are other possible sources of inspiration, biology being an obvious example.

We routinely talk of viruses, but that analogy extends only to the concept of a malicious attacker and the spread of infection, not so much to methods of defense. This is unfortunate, because the ability of living organisms to defend against infection should be just as much a source of inspiration. After all, life has evolved some pretty sophisticated defenses against pathogens over billions of years.

Biological analogies for security in cyberspace have been investigated in a number of papers (e.g., "The Biological Analogy and the Future of Information Security"), but they have not seen wide-scale adoption. Furthermore, these methods are all rooted in what one might consider the "classical" Internet — workstations, laptops, even tablets and smartphones connected through wired and wireless channels to the Internet at large.

We argue here that new challenges suggested by the burgeoning IoT may re-awaken interest in biological strategies for defense.

First, some definition and motivation. The IoT is a forecasted extension of our current Internet to connect not just computers and communication devices as we currently understand them, but also devices we wear (jewelry and clothing), medical devices implanted in our bodies, intelligent appliances in our homes, sensors attached to points of stress on bridges, buildings, airplanes, and ships, along with sensors and actuators in roadways, on the power distribution grid, and in many more applications in factories, department stores, malls, and elsewhere.

This is not just connecting for the sake of connecting — all of this technology can enhance our health and safety, optimize our use of resources, and further improve our quality of life, but only if IoT devices can connect to the Internet so we can manage and automate control and monitoring of this widely-distributed network.

IoT Examples (source: Opinno.com)

The number of devices and sensors (the "edge-nodes") required to support this (hopefully) utopian vision is no longer scaled by the human population, but rather by the number of "things" to be interconnected. Some estimates place the number of likely edge-nodes as high as one trillion. To put that number in perspective, such a system could scale up the number of devices connected to the Internet today by a factor of one thousand. We are now at the early stages of this adoption curve.

One trillion is an interesting scale — one at which we arguably do not have a lot of engineering experience. However, it is a common scale in biology; a newborn child, as just one example, contains around a trillion cells and is surprisingly well-defended against infectious attacks through a sophisticated immune system for detection, counter-attack, and isolation; also through redundancy, which enables recovery from partial losses resulting from an attack.

While we should be careful not to over-stretch this analogy, perhaps we have something to learn from how nature has fashioned such capable systems. More particularly, biology can help us think in a different way about defense — at the system level — when much of our security design today is focused at the unit-level. Deception, which we’ll touch on later, is another system-level strategy.

Throughout the remainder of this article, what we will suggest is intended to augment — not replace — existing strategies, which will continue to be essential and which must continue to evolve rapidly.

Challenges unique to the IoT
Why is the IoT not just a larger version of the cyber-universe we already know? Why do we even need to consider new approaches?

First, the IoT has an unusually large attack surface. In the world of computer security, an attack surface is the sum of the points at which an attack can be launched. Every one of those trillion edge-nodes — whether a strain sensor, a pacemaker, or a refrigerator — represents a potential point of attack through wireless snooping, subversion, or outright replacement by a corrupt device. The CIA, no less, has raised concerns about our ability to defend against attacks in the IoT. In the classical Internet, we have built defenses using firewalls and anti-viral screens, but these are cumbersome methods to deploy around edge-nodes that will be ubiquitous, widely distributed, easily accessible, and often difficult to physically monitor or protect.

Another concern is cost. To reach these levels of deployment, edge-nodes will need to be very cheap, at least on average. An IoT at this scale will not be economically viable at unit prices we see today for consumer devices. We need to think of $1 per unit, but — at these rates — systems manufacturers will have slim margins and little room to add complex security solutions.

If prices could be subsidized by carriers, as phone prices have been, this problem could be mitigated, but many applications have no consumer in the middle to tap for long-term service contracts. Even consumer applications will be bounded by what the consumer can afford. Smartphones and tablets have raised our perception of affordability somewhat, but when it comes to buying yet more devices, our pockets are not arbitrarily deep.

Power consumption and battery life (where applicable) in various electronic applications. Many IoT applications will sit in the lower right-hand corner of this chart.

Finally, the power consumed by IoT devices is a major constraint. Many edge nodes will need to operate for years on a single battery charge. Medical implants and remote strain sensors are clear examples. Power significantly limits what software can run on these nodes. Any known form of anti-virus software is very compute-intensive and guzzles power. It is difficult to see this style of defense ever being viable on power-sipping edge nodes. One common counter-argument is to move checking to a less power-constrained host — the cloud or a gateway node for example. But checking cannot be continuous for the same reasons and — in the meantime — the edge-node is exposed to subversion in all manner of ways.

These new challenges seem ripe for new solutions. Edge-nodes in particular are very exposed and need cost-effective solutions for defense and local solutions to quickly contain attacks. Biological and deceit-based defenses are well worth considering in this context.

Strength through diversity
One important defense in biological systems is diversity. Agricultural experts now recognize that monoculture farming, while economically efficient, has an exploitable weakness. If a pest or pathogen can successfully attack any part of a crop, it can propagate quickly throughout the crop, resulting in catastrophic failure. A notorious example of monoculture gone wrong was the Irish potato famine in the mid-nineteenth century. Attack by a single pathogen — Phytophthora infestans — caused a general failure of the potato crop, which lacked genetic variability.

In diversified farming, even if a pathogen successfully attacks one crop, it is less likely that it will succeed in other crops. There are arguments about the best way to manage this risk while retaining economies of scale, but there is no question that diversity results in systems that are more robust to attacks.

In the case of the IoT, there is an interesting parallel. Since we expect the market to be large, we might expect to see healthy competition and growth of a very heterogeneous system. However, this diversity may be more apparent than real. The core compute engine underlying many of these systems is likely to converge on a limited number of suppliers (or possibly just one). The wireless interface will likely also converge quickly around a small number of suppliers. Similarly, the trend to open-source software (Linux, Java, and Android, for example) also reduces variability. Thus, we could see the same monoculture risks as in agriculture unless these suppliers take additional steps to re-impose diversity.

Methods to add diversity, especially to an underlying monoculture, are still evolving. Some automated methods include address space randomization, randomizing source code by adding dummy code, and stack layout randomization. Each of these increases the difficulty of an attack breaking into code, jumping to malicious routines, or launching new forms of attack. These methods, to these authors at least, seem especially interesting for edge node defenses given their potentially low cost.
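
To make this concrete, here is a toy Python sketch, entirely our own illustration rather than any existing tool, of how a build step might diversify otherwise identical firmware images: function order is shuffled and random dummy padding is inserted, so each device gets a unique layout. The function names and byte strings are hypothetical.

```python
import random

def diversify_image(functions, seed):
    """Toy illustration of code-layout diversification (not a real ASLR tool).

    'functions' is a list of (name, code_bytes) pairs. We shuffle their order
    and insert random-length padding between them, so every device image has
    a different layout even though its behavior is identical."""
    rng = random.Random(seed)
    layout = list(functions)
    rng.shuffle(layout)                          # randomize function order
    image, symbols = bytearray(), {}
    for name, code in layout:
        image += b"\x90" * rng.randint(0, 64)    # dummy padding between functions
        symbols[name] = len(image)               # record the randomized offset
        image += code
    return bytes(image), symbols

# Two devices built from the same (hypothetical) source get different layouts:
funcs = [("read_sensor", b"\x01\x02"), ("send_report", b"\x03\x04")]
print(diversify_image(funcs, seed=1)[1])
print(diversify_image(funcs, seed=2)[1])
```

In practice this kind of randomization is done by the compiler, linker, or loader; the point of the sketch is only that diversity can be manufactured cheaply at build or boot time.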

Another rather obvious example of enabling diversity is to make sure we use the encryption capabilities built into IoT hardware and to use different encryption keys throughout the network. This way, even if one key is compromised, security in the rest of the network will remain intact. Perhaps all of this should be self-evident, but setting up different keys for each node carries a cost. Given our track record of trading off unappreciated risk for convenience, we should be careful that this does not become yet another example.
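
As a minimal sketch of per-node keying, assuming a master secret and a unique node identifier are available at provisioning time, one could derive a distinct key for each node. HMAC-SHA256 is used below as a stand-in for a proper key-derivation function such as HKDF; the identifiers and secret are hypothetical.

```python
import hmac, hashlib

def derive_node_key(master_secret: bytes, node_id: str) -> bytes:
    """Derive a distinct key for each edge-node from a master secret.

    HMAC-SHA256 serves here as a simple key-derivation function; a real
    deployment would use a standard KDF and, ideally, keys provisioned in
    hardware. The point is only that no two nodes share the same key."""
    return hmac.new(master_secret, node_id.encode(), hashlib.sha256).digest()

master = b"example-master-secret"   # placeholder; never hard-code a real secret
key_a = derive_node_key(master, "sensor-0001")
key_b = derive_node_key(master, "sensor-0002")
assert key_a != key_b               # compromising one node's key does not expose the others
```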

Limitations of anti-viral defenses

The most widely known type of defense today in the system context is anti-viral. These techniques depend almost entirely on signature recognition. The biological analogue would be first to detect a disease (defining the signature), then build an anti-virus, and then add that anti-virus to the list of all the others you take in addition to the updates you constantly require for new mutations of known diseases.

There are already indications that this approach is not scaling, even in the traditional IT world. The problem is that anti-viral defenses are not adaptive, so each pathogen must be uniquely identified by experts in a central location, and then the anti-virus must be built and distributed. This can take days, by which time significant damage may already be done. Worse yet, viruses are now appearing with the ability to mutate in the wild, potentially defeating any attempt at signature recognition. And we know that signature-based anti-viral detection is power-hungry and therefore inappropriate for IoT edge-nodes. Consequently, while the anti-viral approach continues to be a necessary layer of a whole-system defense, it is clearly insufficient on its own.

Immunological defense
One interesting alternative is immunological defense. This may be the closest parallel between biology and cyber-system defense, since these methods were quite intentionally modeled on biological immunology. These approaches aim to overcome the fundamental scaling limits of signature-based recognition by relying instead on behavior recognition, which need not examine the level of pathogen detail required to match signatures.

One example looks at detecting network intrusion and mimics the immune process quite closely (see "TAT-NIDS: An Immune-Based Anomaly Detection Architecture for Network Intrusion Detection"). Immune cells tune themselves to distinguish between "self" (normal behavior) and "non-self" (attackers) by looking at behavior. For example, behavior may be characterized by a requestor IP-address and a target IP-address, or it might be triggered by some signature of an encryption key appearing in a communication channel. Some of these patterns will be learned as "self" behavior — the normal operation of the system. Anything outside this self-behavior is considered to signal an intrusion attempt.
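
A minimal sketch of this self/non-self idea, deliberately simplified and not the TAT-NIDS implementation itself, might learn the (source, destination) pairs seen during a training window and then flag anything outside that set:

```python
class SelfNonSelfDetector:
    """Simplified immune-style anomaly detector (illustrative only).

    During a learning window the node records the (source, destination)
    pairs it normally sees; these become its "self" set. Afterwards, any
    pattern outside that set is flagged as "non-self". Note the cost: a
    set lookup, rather than a scan over thousands of attack signatures."""

    def __init__(self):
        self.self_set = set()
        self.learning = True

    def observe(self, src_ip, dst_ip):
        pattern = (src_ip, dst_ip)
        if self.learning:
            self.self_set.add(pattern)            # learn normal behavior
            return False
        return pattern not in self.self_set       # anything unfamiliar is suspect

detector = SelfNonSelfDetector()
detector.observe("10.0.0.5", "10.0.0.9")          # traffic seen during normal operation
detector.learning = False
print(detector.observe("10.0.0.5", "10.0.0.9"))      # False: recognized as self
print(detector.observe("198.51.100.7", "10.0.0.9"))  # True: non-self, raise an alarm
```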

What is especially interesting about this general approach is that it is local, it is potentially quite inexpensive (behavior detection is typically much simpler than matching many signatures), and it is automatically able to detect new "non-self" threats (anything which is not identified as self is automatically a potential threat). Thus, it can respond quickly at minimum cost and it can autonomously detect new threats (where a signature-based method could not).

The immune response system

We are concerned in this section with detection/recognition. This stage detects antigens (by-products of a pathogen). Subsequently, through several further steps, signaling leads to the generation of antibodies for broader defense.

There is, unfortunately, a downside to behavioral detection — by the time a problem has been detected, the node may already be infected (as in the case where an encryption key is detected outside the area in which it should be accessed). Similar problems occur in cell biology and suggest a system-level view of defense, as discussed in the next section.

Responding to an attack
Once a threat has been detected, it must be removed or isolated. Autonomous removal may not be a good idea — think of the complexity of removing a virus from a PC and the subsequent uncertainty that the problem has really been fixed. Any such approach seems completely unmanageable at IoT edge-nodes. Cleanup probably requires human intervention, but we need an immediate mechanism to stop the spread of infection.

Again we can learn from biology. A damaging attack triggers cell death through either necrosis or apoptosis (see Molecular Biology of the Cell, 4th edition). Necrosis is uncontrolled death — the successful completion of an attack. Cells swell and burst, spilling their contents and further spreading infection. Apoptosis (programmed cell death) is controlled death — a successful defense against infection. This is a sequence of steps that slices the complete cell up, along with invaders, into small components. The details behind this disassembly are fascinating, but what is important here is the triggering process. This can be intrinsic, triggered internally by mitochondria detecting damage to the cell, or extrinsic, triggered externally by T-cells detecting antigens and then creating proteins that bind to the "death receptor" protein on the surface of the infected cell, thereby stimulating the apoptosis pathway inside the cell. Allowing for both solutions provides a double defense, accepting that the internal defense may be compromised before it can fully take effect.


We can follow a similar process in the IoT. First we must start with an alarm: "I think I have been compromised" (this signal has no direct correspondence in cell biology — a reminder that analogies are useful for inspiration, but not for blueprints). This can be used both to request service in the field and to signal danger to neighbors. The compromised node can then attempt to isolate itself or power down. This intrinsic trigger is a form of biological altruism: "I may not be able to save myself, but at least I can minimize impact to the rest of the system through isolation." At the same time, neighboring nodes, alerted by the alarm, can act to isolate the infected node, providing the equivalent to extrinsic triggering of cell death.
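
The sketch below, with hypothetical class and message names, shows how such an alarm-and-isolate protocol could look, with the intrinsic trigger (self-isolation) and the extrinsic trigger (neighbor quarantine) side by side:

```python
class Node:
    """Illustrative sketch of the intrinsic/extrinsic response described above."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbors = []        # peer Node objects
        self.quarantined = set()   # peers this node refuses to talk to
        self.isolated = False      # has this node shut itself off?

    def on_compromise_detected(self):
        for peer in self.neighbors:          # 1. raise the alarm to neighbors
            peer.on_alarm(self.node_id)
        self.isolated = True                 # 2. intrinsic trigger: self-isolate

    def on_alarm(self, suspect_id):
        self.quarantined.add(suspect_id)     # extrinsic trigger: quarantine the suspect

    def accept_message(self, sender_id):
        return not self.isolated and sender_id not in self.quarantined

a, b = Node("edge-A"), Node("edge-B")
a.neighbors = [b]
a.on_compromise_detected()
print(b.accept_message("edge-A"))   # False: even if A never powers down, B ignores it
```

The double defense is visible in the last line: even if the compromised node fails to isolate itself, its neighbors have already stopped listening to it.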

What happens subsequently at the infected node depends on safety and criticality considerations. For non-critical applications, a simple solution is to power down; in effect, to die, as would a compromised cell. This at least halts further replication and propagation of the pathogen from that site. A slightly less drastic solution might still be to let the principal function die, but to fail over to a non-programmable hardware option, something that will keep the base function ticking over but with no communication or adaptation support. This might be a better approach for medical implants. For example, a pacemaker might fail over to a hard-wired default pacing mode, which is not modifiable in software, while also alerting the wearer that they need to get to a hospital immediately.
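
In code, that policy decision could be as small as the sketch below; the mode names are hypothetical, and a real device would implement the safe default in fixed-function hardware rather than software.

```python
from enum import Enum

class Mode(Enum):
    NORMAL = 1        # full, software-controlled operation
    SAFE_DEFAULT = 2  # hard-wired base function only; no communication or updates
    POWERED_DOWN = 3  # node "dies" to stop further propagation

def respond_to_compromise(is_safety_critical: bool) -> Mode:
    """Illustrative policy: non-critical nodes simply power down, while a
    safety-critical node (a pacemaker, say) falls back to a fixed default
    mode that software cannot alter, and alerts its user for service."""
    return Mode.SAFE_DEFAULT if is_safety_critical else Mode.POWERED_DOWN

print(respond_to_compromise(is_safety_critical=True))    # Mode.SAFE_DEFAULT
print(respond_to_compromise(is_safety_critical=False))   # Mode.POWERED_DOWN
```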

More general solutions may require redundancy, which is the default solution in biology. We can have one kidney fail completely and still function normally because we have a second kidney. A similar principle could be applied in the IoT through hardware redundancy. Silicon is after all cheap, or so we are told. Thus, at edge nodes, one could have multiple copies of the complete function, or at least of those parts that could be compromised. (In fact, multiple copies of the complete function may be interesting from a service point of view. Remotely switching a failing node to a redundant copy may be significantly cheaper than a service call.)

In the security case, the compromised function dies but fails over to a redundant copy. Some care is needed here — you don't want the redundant copy to share memory with the compromised function, so it will need to cycle through a full startup with no knowledge of prior state. There will be a glitch in support and log data will be lost, but otherwise there is promise that the edge node can restart in an uncompromised state.

Deception and active defense
An unfortunate reality for cyber defense is that methods of attack will continue to evolve, so effective security methods must find ways to defend against forms of attack not yet seen. We may be able to learn from one method pathogens use to overcome natural defenses — deception. For example, the immune system looks for non-self actors; therefore, if a pathogen can appear to be a self, it will not be attacked. One of the most striking examples of this is the HIV virus, which wraps itself in an envelope of phospholipids and proteins taken from the host human cell, thereby looking about as much like self as possible. Perhaps we can turn this concept of deceit back at the pathogens. Even better, perhaps rather than waiting to defend against attacks, we can take the fight at least part way to the attackers.

Work in this area has not been based on biological analogy — examples we have seen draw more on analogies with deception in spy networks. In any event, the method appears useful in our overall concept of system defense. The goal is to present pathogens with attractive targets — commonly called "honeypots" — which are actually traps to detect intrusion attempts. These might be dummy DNS targets, empty file or directory links, or dummy accounts with temptingly easy passwords. Any attempt to probe one of these targets is suspicious — multiple attempts trigger blocking on the probing IP address.
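
A minimal honeypot can be little more than a set of decoy targets and a per-address probe counter, as in the illustrative sketch below; the decoy names and blocking threshold are hypothetical.

```python
from collections import defaultdict

class Honeypot:
    """Illustrative deception trap: decoy targets no legitimate client should
    ever touch. Every probe of a decoy is counted against the source address,
    and after a few probes that address is blocked outright."""

    def __init__(self, decoys, block_after=3):
        self.decoys = set(decoys)        # e.g. dummy accounts, fake file paths
        self.block_after = block_after
        self.probes = defaultdict(int)
        self.blocked = set()

    def on_request(self, src_ip, target):
        if src_ip in self.blocked:
            return "blocked"
        if target in self.decoys:        # a real user has no reason to be here
            self.probes[src_ip] += 1
            if self.probes[src_ip] >= self.block_after:
                self.blocked.add(src_ip)
                return "blocked"
            return "suspicious"
        return "ok"

trap = Honeypot(decoys={"/admin-backup", "test-account"})
for _ in range(3):
    print(trap.on_request("203.0.113.9", "/admin-backup"))   # suspicious, suspicious, blocked
```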

Since it is often not difficult to create more of these honeypot targets than real targets to be protected, and since an IP address can be blocked once identified, this approach can provide a defense with a high probability of success against probing attacks.

Summary
The size, exponential growth, distributed nature, and economics of the Internet of Things present new, arguably paradigm-shifting challenges in security management. Conventional approaches to security, while absolutely necessary, may be far from sufficient to protect this new, fast-growing, and exposed surface, or to adequately protect every component of the system against continually-evolving attacks.

Biology-inspired and deceit-based strategies offer new ways to think about defense against pathogens at a system-level. Given the nature of these defenses, they do not try to protect everything absolutely. Instead they aim to protect the health of the total system, understanding that local sacrifice or temporarily reduced function may — at times — be a necessary tactic to defend the greater good.

In fact, we already acknowledge that absolute protection using conventional techniques is a mirage given constantly-evolving threats. While we strive to overcome these threats with ever more sophisticated methods, this article suggests that a system viewpoint can significantly raise the bar for attackers, both present and future.

About the author:

Dr. Bernard Murphy is Chief Technology Officer of Atrenta Inc – www.atrenta.com
