Artificial Intelligence (AI) and neural networks were, to say the least, a hot topic, with numerous papers examining different trade-offs to optimize neural-network efficiency, often relying on approximate computing techniques to minimize a smart node's energy consumption or memory requirements.
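One of the approximate-computing techniques alluded to here is reduced numerical precision. The sketch below (illustrative only, not taken from any of the papers) shows uniform 8-bit quantization of neural-network weights, which trades a small rounding error for roughly a 4x memory reduction versus 32-bit floats.

```python
# Sketch: uniform 8-bit quantization of neural-network weights,
# a common approximate-computing technique for cutting memory.
# (Illustrative example only; values and names are invented.)

def quantize(weights, bits=8):
    """Map float weights to signed integers sharing one scale factor."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integers."""
    return [v * scale for v in q]

weights = [0.61, -0.35, 0.02, -0.88]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each 32-bit float is now one 8-bit integer plus a shared scale:
# ~4x less memory, at the cost of a small per-weight rounding error.
```

The same idea scales down further (4-bit, even binary weights), which is where the accuracy/efficiency trade-offs discussed in the papers come in.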
In his keynote address, “Design automation in the era of AI and IoT: challenges and pitfalls”, Arvind Krishna, Senior Vice President, Hybrid Cloud, and Director of IBM Research (Armonk, NY), gave the audience an interesting insight into the future of IoT.
First, Krishna briefly reviewed the revolutions humankind has experienced since the industrial revolution of the 18th century: steam and railways gave way to electricity and steel-based heavy engineering, followed by oil, automobiles and the era of mass production. Next, Information and Communications Technology (ICT) truly revolutionized our society, with IoT a growing part of it.
Describing the explosive growth of IoT and the trillions of gigabytes that internet-connected devices are predicted to generate in the early 2020s, Krishna highlighted that while data is becoming the most abundant natural resource in the world, 60% of valuable sensory data loses its value within milliseconds. In other words, data must be acted upon or analyzed quickly to make the most of it.
The IBM Research director also flagged up another puzzling statistic: 90% of data created over the last 10 years was never captured or analyzed.
“In 2017, the collective computing and storage capacity of smartphones will exceed that of servers. We need another revolution. We are living the end of the ICT revolution and next will be the AI revolution, powered by data,” Krishna said.
While governments are spending money on networks, computing capacity and data centres, AI is today where semiconductors were in the 1960s, he argued.
Krishna’s belief is that as neural networks become more common, AI is going to impact all industries in very much the same way silicon impacted all sectors of industry, and what he described as the “cognitive IoT era” will call for a novel system architectural vision.
“In a few years, we would shift from today’s proven interconnected multicore processor architectures to mobile swarm computing architectures with on-demand support from the cloud,” he hinted, showing a slide titled “Mobile cognition: Analytics at the edge of the network”.
In a mobile swarm computing architecture, the heavy processing and long-term learning would take place in the cloud while the real-time cognitive reaction would take place in mobile devices, with some level of approximate learning.
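That division of labour can be sketched in a few lines. The example below is a hypothetical toy, not IBM's design: the device answers from a small, approximate local model whenever it is confident enough, and defers to a heavier cloud model only when it is not.

```python
# Sketch of the swarm-style split: fast approximate inference on the
# device, with on-demand fallback to a heavier cloud-hosted model.
# All function names, models and thresholds are invented for illustration.

def edge_model(sample):
    """Tiny on-device classifier: cheap, approximate, real-time."""
    score = sum(sample) / len(sample)          # stand-in for a small net
    label = "anomaly" if score > 0.5 else "normal"
    confidence = abs(score - 0.5) * 2          # 0 (unsure) .. 1 (sure)
    return label, confidence

def cloud_model(sample):
    """Heavier model, invoked only when the edge is unsure."""
    return "anomaly" if max(sample) > 0.9 else "normal"

def classify(sample, threshold=0.6):
    label, conf = edge_model(sample)
    if conf >= threshold:
        return label, "edge"                   # real-time local answer
    return cloud_model(sample), "cloud"        # defer to the cloud

print(classify([0.9, 0.95, 0.92]))   # -> ('anomaly', 'edge')
print(classify([0.45, 0.55, 0.5]))   # -> ('normal', 'cloud')
```

The long-term learning Krishna mentioned would sit behind `cloud_model`, periodically pushing an updated (approximate) model back down to the devices.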
“From the end-user’s perspective, a device would behave as a truly intelligent agent with real-time cognition reaction, and the deeply cognitive cloud would be hidden,” Krishna said.
There are a couple of dozen companies building AI chips to accelerate specific cognitive tasks, bridging the efficiency gap through approximation. But which one should be used for which application? There is no golden answer, and the perfect answer would be too expensive, Krishna noted, while putting forward IBM’s 24-core POWER9 acceleration platform, designed in a 14nm FinFET process and sporting 8 billion transistors.
“IoT is not a single chip, it is a collection of systems. We need tools capable of working at the system level; we need to do more exploration at the system level using different AI techniques,” Krishna said, noting that cognitive acceleration comes in different flavours.
Analyzing the lengthy and expensive design resource profile required to tape out a high-end server microprocessor today (thousands of person-years and hundreds of millions of dollars), Krishna noted it would not be practical to apply a similar semi-custom design methodology to the sea of off-load accelerators needed to realize the full potential of the cognitive IoT era.
Instead, IoT design automation tools should become cognitive too, with intelligent design flows that automate the decisions of skilled engineers by leveraging the latest deep learning advances, Krishna argued, taking IBM’s SynTunSys system as an example.
SynTunSys relies on machine learning to automate the synthesis-parameter tuning process and was used during the design of IBM’s 22nm processors. The tool learns from prior design runs and achieves a quality of results beyond what human designers alone can achieve, while providing significant savings in human design effort.
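The general idea can be sketched generically. The toy below is not SynTunSys itself (the parameter space and cost model are invented): candidate synthesis configurations are evaluated against a cost function, and the best-scoring one is kept; a learned model would go further and rank untried configurations using the results of prior runs.

```python
import itertools, random

# Generic sketch of synthesis-parameter tuning in the spirit of
# SynTunSys. The parameters and cost model are invented, not IBM's.

PARAMS = {
    "effort":    ["low", "medium", "high"],
    "vt_mix":    [0.2, 0.5, 0.8],
    "placement": ["timing", "congestion"],
}

def run_synthesis(config):
    """Stand-in for a real synthesis run: returns a cost to minimize
    (e.g. a weighted timing/power metric). A real run takes hours."""
    cost = {"low": 3.0, "medium": 2.0, "high": 1.5}[config["effort"]]
    cost += abs(config["vt_mix"] - 0.5)        # pretend 0.5 is ideal
    cost += 0.3 if config["placement"] == "congestion" else 0.0
    return cost

def tune(budget=6, seed=0):
    """Evaluate a random subset of configs within a run budget and keep
    the best; a learned ranking model would replace the random choice."""
    rng = random.Random(seed)
    space = [dict(zip(PARAMS, vals))
             for vals in itertools.product(*PARAMS.values())]
    trials = rng.sample(space, budget)
    return min(trials, key=run_synthesis)      # best config found

best = tune()
```

The payoff in the real tool is that each tuning decision that would otherwise consume a skilled engineer's time is made automatically from accumulated run data.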
Cognitive design tools may be the next frontier in EDA, Krishna concluded.
Related articles:
IBM imitates neurons in cognitive computing advance
IBM neurocomputer detailed
Lawrence Livermore, IBM partner on brain-like supercomputer
Will AI condition human behavior?
IBM: Five innovations that will change our lives within five years
IBM launches Watson IoT global HQ