The challenge at the edge of memory

March 09, 2020 | By Jeff Lewis
The recent drive to bring intelligence to the edge, rather than the cloud, has created a conundrum: hardware constraints are beginning to cripple innovation. New memory technologies are being developed to replace SRAM and DRAM and resolve this dilemma.

If data is the new oil, then artificial intelligence (AI) is what refines data into a truly invaluable asset. It’s this belief that is driving the current explosion in demand for AI applications. According to PwC and MMC Ventures, funding for AI startups is rising rapidly, exceeding $9 billion last year, and tech startups with some type of AI component attract up to 50% more funding than their peers. This intense investment has led to rapid innovation and advances in AI technology. But the traditional AI use model of “sweep it up and send it to the cloud” is breaking down, as latency or energy consumption can make transmission impractical. Another major challenge is that consumers are increasingly uncomfortable having their private data sit in the cloud, where it could potentially be exposed to the world.

For those reasons, AI applications are being pushed out of their normal data-center environments, allowing their intelligence to reside at the edge of the network. As a result, mobile and IoT devices are becoming “smarter,” and a whole variety of sensors—especially security cameras—are taking up residence at the edge. However, this is where hardware constraints are beginning to cripple innovation.

Putting more intelligence at the edge demands far more computational power and memory than traditional edge processors provide. Studies have repeatedly shown that AI inference accuracy depends directly on the amount of hardware resources available. And because customers require ever-higher accuracy (voice detection, for example, has evolved into multifaceted speech and vocal-pattern recognition), the problem only intensifies as these AI models grow more complex.

One significant concern is simply the need for electrical power. Arm has predicted that there will be 1 trillion connected devices by the 2030s. If each smart device consumes 1W (security cameras consume more), then all of these devices combined will consume 1 terawatt (TW) of power. This isn’t simply an “add a bigger battery” problem, either. For context, the total generating capacity of the U.S. in 2018 was only slightly higher at 1.2 TW. These ubiquitous devices, individually insignificant, will create an aggregate power catastrophe.
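A quick back-of-envelope check makes the scale concrete. The sketch below (in Python, purely illustrative) uses only the figures cited above: Arm’s prediction of 1 trillion devices, the article’s 1 W per device, and the 1.2 TW 2018 U.S. generating capacity; a uniform per-device draw is an assumption for simplicity.

    # Back-of-envelope aggregate power estimate for edge devices.
    # Assumptions: 1 trillion devices (Arm's prediction) and a uniform
    # 1 W draw per device (the article's figure; security cameras draw more).
    devices = 1e12             # one trillion connected devices
    watts_per_device = 1.0     # assumed average draw, in watts

    total_watts = devices * watts_per_device
    total_terawatts = total_watts / 1e12
    us_capacity_tw_2018 = 1.2  # U.S. generating capacity in 2018, in TW

    print(f"Aggregate edge-device draw: {total_terawatts:.1f} TW")
    print(f"Share of 2018 U.S. capacity: {total_terawatts / us_capacity_tw_2018:.0%}")

Even under these conservative assumptions, the fleet would draw roughly 1 TW, more than 80% of the entire 2018 U.S. generating capacity.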

Of course, the goal is to never let the power problem reach that point. AI developers are simplifying their models, and hardware power efficiency continues to improve through Moore’s Law and clever circuit design. However, one of the major remaining challenges is the legacy memory technologies, SRAM and DRAM (static and dynamic RAM, respectively). These memories are hitting a wall on density and power efficiency, and they now often dominate system power consumption and cost.

