
Exploiting parallelism in neural network workloads for scalable acceleration hardware

By AImotive

The reality is that almost all automotive neural network applications comprise a series of smaller NN workloads. By considering the many forms of parallelism inherent in automotive NN inference, a far more flexible approach, using multiple NN acceleration engines, can deliver superior results with far greater scalability, cost effectiveness and power efficiency.
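As a loose illustration of this kind of workload-level parallelism, the sketch below round-robins independent NN sub-workloads (for example, one perception network per sensor) across a pool of smaller acceleration engines. The Engine class, its run method and the workload names are hypothetical stand-ins for illustration only, not any real accelerator API.

# Minimal sketch: dispatching independent NN sub-workloads across
# several acceleration engines in parallel. "Engine" is a hypothetical
# stand-in for a hardware accelerator handle; no vendor API is implied.
from concurrent.futures import ThreadPoolExecutor

class Engine:
    def __init__(self, engine_id: int):
        self.engine_id = engine_id

    def run(self, workload: str) -> str:
        # A real engine would execute a compiled network here; this
        # placeholder just reports which engine handled which workload.
        return f"engine {self.engine_id} finished '{workload}'"

# Four smaller engines rather than one monolithic accelerator.
engines = [Engine(i) for i in range(4)]

# Independent sub-workloads typical of an automotive inference pipeline.
workloads = ["front_cam_det", "rear_cam_det", "radar_seg", "lane_net"]

# Round-robin the workloads onto the engine pool and run them concurrently.
with ThreadPoolExecutor(max_workers=len(engines)) as pool:
    futures = [
        pool.submit(engines[i % len(engines)].run, w)
        for i, w in enumerate(workloads)
    ]
    for f in futures:
        print(f.result())

Because the sub-workloads are independent, total latency approaches that of the slowest single network rather than the sum of all of them, which is what makes the multi-engine approach scale.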

When designing a hardware platform capable of executing the AI workloads needed for automated driving, many factors come into play. The biggest, however, is uncertainty: what capabilities does the hardware actually need to execute the worst-case NN workload, how much performance is needed to run that workload safely and reliably, and how much of the time does that worst case actually have to run?

