Exploiting parallelism in neural network workloads for scalable acceleration hardware

By AImotive

Many automotive system designers, when considering suitable hardware platforms for executing high-performance NNs (neural networks), determine the total compute power by simply adding up each NN's requirements – the total then defines the capabilities of the NN accelerator needed. Tony King-Smith of AImotive looks at techniques for exploiting parallelism in NN workloads to realize scalable, high-performance acceleration hardware.



The reality is that almost all automotive neural network applications comprise a series of smaller NN workloads. By considering the many forms of parallelism inherent in automotive NN inference, a far more flexible approach using multiple NN acceleration engines can deliver superior results with far greater scalability, cost-effectiveness and power efficiency.
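To make the contrast concrete, here is a minimal sketch, in Python, of the two sizing approaches: summing every network's peak requirement into one monolithic accelerator, versus distributing the same sub-workloads across several smaller engines with a simple packing heuristic. The workload names and TOPS figures are invented for illustration; they are not AImotive data.

```python
# Hypothetical sizing comparison: all workload names and TOPS
# figures below are assumed purely for illustration.

workloads_tops = {
    "object_detection": 8.0,
    "segmentation": 6.0,
    "free_space": 5.0,
    "lane_detection": 4.0,
    "traffic_signs": 3.0,
    "driver_monitoring": 2.0,
}

# Naive sizing: one monolithic engine rated for the sum of every peak demand.
monolithic_tops = sum(workloads_tops.values())

def pack(workloads: dict[str, float], num_engines: int) -> list[float]:
    """Greedily assign each workload (largest first) to the least-loaded engine."""
    engines = [0.0] * num_engines
    for tops in sorted(workloads.values(), reverse=True):
        engines[engines.index(min(engines))] += tops
    return engines

loads = pack(workloads_tops, num_engines=3)
print(f"One monolithic engine: {monolithic_tops:.0f} TOPS peak")
print(f"Three smaller engines: {max(loads):.0f} TOPS each (loads: {loads})")
```

Under these assumed figures, three engines of roughly 10 TOPS each cover the same aggregate demand as one 28 TOPS monolith, and the smaller building block can be reused and scaled across product tiers.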

When considering the design of a hardware platform capable of executing the AI workloads needed for automated driving, many factors need to be weighed. The biggest, however, is uncertainty: what capabilities does the hardware actually need to execute the worst-case NN workload, how much performance is needed to execute it safely and reliably, and for how much of the time does that worst-case workload have to run?
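The duty-cycle question matters because an engine sized for the worst case sits partly idle the rest of the time. A back-of-envelope sketch, with entirely assumed numbers, shows the effect:

```python
# Hypothetical utilization estimate: every figure here is assumed
# for illustration, not measured data.

peak_demand_tops = 28.0      # worst case: every NN active at full rate
typical_demand_tops = 14.0   # assumed typical driving scene
worst_case_duty = 0.10       # assume worst case occurs ~10% of the time

# Time-weighted average demand across worst-case and typical operation.
average_demand = (worst_case_duty * peak_demand_tops
                  + (1 - worst_case_duty) * typical_demand_tops)

utilization = average_demand / peak_demand_tops
print(f"Average demand: {average_demand:.1f} TOPS")
print(f"Utilization of a peak-sized engine: {utilization:.0%}")
```

With these assumptions, a single engine sized for the peak runs at only about 55% utilization on average, which is exactly the inefficiency a pool of smaller, independently schedulable engines can reduce.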


