Graphics processing for AI in security, monitoring and more

December 13, 2019 // By Andrew Grant
Graphics processing
We have seen a revolution in the field of GPUs (graphics processing units) driven not by making prettier pixels but by artificial intelligence (AI), specifically relating to computer vision and data-driven decision-making.

In the last ten years, visual processing has advanced at an exponential rate, thanks to increasingly affordable compute power and the development of convolutional neural networks (CNNs) and the sensors to feed them. Specifically, the ability to “learn” and “develop” a representational model of the world (through inputs from sensors, datasets and SLAM – simultaneous localization and mapping – algorithms) means that systems can begin to grasp context and their position in space, as well as make predictions and act on them. Sophisticated systems, trained in the cloud, are now capable of significantly faster inferencing, which means that object identification can be done at a speed that allows real-time decision-making. Embedded systems with multiple sensors in autonomous vehicles can identify other cars, distinguish roads from sidewalks and pedestrians from animals, and then begin to predict whether a pedestrian is about to walk into the road.
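
As a rough illustration of this kind of on-device object identification, the sketch below runs a single camera frame through a small pretrained CNN classifier. The choice of MobileNetV2 and the file name frame.jpg are assumptions made for illustration, not details from the article.

```python
# Minimal sketch: classifying one camera frame with a pretrained CNN.
# Model choice (MobileNetV2) and the image path are illustrative assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a small CNN that was trained offline / in the cloud.
model = models.mobilenet_v2(pretrained=True)
model.eval()

# Standard ImageNet-style preprocessing for the input frame.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame.jpg")          # e.g. one frame from a vehicle camera
batch = preprocess(frame).unsqueeze(0)   # add a batch dimension

with torch.no_grad():                    # inference only, no gradients needed
    logits = model(batch)
    class_id = int(logits.argmax(dim=1)) # index of the most likely object class
print(class_id)
```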

What is important here is that this sophisticated inferencing, traditionally performed in the cloud, is now being run on a device at the edge – that is, in a local embedded processor occupying just one or two square millimetres of silicon that can accelerate network layers with exceptional performance. This means that powerful compute for AI can now be built into the smallest sensors, electronic control units (ECUs) and “Internet of Things” (IoT) devices.

As AI moves closer to the edge and into devices such as sensors, cameras and mobiles, it eliminates the need for racks of cloud-based inference hardware and instead moves the analysis to the device itself, removing processing latency and reducing data transmission and bandwidth while potentially increasing security. Through quantization and adaptation, a powerful CNN can be deployed on a small edge device, and when inferencing can be run on a chip the size of a pin head, these devices can impact a plethora of markets – security, retail, factories, homes and vehicles – becoming ubiquitous.
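
The following is a minimal sketch of the quantization step mentioned above, shrinking a trained network before edge deployment. It uses PyTorch's post-training dynamic quantization purely as an example; the framework and the INT8 target are assumptions, not the specific tooling the article describes, and full static quantization of the convolutional layers would additionally require a calibration pass over sample data.

```python
# Minimal sketch: post-training quantization to shrink a trained network
# for a small edge device. Dynamic INT8 quantization of the linear layers
# is shown here for brevity.
import os
import torch
from torchvision import models

model = models.mobilenet_v2(pretrained=True)
model.eval()

# Replace float32 linear layers with dynamically quantized INT8 versions.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Compare on-disk size of the float32 and quantized models.
torch.save(model.state_dict(), "model_fp32.pt")
torch.save(quantized.state_dict(), "model_int8.pt")
print(os.path.getsize("model_fp32.pt"), os.path.getsize("model_int8.pt"))
```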

Neural networks are becoming a vital component in heterogeneous systems that combine a GPU with a neural network accelerator (NNA), each doing what it does best and complementing the other.
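
One hedged illustration of such a split is sketched below: the neural network layers are handed to a dedicated accelerator through a runtime delegate, while the rest of the pipeline (decode, resize, rendering) stays on the host CPU/GPU. The TensorFlow Lite runtime, the model file detector.tflite and the delegate library name are all assumptions chosen only to show the pattern; the exact delegate depends on the NNA in question.

```python
# Minimal sketch: delegating CNN inference to a dedicated accelerator while
# the host handles the rest of the pipeline. Model path and delegate library
# are hypothetical placeholders.
import numpy as np
import tflite_runtime.interpreter as tflite

delegate = tflite.load_delegate("libedgetpu.so.1")   # hypothetical NNA driver
interpreter = tflite.Interpreter(model_path="detector.tflite",
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy frame with the shape and dtype the model expects.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()                                  # runs on the accelerator
detections = interpreter.get_tensor(output_details[0]["index"])
print(detections.shape)
```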
