Graphics processing for AI in security, monitoring and more

December 13, 2019 | By Andrew Grant
We have seen a revolution in the field of GPUs (graphics processing units), driven not by making prettier pixels but by artificial intelligence (AI), specifically computer vision and data-driven decision-making.

The arrival of neural networks has made vision processing a critical factor in the modern world. This has driven change across industry, with robotic process automation, smart cameras for surveillance and monitoring, and advanced driver-assistance systems (ADAS) in our vehicles – and there is much more to come as these technologies mature.

This means that professionals now need to consider not only where the market is now, but where it could be in a few short years' time. With development continuing at a breakthrough pace, and investment in AI outstripping almost every other sector, it is only a matter of time before everything we do is influenced by AI. Think of the huge volume of new applications added to mobile devices since the first smartphones, unlocking a world of location-based services, social interaction, commerce and entertainment. AI has the potential both to unlock new applications and to evolve existing ones into dramatically better servants of the user.

The cloud contribution

Vision processing for AI has moved rapidly from the data centre to the edge, and the latest IP for application-specific integrated circuits (ASICs) and systems-on-chip (SoCs) is geared towards variations on a theme: pre-processing of visual information, traditional computer vision algorithms, and then edge inferencing using neural networks to generate object detection, recognition and suitable actions.
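
As a rough illustration of that theme, the sketch below strings the three stages together using OpenCV's dnn module. The model file name detector.onnx, the 320x320 input size, the 1/255 scale factor and the use of a local webcam are assumptions chosen purely to keep the example self-contained; a real smart-camera SoC would dispatch the inference stage to its GPU or neural network accelerator rather than the CPU.

```python
# A minimal sketch of the edge pipeline described above: pre-process a frame,
# run a neural network locally, and inspect the result. "detector.onnx" is a
# hypothetical detection model, not a specific product's network.
import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("detector.onnx")  # assumed model file

def process_frame(frame: np.ndarray) -> np.ndarray:
    # Stage 1: pre-processing - resize and normalise the raw camera frame.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(320, 320))
    # Stage 2: edge inference - run the network on-device, not in the cloud.
    net.setInput(blob)
    return net.forward()

cap = cv2.VideoCapture(0)  # local camera, as a smart camera would use
ok, frame = cap.read()
if ok:
    # Stage 3: act on the detections (here, just report the output shape).
    detections = process_frame(frame)
    print("raw network output shape:", detections.shape)
cap.release()
```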

AI is used as an umbrella term for multiple flavours of machine learning, including deep learning for computer vision. These networks are designed to mimic the brain's neurons and synapses using their digital equivalent, perceptrons. They are typically trained to recognise patterns in data (visual or otherwise) and then, when exposed to new data, infer what that data could signify. Training is usually done in the data centre on racks of computers, typically GPUs, which are well suited to parallel, pipelined tasks, while inference is often done locally using GPUs or dedicated neural network accelerator (NNA) IP.
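
To make the train-then-infer split concrete, here is a toy sketch of a single perceptron. The AND-gate data, the 0.1 learning rate and the 20 training passes are assumptions chosen only so the example runs in a few lines; real training happens on racks of data-centre GPUs over vastly larger datasets, with inference then running at the edge.

```python
# A single perceptron (the digital analogue of a neuron) trained to
# recognise a pattern, then asked to infer a label for new input.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # training inputs
y = np.array([0, 0, 0, 1])                      # target pattern (logical AND)

w = np.zeros(2)  # synapse weights
b = 0.0          # bias

# Training phase: nudge the weights whenever the perceptron misclassifies.
for _ in range(20):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += 0.1 * (target - pred) * xi
        b += 0.1 * (target - pred)

# Inference phase: apply the trained weights to data not seen during training.
new_data = np.array([1, 1])
print("inferred label:", int(w @ new_data + b > 0))
```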
