AI processing in the smart city, smart factory and autonomous vehicles
When a GPU and an NNA are combined on the same chip, there is an opportunity to get the best of both worlds: graphics and vision compute alongside neural network inference, often using shared memory to reduce bandwidth and external data transfers. Several use cases are examined in greater detail below.
Smart cities are all about infrastructure. In the smart city, sensors relay data back to “brains” in the cloud, which monitor traffic flow and direct traffic smoothly to increase road efficiency. Vehicles will rely on this smart infrastructure to keep drivers informed about upcoming traffic conditions. So, while talking to lamp posts, traffic lights and street signs may seem crazy to the average person, in the future, your car will do it all the time. As such, we’ll see increasing uptake of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication (together known as vehicle-to-everything, or V2X), and of interaction between what the intelligent edge sensors are “seeing” and how that is relayed as useful information.
V2X will become a basic requirement – one requiring AIoT (Artificial Intelligence in the Internet of Things) on trillions of sensors. AIoT will enable this vehicle-to-infrastructure communication, which means there will be a multi-way exchange of information allowing the vehicle to make informed choices based on real-time and predicted information. For instance, how frustrating is it when motorway signs display out-of-date information because a human controller hasn’t realised it needs to be refreshed? Or wouldn’t it be better to know to take the exit before rounding the corner and becoming part of a three-mile tailback?
Currently, sat-nav systems approximate this by relying on crowd-sourced data; using real-time information from the infrastructure itself would automate the process and reduce the delay in obtaining it.
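To make the idea concrete, here is a minimal sketch of the kind of decision a vehicle could make from a real-time infrastructure message. The message fields, names and thresholds are illustrative assumptions only, not any real V2X message standard (such as SAE J2735 or ETSI ITS):

```python
# Hypothetical sketch: a vehicle consuming a V2I traffic message and
# deciding whether to take the next exit. All names and values are
# illustrative assumptions, not a real V2X message format.
from dataclasses import dataclass

@dataclass
class TrafficMessage:
    road_segment: str           # segment the message describes
    avg_speed_kmh: float        # real-time average speed on that segment
    predicted_delay_min: float  # infrastructure's predicted delay ahead

def should_take_exit(msg: TrafficMessage, reroute_cost_min: float = 5.0) -> bool:
    """Take the exit if the predicted delay ahead exceeds the cost of
    rerouting - the kind of informed choice real-time V2I data enables."""
    return msg.predicted_delay_min > reroute_cost_min

# A gantry or lamp post broadcasts a congestion warning for the road ahead:
warning = TrafficMessage("M4-J12-J13", avg_speed_kmh=8.0, predicted_delay_min=25.0)
print(should_take_exit(warning))  # True: reroute before joining the tailback
```

The point of the sketch is the data flow, not the threshold logic: the decision is driven by a live broadcast from the infrastructure rather than by crowd-sourced data that arrives after drivers are already stuck.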