AI processing on edge devices, the company says, particularly in AI vision-specific industries, eliminates privacy concerns while avoiding the speed, bandwidth, latency, power-consumption and cost issues of cloud computing.
“The emerging smart CMOS image sensors technology trend is to merge ISP functionality and deep learning network processor into a unified end-to-end AI co-processor,” says Dr. Manouchehr Rafie, Vice President of Advanced Technologies at Gyrfalcon. “This white paper defines a new paradigm for on-device integrated AI-camera sensor co-processor chips. The chips’ built-in high-processing power and memory allow the machine- and human-vision applications to operate much faster, more energy-efficiently, cost-effectively and securely without sending any data to remote servers.”
The white paper explores how an integrated edge AI Gyrfalcon co-processor chip in a camera module can produce real-time images and video streams superior to those of some existing high-end, expensive smartphones. AI-powered camera sensors, says the company, offer distinct advantages over standard cameras: they not only capture enhanced images but also perform image analysis, content awareness and event/pattern recognition, all in one compact system.
An AI image co-processor chip integrated into a camera module can work directly on raw data from the sensor output to produce DSLR-quality images as well as highly accurate computer vision results. The white paper also explains why smartphones and automotive applications are the dominant drivers of edge vision computing, given their rapid growth and their leading shipment volumes and revenue.
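The unified "raw sensor data in, image plus vision result out" idea can be sketched in a few lines of Python. This is a purely illustrative toy, not Gyrfalcon's actual design: the RGGB Bayer layout, the naive demosaic, and the brightness "classifier" are all assumptions standing in for a real ISP stage and neural-network head running on one co-processor, with no round trip to a server.

```python
import numpy as np

def demosaic_rggb(raw):
    """Naive 2x2-block demosaic of an RGGB Bayer mosaic.

    Each 2x2 tile (R, G / G, B) collapses to one RGB pixel, halving
    resolution -- a crude stand-in for the ISP stage of the pipeline.
    """
    r = raw[0::2, 0::2].astype(np.float32)
    g = (raw[0::2, 1::2].astype(np.float32) + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2].astype(np.float32)
    return np.stack([r, g, b], axis=-1)

def toy_vision_head(rgb):
    """Placeholder 'inference' stage: mean-brightness thresholding."""
    return "bright" if rgb.mean() > 127 else "dark"

def unified_pipeline(raw):
    """Raw sensor data in; enhanced image and vision result out."""
    rgb = demosaic_rggb(raw)
    return rgb, toy_vision_head(rgb)

rng = np.random.default_rng(0)
raw = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # fake 8x8 sensor
image, label = unified_pipeline(raw)
print(image.shape, label)
```

The point of the sketch is structural: both outputs come from one pass over the raw data on-device, which is what lets an integrated co-processor avoid the latency and privacy costs of shipping frames to the cloud.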