AI semiconductor startup DeepX Co. Ltd. (Seongnam, South Korea) is planning to launch four neural processing units (NPUs) at the 2022 Embedded Vision Summit next week.
The four devices are intended for processing AI at the edge and are the DX-L1, DX-L2, DX-M1, and DX-H1. The L1 and L2 offer between 2TOPS and 7TOPS of processing capability, intended for relatively low data throughput. This might include small applications in home appliances, such as vacuum cleaners and refrigerators, and in camera modules. The M1, with 25TOPS of processing, targets autonomous vehicles. The H1 offers 2.4 peta operations per second for AI servers.
The chips are designed for implementation in Samsung Foundry's 5nm manufacturing process, and the architecture provides an energy efficiency of 10TOPS/W. These processors will be launched sequentially from 2H22 through 1H23 and will be delivered to more than 20 global customers, the company said.
At the exhibition, buyers will be able to see demonstrations run on an FPGA-based NPU prototype, including object detection, face recognition, and image classification.
DeepX was founded in 2018 and is led by CEO Lokwon Kim, who previously helped design the A11 Bionic processor for the iPhone X at Apple. He was also involved in the design of the A12 Bionic for the iPhone XS, including the world's first general-purpose embedded neural processing unit.
Since its formation, DeepX has received a cumulative investment of US$25 million.
Related links and articles:
News articles:
Strong backing for digital-in-memory computing
Untether to develop autonomous vehicle perception for General Motors
Startup Syntiant ships 20 million AI processors, raises $55 million
NXP partners with Hailo on automotive AI
Open engineering consortium for machine learning formed from MLPerf