The company has an experienced founding team and some innovations to offer. Vinod Dham, who serves as chief operating officer, is known as the “father of the Pentium,” and CEO Nagaraja has processor architecture design experience at Qualcomm, ST-Ericsson and Nvidia under his belt.
AlphaICs is putting a new twist on an old idea – that of the software agent or virtual machine – but one that is designed to perform artificial intelligence operations. The company is developing what it describes as an agent-based real AI processor (RAP) architecture capable of learning, inference and decision making.
A second twist is that AlphaICs' processor is being architected to perform a range of artificial intelligence operations, including training, inference and unsupervised learning.
The company's executives like to benchmark the performance of their deep-learning approach against the graphics processing units (GPUs) used by companies such as Nvidia. Not surprisingly, AlphaICs scores well in such comparisons, although most people understand that GPUs are typically inferior to neural-network-optimized hardware.
The comparison to GPUs may be partly because AlphaICs CEO Nagaraja spent three years with Nvidia, from 2013 to 2016, where he ended his employment as a lead chip designer. Prior to that he was a lead architect for smartphone SoCs at ST-Ericsson, and before that he spent eight years in chip engineering at Qualcomm.
However, rather than develop a chip that is simply hardwired for neural networking with seas of multiply-accumulators (MACs), AlphaICs is taking a broader view of the machine-learning market. It is developing an instruction set architecture (ISA) and hardware that, while rich in multipliers and adders, will be optimized for numerous deep-learning tasks. AlphaICs describes each of the core elements as a real AI processor (RAP) agent and intends to develop chips with tens to hundreds of these agent cores.
Nagaraja explained that an agent is a software entity roughly analogous to the hardware entity that is the processing element. But a single processing element can support multiple agents, and a single agent's activity can be spread across multiple processing elements.
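This many-to-many relationship between agents and processing elements can be pictured with a small sketch. All of the names below are illustrative only – AlphaICs has not published its programming model – but the bookkeeping shows what "one PE hosts several agents, one agent spans several PEs" means in practice:

```python
# Hypothetical sketch of the many-to-many mapping between software
# agents and hardware processing elements (PEs). These classes are
# not AlphaICs' API; they only illustrate the relationship described.

class ProcessingElement:
    def __init__(self, pe_id):
        self.pe_id = pe_id
        self.resident_agents = set()   # agents scheduled on this PE

class Agent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.placements = set()        # PEs this agent's work spans

def place(agent, pe):
    """Record that part of `agent`'s work runs on `pe`."""
    agent.placements.add(pe.pe_id)
    pe.resident_agents.add(agent.agent_id)

# One PE supporting two agents, and one agent spread across two PEs:
pe0, pe1 = ProcessingElement(0), ProcessingElement(1)
a, b = Agent("a"), Agent("b")
place(a, pe0)
place(a, pe1)   # agent "a" spans both PEs
place(b, pe0)   # PE 0 hosts agents "a" and "b"

print(sorted(pe0.resident_agents))  # ['a', 'b']
print(sorted(a.placements))         # [0, 1]
```

How the run-time scheduler decides such placements is exactly the part AlphaICs has not yet explained, as noted below.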
AlphaICs states on its website that: “Each agent is a ‘group of tensors’ enabling high dimensional computation. These agents operate in multi-agent environment for asynchronous processing of AI algorithms.” The company has not yet made clear how resource allocation is done at run-time or how high efficiency of execution is maintained between agents and the hardware that supports them.
What the company has said is that the RAP architecture uses more than 200 single instruction multiple agent (SIMA) instructions to provide high code density and energy efficiency. This allows a high data flow as well as a high level of program control, the company claims. The former is required for deep learning algorithms while the latter is required for reinforcement learning and unsupervised learning.
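The "single instruction, multiple agent" name suggests an extension of the familiar SIMD idea: instead of one instruction operating on the lanes of one data vector, one issued instruction is executed by many agents, each against its own private state. AlphaICs has not published its ISA, so the following contrast is purely a conceptual sketch:

```python
# Conceptual contrast between SIMD and a SIMA-style dispatch.
# Everything here is illustrative; AlphaICs has disclosed only that
# its RAP architecture has ~200 SIMA instructions, not their form.

# SIMD: one operation across the lanes of a single data vector.
def simd_add(vec_a, vec_b):
    return [x + y for x, y in zip(vec_a, vec_b)]

# SIMA (as imagined here): one operation broadcast to several agents,
# each applying it to its own tensor state.
class ToyAgent:
    def __init__(self, state):
        self.state = state                 # this agent's private data

    def apply(self, op):
        self.state = [op(x) for x in self.state]

def sima_dispatch(agents, op):
    """Issue one instruction; every agent executes it on its own state."""
    for agent in agents:
        agent.apply(op)

agents = [ToyAgent([1, 2]), ToyAgent([10, 20])]
sima_dispatch(agents, lambda x: x * 2)     # one instruction, many agents
print([a.state for a in agents])           # [[2, 4], [20, 40]]
```

On this reading, a single SIMA instruction carries more work per fetch than a scalar instruction, which is consistent with the company's claims of high code density and energy efficiency.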
One of the advantages of this form of programming, compared to other AI chips, is that it will be easier to debug, the company claims. It also means there is a lot of software to be written, including runtime libraries and compilers. The RAP chips AlphaICs is developing will be programmable in C code through the TensorFlow framework, with other frameworks to follow.
The libraries include: alphaDNN, supporting both training and inference on neural network chains; alphaSIMA, to enable the combination of perception and decision making; and openEIGEN, which provides a general-purpose high-performance computing and linear algebra library.
RAP software stack, from applications to metal, showing frameworks, language, compiler, libraries and runtime environment. Source: AlphaICs.
The company was founded in October 2016, has a 16-agent demo chip, and plans to have its first commercial chip, with 32 agents, available by mid-2019. According to reports, the first chip has already taped out in TSMC's 16FFC process. It will be the RAP-E, with E standing for edge, and will consume about 3W. The RAP-E offers 30TOPS of deep-learning performance, supports most forms of neural network, and offers less than 2 milliseconds of latency for RNN inference.
A follow-on chip is the 100W RAP-C, with C standing for core or center. With 64 agents, it is intended for large-scale neural networking in data centers. The design is currently being brought up in FPGA form, but more funding will be required for the design, tape-out and production of the chip, which targets 7nm silicon.
The FPGA version of RAP-E beat Nvidia’s Volta V100 on an image recognition test using videos and convolutional neural net algorithms created by automotive supplier Visteon Corp. RAP-E beat Volta V100 in all metrics by margins varying between 50 percent and 400 percent, according to AlphaICs investor Emerald Technology Ventures.
AlphaICs is pursuing a dual business strategy. It is developing chips and support software for key applications in high-volume sectors – such as autonomous driving, drones and robots – and in high-value applications such as data centers, while also licensing RAP cores into narrower embedded markets.
A single RAP core supporting 2, 4 or 8 agents can perform tasks such as noise cancellation, face recognition and object detection, and enables voice control, machine translation and better query generation for AI assistants. The IP is available for low-power embedded applications such as dialogue systems, augmented reality, gaming, surveillance and hand-held devices, the company states.
AlphaICs raised $2.5 million in a combined seed/Series A round in October 2017, about a year after the company was founded. It now plans to raise a Series B round of about $15 million, and it is notable that AlphaICs is pitching at the European Venture Fair, which takes place September 12 and 13 in Zurich, Switzerland.
However, AlphaICs should not find it difficult to get introductions to venture capital: Vinod Dham, who serves the company as chief operating officer, is the founding managing director of IndoUS Venture Partners (IUVP), an early-stage venture capital fund focused on investing in India.
Dham spent the first part of his career at Intel, where he worked on flash memory before going on to manage the introduction of the Pentium. He then became CEO of NexGen, a microprocessor startup that was sold to AMD for $800 million. Next, Dham was CEO of Silicon Spice, a chip-design startup he sold to Broadcom for $1.2 billion. Following that, Dham moved into venture capital as co-founder of NewPath Ventures LLC.