Versal is the name CEO Victor Peng gave the first ACAP, a device that combines Scalar Processing Engines, Adaptable Hardware Engines, and Intelligent Engines with leading-edge memory and interfacing technologies to deliver powerful heterogeneous acceleration for any application.
A recurring theme, of course, is that Versal ACAP's hardware and software can be programmed and optimized not only by hardware developers but also by software developers, data scientists, and the like, thanks to a host of tools, software, libraries, IP, middleware, and frameworks provided by Xilinx.
Fittingly, this announcement comes as if completing Omnitek's AI engine announcement at the same Developer Forum, without ever mentioning that company, although Xilinx holds a 15% stake in it and could well have been inspired by it to create a dedicated fabric for such AI acceleration IP.
The Versal portfolio is built on TSMC’s 7-nanometer FinFET process technology and, as promised earlier (see Xilinx promises revolutionary architecture at 7nm), it combines software programmability with domain-specific hardware acceleration and reconfigurability. The portfolio includes six series of devices uniquely architected to deliver scalability and AI inference capabilities for a host of applications across different markets. These are the Versal Prime, Premium, and HBM series, designed for the most demanding applications, and the AI Core, AI Edge, and AI RF series, all three featuring the AI Engine, a new hardware block designed to address the emerging need for low-latency AI inference across a wide variety of applications.
The AI Engine is tightly coupled with the Versal Adaptable Hardware Engines to enable whole application acceleration, meaning that both the hardware and software can be tuned to ensure maximum performance and efficiency.
The portfolio debuts with the Versal Prime series, delivering broad applicability across multiple markets, and the Versal AI Core series, which, according to Xilinx's estimates, delivers an 8X AI inference performance boost versus industry-leading GPUs.