Rapid Silicon lets engineers use GPT for FPGA design
FPGA developer Rapid Silicon Inc. (Los Gatos, Calif.) has announced it is launching RapidGPT, an AI chatbot-based FPGA design tool with conversational features and automated code completion.
The company claims RapidGPT is an intelligent, efficient, and seamless interface based on natural language processing that enables hardware designers to increase their productivity and shorten time-to-market. Rapid Silicon is making use of AI chatbots’ creativity and ability to write software code.
Because of the conversational interface, FPGA designers can interact with hardware description language (HDL) via AI in a more natural and intuitive way, Rapid Silicon claims. It goes on to say that RapidGPT understands the intent of designers’ commands and provides relevant suggestions, significantly reducing the learning curve and the time needed for FPGA designers to become productive with new tools and platforms.
RapidGPT’s code autocompletion provides FPGA designers with relevant, contextual suggestions based on their code, helping to reduce errors and streamlining the code-writing process.
“Rapid Silicon’s RapidGPT represents a major breakthrough in FPGA design flows,” said Professor Pierre-Emmanuel Gaillardon, CTO of Rapid Silicon, in a statement. “Unlike any other solutions on the market, our AI approach leverages advanced natural language processing, code autocompletion and conversational features, enabling FPGA designers to work more efficiently and effectively than ever before. RapidGPT saves time and resources, mitigates design errors, and produces optimized solutions.”
Rapid Silicon did not provide any benchmarks of the efficiency gains users could expect.
The interface to RapidGPT is similar to other hardware design tools but includes the ability to let the tool autocomplete parts of the design and to be asked for explanations of what it has done or about the function of particular modules in the design.
The fast, and sometimes uncontrolled, rise of the use of AI chatbots in business and engineering settings has given rise to concerns over security (see ChatGPT leaking Samsung chip secrets is iceberg’s tip).
A spokesperson for Rapid Silicon told eeNews Europe that RapidGPT is tied to several large language models (LLMs), allowing it to be run both locally and from the cloud. In particular, it pairs a domain-specific model dedicated to HDL writing with a general-purpose AI chatbot that serves adversarial discussions and extends the capabilities of the core engine. This arrangement allows RapidGPT to protect information flow with on-premise execution while still benefiting from the power of general-purpose models.
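Rapid Silicon has not published details of how this split works. A minimal sketch of how such a hybrid architecture might route requests is shown below; all class and method names here are hypothetical, and the "models" are stubs standing in for real inference back ends.

```python
# Illustrative sketch only: RapidGPT's internal architecture is not public.
# LocalHDLModel, CloudChatModel, and HybridRouter are hypothetical names.

class LocalHDLModel:
    """Stand-in for a domain-specific HDL model running on-premise."""
    def complete(self, hdl_fragment: str) -> str:
        # A real model would return a genuine completion; this is a stub.
        return hdl_fragment + "  // ...completion from local HDL model"

class CloudChatModel:
    """Stand-in for a general-purpose chatbot reached over the network."""
    def answer(self, question: str) -> str:
        return f"[cloud answer to: {question}]"

class HybridRouter:
    """Routes requests so design sources stay on-premise while
    general questions can still use the more capable cloud model."""
    def __init__(self, local: LocalHDLModel, cloud: CloudChatModel):
        self.local = local
        self.cloud = cloud

    def handle(self, request: str, contains_design_source: bool) -> str:
        if contains_design_source:
            # Proprietary HDL never leaves the premises.
            return self.local.complete(request)
        return self.cloud.answer(request)

router = HybridRouter(LocalHDLModel(), CloudChatModel())
print(router.handle("module counter(input clk);", contains_design_source=True))
print(router.handle("What is a FIFO?", contains_design_source=False))
```

The design choice this illustrates is the one the spokesperson describes: routing by data sensitivity, so the confidentiality concerns raised above are addressed without giving up access to a general-purpose model.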
Rapid Silicon is a startup that was founded in 2020 by chairman and CEO Naveed Sherwani.
Related links and articles:
ChatGPT leaking Samsung chip secrets is iceberg’s tip
SpiNNaker2 spiking neural network project gets €2.5 million
Microsoft announces multibillion investment in OpenAI
ChatGPT gets its Wolfram superpowers