The tech behind Quantum Generative AI

Feature articles | By Nick Flaherty

Quantum computing giant Quantinuum has developed a quantum AI framework, called Gen QAI, that uses quantum-generated data for generative AI.

“We are at one of those moments where the hypothetical is becoming real and the breakthroughs made possible by the precision of this quantum-generated data will create transformative commercial value across countless sectors. Gen QAI is a direct result of our full-stack capabilities and our leadership in hybrid classical-quantum computing, delivering an entirely new approach that stands to revolutionize AI,” said Dr. Raj Hazra, President and CEO of Quantinuum.

This follows a deal with SoftBank to use the technology in Asia. SoftBank is also leading the Stargate project in the US to roll out datacentres for traditional generative AI. Competitor IonQ is also working on ways to use its quantum computers to run generative AI models on smaller systems with lower power consumption.

One of the key collaborations is with Italian motor and control system maker HPE Group, which will use the quantum generative AI technology in automotive applications.

“At HPE, we have a long standing tradition of employing cutting-edge technologies for our clients in the motorsport industry. Our collaboration with Quantinuum will leverage quantum generated data for applications such as battery development, aerodynamic optimization and fuel innovation,” said Enzo Ferrari, Executive Vice President of HPE Group. 

The Quantinuum technology works by translating key innovations in natural language processing, such as word embeddings, recurrent neural networks and transformers, into the quantum realm. The goal is not just to port existing classical techniques onto quantum computers but to develop new approaches that take advantage of quantum hardware, particularly to reduce power consumption.

This means avoiding the temptation to take the maths from a classical model and implement it directly on a quantum computer; instead, the models are built to exploit entanglement and interference rather than mimic classical algorithms.

A key example of this is the development of quantum recurrent neural networks (RNNs). RNNs are commonly used in classical NLP to handle tasks such as text classification and language modeling. 
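As a point of reference for what is being translated into the quantum domain, here is a minimal sketch of a classical recurrent cell for binary text classification in plain NumPy. The dimensions, weights and toy data are illustrative assumptions, not anything from Quantinuum's stack.

```python
import numpy as np

# Minimal classical RNN cell for binary sequence classification
# (illustrative sketch only; sizes and data are made up for the example).
rng = np.random.default_rng(0)
embed_dim, hidden_dim = 8, 16

Wx = rng.normal(scale=0.1, size=(hidden_dim, embed_dim))   # input weights
Wh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights
w_out = rng.normal(scale=0.1, size=hidden_dim)             # readout weights

def rnn_classify(token_embeddings):
    """Run the recurrence over a sequence and return P(positive)."""
    h = np.zeros(hidden_dim)
    for x in token_embeddings:            # one step per token
        h = np.tanh(Wx @ x + Wh @ h)      # update hidden state
    logit = w_out @ h
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid readout

# Toy "review" of five tokens, each already mapped to an 8-dim embedding.
review = rng.normal(size=(5, embed_dim))
print(f"P(positive) = {rnn_classify(review):.3f}")
```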

Quantinuum's H2 quantum computer will be used to generate data to train AI systems, significantly improving the fidelity of AI models so they can tackle challenges previously deemed unsolvable. The company plans to use its next-generation Helios system, due in the middle of 2025, to extend the performance.

The team at Quantinuum developed a quantum version of the RNN using parameterized quantum circuits (PQCs). PQCs allow for hybrid quantum-classical computation, where quantum circuits process information and classical computers optimize the parameters controlling the quantum system.

In a recent experiment, the team used a quantum RNN to perform a standard NLP task: classifying movie reviews from Rotten Tomatoes as positive or negative. Remarkably, the quantum RNN performed as well as classical RNNs, GRUs, and LSTMs, using only four qubits. This shows that quantum models can achieve competitive performance using a much smaller vector space, and it demonstrates the potential for significant energy savings in the future of AI.
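The sketch below illustrates the kind of hybrid quantum-classical loop described above, using PennyLane, a generic quantum machine learning library chosen here purely for illustration. The four-qubit circuit layout, feature encoding, toy data and labels are assumptions for the example, not Quantinuum's published quantum RNN.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def pqc(params, features):
    # Encode classical features as single-qubit rotations.
    for i in range(n_qubits):
        qml.RY(features[i], wires=i)
    # Parameterized, entangling layer: the quantum part of the hybrid model.
    for i in range(n_qubits):
        qml.RY(params[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # Read out one expectation value as the classifier score (in [-1, 1]).
    return qml.expval(qml.PauliZ(0))

def cost(params, X, y):
    # Classical loss computed over quantum circuit outputs (+1/-1 labels).
    loss = 0.0
    for x, target in zip(X, y):
        loss = loss + (pqc(params, x) - target) ** 2
    return loss / len(X)

# Toy data: two 4-dim feature vectors with opposite labels.
X = np.array([[0.1, 0.5, 0.2, 0.9], [1.2, 0.3, 0.8, 0.4]], requires_grad=False)
y = np.array([1.0, -1.0], requires_grad=False)

# Classical optimizer tunes the parameters that control the quantum circuit.
params = np.array([0.01] * n_qubits, requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
for step in range(50):
    params = opt.step(lambda p: cost(p, X, y), params)

print("trained params:", params)
print("final cost:", cost(params, X, y))
```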

Quantinuum has also developed Quixer, a quantum version of the transformer model that underpins many of today's large language models (LLMs) running on Nvidia GPUs.

By using quantum algorithmic primitives, Quixer is optimized for quantum hardware, making it highly qubit efficient. In a recent study, Nikhil Khatri and Dr. Gabriel Matos applied Quixer to a realistic language modeling task and achieved results competitive with classical transformer models trained on the same data. 

This also marks the first time a quantum machine learning model has been applied to language using a realistic dataset rather than a sample one.

Tensor networks are also a vital part of AI frameworks, used to perform tasks like sequence classification, where the goal is to classify sequences of words or symbols based on their meaning. These have been run on the Quantinuum System Model H1 quantum computer, the first time a scalable model of this kind has been run on quantum hardware.
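To illustrate the idea, the following classical sketch treats a sequence classifier as a matrix-product (tensor-train) contraction: each symbol contributes a small tensor, and contracting along the sequence yields class scores. The shapes and data are illustrative assumptions, and this is a classical toy rather than the model run on the H1 hardware.

```python
import numpy as np

# Illustrative matrix-product-style sequence classifier (assumed toy setup).
rng = np.random.default_rng(1)
vocab_size, bond_dim, n_classes = 10, 4, 2

# One (bond_dim x bond_dim) tensor per vocabulary symbol.
site_tensors = rng.normal(scale=0.5, size=(vocab_size, bond_dim, bond_dim))
# Boundary vector and a readout matrix mapping the contracted state to classes.
left = rng.normal(size=bond_dim)
readout = rng.normal(size=(n_classes, bond_dim))

def classify(sequence):
    """Contract the tensor train along the sequence and return class probabilities."""
    state = left
    for symbol in sequence:
        state = state @ site_tensors[symbol]   # contract one site at a time
    scores = readout @ state
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                     # softmax over the two classes

print(classify([3, 1, 4, 1, 5]))               # toy sequence of symbol ids
```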

QAI power consumption

The company has also compared the power consumption of classical AI to the QAI version. In some benchmarks, the quantum computer used 30,000x less energy to complete a task than Frontier, the classical supercomputer at the Oak Ridge National Laboratory in the US. That advantage scales: Quantinuum sees it as generally more efficient to use around 100 qubits than 10^18 classical bits, which is equivalent to roughly 3x10^16, or around 30 peta, 32-bit floating-point values.
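For context on those figures, a quick back-of-the-envelope conversion using only the numbers quoted above shows how 10^18 bits maps to 32-bit floating-point values:

```python
# Quick check of the figures quoted above (the article's numbers, not an
# independent benchmark): 10^18 classical bits expressed as 32-bit floats.
classical_bits = 10**18
bits_per_float32 = 32
float32_values = classical_bits / bits_per_float32
print(f"{float32_values:.2e} 32-bit values")  # ~3.1e16, i.e. around 30 peta-values
```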

Quantum models tend to require significantly fewer trainable parameters than classical models with billions of parameters, using superposition to achieve comparable performance from a much smaller parameter count. This could drastically reduce the energy and computational resources required to run these models.

www.quantinuum.com

 

 
