
Generative AI drives method for bug-free software

Researchers in the US have developed a method that uses large language models to automatically verify software, succeeding nearly 66% of the time.
The team of computer scientists, led by the University of Massachusetts Amherst, developed a method for automatically generating whole proofs that verify the underlying code is correct, a prerequisite for bug-free software.
This new method, called Baldur, uses large language models (LLMs) and, when combined with the state-of-the-art tool Thor, yields unprecedented efficacy of nearly 66%. The team was recently awarded a Distinguished Paper award at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering.
“We have unfortunately come to expect that our software is buggy, despite the fact that it is everywhere and we all use it every day,” says Yuriy Brun, professor in the Manning College of Information and Computer Sciences at UMass Amherst.
Software bugs can be catastrophic, whether they open security breaches or affect the precision software used for space exploration or for controlling healthcare devices.
Formal methods for bug-free software can generate a mathematical proof showing that the code does what it is expected to do, and then use a theorem prover to make sure that the proof is also correct.
But manually writing these proofs is incredibly time-consuming and requires extensive expertise. “These proofs can be many times longer than the software code itself,” says Emily First, the paper’s lead author, who completed this research as part of her doctoral dissertation at UMass Amherst.
A large language model is a possible way to generate such proofs automatically.
However, “one of the biggest challenges with LLMs is that they’re not always correct,” says Brun. “Instead of crashing and letting you know that something is wrong, they tend to ‘fail silently,’ producing an incorrect answer but presenting it as if it’s correct. And, often, the worst thing you can do is to fail silently.”
The team used Minerva, an LLM pretrained on a large corpus of natural-language text and fine-tuned on 118GB of scientific papers and webpages containing mathematical expressions.
The LLM was then fine-tuned further on Isabelle/HOL, the language in which the mathematical proofs are written. Baldur then generated an entire proof and worked in tandem with the theorem prover to check its work.
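For a sense of what such proofs look like, here is a toy Isabelle/HOL lemma with its proof script (an illustrative example, not one taken from the paper; real proofs are typically far longer than the code they verify):

```isabelle
(* List reversal distributes over append, with operands swapped.
   The proof is by induction on xs, then automatic simplification. *)
lemma rev_append: "rev (xs @ ys) = rev ys @ rev xs"
  by (induct xs) auto
```

Generating a whole script like this in one shot, rather than one proof step at a time, is what distinguishes Baldur's approach.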
When the theorem prover caught an error, it fed the proof, along with information about the error, back into the LLM so that it could learn from its mistake and generate a new, hopefully error-free, proof.
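The generate-check-repair loop described above can be sketched as follows. This is a minimal illustration of the idea, not Baldur's implementation; `llm_generate` and `prover_check` are hypothetical stand-ins for the fine-tuned LLM and the Isabelle/HOL theorem prover.

```python
def prove_with_repair(theorem, llm_generate, prover_check, max_attempts=3):
    """Try to produce a machine-checked proof. When the prover rejects a
    proof, feed the failed proof and the error message back to the model
    so it can attempt a repaired proof."""
    proof, error = None, None
    for _ in range(max_attempts):
        # The first attempt sees only the theorem; repair attempts also
        # see the previous failed proof and the prover's error message.
        proof = llm_generate(theorem, failed_proof=proof, error=error)
        ok, error = prover_check(theorem, proof)
        if ok:
            return proof  # proof accepted by the theorem prover
    return None  # no checked proof found within the attempt budget
```

Because the prover checks every candidate, an incorrect LLM output is caught rather than silently accepted, addressing the "fail silently" problem Brun describes.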
This process yields a remarkable increase in accuracy. Thor, the previous state-of-the-art tool for automatically generating proofs, can generate proofs 57% of the time. When Baldur (Thor’s brother, according to Norse mythology) is paired with Thor, the two can generate proofs 65.7% of the time.
Though a margin of error remains, Baldur is by far the most effective and efficient way yet devised to automatically verify software correctness.
