AI tool cuts 3nm chip design times
Cadence Design Systems has developed a machine learning tool to accelerate the design of complex leading edge chips on 5nm and 3nm process technologies.
The Cerebrus Intelligent Chip Explorer tool works across the synthesis, floor planning and implementation tools, using a reinforcement learning (RL) algorithm to test out many different scenarios and converge on a version that meets the performance, power and area constraints set by the designer. Running the tool in the cloud allows more processors to be used, speeding up the implementation and cutting the design time.
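To illustrate the idea in the abstract, the sketch below shows the general shape of such a scenario-exploration loop. It is purely conceptual: run_flow, PPATarget and ppa_score are hypothetical stand-ins, not part of any Cadence tool or API, and a real RL agent would use past scores to bias which settings it tries next rather than sampling at random.

```python
# Conceptual sketch only: a scenario-exploration loop of the kind described above.
# run_flow() stands in for a full synthesis/floorplan/implementation run that
# returns measured PPA; none of these names are Cadence APIs.

from dataclasses import dataclass
import random

@dataclass
class PPATarget:
    fmax_mhz: float   # minimum clock frequency the designer wants
    power_mw: float   # maximum acceptable total power
    area_mm2: float   # maximum acceptable area

def ppa_score(result: dict, target: PPATarget) -> float:
    """Higher is better: reward meeting or beating each designer constraint."""
    return (result["fmax_mhz"] / target.fmax_mhz
            + target.power_mw / result["power_mw"]
            + target.area_mm2 / result["area_mm2"])

def explore(run_flow, search_space: dict, target: PPATarget, n_scenarios: int = 50):
    """Try many tool-setting scenarios and return the best one found."""
    best = None
    for _ in range(n_scenarios):
        # pick one value for every tool knob (effort levels, placement options, ...)
        settings = {knob: random.choice(options) for knob, options in search_space.items()}
        result = run_flow(settings)   # synthesis -> floorplan -> implementation
        score = ppa_score(result, target)
        if best is None or score > best[0]:
            best = (score, settings, result)
    return best
```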
“Optimising the full flow is important, as what you do in synthesis impacts the implementation,” said Rod Metcalfe, group director of machine learning product management at Cadence.
The tool runs on multiple processors in the cloud to boost design productivity by a factor of ten and shorten design times to days rather than months. “This is a way of getting more productivity from an existing team,” said Metcalfe. “Customers need more capacity in existing teams and they just can’t hire the skilled engineers; it is a real problem.”
Cadence has worked with Renesas on the tool flow, and cites a 5nm mobile system-on-chip design where the tool improved the performance by 14 percent to 420MHz, reduced the leakage power by 7 percent to 26mW and cut the total power by 3 percent to 62mW, all with a 5 percent reduction in size. The power, performance and area (PPA) optimisation took just ten days, a process that can take months for a leading-edge chip design with billions of transistors.
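For context, simple arithmetic on those quoted figures implies the approximate pre-optimisation baselines, assuming each percentage is measured against the original design:

$$
f_{\text{base}} \approx \frac{420\,\text{MHz}}{1.14} \approx 368\,\text{MHz},\qquad
P_{\text{leak,base}} \approx \frac{26\,\text{mW}}{0.93} \approx 28\,\text{mW},\qquad
P_{\text{total,base}} \approx \frac{62\,\text{mW}}{0.97} \approx 64\,\text{mW}
$$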
“Following this success, the new approach will be adopted in the development of our latest design projects,” said Satoshi Shibatani, director, Digital Design Technology Department, Shared R&D EDA Division at Renesas.
“There has to be a number of things coming together to make this happen,” said Metcalfe. “Cerebrus uses distributed compute that is fault tolerant, where we understand how to restart jobs. Both those worlds have come together, the machine learning and the massive compute, both in the cloud and on-premises.”
Cerebrus is designed to be scalable, so it can easily run on hundreds of cores; Cadence used 20 machines with 320 cores to run the scenarios that generated the data on the 5nm system-on-chip design. “We also have the system up and running on cloud providers, on AWS for example,” he said.
The machine learning algorithm learns as the design progresses.
“You don’t have to run every single scenario through the whole flow, and it learns as the flow is running. If something is not converging we can stop the flow and reuse the flow for another scenario, but you do need to allow the machine to learn for a period of time to push through things like local minima,” he said.
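A very rough sketch of the scheduling pattern this describes, using only Python’s standard library in place of Cadence’s own distributed infrastructure; run_scenario is an assumed user-supplied callable, and the early-stop behaviour is modelled simply by having a run return None when it gives up:

```python
# Rough sketch of distributed scenario runs with restart-on-failure and early
# termination of non-converging runs. Not Cadence code: run_scenario is an
# assumed callable supplied by the caller.

from concurrent.futures import ProcessPoolExecutor, as_completed

def run_all(run_scenario, scenarios, max_workers=320):
    """Run candidate scenarios in parallel and collect the ones that finished.

    run_scenario(settings) is assumed to execute the flow for one scenario and
    return its final PPA, or None if it detected that it was not converging
    and stopped early so the compute could be reused for another scenario.
    """
    results = []
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(run_scenario, s): s for s in scenarios}
        for fut in as_completed(futures):
            try:
                outcome = fut.result()
            except Exception:
                # fault tolerance: restart a failed job once, as described above
                outcome = run_scenario(futures[fut])
            if outcome is not None:
                results.append(outcome)
    return results
```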
Cerebrus understands the different options in the Cadence tool flow and how to drive those tools in different directions, said Metcalfe, and it comes with a pre-packaged set of default constraints to get started quickly. Cadence is also developing agents that operate in each of the individual tools, connected to the overall flow optimisation.
“You have the flow optimisation, and on top of that you have higher level optimisation such as the floor planning. We are working on other high-level technology,” he said.
The scenario data from each run in each tool is stored in a central database that all the tools have access to, and Cerebrus generates a machine learning model which learns from the design. At the end of a project, this results in a persistent pre-trained model that can be used by the next design. “We see this helping an engineering team with the next design,” he said.
“The scenario information has to be kept long enough to train the model, but you don’t have to keep all the scenario information, and we have built a design cockpit to look at the numbers from the scenarios. For example, out of 200 scenarios you might want to keep 20 to learn from them. Cerebrus tells you the best scenario and you can keep that,” he said.
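As a minimal illustration of that pruning step, the snippet below ranks completed scenario records by a PPA score, keeps the best N for training and archives them with the trained model for reuse on the next design. The record layout, scoring function and file names are assumptions for the example, not Cerebrus internals.

```python
# Illustrative only: keep the best N of many scenario records and persist them
# together with the trained model so the next project can start from them.

import json
import pickle

def prune_scenarios(scenarios, keep=20, score=lambda s: s["ppa_score"]):
    """Return the `keep` best scenario records, best first."""
    return sorted(scenarios, key=score, reverse=True)[:keep]

def archive(scenarios, model, path_prefix="flow_opt"):
    """Store the pruned scenario data and the model they trained."""
    with open(f"{path_prefix}_scenarios.json", "w") as f:
        json.dump(scenarios, f, indent=2)
    with open(f"{path_prefix}_model.pkl", "wb") as f:
        pickle.dump(model, f)

# e.g. keep 20 of 200 completed runs, then archive them with the trained model:
# best = prune_scenarios(all_runs, keep=20)
# archive(best, trained_model)
```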
Cerebrus works with the Genus Synthesis tool, Innovus Implementation System, Tempus Timing Signoff Solution, Joules RTL Power Solution, Voltus IC Power Integrity Solution and Pegasus Verification System.
These tools have been validated on the 3nm process at Samsung Foundry.
“As Samsung Foundry continues to deploy up-to-date process nodes, the efficiency of our Design Technology Co-Optimization (DTCO) program is very important, and we are always looking for innovative ways to exceed PPA in chip implementation,” said Sangyun Kim, vice president, Design Technology at Samsung Foundry. “As part of our long-term partnership with Cadence, Samsung Foundry has used Cerebrus and the Cadence digital implementation flow on multiple applications. We’ve observed more than an 8 percent power reduction on some of our most critical blocks in just a few days versus many months of manual effort. In addition, we are using Cerebrus for automated floorplan power distribution network sizing, which has resulted in more than 50 percent better final design timing. Due to Cerebrus and the digital implementation flow delivering better PPA and significant productivity improvements, the solution has become a valuable addition to our DTCO program.”