“For lower nodes you have to take into account the variability of the process, and the Liberty format has added statistical extensions, so you not only need to compute the nominal value but also the statistical components with a Monte Carlo analysis. A simple calculation shows that to provide enough data you need a trillion simulations. This takes too long and can introduce lots and lots of errors,” he said. “There are simplifications that can be done, but it’s clearly a bottleneck and it’s on the critical path of the digital design flow.”
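A toy sketch of the statistical characterisation he describes: each timing arc is re-simulated many times with process parameters drawn at random, which is why the sample counts multiply out to the trillions across a full library. The delay model, sensitivity, and sigma below are illustrative numbers, not values from any real process.

```python
import random
import statistics

def cell_delay(vth_shift_mv):
    """Toy delay model: a 10 ps nominal delay plus a 0.5 ps/mV sensitivity
    to threshold-voltage shift (both values are illustrative only)."""
    return 10.0 + 0.5 * vth_shift_mv

def monte_carlo_delay(n_samples, sigma_mv=2.0, seed=0):
    """Estimate the mean and standard deviation of the delay under process
    variation by sampling the threshold-voltage shift from a normal
    distribution, as in a brute-force Monte Carlo characterisation run."""
    rng = random.Random(seed)
    samples = [cell_delay(rng.gauss(0.0, sigma_mv)) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_d, sigma_d = monte_carlo_delay(10_000)
```

With 10,000 samples per arc, per corner, per cell, the total SPICE budget for a modern library quickly reaches the scale he quotes, which is the motivation for replacing raw sampling with learned models.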
“The objective we have set for the end of the programme in 2022 is to use machine learning to get an order of magnitude faster throughput in characterisation, and also to provide easy-to-use library verification tools so that users can explore the libraries, explore gigabytes of complex data, and seamlessly debug any errors,” he said. “We have a little over two years to achieve this.”
In 2018, Mentor acquired the Solido Characterization Software Suite, whose machine learning technologies increase the throughput of library characterisation by orders of magnitude while producing accurate Liberty files and statistical data. It also provides tools and a designer-centric user interface that enable the exhaustive verification of characterised Liberty files.
“Using machine learning we can select a smart subset of corners to simulate, and from that build an ML model across the full range of corners, which gives a 100x acceleration,” said Colin-Madan at Mentor.
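A minimal sketch of the surrogate-modelling idea: simulate only a handful of voltage/temperature corners, fit a simple model to them, then predict the rest of the corner grid from the fit instead of running the simulator. The delay formula, basis functions, and corner values are hypothetical stand-ins, not Solido's actual method.

```python
import numpy as np

def simulate_delay(v, t):
    """Stand-in for one SPICE characterisation run: delay falls with supply
    voltage and rises with temperature (an illustrative model only)."""
    return 20.0 / v + 0.01 * t

# The "smart" subset of corners actually simulated (hypothetical values).
sampled = [(0.7, -40), (0.7, 125), (0.9, 25), (1.1, -40), (1.1, 125), (0.9, 125)]
X = np.array([[1.0, 1.0 / v, t] for v, t in sampled])
y = np.array([simulate_delay(v, t) for v, t in sampled])

# Least-squares fit of a simple basis model to the sampled corners.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_delay(v, t):
    """Predict the delay at an unsimulated corner from the fitted model."""
    return coeffs @ np.array([1.0, 1.0 / v, t])
```

The acceleration comes from the ratio of corners predicted to corners simulated; in a real flow the model family is far richer and the sampled corners are chosen adaptively.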
“Finding outliers or inconsistencies in the vast amount of data is impossible by navigating text files, so people use scripts with rule-based checks. Building machine learning models helps to extract everything, finding new classes of problem that rule-based tools cannot find,” he said.
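To illustrate the contrast with hand-written rule checks, here is a small statistical outlier detector: it flags values that deviate strongly from the bulk of a data column without anyone having to write a per-attribute threshold rule. The modified z-score based on the median absolute deviation is a standard robust technique; the delay column below is a hypothetical example, not real Liberty data.

```python
import statistics

def mad_outliers(values, threshold=3.5):
    """Flag indices whose modified z-score, based on the median absolute
    deviation (MAD), exceeds the threshold. Unlike a fixed rule, the check
    adapts to whatever the bulk of the data looks like."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []  # no spread at all, so nothing stands out
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > threshold]

# Hypothetical delay column from a Liberty table, with one corrupted entry.
delays = [10.1, 10.2, 10.0, 10.3, 10.1, 55.0, 10.2]
flagged = mad_outliers(delays)  # index of the 55.0 ps entry
```

Real ML-based library verification learns much richer structure across cells, arcs, and corners, but the principle is the same: the model defines "normal" from the data itself rather than from a rule list.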
Another part of the collaboration is the graphical interface and how to present results so they are easily understood.
“We believe that machine learning is the key technology for the future of a lot of tools and so far what we have is unique. The full semiconductor industry will benefit from the project and we are building on a very solid foundation,” said Colin-Madan.
Cloud computing is also key to increasing the performance of library characterisation by increasing the number of CPUs that can be used.
“The current technology is already scalable to large designs with hundreds of thousands of parameters,” said Talbot. “What we aim to do is, instead of using several thousand CPU cores for four weeks, to decrease that to a few days. It’s orthogonal – we want to benefit from the scalability in the cloud and use the active learning – we will use both.”
“Everything is running on CPUs, we are not using GPU acceleration,” said Talbot. “We are surveying what other types of accelerator can bring in general, but we have not implemented anything. What makes the strength of these tools is the deep integration: you need a very efficient software infrastructure across a large number of CPUs, deeply integrated with the SPICE simulators and a wide variety of machine learning models, so the benefit of integration is speed.”
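The scaling he describes is embarrassingly parallel: each corner characterisation is an independent job, so wall time shrinks roughly in proportion to the number of workers until the job pool runs dry. A minimal sketch of the fan-out pattern, with a toy stand-in for the simulator (in a real flow each task would launch an external SPICE process, so a thread per in-flight job is enough to manage them):

```python
from concurrent.futures import ThreadPoolExecutor

def run_corner(corner):
    """Stand-in for launching one characterisation job at a given
    voltage/temperature corner (illustrative delay model only)."""
    v, t = corner
    return corner, 20.0 / v + 0.01 * t

# A small hypothetical corner grid; real libraries have vastly more jobs.
corners = [(v, t) for v in (0.7, 0.8, 0.9, 1.0, 1.1) for t in (-40, 25, 125)]

# Fan the independent jobs out across a pool of workers and collect results.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(run_corner, corners))
```

In the cloud the same pattern is applied at machine level rather than thread level, which is what makes the CPU count, and hence the turnaround time, elastic.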
“Mentor’s research and industrial partnership with STMicroelectronics has established a long and successful track record of collaboration, resulting in advancements to Mentor’s tools, which in turn deliver immediate value to STMicroelectronics and other customers,” said Ravi Subramanian, Ph.D., senior vice president, IC Verification Solutions for Mentor. “Mentor is pleased to expand on this successful collaboration with the Nano 2022 program, and we look forward to working with STMicroelectronics to achieve our mutual goals.”