The acceleration of EDA to the cloud in 2020

Business news
By Ally Winning

If you are an engineer of a certain age, you will remember the ‘engineering workstation’. Those high-cost, high-performance machines were in short supply and were used to run the leading-edge electronic design automation (EDA) tools.

But the workstation also drove chip and graphics architecture. Silicon Graphics drove the MIPS architecture (and was subsumed into Cray and now HP Enterprise), Sun Microsystems had its own Sparc architecture before the takeover by Oracle, and Digital Equipment (DEC, RIP) had its Alpha chip and the Arm-based StrongARM, developed by a team whose members went on to Apple and arguably laid the groundwork for the M1 chip powering the latest MacBook.

However, these mighty machines (for their time) were overtaken by the network. Client-server architectures allowed higher performance servers to run the EDA tools.

These servers were consolidated into server farms, with racks of processors on-site, and now we are seeing the next stage in the evolution into the cloud.

The ‘big three’ EDA suppliers have had the cloud in their strategic roadmaps for several years, and this work has come to fruition this year. Earlier this month, Arm showed the importance of the cloud.

“Arm is moving the majority of its EDA workloads to AWS as part of our effort to reduce our global datacentre footprint by at least 45% and our on-premises compute capabilities by 80% as we complete our migration to AWS,” said Rene Haas, President of the IP Group at Arm. “We have already realized a 6x improvement in performance time for EDA workflows on AWS and see the potential for increasing throughput by 10x.”

As part of this move, AWS used the VCS Fine-Grained Parallelism (FGP) technology from Synopsys running on Arm-based Graviton2 servers. This enables accelerated development and verification of breakthrough connectivity technology and SoCs.

The VCS native integration with Synopsys Verification IP and Verdi advanced debug solutions enables design teams to achieve higher productivity for accelerated verification closure with superior hardware price/performance.

“AWS has been an early adopter of Synopsys’ functional verification solutions to accelerate the development of our next-generation datacentre chips,” said David Brown, Vice President of Amazon EC2. “Using Synopsys verification tools on AWS Graviton2 allows us to perform full chip simulation at lower cost.”

“As SoC design complexity grows, so does the number of simulation cycles required, which increases the demand for more compute power,” said Sandeep Mehrotra, VP of engineering in the Verification Group at Synopsys. “Our verification technology collaboration enabled AWS to perform full-chip simulation for their datacentre SoCs and find bugs faster. With VCS optimized for multi-core and many-core Arm-based CPU platforms in the cloud, users are able to move simulation workloads to the cloud, enabling faster time-to-market.”

This follows Cadence completely re-architecting its simulation and analysis tools with a technology it calls CloudBurst.

“Why this is the right time is the fact that CloudBurst really is the answer for EMI testing without spending the money on a chamber, using a 500-core simulator,” said Brad Griffin, Product Management Group Director, Multi-Physics System Analysis at the Cadence Custom IC & PCB Group. “We can set all the software up running in an environment where everything is running properly, and customers can log in and choose how many cores to use to get the simulation results back. This gives the same results as if you built a prototype and put it in a test chamber,” he said.

Mentor, now Siemens EDA, has also been moving its EDA tools to the cloud. Working with STMicroelectronics, it is accelerating the characterisation of libraries, scaling up its cloud processing to cut the time taken from weeks to hours.

But this move to the cloud is also about adding new capabilities, particularly machine learning, as the Mentor/ST project also shows.

This ability to add in other capabilities such as machine learning means the hybrid cloud is set to go the way of the workstation. There will still be a need for local processing, particularly for low-latency design flows such as physical emulation on FPGAs, but even this is moving into the cloud.

Digital twin

But the cloud is more than just accelerating point tools in an EDA flow. Siemens has been pushing the idea of the digital twin as the heart of the complete design process, from chip to software to end system.

“When we first started talking about digital twins, the reaction was that we simulate designs all the time. That’s part of what enables a digital twin, but there’s not just the digital twin of the design but of the manufacturing process and the way the device is used that provides feedback,” said Joe Sawicki at Siemens EDA.

Infineon is using digital twin technology for its automotive chip designs.

“What is changing is we are moving away from hardware boards; a lot of these things are done in simulation with online tools that allow people to be faster with early prototyping,” said Hans Adlkofer, Senior Vice President, Automotive System Group at Infineon Technologies. “What is more and more of a challenge for developers is the software.”

Arm has already used the technology to model an entire car.

“The digital twin is not just useful in autonomous vehicles,” said Sawicki. “In 5G we are looking to model the behaviour of the edge and combine that with the cell tower, the channel and the interface up to the cloud so we can model the entire stack and look at how the 5G applications will perform.”

This of course brings challenges further down the tool chain, particularly in verification and validation.

The consolidation in the cloud is just the start. The ability to accelerate individual tools will speed up verification and improve quality at a time when the size and complexity of chip designs is booming, with the coming 3nm chips due in production in 2024. The 2nm generation will bring even more challenges for tools.

The ability to combine tools to create a digital twin is even more exciting. Building a digital twin for each new design is not cost effective, so the twin becomes the platform for a wide range of products, both in development and in operation. The digital twin is then used throughout the lifetime of the product, supporting the development and testing of software upgrades and monitoring performance to provide predictive maintenance that highlights problems before they occur in the field.

“For example, when the device in the field has issues, these can be captured and fed back into the digital twin,” said Sawicki. This is what led to the acquisitions of UltraSoC and Moortec this year to provide data from chips back to the digital twin models.

All this means that innovative new tools will of course be cloud-native, designed from the start to make use of the phenomenal compute resources the cloud provides.

But the cloud also opens up new directions for tools through interconnection. Quantum computers are pretty much only accessible via the cloud, and are supported by classical cloud computing. Annealing algorithms, as well as place and route, explore the design space looking for local minima, and this is well suited to quantum computing. That hardware is reachable through the AWS Braket service and the quantum computing offerings from Microsoft Azure and Google Cloud.
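To make the annealing idea concrete, here is a minimal sketch of *classical* simulated annealing applied to a toy one-dimensional cell placement problem, minimising total net span (a crude stand-in for wirelength). It is purely illustrative and not any vendor's algorithm; the function names, net model, and cooling schedule are all assumptions for the example. Quantum annealers tackle the same kind of energy-minimisation search in hardware.

```python
import math
import random

def wirelength(placement, nets):
    """Total span of each net: for every net (a set of cell names),
    the distance between its leftmost and rightmost cells."""
    total = 0
    for net in nets:
        positions = [placement.index(cell) for cell in net]
        total += max(positions) - min(positions)
    return total

def anneal_placement(cells, nets, steps=5000, t0=2.0, cooling=0.999, seed=0):
    """Toy simulated annealing for 1-D placement: repeatedly swap two
    cells, accepting a worse placement with probability exp(-delta/T)
    so the search can climb out of local minima as T cools."""
    rng = random.Random(seed)
    placement = list(cells)
    cost = wirelength(placement, nets)
    t = t0
    for _ in range(steps):
        i = rng.randrange(len(placement))
        j = rng.randrange(len(placement))
        placement[i], placement[j] = placement[j], placement[i]
        new_cost = wirelength(placement, nets)
        delta = new_cost - cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cost = new_cost  # accept the move (possibly uphill)
        else:
            placement[i], placement[j] = placement[j], placement[i]  # undo
        t *= cooling  # geometric cooling schedule
    return placement, cost

# Example: six cells connected in a ring of two-pin nets.
cells = list("abcdef")
nets = [{"a", "b"}, {"b", "c"}, {"c", "d"}, {"d", "e"}, {"e", "f"}, {"f", "a"}]
best_placement, best_cost = anneal_placement(cells, nets)
```

The accept-uphill-moves step is the essence of annealing: a purely greedy swap loop would get stuck in the first local minimum it found, which is exactly the limitation that makes the technique interesting to map onto quantum hardware.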
