
Why cold computing matters for the next generation of computing


Interviews | By Nick Flaherty



Power consumption is the major limiting factor for next-generation computing systems, says Craig Hampel, chief scientist at Rambus. The company made its name developing memory subsystems and is now working with Microsoft on new low-temperature memory technologies, which he says can deliver higher performance and lower power and extend Moore’s Law by up to a decade.

“As you decrease the temperature and increase the compute density, the enemy is thermal noise and self-induced voltage-based noise, so temperature is the free variable,” he said. He points to Microsoft putting datacentres into the sea as a first step in reducing the temperature.

“You have datacentres now in the sea, and that’s impressive. The next point is cryogenics, and here liquid nitrogen is extremely cost effective, so that’s where some of our work is, at 77K, and CMOS and RAM have interesting properties at that temperature. There’s a 4 to 8x improvement in energy efficiency – usually leakage current prevents lower voltages, but we think at 77K we can get threshold voltages down to 0.4 to 0.6V and the leakage goes away, so you get 4 to 8 years of Moore’s Law scaling,” he said.
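The back-of-envelope physics behind that claim can be sketched briefly. A transistor’s subthreshold swing scales with kT/q, so at 77K devices turn off far more sharply and a lower threshold voltage no longer means runaway leakage, while dynamic energy scales roughly with the square of the supply voltage. The sketch below uses textbook constants and assumed voltages, not Rambus figures.

```python
# Rough, back-of-envelope sketch of the claim above. All numbers are
# illustrative textbook values and assumptions, not Rambus or Microsoft data.

import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q   = 1.602176634e-19  # electron charge, C

def subthreshold_swing_mv_per_decade(temp_k, ideality=1.3):
    """Subthreshold swing S = n * (kT/q) * ln(10), in mV per decade of current."""
    return ideality * (K_B * temp_k / Q) * math.log(10) * 1e3

for t in (300, 77):
    print(f"{t:>3} K: swing ~ {subthreshold_swing_mv_per_decade(t):.0f} mV/decade")
# ~77 mV/decade at 300 K vs ~20 mV/decade at 77 K: transistors turn off about
# 4x more sharply, so a lower threshold (0.4-0.6 V) no longer implies high leakage.

vdd_300, vdd_77 = 1.0, 0.5                   # assumed supply voltages
print(f"Dynamic energy ratio ~ {(vdd_300 / vdd_77) ** 2:.0f}x")  # CV^2 -> ~4x
```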

“The next place that you get even more radical power and computational scaling is at 4K with superconductors, so we use 77K as a step to true superconducting, and we are working with Microsoft on superconducting processors – perhaps with the memory subsystems at 77K. The extreme on that curve is quantum computers, which tend to be in the millikelvin range – we view thermal improvements as an enabler to extending computational density for the next 20 years or so.”


At 77K a true power advantage has to take the cost of the cooling into account, but thermal density then becomes the limit on computational density, and liquids such as nitrogen are far more efficient at carrying heat away. That means smaller data centres can be built and the communication systems can scale as the computing becomes denser, so there are a number of additional benefits that are orthogonal to the thermal energy consumption, he says.

“We believe paying for more cooling cost to improve the compute density is a key way forward,” he said. “At 4K there’s a 10,000x boost in performance for a 100x increase in power consumption.”
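A rough way to see why liquid nitrogen is “extremely cost effective” while 4K is reserved for the biggest wins is to compare the ideal cooling overhead at each temperature. The sketch below combines the figures quoted above with textbook Carnot thermodynamics; it is only indicative, since real cryocoolers fall well short of the Carnot limit.

```python
# Sanity check on the cooling economics. The 10,000x and 100x figures are the
# ones quoted above; everything else is generic thermodynamics, not Rambus data.

def carnot_watts_per_watt(t_cold_k, t_hot_k=300.0):
    """Ideal (Carnot) work needed to lift 1 W of heat from t_cold to room temperature."""
    return (t_hot_k - t_cold_k) / t_cold_k

device_perf_per_watt_gain = 10_000 / 100   # 100x at the device, per the quote
print(f"Device-level perf/W gain at 4 K: {device_perf_per_watt_gain:.0f}x")

# Ideal cooling overhead (real coolers need several times more than this):
for t_cold in (77.0, 4.0):
    print(f"Carnot overhead at {t_cold:>4.0f} K: "
          f"{carnot_watts_per_watt(t_cold):.0f} W per W of heat removed")
# Liquid nitrogen at 77 K costs roughly 3 W per W even in the ideal case, while
# 4 K costs ~74 W per W before cooler inefficiency - one reason 77 K serves as
# the stepping stone and 4 K is reserved for superconducting logic.
```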

The research projects are looking at how these environments can work together.  

“Our primary work is communication from the 4K environment to more conventional CMOS at 77K, so it’s about escaping the 4K environment – we are getting more exposure to the superconducting process and materials and experimenting there, but the primary work is communicating with the 4K system.

“The first challenge is that superconducting processors are very low energy,” he said. “A 2mV pulse, for example, is a picosecond or so, and there’s really no easy way to receive that with CMOS, so there’s some amplification and material science needed. This has to use really advanced signalling technologies from high-speed SERDES, with differential forward feedback and error correction and DSP-like technologies, to communicate between the 4K device and CMOS.”
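As a rough illustration of the DSP-style signalling involved, the sketch below shows a generic feed-forward equalizer of the kind used in high-speed SERDES receivers: an FIR filter sharpens a small, dispersed pulse before it is sliced back into bits. The channel model and tap values are invented for illustration and are not the Rambus link described here.

```python
# Minimal sketch of feed-forward equalization (FFE), the kind of DSP used in
# high-speed SERDES receivers. The channel and tap values below are made up
# for illustration; this is not the Rambus 4 K link design described above.

def equalize(samples, taps):
    """Apply an FIR feed-forward equalizer: y[n] = sum_k taps[k] * x[n-k]."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(taps):
            if n - k >= 0:
                acc += c * samples[n - k]
        out.append(acc)
    return out

def slice_bits(samples, threshold):
    """Recover bits by comparing each equalized sample to a threshold."""
    return [1 if s > threshold else 0 for s in samples]

# A tiny dispersed 'channel': each transmitted bit smears into its neighbours,
# standing in for the attenuation a millivolt-scale pulse sees leaving 4 K.
tx_bits = [1, 0, 1, 1, 0, 0, 1, 0]
channel = [0.6, 0.3, 0.1]                      # main tap plus trailing ISI
rx = [sum(channel[k] * (tx_bits[n - k] if n - k >= 0 else 0)
          for k in range(len(channel))) for n in range(len(tx_bits))]

taps = [1.4, -0.6, -0.1]                       # crude equalizer coefficients
print(slice_bits(equalize(rx, taps), threshold=0.3))  # matches tx_bits
```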

“Getting all the CMOS to work reliably at 77K is the second big problem,” he said. “We are partnering to help develop DRAM technologies and logic processes that operate at this temperature, so we are developing process files for how a conventional technology can be used. It’s essentially a new PDK [process design kit] for conventional processors, so it’s characterising the processors for these temperatures.”

Any digital function translates well, but the problems come with the analogue functions, which don’t work as they used to, so the PLLs and mixed-signal paths need to be redesigned. “We haven’t found anything that can’t be done, but there are some aspects of regulators and voltage generation that cannot lock or regulate correctly; it just requires a different point on the design curve,” he said.


“So 77K ends up being fairly conventional, but there’s a discontinuity at 4K,” he points out. “There are things we know don’t work down there, such as a PLL where there’s not enough gain in the feedback loop, and copper doesn’t superconduct, so it’s niobium and tantalum that are more attractive.”
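The loop-gain point can be seen in a toy first-order PLL model, sketched below with invented numbers rather than measurements of any cryogenic circuit: as device gain falls, the frequency range the loop can hold shrinks in proportion, and beyond a point it simply cannot lock.

```python
# Toy first-order PLL model of the loop-gain problem. With a combined phase
# detector/VCO gain K (Hz per radian of phase error), the steady-state phase
# error for a frequency offset df is df/K, and a simple sinusoidal phase
# detector can only hold lock while that error stays below about 1 radian.
# All numbers are invented for illustration, not measurements at 4 K.

def lock_status(freq_offset_hz, loop_gain_hz_per_rad):
    """Return (locks, steady-state phase error in radians) for a type-I loop."""
    phase_err = freq_offset_hz / loop_gain_hz_per_rad
    return abs(phase_err) < 1.0, phase_err

freq_offset = 1.0e5                    # Hz between reference and free-running VCO
for gain in (1.0e6, 2.0e5, 5.0e4):     # shrinking loop gain, as at low temperature
    locks, err = lock_status(freq_offset, gain)
    print(f"gain {gain:>9.0f} Hz/rad -> phase error {err:4.1f} rad, "
          f"{'locks' if locks else 'cannot lock'}")
```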

He believes this is one of the only options for future computation. “Most quantum machines need a conventional error correction processor near them, and that’s another aspect that’s driving this architecture,” he said. “The more likely transition is that 77K and 4K machines deploy next to the quantum machines.”

The value for Rambus is in providing the buffer technology for such systems, as chips or as IP, in the next few years.

“Today we have a significant offering in memory buffers – these get more challenging and valuable when they also translate between temperature domains, so primarily we would sell buffers that intermediate between superconducting domains and 77K – that’s the most natural approach.”

“We think that 77K DRAM subsystems will be thermally attractive and possible in three years or less,” said Hampel. “We are building prototypes today, and that can be deployed in 3 to 5 years if things go well. The superconducting processor still needs a lot of engineering, but if things go well it’s on the same timescale.”

www.rambus.com
