
Intel on Moore’s Law: ‘Setting the record straight’


Technology News
By Rich Pell



Intel recently held an event in San Francisco to give insights on how it proposes to advance Moore’s Law, with speakers variously explaining why “the Law” still matters to the world of computing, economic performance and beyond. On specifics, Mark Bohr, Intel Senior Fellow and director of process architecture and integration, discussed why the industry needs a standardised density metric. He argued that to level the playing field, customers should be able to compare various process offerings of a chip maker, as well as those of different chip makers.

Here, in full, are the summaries on both topics, as posted by Intel:

Moore’s Law: Setting the Record Straight

Advances in semiconductor manufacturing continue to make products better and more affordable
Stacy Smith, Intel’s executive vice president leading manufacturing, operations and sales, writes:

“We’ve heard a lot lately about Moore’s Law. Unfortunately, much of it is wrong. Some say that Moore’s Law doesn’t matter anymore and that it’s just a technical issue or a race between a few giant companies. Others say it’s become too expensive to pursue any further except in a few specialized niches. Others say that it’s dead. Let’s set the record straight.

First, Moore’s Law matters. A lot. Moore’s Law democratizes computing. It’s a pretty powerful law of economics: It says that by advancing semiconductor manufacturing capability at a regular cadence, we can bring down the cost of any business model that relies on computing. Imagine what would happen if other industries experienced innovation at the rate of Moore’s Law, i.e., a doubling of capability every two years.

Cars would be so fuel-efficient by now that one could drive the equivalent of the distance between the Earth and the Sun on a single gallon of gas[oline]. Agricultural productivity would have improved to a level at which we could feed the planet from a square kilometre of land. As for space travel – by now we could be zooming at 300 times the speed of light.
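[To see how such figures arise from simple compounding – ed.: the short Python sketch below extrapolates a doubling of capability every two years from 1971, the year of Intel’s first microprocessor. The baseline fuel economy and the time span are illustrative assumptions, not figures from Intel’s statement.]

baseline_mpg = 15            # assumed 1971 car fuel economy, miles per gallon
years = 2017 - 1971          # span from the 4004 era to the time of writing
doublings = years // 2       # one doubling every two years, per Moore's Law

scaled_mpg = baseline_mpg * 2 ** doublings
earth_sun_miles = 93e6       # mean Earth-Sun distance, roughly 93 million miles

print(f"{doublings} doublings -> {scaled_mpg:,} mpg")
print("Earth-Sun distance on one gallon?", scaled_mpg >= earth_sun_miles)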

Ultimately, these economics make Moore’s Law an essential driver of both the U.S. and global economies, enabling people to connect, play and learn. By improving compute capability year in and year out, innovators across the planet can economically apply computing cycles to address some of the world’s biggest problems and make lives better.

Second, in today’s world Moore’s Law can be delivered only by a few companies. Every new process node gets harder and therefore more expensive. Just putting the equipment in an existing fab shell [facilitised wafer fab building – ed.] can cost $7 billion.

Realistically, that means continued consolidation in semiconductor manufacturing is to be expected, as fewer companies can afford to move forward. Intel’s ability to advance Moore’s Law – to make products less expensive and more capable every year – is our core competitive advantage.

Third, what Moore’s Law enables is not a race. It is a cooperative undertaking to set a high standard across the industry, in which different companies have different areas of expertise. Intel’s role has been, and will continue to be, that of the technology leader driving Moore’s Law. Today we have about a three-year lead in process technology.

That leadership may not be obvious from the news. 16 nm, 14, 10, 7 – it looks like a horse race. The problem is that those numbers, which used to have real, physical meaning, no longer mean anything at all. There needs to be a metric that captures a process’s ability to deliver usable transistors to chip designers. Below, Intel process guru Mark Bohr describes just such a metric.

That brings us to the big question: What about the end of Moore’s Law? We have seen that it won’t end from lack of benefits, and that progress won’t be choked off by economics. But what about physics? Doesn’t Moore’s Law say that eventually transistors will be smaller than atoms?

Yes, someday we may reach a physical limit. But we don’t see that point on our horizon. I remember in 1990, when the features on the wafer were the same size as the wavelength of the light we used to print them: 193 nm. Physics was very clear. We couldn’t go any further.

But we met that challenge. We printed with the interference fringes from the patterns on the masks. We developed computational lithography and multiple patterning.

In retrospect, 193 nm wasn’t even a speed bump, and today we are doing 20 times better than that because of continued innovations like FinFET transistors and hyper scaling, which we implemented with our current 14 nm process. Today we’re talking more about further hyper scaling enhancements for our upcoming 10 nm process and how, thanks to this new process breakthrough, we continue to realize the same cost-per-million transistors.

How are we doing this? In historical Intel fashion, we’ve continued to push through the barriers by identifying challenges, isolating them and solving them. In the near term, we see specific challenges that we must solve soon. That’s where we are today with 7 nm.

Further out, we see challenges that might have several alternative solutions. We pursue them all until it is clear which will work best. We are always looking three generations – seven to nine years – ahead. Today we have line of sight to 7 and 5 nm. We may not know exactly which approaches will prove best for 5 nm yet, but our culture thrives on those challenges. It has for generations.

So, no, Moore’s Law is not ending at any time we can see ahead of us. We will continue to take new nodes into production, and to ready them for our growing community of foundry customers. In fact, today we are announcing a new foundry offering: an ultra-low-power 22 nm FinFET process. Our progress, our foundational role as industry and technology leader, and our part in making life better for people will go on for many years to come.

Intel’s Mark Bohr takes over:

Let’s Clear Up the Node Naming Mess
The industry needs a standardized density metric to show where a process stands in relation to the Moore’s Law curve

“Moore’s Law, as stated by our co-founder over half a century ago, refers to a doubling of transistors on a chip with each process generation. Historically, the industry has followed this law, naming each successive process node approximately 0.7 times the size of the previous one – a linear scaling that implies a doubling of density. Thus there was 90 nm, 65 nm, 45 nm, 32 nm – each enabling twice as many transistors to be packed into a given area as was possible with the previous node.
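[For illustration – ed.: the short Python sketch below shows how the 0.7× naming convention translates into a density doubling, since packing density scales with the square of the linear shrink. The node names are those quoted above.]

nodes_nm = [90, 65, 45, 32]   # successive node names quoted above

for prev, nxt in zip(nodes_nm, nodes_nm[1:]):
    linear_shrink = nxt / prev            # roughly 0.7x per generation
    density_gain = 1 / linear_shrink**2   # area shrinks with the square
    print(f"{prev} nm -> {nxt} nm: {linear_shrink:.2f}x linear, "
          f"{density_gain:.1f}x density")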

But recently – perhaps because of the increasing difficulty of further scaling – some companies have abandoned this rule, yet continued to advance node names, even in cases where there was minimal or no density increase. The result is that node names have become a poor indicator of where a process stands on the Moore’s Law curve.

The industry needs a standardized density metric to level the playing field. Customers should be able to readily compare various process offerings of a chip maker, and those of different chip makers. The challenge is in the increasing complexity of semiconductor processes, and in the variety of designs.

One simple metric is gate pitch (gate width plus spacing between transistor gates) multiplied by minimum metal pitch (interconnect line width plus spacing between lines), but this does not incorporate logic cell design, which affects the true transistor density. Another metric, gate pitch multiplied by logic cell height, is a step in the right direction with regard to this deficiency. But neither takes into account certain second-order design rules, and neither is a true measure of actual achieved density, because they make no attempt to account for the different types of logic cells in a designer’s library.
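[A minimal sketch of the two pitch-based proxies just described – ed.: the dimensions below are invented purely for illustration, as the text quotes no numbers. Smaller products indicate a denser process, but neither figure reflects the actual cells in a library.]

gate_pitch_nm  = 70     # assumed gate width plus gate-to-gate spacing
metal_pitch_nm = 52     # assumed minimum metal line width plus spacing
cell_height_nm = 400    # assumed logic standard-cell height

pitch_product  = gate_pitch_nm * metal_pitch_nm   # metric 1: gate pitch x minimum metal pitch
cell_footprint = gate_pitch_nm * cell_height_nm   # metric 2: gate pitch x logic cell height

print(f"gate pitch x metal pitch : {pitch_product} nm^2")
print(f"gate pitch x cell height : {cell_footprint} nm^2")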

Furthermore, these metrics quantify density only relative to the previous generation. What is really needed is an absolute measure of transistors in a given area (transistors per mm²). At the other extreme, simply taking the total transistor count of a chip and dividing by its area is not meaningful, because of the large number of design decisions that can affect it – factors such as cache sizes and performance targets can cause great variations in this value.

It’s time to resurrect a metric that was used in the past but fell out of favour several nodes ago. It is based on the transistor density of standard logic cells and includes weighting factors that account for typical designs. While there is a large variety of standard cells in any library, we can take one ubiquitous, very simple one – a 2-input NAND cell (4 transistors) – and one that is more complex but also very common: a scan flip-flop (SFF). This leads to a previously accepted formula for transistor density:

0.6 × (NAND2 transistor count / NAND2 cell area)
  + 0.4 × (scan flip-flop transistor count / scan flip-flop cell area)
    = transistors per mm²

(The weightings 0.6 and 0.4 reflect the ratio of very small and very large cells in typical designs.)
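[The formula translates directly into code – ed.: the Python sketch below implements it. The NAND2 transistor count of 4 and the 0.6/0.4 weightings come from the text; the scan flip-flop transistor count and both cell areas are illustrative assumptions that would, in practice, be taken from a specific process’s standard-cell library.]

def logic_density_mtr_per_mm2(nand2_tr, nand2_area_um2, sff_tr, sff_area_um2):
    """Weighted logic transistor density in MTr/mm^2.

    Transistors per um^2 is numerically equal to millions of transistors
    per mm^2 (1 mm^2 = 1e6 um^2), so no unit conversion is needed.
    """
    return 0.6 * nand2_tr / nand2_area_um2 + 0.4 * sff_tr / sff_area_um2

# Example with made-up cell data for a hypothetical library:
density = logic_density_mtr_per_mm2(
    nand2_tr=4,           # 2-input NAND cell, 4 transistors (from the text)
    nand2_area_um2=0.04,  # assumed NAND2 cell area in um^2
    sff_tr=36,            # assumed scan flip-flop transistor count
    sff_area_um2=0.25,    # assumed scan flip-flop cell area in um^2
)
print(f"{density:.0f} MTr/mm^2")   # about 118 MTr/mm^2 for these assumed values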

Every chip maker, when referring to a process node, should disclose its logic transistor density in units of MTr/mm² (millions of transistors per square millimetre) as measured by this simple formula. Reverse engineering firms can readily verify the data. There is one important measure missing: SRAM cell size. Given the wide variety of SRAM-to-logic ratios in different chips, it is best to report SRAM cell size separately, next to the NAND+SFF density metric.

[Intel therefore proposes that] by adopting these metrics, the industry can clear up the node naming confusion and focus on driving Moore’s Law forward.

Related articles:
Moore’s Law is collapsing…or is it?
FinFET’s father forecasts future
Intel completes Altera acquisition
ARM: IoT marks a watershed


