Clarifying Moore’s Law: a new density metric required


Source: Intel. Mark Bohr, Intel Senior Fellow and director of process architecture and integration

  • Intel’s Mark Bohr speaks up for Moore’s Law
  • Proposes a new ‘standardized density metric’
  • This would show clearly whether Moore’s Law is being followed or not

I remember the first time I read of a microprocessor. It was 1972 and as a schoolboy I had been given a subscription to Newsweek, not so much a present, more a nudge in the right, studious direction and away from noisy ‘pop’ music.

One Newsweek story I remember was about the arrival of the microprocessor - a whole computer on a tiny chip of silicon - and how this was going to revolutionise everything. I now know that the chip in question was almost certainly Intel’s 4004 and the magazine story proved to be right on the money.

A few years earlier, in 1965, Gordon Moore (who would go on to co-found Intel in 1968) famously observed that the number of transistors per square inch on an integrated circuit (we hadn’t got to microprocessors yet) seemed consistently to double every two years. He predicted that this would continue and his observation was hardened into ‘Moore’s Law’.

It’s a ‘law’ that has proved remarkably well obeyed, despite (or perhaps because of) periodically being deemed ready to run out of tarmac and hit some sort of barrier… so far it hasn’t.

But if a barrier were to be encountered (or perceived to be), there might be enough incentive for someone to find a way around, over or under it - a way, it’s implied, which might just about follow the letter of Moore’s Law as it has developed, but not its spirit.

Mark Bohr, who is an Intel Senior Fellow and its director of process architecture and integration, has penned an article entitled ‘Let’s Clear Up the Node Naming Mess’ which seeks to cut off this possibility.

Mark points out that the industry has been following the ‘law’, and has “named each successive process node approximately 0.7 times smaller than the previous one – a linear scaling that implies a doubling of density.” So 90 nm went down to 65 nm to 45 nm then 32 nm – each of these 0.7 reductions implied a chip developer was packing twice the number of transistors into a given area as with the previous node.
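The arithmetic behind that naming convention is easy to check for yourself; a quick sketch using the node names cited above:

```python
# Each node name is roughly 0.7x the previous one, and a 0.7x linear
# shrink squares to ~0.49x the area - i.e. roughly twice as many
# transistors fit in the same space.
nodes_nm = [90, 65, 45, 32]  # node names cited in the article

for prev, curr in zip(nodes_nm, nodes_nm[1:]):
    linear_shrink = curr / prev        # ~0.7 per generation
    area_shrink = linear_shrink ** 2   # ~0.49, so density roughly doubles
    print(f"{prev} nm -> {curr} nm: linear x{linear_shrink:.2f}, "
          f"density x{1 / area_shrink:.2f}")
```

Each step comes out close to a 2x density gain, which is exactly what the node names were meant to signal.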

But now, says Mark, perhaps because the scaling down is getting harder, some companies have not followed the law but have continued with the node names, “even in cases where there was minimal or no density increase. The result is that node names have become a poor indicator of where a process stands on the Moore’s Law curve.”

The answer, he says, is to develop an industry-standardized density metric to level the playing field. That way customers can accurately compare the various process offerings.

But there are choices.

“One simple metric might be gate pitch (gate width plus spacing between transistor gates) multiplied by minimum metal pitch (interconnect line width plus spacing between lines), but this doesn’t incorporate logic cell design, which affects the true transistor density.

“Another metric, gate pitch multiplied by logic cell height, is a step in the right direction with regard to this deficiency. But neither of these takes into account some second order design rules. And both are not a true measure of actual achieved density because they make no attempt to account for the different types of logic cells in a designer’s library. Furthermore, these metrics quantify density relative to the previous generation. What is really needed is an absolute measure of transistors in a given area (per mm²).”
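The two candidate metrics Bohr dismisses are simple products, so they can be sketched directly; every pitch value below is an illustrative placeholder, not any foundry’s real figure:

```python
# Illustrative pitch values in nanometres (placeholders only).
gate_pitch_nm = 54        # gate width + spacing between transistor gates
min_metal_pitch_nm = 36   # interconnect line width + spacing between lines
cell_height_nm = 272      # height of a standard logic cell

# Candidate 1: gate pitch x minimum metal pitch.
# Ignores logic cell design entirely.
metric1_nm2 = gate_pitch_nm * min_metal_pitch_nm

# Candidate 2: gate pitch x logic cell height.
# Closer, but still ignores second-order design rules and the mix of
# cell types in a real library.
metric2_nm2 = gate_pitch_nm * cell_height_nm

print(metric1_nm2, metric2_nm2)  # smaller area per unit = denser process
```

Both numbers shrink as a process scales, but as the quoted passage argues, neither tells you how many transistors actually land in a square millimetre.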

So why not just count transistors and divide by area? This wouldn't be meaningful, he says, “because of the large number of design decisions that can affect it – factors such as cache sizes and performance targets can cause great variations in this value.”

The solution

“It’s time to resurrect a metric that was used in the past but fell out of favor several nodes ago. It is based on the transistor density of standard logic cells and includes weighting factors that account for typical designs. While there is a large variety of standard cells in any library, we can take one ubiquitous, very simple one – a 2-input NAND cell (4 transistors) – and one that is more complex but also very common: a scan flip flop (SFF).” This, he claims, leads to a previously accepted formula for transistor density:
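The formula itself appears as an image in the original article and does not survive in this text version; as published by Intel it weights the NAND2 cell at 0.6 and the scan flip-flop at 0.4. A Python sketch of it, in which the cell areas and the SFF transistor count are illustrative assumptions (only the 0.6/0.4 weights and the 4-transistor NAND2 come from the published formula):

```python
def bohr_density_mtr_per_mm2(nand2_area_um2, sff_area_um2,
                             sff_transistors=30):
    """Weighted logic transistor density, per Bohr's proposal.

    The 0.6 / 0.4 weights and the 4-transistor NAND2 follow the
    published formula; the SFF transistor count and both cell areas
    are placeholders that vary by cell library.

    Handy unit fact: transistors per um^2 is numerically equal to
    MTr/mm^2, since 1 mm^2 = 1e6 um^2 and 1 MTr = 1e6 transistors.
    """
    NAND2_TRANSISTORS = 4
    return (0.6 * NAND2_TRANSISTORS / nand2_area_um2
            + 0.4 * sff_transistors / sff_area_um2)

# Illustrative cells: 0.05 um^2 NAND2, 0.30 um^2 scan flip-flop
print(bohr_density_mtr_per_mm2(0.05, 0.30))  # ~88 MTr/mm2
```

The weighting means a library’s density figure is dominated by its simplest, most common cell while still crediting the denser layout of bigger sequential cells.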


Every chip maker, he says, when referring to a process node, should disclose its logic transistor density in units of MTr/mm² (millions of transistors per square millimeter) as measured by this simple formula. Reverse engineering firms can readily verify the data, he claims.

According to Mark, by adopting these metrics the industry can clear up the node naming confusion and focus on driving Moore’s Law forward.

What’s clear is that 50 years after Moore’s original observation, it’s been adopted, not so much as a law, but as a permanent industry expectation. One way or another, transistors (and computer power) must keep doubling.
