It was Gordon Moore, the co-founder of Intel, who observed that, over the history of computing hardware, the number of transistors that can be packed onto an integrated circuit doubles approximately every 18 months to two years, effectively doubling chip performance.
By Martyn Warwick.
That was back in 1965, an unimaginably distant epoch in terms of computing history, and "Moore's Law", as it is now known everywhere on the planet, was based on his observation that the number of circuits on a board or chip had doubled every 18 months or so throughout the seven-year period from 1958 to 1965.
Extrapolating from this phenomenon, Gordon Moore predicted that the same thing would happen "for at least ten years" into the future. It certainly did - and for decades thereafter - but with each passing year industry Jeremiahs have been forecasting the imminent collapse of Moore's Law.
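The compounding behind Moore's observation is easy to sketch numerically. The 18-month doubling period comes from the article; the baseline transistor count below is an illustrative assumption, not a figure from Moore's paper.

```python
# Illustrative sketch of Moore's Law compounding.
# The 18-month doubling period is from the article; the starting count
# of 2,300 transistors is an assumed baseline for illustration only.
def transistors_after(years, start=2300, doubling_period_years=1.5):
    """Project the transistor count after `years` of steady doubling."""
    doublings = years / doubling_period_years
    return start * 2 ** doublings

# Over Moore's original ten-year horizon, counts grow by a factor of
# 2 ** (10 / 1.5), i.e. roughly a hundredfold.
growth = transistors_after(10) / transistors_after(0)
print(f"Growth over ten years: about {growth:.0f}x")
```

Even at this modest-sounding cadence, a decade of doublings multiplies the transistor budget by around a hundred, which is why the prediction's longevity has mattered so much.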
Not unreasonably, the doubters have it that, notwithstanding incredible advances in miniaturisation and micro-circuitry, the day will inevitably dawn when scientists will find it physically impossible to cram double the number of circuits onto a chip every year and a half or so. However, it seems they may not have reckoned with the remarkable properties of carbon nanotubes.
The advance, first described in the most recent edition of the journal Nature Nanotechnology, is based on carbon nanotubes, minuscule straws with electrical properties superior to those of silicon-based technologies. They have long been considered a possible alternative material from which to fashion the basic structure of the microchips of the future, but hitherto it has proved difficult to bond arrays of these carbon nanotubes onto silicon substrates.
Now, though, IBM scientists working at the Watson Research Centre in New York have succeeded, and the way may now be open to the development of ever smaller and faster chips with ever greater capacity.
However, in the public and consumer arena we are not likely to see any commercial devices based on the new technology before 2020.
Today's silicon wafer chips are manufactured using lithography, whereby comparatively large silicon wafers are layered with other materials with different electrical properties. The wafers are then etched out by a focused electron beam.
Carbon nanotubes, on the other hand, are single sheets of carbon rolled up to form minute tubes. What IBM has succeeded in doing is placing tiny squares of the carbon material in regular arrays by steeping them in a soapy, alkaline solution that makes the carbon water soluble. Thereafter, a chemical, epoxy-like process permits the carbon nanotubes to adhere to some surfaces but not others. The beauty of this process is that it results in a series of closely - almost perfectly - aligned nanotubes that are already effectively "wired up" within a given grid pattern at a density of a billion per square centimetre.
That in itself is very impressive, but the scientists at IBM freely admit that the nanotube density is still too low to form the basis of a whole new family of microprocessors. What they have achieved to date improves on the best previous nanotube density by a factor of ten, but the reality is that density will have to improve by a further factor of 100 before new ultrafast chips will be able to power a new generation of computing devices using a tenth of the energy needed by today's computers.
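The density gap the article describes can be put in plain numbers. All three figures below come from the text itself (a billion tubes per square centimetre, ten times the previous best, a further hundredfold still required); only the variable names are ours.

```python
# Nanotube density figures as reported in the article.
current_density = 1e9                    # tubes per cm^2 achieved by IBM
previous_best = current_density / 10     # prior record, ten times lower
target_density = current_density * 100   # level needed for practical chips

print(f"Improvement over previous best: {current_density / previous_best:.0f}x")
print(f"Further improvement still needed: {target_density / current_density:.0f}x")
```

In other words, even after a tenfold leap past the previous record, IBM's process covers only one per cent of the density a commercial microprocessor would demand.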
That breakthrough will come, though, and Moore's Law looks likely to be with us for a long time yet. However, it will eventually fail because, as IBM says, even nanotubes will in the end face the same limits as silicon chips face today: "as far as we know we will, in the end, be limited by the size of the atom. We don't think we can do much about that."