No Moore?

We are about to come to the end of an era. Moore's law is coming up against fundamental limits. Right now the smallest transistors require hundreds to thousands of atoms, and it will be hard to get transistors much smaller than that. In addition, as transistors shrink below that scale, overall circuits actually get slower and require more power.

Chip and IC equipment makers are at a crossroads as they enter an era that might be called "More than Moore."

The relentless pursuit of scaling over the last 40 years, in accordance with the famed postulate known as Moore's Law, continues to be an aggressive goal.

However, the buzz at the Semicon West equipment show last week suggests the time has come to rethink what is scalable and examine other ways of adding value to semiconductor devices.

Although leading IC makers Intel and IBM remain committed to Moore's Law (Intel in part out of respect for founder Gordon Moore's scaling formula), both are starting to address its limits. In addition, those limits are not just technical; they are economic as well.

Is it still practical?
At Semicon West, where the relentless market pressures facing chipmakers are measured in the progress of tools able to refine physical transistor gate lengths down to 22nm, the Greek chorus of industry gurus sounded a warning: In chasing after ever smaller and denser devices, it might just not be practical to go on scaling for the sake of scaling.

"It's been an economic issue all along," said keynoter Bernie Meyerson, an IBM fellow and CTO of the IBM Systems and Technology Group.

"Moore's Law stipulates that you need to double the density of chips every 12 to 18 months [for scaling purposes]; that's an economic, not a technical issue."

The recipe for scaling is expensive and geometries are approaching single atoms, which won't scale. Those facts are forcing the industry to look "beyond CMOS," simply because "the result of further scaling is more power consumption, more costly [devices] and slower operation," said Meyerson.

Currently chips are being made with photolithography using ultraviolet light with a wavelength of 193 nm. So how do you make transistors with feature sizes of 22 nm using 193 nm light? With great difficulty.
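To put numbers on that difficulty, here is a back-of-envelope sketch using the standard Rayleigh resolution criterion, CD = k1 × wavelength / NA. The numerical aperture and k1 values below are illustrative assumptions for dry and water-immersion 193 nm tools, not data from any particular fab:

```python
# Rayleigh criterion: smallest printable half-pitch for an exposure setup.
# CD = k1 * wavelength / NA. Values below are illustrative assumptions.
def min_feature_nm(wavelength_nm, na, k1):
    """Approximate minimum printable feature size in nanometers."""
    return k1 * wavelength_nm / na

# Dry 193 nm lithography: NA ~ 0.93, practical k1 ~ 0.3
dry = min_feature_nm(193, 0.93, 0.3)
# Water-immersion 193 nm: NA ~ 1.35, aggressive k1 ~ 0.25
immersion = min_feature_nm(193, 1.35, 0.25)

print(f"dry: {dry:.1f} nm, immersion: {immersion:.1f} nm")
```

Even the aggressive immersion case comes out well above 22 nm in a single exposure, which is why tricks like double patterning became necessary.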

So what are some possible answers? Stacking chips is one, assuming you can get the power out of a 3D structure without raising temperatures excessively. Another possibility is more efficient computer languages that get more done with fewer instruction cycles. As many of you know, I like FORTH for that purpose. It is a language that lends itself well to mechanization in silicon; our premier language today, C and its variants, not so much.

Another thing FORTH has going for it is that a FORTH processor of a given width (8 bit, 16 bit, 32 bit, etc.) requires far fewer transistors than current designs. Fewer transistors means the transistors are closer together, which speeds things up because the speed of light is now a fundamental limitation, and it also means less heat production. Heat slows down the kind of transistors used in computers (MOSFETs), and it causes problems because that heat must be dissipated.
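The reason FORTH maps onto so few transistors is that its whole execution model is a data stack plus a handful of primitive words. A toy sketch in Python makes the point; the word names are standard FORTH, but the interpreter itself is just an illustration, not how a silicon FORTH core is built:

```python
# A toy FORTH-style stack machine. Everything is a data stack plus a few
# primitive words, which is why FORTH cores need so little hardware
# compared with register-based processor designs.
def run(program):
    stack = []
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "DUP":  lambda s: s.append(s[-1]),                              # copy top
        "SWAP": lambda s: s.__setitem__(slice(-2, None), [s[-1], s[-2]]),
        "DROP": lambda s: s.pop(),                                      # discard top
    }
    for token in program.split():
        if token in words:
            words[token](stack)
        else:
            stack.append(int(token))   # literals go straight onto the stack
    return stack

print(run("3 4 + DUP *"))   # (3 + 4) squared -> [49]
```

Note that there are no named registers and no addressing modes to decode; operands are implicit in the stack, so the instruction decoder can be tiny.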

Quantum computing might also help, except for a couple of things: the number of quantum bits (qubits) is currently small, and quantum computing requires temperatures near absolute zero.

One thing to keep in mind is that we have at least another 10 years to go with what we currently know. We may find an answer in that time. Another thing that will help is that roughly every 10 years the industry doubles the area produced per batch (wafer). If that is the best we can do, cost reductions will slow down, but we still have a ways to go before they stop altogether.
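The wafer-area claim can be checked against the actual diameter steps the industry took (the dates are approximate): 150 mm around 1983, 200 mm around 1992, 300 mm around 2001. Since area goes as the square of the diameter, the 150 mm to 300 mm transition is two doublings in roughly two decades:

```python
# Wafer area grows with the square of the diameter, so the industry's
# 150 -> 200 -> 300 mm diameter steps give roughly a doubling per decade.
import math

def wafer_area_mm2(diameter_mm):
    """Area of a circular wafer in square millimeters."""
    return math.pi * (diameter_mm / 2) ** 2

for d in (150, 200, 300):
    print(f"{d} mm wafer: {wafer_area_mm2(d):,.0f} mm^2")

# 300 mm vs 150 mm: exactly 4x the area, i.e. two doublings.
print(round(wafer_area_mm2(300) / wafer_area_mm2(150), 2))
```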

Cross Posted at Power and Control

posted by Simon on 07.22.08 at 04:26 PM






Comments

If you look to more efficient languages, then you have enormous sunk costs in existing software. I have worked with FEA for 35 years - the truly ugly and sophisticated core continuum mechanics calculations are still FORTRAN. Touch it and it's guaranteed to break. The existing coding isn't elegant by any standard, but nobody wants to untangle the Gordian knot.

chuckR   ·  July 22, 2008 08:39 PM

chuck,

You can emulate any language with any other. Look at how many different processors run C.

I don't see sunk costs as a big obstacle.

BTW FEA currently is not very efficient. So far it is good enough. That may not always be true.

In any case eventually someone will cut the Gordian Knot.

M. Simon   ·  July 22, 2008 08:53 PM
Another possibility is more efficient computer languages that get more done with fewer instruction cycles.

Moore's law, in its purest form, is only a description of how many of the smallest component (in the case of current computers, transistors) exist on a single inexpensive chip. More efficient computer languages could keep technology progressing, but would not keep Moore's law going.

The number of bits (Q bits) is currently small and quantum computing requires temperatures near absolute zero.

This is not quite accurate. Some forms of quantum computing require near-zero temperatures, such as those based on a Bose–Einstein condensate. Not all do, however: photons will remain entangled at any temperature, and thermal-ensemble NMR operated at room temperature (although it could not be reliably scaled up). There are some other tricks that allow fairly reliable storage of quantum states at higher temperatures, usually relying on flaws in diamond.

The big concern is not the number of qubits generated, or the temperatures involved (although those are issues for making quantum computers relevant); the real issue for Moore's law is that, even for the quantum computing techniques for which the phrase "chip" is relevant, the individual components are typically much larger and much more expensive than anything in the conventional electronic world. From the viewpoint of users, quantum computers run normal tasks no faster than conventional ones, even presuming the same number of fundamental components and operations per cycle -- quantum computers excel at the factoring of numbers, but other quantum operations are generally meaningless to users both in terms of method and actual result.

The more relevant technologies will likely be those that decrease the cost of creating transistors or chips. You don't really need to mess around with quantum computers or programming languages to do so; graphene alone looks to be an excellent method, assuming a reasonably inexpensive method of creating the stuff can be made.

gattsuru   ·  July 22, 2008 10:02 PM

I remember reading, long ago, an sf story wherein information was stored on notched electrons.

Bleepless   ·  July 22, 2008 10:03 PM

I am pleased that FORTH has any following whatsoever. As a BASIC programmer from 1970 (am I bragging or apologising? Dunno. Both, I guess) it is the only language that looks like it might still make some intuitive sense to me.

Fortran. God help us, that's still in the foundation of this high-rise? Just shoot me.

Assistant Village Idiot   ·  July 23, 2008 08:50 AM

Not sure about the whole switching to FORTH thing. I don't want to give up Python and Ruby.
I remember some discussion about other methods for doing work at a much smaller scale than current processors, but I don't remember enough (no words to search, just concepts). Also, in Neal Stephenson's Diamond Age there was talk of "rod logic", I guess sort of nanoscale mechanical computer processors. Huh.

raptros-v76   ·  July 23, 2008 08:55 AM

In FEA, nobody can make an economic case to wholesale convert the calculation engine part of the codes. Over time, it will be converted piecemeal - or new parts will be OEMed from academe, etc. Beyond the costs is the trust issue. Trust is not useful when a vendor is adding features, as they have for the past 40 years or so. These codes are production rather than research oriented and users expect verifiably correct answers above all other considerations. Nonetheless, there are useful things being done with Python, tcl and perl on the periphery - we're not complete mossbacks!

I never thought of FEA in terms of efficiency relative to other procedures, but I do know that a big problem is that you often can't shovel the results out of the way fast enough to move forward as quickly as the solvers can. For moderate sized problems (a few × 10^6 DOF), having several GB of RAM available for system IO processing is desirable. New languages or new processors still have to deal with this.

chuckR   ·  July 23, 2008 10:27 AM
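A back-of-envelope sketch supports chuckR's point about result volume. The model size, number of load steps, and values stored per DOF below are illustrative assumptions, not figures from any particular solver:

```python
# Rough sizing of FEA result storage: even raw double-precision nodal
# results for a moderate model add up to many gigabytes across a run.
def results_gb(dof, load_steps, values_per_dof=1, bytes_per_value=8):
    """Approximate stored-result size in GiB (double precision)."""
    return dof * load_steps * values_per_dof * bytes_per_value / 2**30

# Hypothetical run: 3e6 DOF, 100 load steps, ~7 stored values per DOF
# (displacements plus stress components).
print(f"{results_gb(3_000_000, 100, values_per_dof=7):.1f} GiB")
```

At those assumed sizes the result stream alone dwarfs the several GB of RAM mentioned above, so the IO bottleneck is plausible regardless of language or processor.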

Didn't I read an article similar to this one around 12 or 15 years ago? Moore's Law might never end.

BackwardsBoy   ·  July 23, 2008 01:32 PM
