Moore's Law Lives: The Future Is Still Alive
When Gordon Moore first enunciated his Law, only a handful of industries -- the first minicomputers, a few scientific instruments, a desktop calculator or two -- actually exhibited its exponential rate of change. Today, every segment of society either embraces Moore’s Law or is racing to get there. That’s because they know that if only they can get aboard that rocket -- that is, if they can add a digital component to their business -- they too can accelerate away from the competition. That’s why none of the inventions we Baby Boomers expected as kids to enjoy as adults -- atomic cars! personal helicopters! ray guns! -- have come true, and also why we have even more powerful tools and toys instead. Whatever can be made digital, if not in whole then in part -- marketing, communications, entertainment, genetic engineering, robotics, warfare, manufacturing, service, finance, sports -- will be, because going digital means jumping onto Moore’s Law. Miss that train and, as a business, an institution, or a cultural phenomenon, you die.
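That rate of change is ordinary exponential doubling, and the arithmetic is worth seeing once. A minimal sketch, assuming an illustrative two-year doubling period and a round starting count (neither figure is from Intel or from Moore's paper):

```python
# Moore's Law as simple exponential doubling.
# Assumptions (illustrative only): a two-year doubling period and
# a starting count of one billion transistors per chip in 2011.

def projected_transistors(start_count, start_year, target_year, doubling_years=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

# A decade at this pace is five doublings -- a 32x increase,
# which is why missing the train is fatal to a competitor.
print(projected_transistors(1e9, 2011, 2021))
```

The point of the sketch is the compounding: a rival growing linearly is not a little behind after ten years, it is a factor of thirty behind.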
So, what made this week’s announcement by Intel so important? It is that almost from the moment the implications of Moore’s Law became understood, there has been a gnawing fear among technologists and those who understand technology that Moore’s Law will someday end -- having run up against the limits of, if not human ingenuity, then physics itself. Already compromises have been made -- multiple processor cores instead of a single one on a chip, exotic new materials to stop electrons from leaking -- but as the channels get narrower and bumpier with molecules, and the walls thinner and more permeable to atomic effects, the end seems to draw closer and closer. Five years away? Ten? And then what? What will it be like to live in a world without Moore’s Law ... when every human institution now depends upon it?
But the great lesson of Moore’s Law is not just that we can find a way to continually better our lives -- it is that human ingenuity knows no bounds and can never really be stopped. You probably haven’t noticed, over the last decade, the occasional brief scientific article about some lab at a university, or at IBM, Intel, or HP, coming up with a new way to produce a transistor or electronic gate out of just two or three atoms. Those stories are about saving Moore’s Law for yet another generation. But that’s the next chapter. Right here and now, the folks at Intel were almost giddy in announcing that what had been one of those little stories a decade ago -- tri-gate transistors -- would now be the technology in all new Intel chips.
I’m not going to go into technical detail about how tri-gate transistors work, but suffice it to say that since the late 1950s, when Jean Hoerni, along with the other founders of the semiconductor industry at Fairchild (including Gordon Moore), developed the "planar" process, all integrated circuits have been structurally flat: a series of layers of semiconductors, insulators, and wiring "printed" on an equally flat sheet of silicon. For the first time, Intel’s new tri-gate technology leaves the plane of the chip and enters the third dimension. It does so by raising thin "fins" of silicon up from the substrate into the transistor layer, where the gate wraps over each fin on three sides -- hence "tri-gate." The effect is kind of like draping a mattress over a fence -- and then repeating that over a billion fences, all just inches apart. The result is much greater gate density, lower power consumption, faster switching, and fewer quantum side-effects. Intel claims that more than 6 million of these 22-nanometer Tri-Gate transistors can fit in the period at the end of this sentence.
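That "period at the end of this sentence" claim sounds like hype, but a back-of-the-envelope check shows it is plausible. A minimal sketch, assuming a period roughly 0.3 mm across (my figure, not Intel's):

```python
import math

# Sanity check on "6 million 22 nm transistors in a period."
# The period diameter below is an assumption for illustration.
PERIOD_DIAMETER_MM = 0.3
TRANSISTORS = 6_000_000

radius_um = PERIOD_DIAMETER_MM * 1000 / 2        # 150 micrometers
period_area_um2 = math.pi * radius_um ** 2       # area of the dot

# Square nanometers available to each transistor, and the side
# of the square each one would occupy.
area_per_transistor_nm2 = period_area_um2 / TRANSISTORS * 1e6
side_nm = math.sqrt(area_per_transistor_nm2)

print(f"~{side_nm:.0f} nm per side")
```

The answer comes out near 110 nm on a side -- several times the 22 nm minimum feature size, which is about right, since a whole transistor (gate, source, drain, spacing) occupies far more area than its smallest feature.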
The first processors featuring Tri-Gate transistors will likely appear later this year. And you can be sure that competitors, with similar designs, will appear soon after. But that’s their battle.
What counts for the rest of us is that Moore’s Law survives. The future will arrive as quickly as ever ....