Tag Archives: Moores-Law

Dark Silicon: an end to Moore’s Law?

From the New York Times:

A paper presented in June at the International Symposium on Computer Architecture summed up the problem: even today, the most advanced microprocessor chips have so many transistors that it is impractical to supply power to all of them at the same time. So some of the transistors are left unpowered — or dark, in industry parlance — while the others are working. The phenomenon is known as dark silicon.

As early as next year, these advanced chips will need 21 percent of their transistors to go dark at any one time, according to the researchers who wrote the paper. And in just three more chip generations — a little more than a half-decade — the constraints will become even more severe. While there will be vastly more transistors on each chip, as many as half of them will have to be turned off to avoid overheating.
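To see why the powered fraction keeps shrinking, here's a toy model (my own illustration, not from the paper): transistor count doubles each generation, but with Dennard scaling broken, per-transistor power falls more slowly than that, so a fixed chip power budget lights up an ever-smaller share of the die.

```python
# Toy model of dark silicon (illustrative numbers, not from the paper):
# transistor count doubles per generation, per-transistor power only
# falls by ~1.4x, and the total chip power budget is fixed by thermals.

def powered_fraction(generations, count_scale=2.0, power_scale=1.4):
    """Fraction of transistors a fixed power budget can light up."""
    transistors = 1.0
    per_transistor_power = 1.0
    budget = 1.0  # total chip power stays fixed (thermal limit)
    fractions = []
    for _ in range(generations):
        fractions.append(min(1.0, budget / (transistors * per_transistor_power)))
        transistors *= count_scale            # Moore's Law: 2x per generation
        per_transistor_power /= power_scale   # power falls slower than 2x
    return fractions

for gen, frac in enumerate(powered_fraction(5)):
    print(f"generation {gen}: {frac:.0%} of transistors powered")
```

With these made-up scaling factors the powered fraction drops below half within three generations, which is roughly the trajectory the researchers describe.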

Personally, I’m not going to sing a requiem for Moore’s Law just yet; many brick walls for it have been suggested, and they’ve always been engineered around eventually. That said, there are limits to almost everything, and perhaps silicon architecture will finally reach its apogee. I think the real question to ask here is “would that be a bad thing?” An upper limit on computing power might just lead to software that uses what’s available more efficiently…

(Top marks on the suitably doomy and mysterious moniker “dark silicon”, though; that’s a post-cyberpunk novel title just waiting to be used.)

The slowing of technological progress

Alfred Nordmann writes in IEEE Spectrum of how technological progress is, contrary to the promises of singularitarians like Ray Kurzweil, actually slowing down:

Technological optimists maintain that the impact of innovation on our lives is increasing, but the evidence goes the other way. The author’s grandmother lived from the 1880s through the 1960s and witnessed the adoption of electricity, phonographs, telephones, radio, television, airplanes, antibiotics, vacuum tubes, transistors, and the automobile. In 1924 she became one of the first in her neighborhood to own a car. The author contends that the inventions unveiled in his own lifetime have made a far smaller difference.

Even if we were to accept, for the sake of argument, that technological innovation has truly accelerated, the line ­leading to the singularity would still be nothing but the simple-minded ­extrapolation of an existing pattern. Moore’s Law has been remarkably successful at describing and predicting the development of semiconductors, in part because it has molded that development, ever since the semiconductor manufacturing industry adopted it as its road map and began spending vast sums on R&D to meet its requirements.

there is nothing wrong with the singular simplicity of the singularitarian myth—unless you have something against sloppy reasoning, wishful thinking, and an invitation to irresponsibility.

This is the same point made by Paul Krugman recently. Nordmann points out that most of the major life-changing technological changes of the past 100 years had already happened by about the 1960s, with the IT revolution of the last fifty years being pretty much the only major source of technological change[1] to impact him over his lifetime.

This argument suggests that the lifestyle of citizens of industrialised countries will remain fairly stable for a lengthy period of time. It raises the serious point that the best we can hope for vis-à-vis technological change over the next few decades will just be incremental improvements to existing technologies, and greater adoption of technologies by people in poorer countries.

This would be no bad thing of course, but the suggestion that Ray Kurzweil’s revolutions in nanotechnology, genetics, biotechnology, and artificial intelligence may not arrive as early as Kurzweil predicts is pretty disappointing.

It could be that, to paraphrase William Gibson, the future is in fact here, it’s just not evenly distributed.

[1]: By “major source of technological change” I mean things like antibiotics, mass personal transport, and heavier-than-air flight. There certainly have been improvements in all these areas in the last 50 years, and much wider adoption, but these have not had as great an initial impact.

[from IEEE Spectrum, via Slashdot][image from Matthew Clark Photography & Design on flickr]

Moore’s Law gets a new lease of life

Good news for Kurzweilian Singularitarians and flop-junkies – Moore’s Law has been looking increasingly likely to derail as we approach the lowest practical limit for semiconductor miniaturization, but newly announced research means there’s life in the old dog yet:

Two US groups have announced transistors almost 1000 times smaller than those in use today, and a [nano-scale magnet-based] version of flash memory that could store all the books in the US Library of Congress in a square 4 inches (10 cm) across.

Using 3-nanometre magnets, an array could store 10 terabits (roughly 270 standard DVDs) per square inch, says Russell, who is now working to perfect magnets small enough to cram 100 terabits into a square inch.

“Currently, industry is working at half a terabit [per square inch],” he says. “They wanted to be at 10 terabits in a few years’ time – we have leapfrogged that target.”
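The quoted figures pass a quick back-of-the-envelope check (my own arithmetic, not the researchers'): 10 terabits per square inch implies a bit cell roughly 8 nm on a side, which is plausible for 3 nm magnets plus spacing between them.

```python
# Sanity-checking the quoted densities (my arithmetic, not from the
# article). 10 Tb per square inch -> how big is each bit cell?

NM_PER_INCH = 2.54e7          # 1 inch = 2.54 cm = 2.54e7 nm
bits = 10e12                  # 10 terabits
area_nm2 = NM_PER_INCH ** 2   # one square inch in nm^2
cell_area = area_nm2 / bits   # nm^2 per bit
cell_side = cell_area ** 0.5

print(f"area per bit: {cell_area:.0f} nm^2 (~{cell_side:.1f} nm square cell)")

# The 4-inch-square Library of Congress claim: 16 in^2 at 10 Tb/in^2
capacity_bytes = 16 * bits / 8
print(f"4-inch square holds ~{capacity_bytes / 1e12:.0f} TB")
```

That works out to about 20 terabytes in the 4-inch square, which is in the right ballpark for the commonly cited size of the Library of Congress's digitised book collection.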

If this were Engadget, we could squee about how we’ll have laptops the size of wristwatches by the end of the decade, but that would be to miss an important point. The ever-falling cost and size of memory and processing power will certainly mean more gadgets, but those gadgets will bring social changes along with them – as Charlie Stross pointed out a while ago, if you can read and write data at the atomic scale then physical storage capacity becomes a complete non-issue, allowing you to record everything – literally everything. [image by Fox O’Rian]
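The "record everything" claim isn't hyperbole at these densities. A rough estimate (my own figures, not Stross's), assuming compressed audio and video at about 1 GB per hour:

```python
# Rough arithmetic (my own assumptions, not Stross's) for recording an
# entire life: continuous compressed audio+video at ~1 GB/hour.

GB_PER_HOUR = 1.0        # assumed bitrate of compressed A/V
hours = 80 * 365 * 24    # an 80-year lifespan, ignoring leap days
total_gb = hours * GB_PER_HOUR

print(f"lifetime recording: ~{total_gb / 1000:.0f} TB")
```

That's roughly 700 TB for a lifetime. At the quoted 10 terabits (about 1.25 TB) per square inch, it fits on a few hundred square inches of medium – a desk drawer, not a data centre.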

When you can record everything, how do you go about managing and using what you’ve recorded?

“Tech support? I need a plumber.”

In an attempt to pre-empt the engineering problems posed by the relentless march (or final splutterings) of Moore’s Law, IBM has unveiled plans for a very different kind of hydraulic computing:

A network of tiny pipes of water could be used to cool next-generation PC chips, researchers … have said.

Scientists at the firm have shown off a prototype device layered with thousands of “hair-width” cooling arteries.

They believe it could be a solution to the increasing amount of heat pumped out by chips as they become smaller and more densely packed with components.

So – let me get this straight – give it five years, and to support my ultra-powerful palmtop, I’ll have to plumb the darned thing into the domestic water supply or ensure a steady supply of bottled mineral water? Either way, surely that’d negate the whole point of portability?

[Image and story via the BBC]

Memristors – The new component of electronics

A new component of electronics, first proposed in 1971, has been built by researchers at Hewlett Packard. Memristors join the three existing fundamental components of a circuit – capacitors, resistors and inductors. The defining feature of a memristor is that its resistance depends on the charge that has previously flowed through it, so it ‘remembers’ its state even after the power is removed.

Today, most PCs use dynamic random access memory (DRAM) which loses data when the power is turned off. But a computer built with memristors could allow PCs that start up instantly, laptops that retain sessions after the battery dies, or mobile phones that can last for weeks without needing a charge. “If you turn on your computer it will come up instantly where it was when you turned it off,” Professor Williams told Reuters.
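The "memory" part can be sketched with the linear-drift model HP's researchers published (my simplified illustration with made-up parameter values, not HP's actual device code): the resistance depends on a state variable that moves in proportion to the charge flowed, and stays put when the current stops.

```python
# Minimal sketch of an HP-style linear-drift memristor model (my own
# illustration; parameter values are invented, not HP's). The state w
# is the doped fraction of the film: it sets the resistance, and it
# drifts in proportion to the current -- i.e. to charge flowed. That
# drift, persisting with the power off, is the "memory".

R_ON, R_OFF = 100.0, 16000.0  # resistance of fully doped / undoped film
MU = 1e-14                    # dopant mobility, illustrative value
D = 1e-8                      # film thickness in metres, illustrative

def simulate(voltage_fn, t_end=1.0, dt=1e-4, w=0.5):
    """Integrate the state w under an applied voltage; return final w."""
    t = 0.0
    while t < t_end:
        resistance = R_ON * w + R_OFF * (1.0 - w)
        current = voltage_fn(t) / resistance
        # state change is proportional to current, so w tracks the
        # total charge that has passed through the device
        w += MU * R_ON / D**2 * current * dt
        w = min(max(w, 0.0), 1.0)  # state is bounded by the film edges
        t += dt
    return w

print(f"after +1 V for 1 s: w = {simulate(lambda t: 1.0):.2f}")
print(f"with no voltage:    w = {simulate(lambda t: 0.0):.2f}")
```

Because w only moves while current flows, cutting the power freezes the state – which is exactly the instant-on, non-volatile behaviour Williams describes.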

In addition, the memristor is very small, and once fully commercialised could allow computer chips far smaller than those of today, giving good old Moore’s Law another reprieve as conventional methods to keep it going begin to run out of steam.

[via BBC]