Tag Archives: heat

Dark Silicon: an end to Moore’s Law?

From the New York Times:

A paper presented in June at the International Symposium on Computer Architecture summed up the problem: even today, the most advanced microprocessor chips have so many transistors that it is impractical to supply power to all of them at the same time. So some of the transistors are left unpowered — or dark, in industry parlance — while the others are working. The phenomenon is known as dark silicon.

As early as next year, these advanced chips will need 21 percent of their transistors to go dark at any one time, according to the researchers who wrote the paper. And in just three more chip generations — a little more than a half-decade — the constraints will become even more severe. While there will be vastly more transistors on each chip, as many as half of them will have to be turned off to avoid overheating.
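For a rough feel of the arithmetic behind those percentages, here’s a back-of-envelope sketch (my own illustrative numbers, not the paper’s): transistor counts keep roughly doubling per generation, but with Dennard scaling winding down, the power each transistor draws no longer falls as fast, so the fraction you can afford to switch on within a fixed power budget keeps shrinking.

```python
# Back-of-envelope dark silicon squeeze (illustrative numbers only,
# not taken from the ISCA paper quoted above).
power_budget = 100.0          # watts the package can dissipate, held constant
transistors = 1.0             # relative transistor count at generation 0
power_per_transistor = 100.0  # relative power per active transistor at generation 0

for generation in range(5):
    affordable = power_budget / power_per_transistor  # how many we can power at once
    lit_fraction = min(1.0, affordable / transistors)
    print(f"gen {generation}: {lit_fraction:.0%} lit, {1 - lit_fraction:.0%} dark")
    transistors *= 2.0            # Moore's Law: density keeps roughly doubling...
    power_per_transistor /= 1.4   # ...but per-transistor power falls more slowly post-Dennard
```

With those made-up scaling factors, roughly half the chip has gone dark within a couple of generations, which is the same shape of squeeze the researchers describe.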

Personally, I’m not going to sing a requiem for Moore’s Law just yet; many brick walls have been suggested for it before, and they’ve always been engineered around eventually. That said, there are limits to almost everything, and perhaps silicon architecture will finally reach its apogee. I think the real question to ask here is “would that be a bad thing?” An upper limit on computing power might just lead to software that uses what’s available more efficiently…

(Top marks for the suitably doomy and mysterious moniker “dark silicon”, though; that’s a post-cyberpunk novel title just waiting to be used.)

Throw another process log in the data furnace, darling

Via Slashdot, an intriguing idea comes a-squirming out of Microsoft’s Research wing: the data furnace. You know how your computer hardware chucks out a whole lot of heat as a waste product? Well, imagine how much a datacentre has to cope with. So why not put that waste heat to good use, and use it to heat people’s homes?

The genius of this idea is that Data Furnaces would be provided by companies that already maintain big cloud presences. In exchange for providing power to the rack, home and office owners will get free heat and hot water — and as an added bonus, these cloud service providers would get a fleet of mini urban data centers that can provide ultra-low-latency services to nearby web surfers. Of course the electricity cost would be substantial — especially in residential areas — but even so, the research paper estimates that, all things considered, between $280 and $324 can be saved per year, out of the $400 it costs to keep a server powered and connected in a standard data center. From the inverse point of view, heating accounts for 6% of the total US energy consumption — and by piggybacking on just half of that energy, the IT industry could double in size without increasing its power footprint.
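Unpacking those figures a little (my own rough arithmetic, not the paper’s):

```python
# Rough reading of the numbers quoted above (my arithmetic, not the paper's).
datacenter_cost = 400.0               # $/year to power and connect one server
savings_low, savings_high = 280.0, 324.0
print(f"implied home-hosting cost: ${datacenter_cost - savings_high:.0f}"
      f"-${datacenter_cost - savings_low:.0f} per server per year")

heating_share = 0.06                  # heating = ~6% of total US energy use
it_share_implied = heating_share / 2  # the "could double" claim implies IT uses ~3% today
print(f"half of heating's share is ~{it_share_implied:.0%} of US energy use, "
      f"which the doubling claim implies is roughly IT's current share")
```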

You will have, of course, already thought of the most obvious objection or snag:

The main problem with Data Furnaces, of course, is physical security. Data centers are generally secure installations with very restricted access — which is fair enough, when you consider the volume and sensitivity of the data stored by companies like Facebook and Google. The Microsoft Research paper points out that sensor networks can warn administrators if physical security is breached, and whole-scale encryption of the data on the servers would ameliorate many other issues. The other issue is server management — home owners won’t want bearded techies knocking on their door every time a server needs a reboot — but for the most part, almost everything can now be managed remotely.

An interesting idea, certainly, but one that still depends on the extant hierarchical model of CPU/storage/bandwidth distribution. Better still (at least for this anarchist) would be for every home to have its own datacentre, with multiple redundant backups stored as fragments across other machines’ drives in a torrent-like fashion; flops and bytes are already arguably basic utilities for life (for the more privileged among us, at least), and are unlikely to become less essential to us barring some sort of existential-risk-scale catastrophe… so the ubiquitous home server becomes as inevitable as the microwave oven. Sure, that model’s not without its risk scenarios, but it devolves responsibility for (and management of) said risk to the end user, removing it from the corporation or government. Of course, not everyone sees that degree of personal responsibility for risk as a net social good… 🙂

More obviously still, though, the flaw in the data furnace plan is that it overlooks the most logical response to waste heat, namely the development of more efficient computing hardware… after all, we have way more flops and bytes than the average domestic application really demands by this point… so instead of chasing BiggerBetterFasterMore, we could maybe chase SmallerCoolerLighterLess.

Recycling waste heat in computers to increase efficiency

[Image: computer processor pins]

The ever-louder whining of my computer’s processor fan is a constant reminder that there’s a lot of energy wasted in modern microprocessors (and that it’s high time I replaced the ageing beast with a machine less likely to collapse at any moment).

While we’re unlikely to be offered room-temperature computer systems any time soon, engineers in the emerging field of phononics are looking at ways to harvest that waste heat and make computers more efficient in the process:

It exploits the fact that some materials can only exchange heat when they are at similar temperatures. The small memory store at the heart of their design is set to either a 1 or 0 temperature by an element that can rapidly shunt in or draw out heat. The store itself is sandwiched between two large chunks of other materials.

One of those materials is constantly hot, but can only donate heat to the memory store when that too is hot, in the 1 state. The material on the other side of the memory patch is always kept cold, but can draw heat away from the store whatever state it is in.
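To picture how such a thermal bit might behave, here’s a toy simulation (my own sketch in arbitrary units, not the researchers’ actual design): the write element forces the store hot or cold, the hot reservoir can only top up a store that’s already hot, and the cold reservoir is always leaching a little heat away.

```python
# Toy model of the phononic memory bit described above (arbitrary units;
# my own sketch, not the researchers' design).
T_HOT, T_COLD = 1.0, 0.0  # store temperatures standing in for bit values 1 and 0


class ThermalBit:
    def __init__(self):
        self.temp = T_COLD  # the store starts out cold, i.e. in the 0 state

    def write(self, bit):
        # The write element rapidly shunts heat in or draws it out.
        self.temp = T_HOT if bit else T_COLD

    def hold(self):
        # Hot reservoir: can only donate heat when the store is already hot,
        # so it refreshes a 1 but can never flip a 0.
        if self.temp >= T_HOT - 0.1:
            self.temp = T_HOT
        # Cold reservoir: always draws a little heat away, whatever the state.
        self.temp = max(T_COLD, self.temp - 0.05)

    def read(self):
        return 1 if self.temp > 0.5 else 0


bit = ThermalBit()
bit.write(1)
for _ in range(100):
    bit.hold()      # the hot side keeps topping the 1 state back up
print(bit.read())   # -> 1

bit.write(0)
for _ in range(100):
    bit.hold()      # the hot side can't touch a cold store, so the 0 persists
print(bit.read())   # -> 0
```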

Early days yet, of course, but maybe thermal computing will give Moore’s Law another stay of execution when we reach the practical limits of circuit integration. [via Slashdot; image by Ioan Sameli]