Quantum Processor Weirdness

Wayne Rash

It was a gathering after a party, after a meeting. I was with a group of people from Microsoft in Bellevue, Wash., and we were talking about the seeming limits on server performance. Lately, they had noticed, the processors in server platforms weren't getting any faster than they had been for a while. Instead, the servers seemed to be getting more processors.

'It's quantum effects,' I explained. The real problem wasn't speed, of course. The problem was that it's much more difficult to write software that uses multiple processors, or multiple cores in one processor, and uses them efficiently. So I explained what I meant by quantum effects as they apply to processors.

Basically, I explained to the people I was visiting, there comes a point at which you run afoul of the laws of physics. To run faster than it does now, a processor needs to be smaller, because there are limits on how fast a signal can cross the actual circuitry of the CPU. But the smaller the device gets, the more it is affected by uncertainty about whether the electrons that actually make up the signals the processor needs will be where they're supposed to be when they're needed.
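To put a rough number on that speed limit, here's a back-of-the-envelope sketch. The 3 GHz clock rate and the assumption that on-chip signals move at about half the speed of light are illustrative guesses, not figures for any particular chip:

```python
# Back-of-the-envelope: how far can a signal travel in one clock cycle?
# The clock rate and signal-speed fraction are assumptions for illustration.
C = 299_792_458          # speed of light in a vacuum, meters per second
CLOCK_HZ = 3e9           # assumed 3 GHz clock
SIGNAL_FRACTION = 0.5    # assume on-chip signals move at roughly half of c

cycle_time = 1 / CLOCK_HZ                  # seconds per clock tick
light_reach_cm = C * cycle_time * 100      # how far light gets in one tick, in cm
signal_reach_cm = light_reach_cm * SIGNAL_FRACTION

print(f"One 3 GHz clock cycle lasts {cycle_time * 1e9:.2f} ns")
print(f"Light travels about {light_reach_cm:.1f} cm in that time")
print(f"A slower on-chip signal covers roughly {signal_reach_cm:.1f} cm")
```

Even under those generous assumptions, a signal can only cover a few centimeters per clock tick, and in a real chip it has to pass through many gates along the way, which is why faster clocks push designers toward ever-smaller circuitry.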

Electrons, it seems, aren't so much actual objects as they are probability fields, and the smaller a device is, the lower the probability that a given electron will be where it's supposed to be.

This sounds pretty weird, and that's because it is. But it's still how the world works on that scale. The best and least expensive way around the problem is to stop trying to make each processor faster and instead to put more of them inside the computer. Thus the popularity of slightly slower, multi-core devices inside your servers.

All of this is intellectually interesting, of course, but it also matters to you in practice. The reason it does is that the old brute-force means of increasing performance had undesired side effects, like heat and wasted energy. Newer designs, in addition to avoiding conflicts with those pesky laws of physics, also run cooler, use less energy, and run lots of processes in parallel. In other words, they are more efficient, and they handle more information.

Of course, there was another side effect. Programmers would have to get used to dealing with multiple cores instead of just one or two. Scheduling the operations of such devices is really complex, and it takes skilled people to make it work right. This means that writing code for your new servers costs more and takes longer.
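For readers who want to see what 'dealing with multiple cores' looks like at its simplest, here is a minimal sketch in Python. The busy_work function and the chunk sizes are invented for illustration; real server workloads are far messier:

```python
# A minimal sketch of spreading work across cores with Python's standard
# library. The busy_work function and the inputs are made up for illustration;
# real server code also has to worry about shared state, ordering, and load balance.
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    # Stand-in for a CPU-bound task: sum the squares of the first n integers.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000, 2_000_000, 2_000_000, 2_000_000]

    # Serial version: one core does everything, one chunk after another.
    serial_total = sum(busy_work(n) for n in chunks)

    # Parallel version: each chunk runs in its own worker process,
    # so separate cores can work on the chunks at the same time.
    with ProcessPoolExecutor() as pool:
        parallel_total = sum(pool.map(busy_work, chunks))

    assert serial_total == parallel_total
    print("Both versions agree:", parallel_total)
```

The easy part is shown above: independent chunks of work with no shared state. The expensive part this column alludes to is everything the sketch leaves out, such as splitting real work into balanced pieces, coordinating access to shared data, and keeping every core busy without tripping over the others.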

On the other hand, it also means that your new servers are likely to be more efficient and more reliable. This is a good thing.

But it still takes some getting used to. The idea that objects might not be what you think they are, and that digital circuits might not do exactly what they're supposed to do isn't what we grew up knowing about computers. The easiest thing to do is to try not to think too much about this, but instead be satisfied that your servers will work better, cost less, and won't soak up so much energy. Never mind that it's because electrons aren't always where you thought they should be.
 



Comments
Dec 4, 2009 10:12 PM Anonymous says:
Strange that the hardcore, physical certainty of physics appears to be bending in these respects to the more "touchy-feely" behaviours of the biological world. Namely, it might work, but then again it might not. A slight touch of irony that shouldn't be missed by those of us condescended to by smugly superior mathematicians and physicists in our undergraduate years.
Jan 14, 2010 12:01 PM Mike Cairns says:
"Newer designs, in addition to avoiding conflicts with those pesky laws of physics, also run cooler, used less energy, and do lots of parallel processes. In other words, they are more efficient, and they handle more information." Err, ah-hem. That sounds just like the old dinosaur, the IBM Mainframe - which for 40 years now has run slower cores in parallel, and put more real business work through the system than any rival architechture. Mainframe CPU's typically run at near 100% utilisation for example. "Of course, there was another side effect. Programmers would have to get used to dealing with multiple cores instead of just one or two." Not if the operating system does this for them they don't. The mainframe architecture has hidden these implementation issues from programmers with full backward compatibility for, umm, what was it? Oh yes - 40 years again. That's unheard of in the 'IT industry' - programs being able to run across operating system and hardware upgrades unchanged for 40 years. Isn't that a real example of protecting the companies investment in all those programmer hours? I could say a lot more - but there's plenty of blogs out there for those open-minded enough IT pro's who really are looking to get off the continuous revision game that small end servers have been subjecting their customers to since the late 80's (dancingdinosaur.wordpress.com is a good start). "Next Generation Servers" - sorry, I laugh and gag at the very concept - these systems are obsolete the moment they're announced. Cheers - Mike Reply
