It was a gathering after a party, after a meeting. I was with people from Microsoft in Bellevue, Wash., and we were talking about the seeming limits on server performance. Lately, they had noticed, the processors in server platforms weren't any faster than they had been for a while. Instead, servers seemed to be getting more processors.
'It's quantum effects,' I explained. The real problem wasn't speed, of course. The problem was that it's more difficult to write software that uses multiple processors, or multiple cores in one processor, and uses them efficiently. So I explained what I meant by quantum effects as they relate to processors.
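To see why multi-core code is harder to get right, consider the classic shared-counter problem. This is a minimal sketch in Python (the column names no language, so the language and all names here are my choice): an unsynchronized `counter += 1` is a read-modify-write that two threads can interleave, silently losing updates, so the increment has to be guarded with a lock.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        # Without this lock, two threads could both read the same old
        # value of counter, each add 1, and write back -- losing an update.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# counter is exactly 400_000 only because the lock serializes the increments.
print(counter)
```

Getting every shared access guarded like this, without serializing so much that the extra cores sit idle, is precisely the skill that makes parallel software expensive.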
Basically, I explained to the people I was visiting, there comes a point at which you run afoul of the laws of physics. To be faster than it currently is, a processor needs to be smaller, because there are limits on how fast a signal can cross the actual circuitry of the CPU. But the smaller the device gets, the more it is affected by uncertainty about whether the electrons that actually compose the processor's signals will be where they're supposed to be when they're needed.
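The distance argument is easy to put numbers on. Here's a back-of-the-envelope calculation (Python used purely as a calculator; the 3 GHz clock is an assumed, representative figure): even at the speed of light, a signal covers only about 10 centimeters in one clock cycle, and real signals in silicon travel considerably slower than that.

```python
# Speed of light in a vacuum, meters per second
C = 3.0e8

# Assumed clock frequency: 3 GHz, typical of a modern server CPU
clock_hz = 3.0e9

# Duration of one clock cycle, in seconds
cycle_seconds = 1.0 / clock_hz

# Farthest a signal could possibly travel in one cycle
distance_m = C * cycle_seconds  # 0.1 m, i.e. about 10 cm

print(f"{distance_m * 100:.1f} cm per cycle")
```

Since a chip's circuitry has to fit within a fraction of that distance for signals to arrive on time every cycle, higher clock speeds force smaller geometry, which is where the quantum effects come in.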
Electrons, it seems, aren't so much actual objects as they are probability fields, and the smaller a device is, the lower the probability that a given electron will be where it's supposed to be.
This sounds pretty weird, and that's because it is. But it's still how the world works on that scale. The best and least expensive way around the problem is to stop making the insides of the computer faster, and instead to make more of them. Thus the popularity of slightly slower, multi-core devices inside your servers.
All of this is intellectually interesting, of course, but it actually matters to you. The reason it does is that the old brute-force means of increasing performance had undesired side effects, like heat and wasted energy. Newer designs, in addition to avoiding conflicts with those pesky laws of physics, also run cooler, use less energy, and run lots of processes in parallel. In other words, they are more efficient, and they handle more information.
Of course, there was another side effect. Programmers would have to get used to dealing with multiple cores instead of just one or two. Scheduling the operations of such devices is really complex, and it takes skilled people to make it work right. This means that writing code for your new servers costs more and takes longer.
On the other hand, it also means that your new servers are likely to be more efficient and more reliable. This is a good thing.
But it still takes some getting used to. The idea that objects might not be what you think they are, and that digital circuits might not do exactly what they're supposed to do, isn't what we grew up knowing about computers. The easiest thing to do is to not think too much about this, and instead be satisfied that your servers will work better, cost less, and won't soak up so much energy. Never mind that it's because electrons aren't always where you thought they should be.