Over at The New York Times, Ashlee Vance is writing that, once again, thin clients are coming and, I think, suggesting that this represents some risk to Microsoft. Dennis Byron, who also writes for IT Business Edge, wrote that the New York Times piece was satire, and I can hardly blame him. Vance seems to have stepped back into the late 90s and spent too much time reading old Larry Ellison and Scott McNealy collateral to get so excited about the Samsung PC-less monitor he is hyping.
I've been covering thin client computing for well over a decade. If it were coming by sea, it would be arriving on the equivalent of the Gilligan's Island tour boat. Apple has a larger desktop market share than the entire thin client industry combined, and it got there by ignoring the thin movement altogether.
Why Can't Thin Clients Get a Break?
The thin client model is what the mainframe was, updated for the software we use today. The platform has many of the same benefits, with one critical exception: servers as they currently exist are not designed the way mainframes were; they are I/O bound. Servers are built to handle large amounts of data, but they are not designed to multiplex massive numbers of users pounding on the processor and internal systems all at once. They are data boxes, not processing powerhouses. They tend to bottleneck really badly, virtualization or not, because they simply can't cost-effectively do the processing for lots and lots of people.
In addition, while PCs have been in price freefall, the prices on the kinds of servers that would be best suited for this kind of an implementation haven't been dropping as fast. Sixteen-way quad cores (and in this kind of implementation, you can really never have enough cores) remain relatively low volume and are comparatively very costly. In addition, data centers are coming close to hitting or exceeding thermal limits. If you started dropping several hundred cores into most of them, you could likely have a BBQ in the room just by bringing your meat (or your body) into it.
Finally, what made the PC industry go were standards, and for thin clients there are far too few of them. This means that if a company does make a massive investment, and it is generally massive, in a thin client platform, it is stuck with that vendor. This is one of the reasons HP is in this space; with that kind of mainframe-level risk, buyers want a major company behind the platform, one they know will stay with it. To take off, the industry would have to be much more standardized than it is, to reduce the risk associated with the move.
Mobile Remains a Massive Problem
The fastest-growing PC segment is mobile. The problem with thin client computing is that if you aren't connected, you basically have a cute little tech brick on your desk. There are thin client notebooks. HP actually has several, and with the right Wi-Fi technology they work on premises rather well. With enough WAN bandwidth, they are also very usable and, unlike a regular notebook, don't contain any data. If one is lost, it won't result in a formal, embarrassing reporting incident.
Unfortunately, WAN plans are expensive, WAN doesn't work in planes (and the Wi-Fi in some planes lacks the needed bandwidth), and these laptops are rather expensive for what they do (because they ship in relatively low volume).
The Future Could Still Be Thin
This is why older thin client architectures are giving way to more of a client/server design, which distributes the processing power (processors are inexpensive, after all) and leaves the storage centralized. The endpoint hardware is looking more like a PC and less like a TV with extra parts, which is essentially what the original thin client model amounted to.
By simply removing the drive and connecting to a high-speed, iSCSI SAN, like Intel is promoting with its offering, you can get most of the advantages of a thin client while retaining flexibility (the terminals are PCs without drives and could be made back into PCs again by adding drives), and you take significantly less risk. The Intel technology was architected specifically to handle the kinds of loads placed on it by desktop clients, and I've been told by several third parties (Lenovo, Wyse) that the technology exceeds the requirements set for it in this role.
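As a rough illustration of how such a diskless client attaches its centralized storage, here is a sketch using the standard open-iscsi tools on Linux. The SAN address and target name below are placeholders for illustration, not anything specific to the Intel offering:

```shell
# Ask the SAN which iSCSI targets it exposes
# (192.0.2.10 is a placeholder address for the storage array)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to a discovered target (this IQN is hypothetical)
iscsiadm -m node -T iqn.2009-01.example.san:client42 \
         -p 192.0.2.10:3260 --login

# The remote LUN now appears as a local block device (e.g. /dev/sda).
# The client boots and runs from it exactly as if the drive were in
# the box, while the bits themselves never leave the data center.
```

The point of the sketch is the flexibility argument made above: because the endpoint presents the network volume as an ordinary drive, it behaves like a PC, and dropping a physical disk back in converts it into one.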
The problem of mobile still remains, but here, WiMax may provide enough low-cost bandwidth to overcome this challenge. As I look at Netbooks and think of what a secure Netbook product would look like, with technology like TPM, biometric or smart card user authentication, and WiMax networking technology, we are very close to finally having a lot of the parts needed to provide the benefits of thin client computing, though it will look vastly different from what Larry Ellison and Scott McNealy first imagined.
This doesn't inherently put Microsoft at risk. Then again, that may be part of the promise because, for any new technology to succeed, it generally must first embrace the technology it replaces. Thin may be in your future, but at least initially it may not look that different to the user from what came before. That may be the single biggest reason why it could actually take off before the end of the decade.
At the end of the NYT piece, Vance quotes Roger Kay, one of my friends and traveling companions, as saying thin client is gaining momentum. It is, but only because it is looking more and more like a next-generation PC.