So who has the fastest server in town? That age-old question is on the table once again following a series of claims and counterclaims over benchmark tests last week. But perhaps the real question is whether the benchmarks themselves have any meaning anymore in the virtual/multicore age.
The subject of benchmarks came up following an announcement from IBM claiming that its System p 550 Express machine outperformed HP's PA-RISC systems in recent TPC-C tests. In a bid to pull the nearly 170,000 PA-RISC users away from HP's migration path to the Integrity line, IBM reported that an eight-core p550 using a 4.2 GHz Power6 processor and running a single instance of DB2 Enterprise 9.5 on AIX 5.3 with DS3400 Express storage handled 629,159 transactions per minute (tpmC), 16 percent more than a 64-core HP 9000 Superdome and 1.6 times more than an Itanium-2-based Integrity rx6600. IBM also noted that the p550 uses only 9 percent of the energy of the Superdome and takes up only 2 percent of the space.
Numbers being what they are, it wasn't long before the flaws in the comparison came out. As this article in The Register points out, the HP Superdome results in question were from 2003, with processors running at 875 MHz, a comparison the paper describes as "skinning corpses." And even the comparison to the Integrity system overlooked the fact that IBM's results come at a cost of $2.49 per transaction, compared to $1.81 for HP.
Meanwhile, NEC is claiming new records for the TPC-E benchmark, which measures transaction performance against a simulated customer workload at, say, a brokerage firm. The company's NEC Express5800/1320Xf server using the S2500 storage system achieved 1,126.49 transactions per second (tpsE), a 70 percent improvement over the old record. The system used 32 dual-core Itanium 9150s running SQL Server 2008 on the Windows Server 2008 OS. That performance comes at a price of $2,771.79/tpsE.
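For readers keeping score at home, the price/performance figures quoted above follow the standard TPC formula: total system cost divided by audited throughput. A minimal sketch of that arithmetic, using the article's numbers (the total-cost value is back-derived from the quoted ratio, so it's approximate, not a reported figure):

```python
def price_per_transaction(total_cost_usd, throughput):
    """TPC price/performance: total system cost divided by
    throughput (e.g. dollars per tpmC or per tpsE)."""
    return total_cost_usd / throughput

# IBM p550: 629,159 tpmC at $2.49/tpmC implies a total system
# price near $1.57M (derived for illustration, not reported).
p550_cost = 2.49 * 629_159
print(round(price_per_transaction(p550_cost, 629_159), 2))  # → 2.49
```

The same division applies to NEC's TPC-E result: a $2,771.79/tpsE ratio at 1,126.49 tpsE implies a multimillion-dollar total system price, which is why the ratio, not raw throughput, is often the number buyers care about.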
Gaming benchmark results is a time-honored tradition in IT circles, and calls for reform have risen from time to time. But as Laurianne McLaughlin on CIO.com points out, a growing chorus of voices argues that the very idea of industry benchmarks has become obsolete. With virtualization and multicore technology producing an explosion of designs and architectures, it has become nearly impossible to compare one server to another. Many firms have begun establishing their own benchmarks to gauge the criteria they deem most important, whether transaction/application performance, power/density requirements, or a host of other measurements.
Love 'em or hate 'em, the existing benchmarks aren't likely to go away soon. But as the enterprise grows more diverse in its functions and underlying technology, they may prove less and less reliable as a guide to system performance in the real world.