Both AMD and Intel have been toying with memory virtualization as a way to share server workloads among multiple systems. Now, it looks like a startup is taking the concept and applying it across the data center at large.
RNA Networks has just launched the first product on its Memory Virtualization Platform (MVP): RNAmessenger. The idea is to link the memory of all the servers in an architecture into a single pool stretched across any number of networked server nodes. The pool operates like any other pooled resource, except that it serves the CPU memory needs of the data center rather than bulk server or storage capacity.
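To make the pooling idea concrete, here is a minimal sketch of how a node-spanning memory pool might behave: allocations land on the local node when it has room and spill to remote nodes over the fabric when it doesn't. The names (`Node`, `MemoryPool`, `allocate`) are illustrative assumptions for this sketch, not RNA's actual API.

```python
# Hypothetical model of pooled server memory: each node contributes
# capacity, and allocations spill to remote nodes when the local
# node is exhausted. Purely illustrative -- not RNA's implementation.

class Node:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def free_gb(self):
        return self.capacity_gb - self.used_gb


class MemoryPool:
    """Aggregates the memory of many nodes into one allocatable pool."""

    def __init__(self, nodes):
        self.nodes = nodes

    def total_free_gb(self):
        return sum(n.free_gb() for n in self.nodes)

    def allocate(self, size_gb, prefer=None):
        # Try the preferred (local) node first, then spill to any node
        # with room; in practice the remote access rides the network.
        candidates = ([prefer] if prefer else []) + \
                     [n for n in self.nodes if n is not prefer]
        for node in candidates:
            if node.free_gb() >= size_gb:
                node.used_gb += size_gb
                return node.name
        raise MemoryError("pool exhausted")


a, b = Node("server-a", 32), Node("server-b", 64)
pool = MemoryPool([a, b])
print(pool.allocate(24, prefer=a))  # fits locally -> "server-a"
print(pool.allocate(16, prefer=a))  # local node full, spills -> "server-b"
print(pool.total_free_gb())         # 8 GB left on a + 48 GB on b = 56
```

The spill step is where the networking matters: a remote "hit" is only useful if the fabric can deliver it far faster than paging to disk, which is why high-speed interconnects come up next.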
Company execs say the goal is to keep clustered server memory capabilities in line with the multicore processors that are driving performance. As the number of cores per server increases, memory subsystems become stressed by all the competing demands for attention. On its own, server virtualization doesn't address this problem, because virtual machines have no access to memory outside their physical server.
Naturally, efficient pooling of server memory requires high-speed networking of the first order. RNA already has Mellanox on board with its InfiniBand product line, primarily the ConnectX IB HCAs, which are optimized with what the company calls Channel I/O Virtualization (CIOV) to boost the memory virtualization capabilities of top-end multicore processors from AMD and Intel.
RNA brings considerable experience in high-end networking: the company counts recruits from Cray, Intel, QLogic and Akamai among its ranks, and it just pulled in $7 million in VC funding from Menlo Ventures.
We take it on faith that anyone unveiling a complex networking system has performance-tested it in the lab to make sure it gets along with the more common data center platforms out there. There's every reason to believe RNA users won't face integration issues any more serious than those of other new technologies, but we won't know for sure until we see real-world deployments.
Until then, pooling server memory is as sensible an idea as pooling any other data center resource, perhaps more so given the immediacy of the transactional or database information that resides in memory. When that memory gets flooded or knocked out of commission, everyone on the network knows it instantly.