AMD threw a bit of a party at SIGGRAPH, and I flew down to LA to share in the festivities of what was billed as a processor and graphics card launch bash. They did showcase the new Threadripper CPU and both consumer and professional graphics solutions, but the most interesting thing came at the end of the event: a petaflop single-rack server. The rack held 20 servers, each with one EPYC processor and up to four Vega graphics cards, daisy-chained together to reach that performance level. According to AMD, the server can support up to 1,280 workstation sessions, but focused on a single task it is a render king: even untuned, it produced near real-time rendering, suggesting we are getting very close to high-frame-rate, high-resolution, real-time rendering.
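The petaflop claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below is illustrative only; the ~12.5 TFLOPS-per-card figure is an assumption roughly in line with published peak single-precision numbers for Vega-class cards, not a figure AMD quoted for this rack.

```python
# Back-of-envelope check of the petaflop claim (illustrative only).
servers = 20
gpus_per_server = 4                      # up to four Vega cards per node
total_gpus = servers * gpus_per_server   # 80 GPUs in the rack

# Assumption: ~12.5 TFLOPS peak single-precision per Vega-class card.
tflops_per_gpu = 12.5
rack_tflops = total_gpus * tflops_per_gpu   # 1,000 TFLOPS = 1 petaflop

# The 1,280-session figure also divides evenly across the hardware.
sessions = 1280
sessions_per_gpu = sessions / total_gpus    # 16 virtualized sessions per card

print(f"{rack_tflops:.0f} TFLOPS across {total_gpus} GPUs")
print(f"{sessions_per_gpu:.0f} workstation sessions per GPU")
```

The numbers line up cleanly, which is presumably no accident: 80 cards at roughly 12.5 TFLOPS each is exactly one petaflop of peak throughput.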
The product, which will be sold by Inventec as part of its P-Series and distributed by AMAX toward the end of 2017, represents a massive performance boost for those who need either centralized workstation capability or shorter render times. Eventually, it will likely be used for AI development and deployment as well.
Let’s talk about what a petaflop server means for the market.
One of the big problems with trading floor implementations is that they use a lot of workstations distributed across cubicles and offices, each consuming power and generating substantial heat that must be managed. Centralizing has long been a goal, both to move the heat and power to where they can be better managed and to better secure the solution. But the complexity of rack-mounting large numbers of workstations, and the I/O limitations of servers, have largely prevented that centralization.
Project 47, once tuned, should be able to address the needs of a decent-sized trading floor, removing a great deal of the air-conditioning load and better securing and protecting the result. Granted, you'd still need to ensure redundancy and failover, but this could be an ideal solution, particularly if you needed to set up a trading floor quickly or were undergoing lots of changes (which is common now).
Setting up the capability for high-resolution military simulations in the field is difficult because the equipment would take up too much room, which means either compromising the experience or running it remotely at some well-fortified command center. Project 47 is small enough to be put in the corner of a mobile command center, or even on a large military aircraft, allowing real-time rendering of data captured through drones or other sensors. That could provide an unprecedented view of the engagement theater and a huge potential strategic advantage.
Presenting friendly and hostile forces, much as you would in a real-time strategy game, would give decision makers far better intelligence on logistics, troop movements and resource placement. That alone might determine which side won or lost a major engagement.
So much is done with green screens and CGI for movies today that it is often impossible to tell the difference between what is real and what is rendered. But rendering a film frame by frame is an expensive, time-consuming process. Studios manage massive render farms, and because each frame can take hours to render, the creation process is slow and very resource intensive.
Systems like Project 47 could dramatically reduce render times, cutting both the time and the resources needed to create a major movie. That lowers studio risk and lets projects advance more quickly to revenue, potentially helping both the top and bottom lines. Granted, studios still must avoid producing crap, which has often proven difficult, but this should at least reduce the loss when that happens.
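To see why render throughput matters so much, consider some rough arithmetic. Every number below is an assumption chosen for illustration (film length, frames per second, hours per frame, farm size), not a figure from AMD or any studio.

```python
# Illustrative render-farm arithmetic; all inputs are assumptions.
minutes = 100                 # assumed feature-film runtime
fps = 24                      # standard theatrical frame rate
frames = minutes * 60 * fps   # 144,000 frames to render

hours_per_frame = 5           # assumed average render time per frame
total_hours = frames * hours_per_frame   # 720,000 compute-hours

# Even with 1,000 render nodes working in parallel, wall-clock time:
nodes = 1000
days = total_hours / nodes / 24          # ~30 days

print(f"{frames:,} frames, {total_hours:,} compute-hours, ~{days:.0f} days on {nodes} nodes")
```

A system that renders N times faster divides both the compute-hours and the wall-clock days by N, which is why a denser, cheaper rack translates directly into shorter schedules and lower cost.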
But this also anticipates a time when the system itself could automatically generate scenes around actors, both real and digital, further blending gaming and creation and pointing to a time when you could have a reality TV show set in a fully interactive virtual world.
And There Is More
Bounding a system like this is difficult because it could clearly step into many supercomputing, machine learning and deep learning efforts, do interesting things with weather and space mapping, and take on large-scale modeling and simulation. It also gives us an idea of the performance a single workstation may have in around a decade.
Wrapping Up: Imagining the Future of Computing
I think we are only touching the tip of the iceberg right now regarding the next wave of computing performance. We are approaching the ability to do real-time, high-frame-rate, high-resolution rendering, and that will change many industries. For now, Project 47 represents the cutting edge of production single-rack supercomputers. That means it will also provide more low-cost alternatives to supercomputer projects, and more distributed supercomputer capability to the schools and employers that need it.
Given that we went from a petaflop computer that filled an entire room in 2008 to one rack in 2017, you have to wonder whether we'll have a petaflop on the desktop by 2026. If we do, the world will look very different than it does today. If you don't like change, this is likely the wrong decade for you, because we are due for a ton of it.
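The room-to-rack trend can be extrapolated naively. The sketch below assumes a 2008 petaflop machine occupied on the order of 250 racks and that a desktop tower holds about 1/20th of a rack's hardware; both are rough illustrative guesses, and density gains rarely continue unabated, so treat the result as a thought experiment rather than a forecast.

```python
import math

# Naive extrapolation of performance density (illustrative assumptions).
racks_2008, year_2008 = 250, 2008   # assumed footprint of a 2008 petaflop machine
racks_2017, year_2017 = 1, 2017     # Project 47: one rack

# Implied halving period for volume at constant performance:
shrink = racks_2008 / racks_2017                        # ~250x in 9 years
halving_years = (year_2017 - year_2008) / math.log2(shrink)

# Assume a desktop tower is ~1/20th of a rack; time to shrink 20x more:
years_to_desktop = halving_years * math.log2(20)
desktop_year = year_2017 + years_to_desktop

print(f"halving period ~{halving_years:.1f} yr; desktop petaflop ~{desktop_year:.0f}")
```

Under these assumptions the math actually lands a few years before 2026, which suggests the desktop-petaflop question is less "whether" than "when," even allowing for the trend to slow.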
Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+