The Art of Rackonomics - Part I

Julius Neudorfer

As I sit here this morning, I am facing the design challenge presented by a new client's data center project. They just purchased a small 80-year-old, six-story, 30,000-square-foot building in the heart of a major city as their new headquarters. They have already decided that the data center will go on the ground floor, alongside the main entrance and reception area, the freight entrance, and the IT department. Like many companies that are 'growing up,' this is their first 'real' data center, and they have many different people all 'contributing' to the design requirements.

Of course, they did not want any of the problems they have with their existing computing environment, most of which stem from their 'make-the-best-of-it' approach: multiple distributed 'server closets' scattered across several locations, all homed back to a 'pseudo' data center in a small corner of their warehouse building. So they decided that the ground floor would be ideal for the new data center and should provide plenty of space. After all, this will be their new headquarters and the cornerstone of their business for the future.

Of course, when management decided to purchase the property, it did not really consider IT's requirements or what IT expected of the data center, only that it should be 'first-class' (presumably to be interpreted as senior management-speak for 'Tier IV').

Ah, the perfect site, a 'classy' address, right next to city hall and down the block from a high-profile federal government agency, so it should be very secure.

So down to the basics. The IT department went off the deep end, since it was directed by senior management to make this data center 'future proof' with plenty of room for growth. They specified that it should have 60 racks (they only had 12 racks before, so I guess that projecting 500 percent growth seemed more reasonable than 1,000 percent). In addition, they wanted everything redundant: power, cooling, communications, LAN and WAN (who could blame them, since they were used to 'server closets,' one of which was actually shared with the janitorial supplies).

While the IT equipment list for 'day one' was being created and re-created by the IT department, it was clear that some of their IT architecture designs were still a little 'fuzzy,' but they knew that this was their one chance to get new equipment. They had no real idea about the power and cooling requirements for the new equipment, but that was to be my problem. Otherwise, why else would they need me as a data center design consultant?

So, of course, I began to do a few preliminary layouts and calculations, but in the interim, management wanted to 'fast track' the project, and I was told the architect had already divided up the ground-floor space. Suddenly, the data center was an odd-shaped piece containing three main building columns. He had already heard that IT needed 60 racks, and since he had experience designing spaces with racks (so what if they were 2-post open racks in wiring closets), he knew that there should be plenty of space if we 'laid them out efficiently.'

In fact, he showed me his sketch with a 36-inch front aisle and a 24-inch rear aisle, and all the 2-post racks neatly facing the main door (and the freight entrance, so that it would be easier for the IT people to bring in the new equipment). To save even more space, he had specified overhead HVAC units to be installed in the ceiling. He had even allowed a 'generous' 15 square feet per rack, which accounted for 900 square feet, and added another 100 square feet for 'miscellaneous' electrical gear. (He assumed that the UPSes were going to be mounted in the bottom of each rack, since that was what they were already using.)

Moreover, he asked 'one of' the IT guys what size UPS they used and was told it was 1400VA. So he knew that his cooling load was approximately 24 tons (60 racks x 1400VA works out to 84kVA, or roughly 24 tons if you count every volt-ampere as a watt of heat). In fact, to ensure high-availability N+1 cooling, he had specified six overhead 5-ton HVAC units instead of only five. Imagine, he had fitted the entire data center into only 1,000 square feet. So what if it was somewhat 'U' shaped and only 8 feet wide in some places? Look how much larger and nicer the main reception area looked.
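For anyone who wants to retrace that napkin math, here is a minimal sketch of the architect's sizing arithmetic (illustrative only; it assumes the 1,400VA nameplate converts straight to 1,400W of heat, i.e., a power factor of 1, and uses the standard figure of roughly 3,517 watts per ton of cooling):

    # Architect's back-of-the-envelope sizing (illustrative sketch only)
    racks = 60
    ups_va_per_rack = 1400              # nameplate of the small rack-mount UPS
    sq_ft_per_rack = 15
    misc_electrical_sq_ft = 100
    hvac_unit_tons = 5
    watts_per_ton = 3517                # 1 ton of cooling is about 3.517 kW

    total_load_w = racks * ups_va_per_rack          # 84,000 W (assuming PF = 1)
    cooling_tons = total_load_w / watts_per_ton     # about 23.9 tons, call it 24

    hvac_units = -(-cooling_tons // hvac_unit_tons) # ceiling division: 5 units
    hvac_units_n_plus_1 = hvac_units + 1            # 6 units for N+1

    floor_space_sq_ft = racks * sq_ft_per_rack + misc_electrical_sq_ft  # 1,000 sq ft

Run it and the architect's numbers all check out; whether the inputs themselves (that 1,400VA figure in particular) survive scrutiny is another matter, and it is where the red flags start.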

Management was very pleased with his preliminary sketches (especially the lobby) and in particular with the data center infrastructure budget of under $200 per square foot ($195,000 for the six HVAC units, the 60 2-post racks and 100kVA of electrical panels). After all, they had heard that data center buildouts cost several thousand dollars per square foot. All I had to do now was review his design, perhaps just add some details, and approve it.
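The budget arithmetic is just as simple (again, only a sketch, and only the three line items the architect actually priced):

    budget_dollars = 195_000    # six HVAC units + 60 2-post racks + 100kVA of panels
    space_sq_ft = 1_000
    cost_per_sq_ft = budget_dollars / space_sq_ft   # $195 per square foot

Under $200 per square foot sounds wonderful next to 'several thousand dollars per square foot,' right up until you ask what those three line items leave out.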

Thus began my first challenge, to raise the red flag and explain the real costs of a 'first class' data center to senior management (not to mention design criteria to the IT department and the architect).

So stay tuned for Part II to see if I remain the consultant on this project after I explain 'Rackonomics' to the client.

Disclaimer: In case you were wondering, I did not coin 'rackonomics.' It has been used by the blade server contingent. However, I like to use it as a straightforward description of cost per rack, which I believe is more meaningful than speaking in terms of cost per square foot for data center space.
