How Data Center Consolidation Will Impact Power and Cooling Strategies
Perhaps I should have titled this “How Data Center Consolidation Should Impact…” or perhaps “How Data Center Consolidation Might Impact…”, because, after all, the logical conclusion and the actual conclusion do not always neatly align. For example, after ASHRAE TC 9.9 published the allowable environmental envelope for Class 3 servers and then almost all new production servers came with Class 3 operating requirements, the logical conclusion would be that there is no reason to build a data center with any mechanical cooling, unless you’ve settled on some prime real estate in the Lut Desert, Death Valley or the Australian Badlands. Surprisingly, there have been very few takers on that entirely logical proposition. With that caveat then, we can explore what we might expect to see in the mechanical and electrical distribution arenas with the prevailing migration toward data center consolidation.
You can’t swing a dead search engine around by the tail without hitting articles on the benefits of consolidation and the secret tips for successfully consolidating data centers, and Bill Kleyman addressed “Data Center Consolidation: A Manager’s Checklist” in this space back in September of 2015, so I will limit my comments to some ramifications that perhaps not everyone has seen lurking on the horizon. First, in the broadest view, data center consolidation is the active adoption of more efficient, cost-effective technologies to stem the tide of adding and maintaining more (read: unnecessary or redundant) data center space. There are three very general paths to consolidation:

1. An enterprise consolidates several data centers into a few data centers.
2. An enterprise moves most of its hardware into one or more consolidated co-location data centers.
3. An enterprise gets out of the business completely and migrates everything to the cloud.

Just for clarification, consolidation does not necessarily mean going from many to few; it can also simply mean reducing the resources required for a particular data center. Each of these paths will offer varying levels of benefit and require different management strategies, but, depending on critical mass, they will share some common implications for mechanical and electrical distribution infrastructure.
Furthermore, for the sake of keeping things simple, I will appear to be confusing “cloud” and “consolidation” in some of the following observations, but only because the ultimate blurring of those distinctions “is only logical.” Some of our plot points here could be Oracle CEO Mark Hurd’s prediction that 80% of corporate data centers will disappear into the cloud in the next eight years, or the Cisco Global Cloud Index prediction that hyperscale data centers will grow from housing 21% of the world’s servers to 47% within the next couple of years and from handling 34% of data center traffic to 53%, and that global cloud IP traffic will account for more than 92% of total data center traffic by 2020. You don’t have to squint too hard to make that blur work.
For a spoiler-alert preview, we can look at the impact of this consolidation on IT hardware. In general, the increase in equipment purchases is lagging well behind the increase in stored and transacted data. A 2013 study conducted by Lawrence Berkeley National Laboratory concluded that if enterprises moved their email and productivity applications to the cloud, their need for servers would drop from 3.5 million to fewer than 50,000, something like a 98% reduction. So a reasonable conclusion might be that we have just moved the point of acquisition. Au contraire, mon ami. While most enterprise data centers may operate servers at less than 20% utilization, cloud data centers typically operate at around 40% or higher, which means that for the same workload the demand for servers would decrease by around half. However, it is not quite so straightforward. Major cloud service providers such as Amazon, Facebook, Google and Rackspace are now designing their own servers, and the Open Compute Project has created a specification process that bypasses the major OEMs. In the spirit of this cost-containment, vertical-integration approach, Google has developed its own ASIC and Facebook has developed its own switch. Our friends at Amazon Web Services have picked up this ball and run with it, designing their own routers, chips, storage and compute servers, as well as their own high-speed network. The impact of cloud consolidation thus extends well beyond the behavior of individual enterprises; it truly has the potential to turn this whole industry on its ear.
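The utilization arithmetic above is simple enough to sketch out. The numbers below are illustrative only (the 20% and 40% figures come from the paragraph; the workload value is an arbitrary placeholder, since only the ratio matters):

```python
# Back-of-envelope sketch of the consolidation arithmetic described above.
# Utilization figures (20% enterprise, 40% cloud) are from the text;
# the workload value is arbitrary -- only the ratio matters.

def servers_needed(workload, utilization):
    """Servers required to carry a fixed workload at a given average utilization."""
    return workload / utilization

workload = 700_000  # arbitrary workload units

enterprise = servers_needed(workload, 0.20)  # ~20% utilization on-premises
cloud = servers_needed(workload, 0.40)       # ~40% utilization in the cloud

print(f"Enterprise servers: {enterprise:,.0f}")
print(f"Cloud servers:      {cloud:,.0f}")
print(f"Reduction:          {1 - cloud / enterprise:.0%}")  # 50%
```

Doubling average utilization halves the server count for the same workload, which is the “decrease by around half” claim in the paragraph above.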
Similar game-changing implications reside in the electrical and mechanical distribution areas. The biggest change will be that, even though the market is growing, the number of customers will actually be shrinking. Figures provided by IDC and Data Center Dynamics indicate that while the number of U.S. data centers has declined every year since 2009, data center capacity and square footage have continued to increase. That decline in the count of data centers is further magnified by merger and acquisition activity, which also reduces the number of customers for data center products and services. The effect of mega consolidation is obvious, but the growth of smaller edge data centers will also contribute to the reduction in customers, as we can expect to see a movement away from colocating in independent local colo spaces toward either colocating with edge specialists who will copy-and-paste their designs around the country in second- and third-tier markets, or content providers deploying their own cookie-cutter lean designs in those previously under-served markets.
The Open Compute Project will drive toward a commoditization of rack, cooling and power distribution products, lowering costs … and supplier profits … and reducing the served available market of some channels. I have always thought that the data center needed to be regarded as a machine rather than as a room or building with a bunch of tech stuff stuck in it. These consolidated data centers and the Open Compute Project are finally moving us in that direction, and as a result the demand for traditional power and cooling solutions is changing. Traditional 120 or 208 VAC power distribution sometimes looks like 480 VAC straight to the server, and obstacles to DC power distribution may be diminishing as single customers become large enough to support an industry outlier. Power redundancy may become less meaningful when a whole data center becomes the economic unit of redundancy. Likewise, how much sense does it make to assume the costs of raised access floor and precision perimeter cooling units for spaces ranging from 50,000 square feet to approaching a million square feet? Since these data center operators are starting to design their own servers, we may finally find our way to that long-delayed world of data centers with no mechanical plant. As these data centers compete with each other on cost, it is only logical that they will continue to strip out much of the historical capital and operational overhead and become buyers of components (fans, evaporative media, etc.) that fit into their “machines” rather than of value-engineered solutions.
Conversely, we can expect the mega data centers and content-provider edge data centers to be looking for design partners with whom they can work to establish competitive advantages for their spaces. These two opposing forces will eventually lead to some shake-out in the industry segments serving the data center market. The commoditization of power, cooling and racks will create profit problems for suppliers who are not pursuing low-cost standard manufacturing strategies. Captive design-partner relationships will leave suppliers with weak engineering and service capabilities on the outside looking in. Data center suppliers focusing on the enterprise market may see some shrinkage in served available market as the cloud removes the risk of taking a wait-and-see attitude about when to expand or add data center capacity.
While we live in a very conservative, change-averse world in the data center industry, if the industry projections about consolidation migrations into the cloud are more or less accurate, what we will really be seeing is the migration of work from part-time amateurs to full-time experts, and the associated migration from rooms full of IT stuff to fully integrated IT machines. If so, we may actually start moving fast enough that my dog will want to stick his head out the window of this car and feel some wind blow through his ears.
The continuing trends of consolidation into super mega-cloud data centers, transformation of edge data centers by content providers, and growth of open compute space will all conspire to change the playing field for some suppliers into this space, and we may see that shake-out starting in 2017.
About the Author
Ian Seaton is an independent Critical Facilities Consultant and serves as a Technical Advisor to Upsite Technologies. He recently retired as the Global Technology Manager of Chatsworth Products, Inc. (CPI).