Top Data Center Trends and Predictions to Watch for in 2016

Dec 9, 2015 | Blog

Click here to view our list of data center trends and predictions for 2017.


What changes are in store for the cloud and data center industry in 2016? We set out to find out by talking to industry experts Lars Strong, Bruce Taylor, Vince Renaud, and Ian Seaton. Here's what they had to say:

As densities continue to increase and budgets remain tight, the need to get as much cooling capacity as possible at the lowest operating cost remains strong. I see this need driving three trends in data center cooling:

1. Hybridization

There are many components that make up the cooling "system," and for each of these components there are many options to choose from. Cooling unit type, fan type, perforated tile design, cabinet design, room layout, IT equipment design, and much more all affect how the system will work. I am seeing many more options from manufacturers of cooling components, and a lot more experimentation with different component combinations by end users. I am optimistic about this trend because with experimentation comes understanding, both of the nature of cooling dynamics and of the methods that work for the unique conditions of every computer room. I expect this trend of experimentation to continue. While there is some cost associated with it, the potential savings in the long run, and the confidence that a solution has been found, will far outweigh that cost.

2. More Cooling Education and Persistent Myths

With the growing number of solutions for improving the efficiency and capacity of cooling, there is an increase in education about these methods and about cooling science in general. Deciding whether to implement solutions such as indirect or direct evaporative cooling requires a greater understanding of thermodynamics and environmental conditions, and as a result there is a growing amount of educational material. However, even with more education available, I am seeing a number of myths about cooling science persist. Embedded in these myths is the misunderstanding and misapplication of sound science. As the possible methods for cooling computer rooms increase, so does the possibility for misunderstanding the science. My recommendation is to seek out independent expertise. If an "expert" can't explain a concept in a simple, easily understandable way, they may not understand all the interrelated issues themselves.

3. Increased Awareness of the Fundamentals of Airflow Management

While there are many strategies for configuring computer room cooling, the prerequisites of effective and efficient cooling remain the same: sealing cable openings if there is a raised floor, using blanking panels, and closing gaps in rows of cabinets. It may seem self-serving for me to call this a trend, since taking care of these AFM prerequisites has been a large part of what I have been sharing for more than the last decade of my career. However, as I attend conferences and trade shows around the country and the world, I am more frequently seeing session topics on AFM basics, and I often hear presenters refer to AFM fundamentals, either because the fundamentals are required to get the most out of their cooling solution, or because the presenters see room for improvement as they visit sites and know how important the fundamentals are.

– Lars Strong


1. Growth Will Catch Us Off-Guard

Among the megatrends that will move globally faster than anyone expects: I've said it before publicly, the growth in data from all sources will catch us all off-guard. Conservative estimates suggest that by the end of the decade we'll cross the 50-zettabyte mark for data to be trafficked across networks, processed, stored, and used in every realm of human endeavor. At the beginning of the decade the term hadn't been coined; by the end of this year, we're at over 5 ZB. (Much of this growth is in the IoT/E; by the end of the decade, 25 billion digital devices of all types will be connected to the Internet.)
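
For a back-of-envelope sense of what those figures imply, here is a quick calculation; the endpoints (roughly 5 ZB at the end of 2015 and 50 ZB by 2020) are assumptions read from the estimates above, not reported data:

```python
# Implied compound annual growth rate for global data volume, using the rough
# figures quoted above (assumed endpoints: ~5 ZB at end of 2015, ~50 ZB by 2020).
start_zb, end_zb, years = 5.0, 50.0, 5
cagr = (end_zb / start_zb) ** (1.0 / years) - 1.0
print(f"Implied compound annual growth: {cagr:.1%}")  # about 58.5% per year
```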

2. Capacity and Innovation

If we were dependent on the existing ICT, data center, and cloud infrastructure to handle the growth in data and traffic, we'd be using ALL of the world's power by the end of the decade. That's not going to happen, is it? We're going to add to existing capacity in all areas, and we will innovate at an ever-increasing rate.

3. Cloud to Impact Data Center Pricing

Cloud is disrupting, and will continue to disrupt and disintermediate, the pricing models for the data center industry. At the same time that enterprises are shedding their obsolete data centers, they are increasingly bypassing colo and managed-services leasing in favor of hybrid cloud and its advantages in flexibility, agility, and price.

4. Cloud Domination

Cloud will be overwhelmingly dominated by AWS, Google, Microsoft, and IBM, with everybody else trying to play catch-up. And, at the same time, cloud services will be offered in all kinds of colo/hosting/hoteling environments. Rackspace, for example, intends to be cloud neutral.

5. Megascale Data Centers

Data centers will move increasingly toward the megascale (e.g., Switch's $5B SUPERNAP Michigan plan on the site of the old Steelcase Pyramid campus), while there's a countervailing pull toward much smaller network-edge data centers to beat the network latency rap. Expect new physical architectures that are not bounded by old thinking about racks and rows (e.g., Vapor.io's Vapor Chamber, shown for the first time at DCD Europe in London in mid-November) and that turn notions about rack capacity, power effectiveness, and thermal management on their head.

6. The Need for DevOps

If your enterprise IT shop does not now have a DevOps team or its equivalent, then you are out of business and just haven’t seen the memo from your customers yet. We’re solidly in the era of the web-facing, hyperscaling data center, the so-called Third Platform. If you don’t see your business through this lens, then you are behind the competitive curve.

7. Design Innovation

In every single arena of data center technology, innovation is proceeding at almost blinding speed. It has to in order to keep pace with the growth in demand. The next decade will see more transformation in data center design than the past 50 years did.

8. Convergence of IT and Facilities

Solid-state (flash) storage, silicon photonics, Docker-like containers and microservices-enabled disaggregation, hyper-convergence, smart micro-grids, cooling alternatives, software-defined everything: we are moving inexorably toward the full-stack "north of the rack" IT and "south of the rack" facilities infrastructure being truly converged in the data center.

9. Workload Based Architecture

Systems architectures will increasingly be based on workload. Cloud orchestration and management become the next really big deal in cloud, spanning private, hybrid, public, and multi-cloud architectures.

10. Uptime Availability in the SDDC

Network performance and the cloud now enable disaster recovery, failover, business continuity, and full backup to be abstracted to a plane above the physical data center. Uptime availability will be as much or more a matter of software-defined networking (SDN) and the software-defined data center (SDDC) as it will be of physical facilities redundancy or operational resilience.

– Bruce Taylor


We see a general growth in automation of the data center infrastructure and critical environment. This comes in the form of SCADA, BMS, EMDS, and other control systems. Some items to know about deploying such control systems:

1. Threat of Hackers

Many are designed to “phone home” either by an internet connection or by a technician interfacing with the system via a laptop. This ability to connect to the system from external sources is a vulnerability that opens the door for hackers to infiltrate these control systems and potentially cause damage.

2. Plan for Outages

Although the deployment of automation in the data center enables simple tasks to be accomplished with the push of a button, many operators do not understand the sequence of operations behind them or what to do should the system fail or hang. This leaves the data center vulnerable to outages. Many operators rely too heavily on the vendors that come in to maintain or make changes to these systems, and become complacent. Much as a pilot needs to be able to land a plane manually in an emergency, data center operators should be able to "land" their infrastructure during an emergency or unplanned event.

3. Managing Change to the Infrastructure

The fact that a data center is automated doesn't negate the need to develop, test, implement, and document a rigorous set of policies and procedures for deviations from the standard configuration. Managing change to the infrastructure is important because many upsets occur when the site is in an abnormal configuration.

– Vince Renaud


1. Liquid Cooling Becoming a Bigger Player

Intel has released a new chipset that can pack 50-60 kW of compute power into a single 40-50U footprint. Air cooling such a rack with standard CFM:kW ratios would require anywhere from 6,000 CFM up to 9,400 CFM of chilled air per rack. That airflow requirement would necessitate one standard data center precision cooling unit for every 2-3 racks, which is not impossible but not particularly practical. This may be the year we see the high performance computing market niche invest more fully in some form of direct contact liquid cooling. We shouldn't be too dismissive of these niche applications for research and development labs; after all, it was only 10 years ago that chimney cabinets were being deployed to enable 15-20 kW loads in industrial and university high-density research labs. After just a few years of incubation, this technology became ubiquitous and morphed from being a density solution to an efficiency solution.
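
For a rough sense of where airflow figures like those come from, here is a minimal sketch of the standard sensible-heat estimate, CFM = (kW × 3,412) / (1.08 × ΔT°F); the 20-26°F delta-T values below are assumptions chosen to bracket the range cited above, not published specifications:

```python
# A minimal sketch of the standard sensible-heat airflow estimate; not a design tool.
# Standard-air approximation: CFM = (kW * 3412 BTU/hr per kW) / (1.08 * delta_T_F).

def required_cfm(rack_kw: float, delta_t_f: float) -> float:
    """Chilled airflow (CFM) needed to absorb rack_kw of heat at a given air delta-T (deg F)."""
    btu_per_hr = rack_kw * 3412.0            # IT load converted to BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)   # 1.08 = sensible-heat factor for standard air

# Assumed delta-T values of roughly 20-26 deg F bracket the 6,000-9,400 CFM range cited above.
for kw, dt in [(50, 26), (60, 20)]:
    print(f"{kw} kW rack at {dt} F delta-T -> {required_cfm(kw, dt):,.0f} CFM")
```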

2. Mechanical Cooling Phasing Out

In the Uptime Institute's 2014 data center survey, 24% of respondents said they would consider a data center without mechanical cooling. Dell's new servers are touted as meeting ASHRAE Class A3 (up to 104˚F server inlet temperature) for 10% of annual hours and as being compliant with Class A4 (up to 113˚F) for 1% of the year. While those may seem like rather limited proportions of the year, it turns out that just about any habitable location in the United States where a data center could actually be built will fall within those limits. 2016 may not yet be the tipping point for building new data centers without mechanical cooling, especially since so much new construction is for cloud, hosting, and colocation, all of which will continue to be driven more by marketing perception than by reality. Still, the stars appear to be slowly aligning for movement in that direction.
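
To make the "limited proportions of the year" point concrete, here is a minimal sketch (not any vendor's qualification method) of how one might screen a site's hourly dry-bulb data against those excursion budgets. The 95˚F continuous inlet rating, the 8,760-reading input, and the zero-tolerance treatment of hours above 113˚F are all assumptions, and any economizer approach temperature is ignored:

```python
# A minimal sketch for screening a site's climate against the excursion budgets described
# above: excursions above the continuous rating but within the Class A3 limit (104 F) for
# at most 10% of annual hours, excursions within the Class A4 limit (113 F) for at most 1%,
# and nothing hotter than that. The 95 F continuous rating is an assumption, not a spec.

def within_excursion_budget(hourly_temps_f,
                            continuous_limit_f=95.0,   # assumed continuous inlet rating
                            a3_limit_f=104.0, a4_limit_f=113.0,
                            a3_budget=0.10, a4_budget=0.01):
    hours = len(hourly_temps_f)
    a3_excursions = sum(continuous_limit_f < t <= a3_limit_f for t in hourly_temps_f)
    a4_excursions = sum(a3_limit_f < t <= a4_limit_f for t in hourly_temps_f)
    too_hot = sum(t > a4_limit_f for t in hourly_temps_f)
    return (too_hot == 0
            and a4_excursions / hours <= a4_budget
            and a3_excursions / hours <= a3_budget)

# Example with made-up data: a year of hourly readings that never exceeds 90 F passes trivially.
print(within_excursion_budget([85.0] * 8760))  # True
```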

3. Data Center Industrialization

The trend toward the industrialization of the data center will continue to gain traction. While the more traditional view of the computer room as just another room in the building with a bunch of computers in it will continue to hold some sway for smaller enterprise data centers, the IT and facilities viewpoints for larger commercial data centers continue to converge. As these disciplines recognize their mutual interdependence, the evolving integration of mechanical, architectural, electrical, and electronic elements into a cohesive machine promotes the passage from the provincial computer room to the industrialized data center. The growing robustness of DCIM tools, the ability to easily move whole businesses around the cloud, and all the new physical and virtual security protections contribute to this ongoing development.

4. “Sustainability” Everywhere

2016 may be the year the word "sustainability" passes the tipping point into meaninglessness from overuse. On the plus side of the ledger, we have examples like Google buying 61 MW of solar power from Duke Energy, Amazon's 100 MW wind energy purchase agreement in Ohio, and Equinix signing power purchase agreements to buy enough wind energy in Texas and Oklahoma to power all its North American sites. In addition, Apple is supporting research in Ireland on using tidal flow or wave energy to power data centers. (Remember the stories on Google's patent for harnessing tidal energy?) But that success also opens the door to some spurious use of the self-congratulatory "sustainability" label. For example, Nautilus has touted its Waterborne Data Center solution as "the most environmentally sustainable data center on the market," even though it uses ocean water for cooling but whatever commercial power is available for powering the IT. When we juxtapose some creative marketing against this context and the Obama administration's Clean Power Plan focus, we can expect a much higher ratio of "sustainability" to "other morphemes" in press releases and pseudo-technical publications in the coming year.

5. Do or Die for DCIM

Will 2016 be the year DCIM reaches its tipping point, or will it be the year it just tips over from its own bloat? That may be a more reasonable question than vendors and market prognosticators are letting on. Consider that CA, one of the big four DCIM solution providers, recently decided to throw in the towel, and that the Uptime Institute's 2015 data center survey revealed that nearly half of respondents either had no plans to pursue DCIM or had evaluated it and turned back. In addition, a quarter of respondents who had successfully implemented DCIM reported that it took over a year to complete the deployment, and about half of enterprise respondents said they could not justify the poor ROI. Pundits will remain infatuated with the idea, but we will see this coming year whether it really gets over the hump.

6. IoT Will Continue to Grow

The Internet of Things has been a buzzword for a while now, and the trend of connecting more things to the internet will continue. The folks at Gartner and IDC are predicting that there will be between 25 billion and 30 billion things connected to the internet by 2020, which has significant implications for data center scale and functionality as well as storage capacity. That growth curve is probably more exponential than it is linear, and chances are our lack of imagination is underestimating both its scope and its trajectory.

7. Increased Server Workloads

According to Cisco's Global Cloud Index, cloud server workloads will increase 64.7% and enterprise server workloads will increase 60% by 2019 compared to last year. It will be interesting to see what correlation there might be between the server density ramp-up and the rack and real estate density ramp-ups.

8. Enter, the Edge Data Center

As traffic for which latency is of primary importance increases, such as streaming of any kind, and as more life-critical applications such as driverless cars gain traction, edge networks and edge data centers will gain attention in the coming year. The most important development might simply be that a significant proportion of our industry will actually know what the edge is. The first important development should be some kind of shake-out to help us understand whether an edge data center is a small data center in a second- or third-tier market or some kind of plug-and-play module. The question will be whether proximity to the user means proximity to a town or remote region, or proximity to a building or a floor within a larger building. How that question is answered will ultimately determine whether the edge represents a significant hardware market or a significant software market.

– Ian Seaton

2 Comments

  1. Richard Miller

    All of the above trends are probably correct, but we should also look at the bigger picture. If we keep our DC clean, I mean technically clean on an ongoing basis and free of ongoing contamination, this helps airflow, heating, and cooling, and therefore capacities, but we never talk about these aspects. If we destroy our legacy kit in the correct manner, we lessen hackers' opportunities and adhere to compliance and capacity requirements. If we look to tape backup as a longer-term solution, we lessen the reliance on cloud and the associated security concerns. These trends are not always glamorous, but they are vital to the ongoing demands within the DC market.
