Top 20 Data Center Trends and Predictions to Watch for in 2019

Jan 16, 2019 | Blog

What changes are in store for the cloud and data center industry in 2019? We set out to find the answer by talking to industry experts Lars Strong (LS), Ian Seaton (IS), Bill Kleyman (BK), and Bob Bolz (BB). Here’s what they had to say:

1. Climate change will increasingly reduce redundant cooling capacity and free cooling hours – LS

Cooling systems are designed for the hottest day of the year: the maximum, and therefore the redundant, cooling capacity of a data center is based on the ability of the cooling infrastructure to remove heat on that day. Numerous studies conducted by organizations such as the National Center for Atmospheric Research (NCAR) and the National Oceanic and Atmospheric Administration (NOAA) have determined that over the last decade the number of record high temperatures has outpaced the number of record low temperatures by a ratio of 2:1, and one model predicts the ratio will increase to 20:1 by 2050. In its Top 10 Data Center Industry Trends for 2019, the Uptime Institute recommends conducting frequent availability assessments. Analyses such as these should include calculating total and redundant cooling capacity for record-breaking high temperatures. The way to offset the loss of cooling capacity is to improve airflow management in the computer room so that the conditioned air supply temperature and other mechanical system set points can be raised, regaining, or at least mitigating, the loss of redundant cooling capacity and free cooling hours.
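To make that capacity squeeze concrete, here is a minimal, illustrative sketch. All of the numbers (an 800 kW IT load, three 500 kW chillers in an N+1 arrangement, and a roughly 1% capacity derate per degree above the design ambient) are hypothetical placeholders rather than figures from this article; the point is only that the margin left after a unit failure shrinks as record heat pushes past the design day.

```python
# Illustrative only: hypothetical numbers showing how record heat erodes
# the redundant margin of an N+1 chiller plant.

def derated_capacity(nominal_kw, design_ambient_c, actual_ambient_c, derate_per_c=0.01):
    """Assume each degree above the design ambient costs ~1% of chiller capacity."""
    excess = max(0.0, actual_ambient_c - design_ambient_c)
    return nominal_kw * (1.0 - derate_per_c * excess)

it_load_kw = 800          # heat the room must reject
chillers = 3              # N+1 plant: any two 500 kW units can carry the load
nominal_kw = 500

for ambient_c in (35, 40, 43):   # design day vs. record-breaking days
    per_unit = derated_capacity(nominal_kw, design_ambient_c=35, actual_ambient_c=ambient_c)
    margin = (chillers - 1) * per_unit - it_load_kw   # headroom left with one unit failed
    print(f"{ambient_c} C ambient: {per_unit:.0f} kW per chiller, "
          f"margin with one unit down: {margin:.0f} kW")
```

Better airflow management works on the other side of the same equation: it lets supply temperatures and chilled water set points rise, which improves plant capacity at any given ambient and buys back some of that margin.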

2. Growth in the data center industry will continue to create staffing challenges – LS

The growth in the data center industry is driving both enterprise and colocation organizations to hire less-qualified staff. I see this condition when I am on site for cooling optimization consulting services. While some individuals in an organization may be aware of best practices, and even understand the science of computer room cooling, they are often so overworked that sharing that knowledge falls low on the priority list. Recognizing the deficit in operational knowledge, organizations are outsourcing management of the data center. However, this does not necessarily solve the problem: the same dynamic is occurring within many managed service providers, as they too are having a difficult time hiring knowledgeable staff. The lack of knowledge results in inefficient cooling configurations that increase operating cost and strand both power and cooling capacity. This problem will intensify throughout 2019.

3. AFM challenges in multi-tenant rooms will increase – LS

The growth of the colocation segment means a higher percentage of IT compute is being housed in multi-tenant rooms rather than single-tenant rooms. Multi-tenant rooms face numerous cooling efficiency challenges, and with cooling being the largest consumer of power after the IT equipment itself, cooling inefficiencies have a significant impact on PUE and OPEX. Cages, and the often marginal control over cooling best practices, are the most common causes of these inefficiencies. Colocation providers that don’t resolve these challenges carry higher overhead and won’t succeed in a highly competitive market. Everything else being equal, colocation providers that develop service level agreements with well-defined and enforceable cooling best practice requirements will end up with a significant advantage.

4. More integration of liquid cooling – LS

High-density, liquid-cooled IT equipment is coming; if a data center doesn’t already have liquid-cooled cabinets on its floor, it will soon. There are two drivers for liquid cooling: improved efficiency and high density. However, most adoption is being driven by high density and the difficulty, or outright impossibility, of cooling such loads with air. Last year Google revealed a liquid-cooled solution for its AI processors. Meanwhile, some organizations are designing new facilities around liquid cooling for the efficiency benefits. A federal laboratory in New Mexico, for example, is designing a new liquid-cooled data center; in addition to the OPEX savings designed in, the solution is delivering significant construction and infrastructure cost savings as well as a smaller overall building footprint. There is a great deal of confusion about how to implement liquid cooling and what it means for the overall cooling system, particularly among colocation providers. I have spoken both to colocation providers that have not yet developed a solution for customers who ask for space for their liquid-cooled IT and to providers that are ready for such customers. The mechanical side of the data center industry has not had a significant shakeup in many years; the growing implementation of liquid cooling is poised to change that.

5. The global data center industry will waste over $7 billion due to ineffective airflow management – IS

According to the latest Uptime Institute data center survey, the average global data center PUE in 2018 was 1.58. Based on the flattened improvement trend of the past five years, the forecast for 2019 will be something like 1.567. With total global data center energy estimated at 416 billion kWh, a 1.567 PUE means close to 150 billion kWh will be used for something other than IT equipment. Research indicates a Mechanical Load Component (MLC, the cooling-only portion of PUE) of 1.25 is realistically achievable with best-practice airflow management and the resultant temperature and airflow volume adjustments, without the additional economies of any variety of free cooling. Assuming another 10% of energy losses through power conversion, lights, coffee pots, etc., a 1.35 PUE seems a reasonable baseline target. The difference in energy use for the global data center market at the actual 1.567 PUE versus the target 1.35 PUE is about 57 billion kWh, or $7.5 billion at $0.13 per kWh. In the U.S., with data centers consuming roughly 90 billion kWh annually, the higher PUE resulting from failure to implement airflow management best practices will cost U.S. data center owners nearly $1 billion for 12.5 billion kWh of wasted energy at an average $0.08 per kWh.
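The arithmetic behind those figures follows directly from the definition PUE = total facility energy ÷ IT energy. The short sketch below simply re-runs the numbers quoted above; nothing in it is new data.

```python
# Reproducing the waste estimates above from the PUE definition:
# PUE = total facility energy / IT equipment energy.

def wasted_energy_kwh(total_kwh, actual_pue, target_pue):
    it_kwh = total_kwh / actual_pue        # energy that reaches the IT equipment
    target_total = it_kwh * target_pue     # what the same IT load would draw at the target PUE
    return total_kwh - target_total        # excess attributable to the higher PUE

# Global: 416 billion kWh total at the forecast 1.567 PUE vs. the 1.35 target
global_waste = wasted_energy_kwh(416e9, 1.567, 1.35)
print(f"Global: {global_waste / 1e9:.1f} billion kWh wasted, "
      f"~${global_waste * 0.13 / 1e9:.1f} billion at $0.13/kWh")

# U.S.: roughly 90 billion kWh total at the same PUEs
us_waste = wasted_energy_kwh(90e9, 1.567, 1.35)
print(f"U.S.:   {us_waste / 1e9:.1f} billion kWh wasted, "
      f"~${us_waste * 0.08 / 1e9:.1f} billion at $0.08/kWh")
```

Running this reproduces the roughly 57 billion kWh / $7.5 billion global figure and the 12.5 billion kWh / $1 billion U.S. figure cited above.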

6. We should expect to see an upsurge in litigation within the data center community – IS 

Rack densities are finally increasing after years of falling short of enthusiastic projections. Hybrid operations combining enterprise on-premises facilities with a wide range of digital infrastructure (cloud, colo, hosting, edge, et al.) will continue to proliferate, both as a proportion of the data center population and in the number of digital tentacles emanating from any particular business. All the added complexity has resulted in an upswing in outages and severe service degradations, as reported in the Uptime Institute’s 2018 data center survey. Rack densities will continue to increase. Migration from physical to digital infrastructure will intensify. Inexperienced migrants will struggle with meaningful and enforceable service level agreements. This all adds up to trouble for data center owners and customers, and income opportunities for lawyers.

7. The migration away from enterprise data centers will continue – IS

Colocation service providers have been expanding their offerings beyond the space, power, and cooling that defined their origins to include secure access to multiple cloud services, cross-connects to other sites and services, and smoother pathways for migrating among providers and cloud services. The availability of these services, combined with the struggle many organizations face in securing technical talent and in planning infrastructure capacity beyond actual construction and commissioning horizons, will continue to push the enterprise data center beyond the enterprise’s physical walls.

8. A consensus definition of “The Edge” will remain elusive, but it will not matter – IS

In the beginning, the edge was a content distribution network, necessitated by the high proportion of internet bandwidth consumed by video streaming traffic. While much of the world remained somewhat perplexed, those actually in the business knew exactly what that was. Now content distribution networks are a subset of the edge, rather than the other way around, and along with that change of scale comes the usual parade of competing definitions attempting to secure some market positioning advantage. Some of these definitions insist that an edge data center must serve a majority of a service area and must carry some high percentage of desired content. Others require the full catalog of cloud services and connectivity, plus mechanical and electrical redundancy to guarantee robust, resilient uptime. These debates will continue and will be further inflamed by new entrants trying to get a piece of the action. Rich Miller and Cole Crawford have both written that the edge is simply a place, not a thing. That spark of wisdom will not silence the cacophony of edge definers, but in the final analysis none of it will matter. Increased video streaming, doubling down on gaming competitiveness, all the various manifestations of artificial intelligence from driverless cars to surgical assistance, reliance on natural language processing, and many other applications and services that are either bandwidth hogs or latency sensitive, or both, will conspire to drive annual edge business growth to 40% and beyond, beginning this year.

9. The edge will be a significant innovation driver – IS

The edge will create a whole new market for computing power. It does not replace any currently existing computing, processing, or storage capacity, but rather creates a whole new place where all those functions are replicated. How quickly the edge lives up to the expectations we have heaped on it will depend on some significant innovations in the industry. The general requirements are lower costs and increased robustness, and they apply equally to the computing and networking equipment and to the physical infrastructure, potentially unmanned, erected at the edge. High costs and susceptibility to whatever nature heaps on unwatched IT would depress the rate at which the edge is populated. But the train has already left the station, and innovators are sure to keep up.

10. Cloud will be the number one way to complement data center service needs and requirements – BK

According to the latest AFCOM State of the Data Center report, within the next 12 months respondents are most likely to meet data center service needs via the cloud (58%). Within the next three years, respondents are similarly likely to meet their needs via the cloud (48%). Furthermore, two cloud trends are seen as having the largest effect on respondent companies: integration with AI (data-driven services and machine learning) (40%) and IoT growth resulting in more big data (37%). The most sought-after cloud competencies are data center, cloud, and colocation connectivity expertise (53%), followed by cloud architects (41%) and cloud security professionals (40%).

11. Renewable Energy will see a big jump in adoption – BK

While just 18% of respondents are actively involved in deploying renewable energy, an additional 50% are considering it within the next three years. The primary perceived benefit of investing in renewable energy is the role doing so will play in helping organizations achieve green initiatives (65%).

12. Good help will be hard to find – BK

Four in five respondents (78%) report challenges filling at least some personnel types, most commonly IT security personnel (31%), cloud architects (28%), and IT systems and/or applications personnel (27%). Furthermore, two in three respondents (65%) report their companies have had to increase investment in IT and/or data center facility personnel over the past three years. The most common driver of this increased investment is increased demand for on-site coverage (53%), followed by retention costs for existing staff (41%) and increased training and certification requirements (40%).

13. You’ll see more young people enter the data center field – BK

A majority of respondents (70%) are seeing more young data center professionals enter the workforce. A third (34%) find it difficult to recruit qualified young candidates.

14. Data will drive new services – BK

Respondents are most likely to currently leverage data analytics (52%), followed by big data (37%). While just 23% have currently implemented edge compute capacity, an additional 34% plan to do so within the next three years.  The typical respondent expects fewer than six edge locations (61%).

15. Automation might start to impact some roles – BK

Most respondents (70%) are currently leveraging data center automation and control, most commonly for smaller tasks (41%). Over half believe automation and control will eliminate some roles in the data center (57%).

16. Security will continue to be a top concern across a number of data center operations – BK

The primary threat to respondent security and infrastructure is ransomware (56%), followed by outside human threats (48%), advanced persistent threats (44%), inside human threats (42%), loss of PII (40%), and DDoS (37%). According to the report, the most important security service or process to implement or improve is incident detection (59%), followed closely by data loss prevention (54%). When implementing a cloud solution, respondents cite three primary concerns: security of company data (48%), TCO (47%), and network reliability (42%). Finally, with regard to DCIM, respondents are most likely to have implemented security (64%), followed closely by environmental (60%), power/energy management (58%), facility management (58%), and cable management (57%).

17. Liquid cooling will be a primary cooling technology in 5 years or less – BB

With the rise of machine learning, artificial intelligence, and 4K/8K content has come exponential growth in the use of GPU and tensor processor technology in the hyperscale data center. These are among the hottest processors made, drawing upwards of 350 W at full load in order to achieve optimal performance. According to Data Center Knowledge, DCD, Intersect360, HPCwire, and many other industry analysts, this will require adoption of liquid cooling in order to use these devices efficiently. If you take a deep dive into GPU cooling parameters, you can see that air-cooled units are actively throttled back by the manufacturer in order to keep working when air is the primary cooling method. Data centers want to maximize their investment in this very expensive processor technology, and 50% will adopt liquid cooling in order to maximize their GPU processing investment, enhance reliability, and save big on OpEx, as GPU adoption does not appear to be slowing anytime soon.
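For a rough sense of why air struggles here, consider the back-of-the-envelope sketch below. Only the 350 W per-GPU figure comes from the text above; the server and rack configuration are hypothetical placeholders.

```python
# Back-of-the-envelope rack power for GPU-dense servers (hypothetical configuration).
gpu_watts = 350           # per-accelerator draw at full load, as cited above
gpus_per_server = 8       # assumed dense GPU server
other_watts = 800         # assumed CPUs, memory, fans, and conversion losses per server
servers_per_rack = 8      # assumed

server_kw = (gpu_watts * gpus_per_server + other_watts) / 1000
rack_kw = server_kw * servers_per_rack
print(f"~{server_kw:.1f} kW per server, ~{rack_kw:.0f} kW per rack")
# ~3.6 kW per server and ~29 kW per rack under these assumptions -- far beyond
# the few kW per rack that many air-cooled rooms were originally designed to handle.
```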

18. Autonomous vehicle R&D will continue – BB

A brain-simulating neural on-board computer will effectively become the vehicle’s ‘eyes’, moving collision detection, avoidance, and physical reaction algorithms (decisions) into the self-driving vehicle and away from the data center, avoiding network lag time. Though we are a long way from computers that actually work like the human brain, synaptic processors that mimic organic neurons are being developed at MIT. Small neuromorphic computing systems will likely become part of a self-driving car’s on-board processing. These computers will be the first-line interpreter of high-resolution images, effectively becoming the “eyes/brain connection” for the vehicle. They will enhance the real-time ability to recognize new and unexpected patterns, such as a human crossing unexpectedly in traffic. On-board neuromorphic processing removes network lag time from the vehicle’s decision equation, giving the autonomous vehicle rapid on-board pattern recognition, analysis, and decision making for accident avoidance. Keep an eye out for developments here.

19. High performance computing has firmly moved into the hyperscale data center in the form of machine learning, artificial intelligence and large data set parallel analytics – BB

For some customers the cloud has been daunting, but in theory it should be a good place to do HPC. The rise of Docker and containers made HPC in the cloud more accessible, but one particular container advancement is leading the way toward greater adoption. Singularity, a leading container platform developed at Lawrence Berkeley National Laboratory, will help make HPC in the cloud more useful. Containers package scientific workflows, software, libraries, and data into a ready-to-run format without having to ask cluster administrators to install anything on the server to enable your runtime. Singularity is an ideal way to securely package HPC workloads for running at your friendly neighborhood cloud provider. Not having to build up expensive HPC resources of your own is especially advantageous for innovative technology startups. Singularity is open-source software commercially supported by Sylabs, which also offers a freely downloadable community edition as a starting point for development. HPC in the cloud is not for every user application, but the security concerns have been adequately addressed for most cases, and software like Singularity will greatly expand its use for most commercial business cases.
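As a hedged illustration of what that packaging looks like in practice, here is a minimal Singularity definition file. The base image, packages, and analysis script are hypothetical stand-ins; the structure (a bootstrap header plus %files, %post, and %runscript sections) is what lets the workload carry its entire runtime with it.

```
# analysis.def -- a minimal, hypothetical Singularity recipe for an HPC workload.
# Build once where you have root/fakeroot:  singularity build analysis.sif analysis.def
# Run anywhere Singularity is installed:    singularity run analysis.sif input.dat

Bootstrap: docker
From: ubuntu:18.04

# Copy the (hypothetical) analysis script into the image.
%files
    analysis.py /opt/analysis.py

# Install dependencies inside the image, not on the cluster.
%post
    apt-get update && apt-get install -y python3 python3-numpy

# Entry point invoked by "singularity run"; arguments pass straight through.
%runscript
    exec python3 /opt/analysis.py "$@"
```

The resulting .sif image is a single portable file that runs as the invoking, unprivileged user, which is the security property that makes it attractive for shared cloud and cluster environments.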

20. Bitcoin and other cryptocurrencies may die in 2019, as the use of blockchain distributed ledger technology for transactional processing grows rapidly in the financial and pharmaceutical marketplaces – BB

If you are like me, cryptocurrency seems a pointless waste of perfectly good electrical energy. But, if you are like me, you can also see the value of blockchain ledger technology for secure transactional processing that prevents data manipulation. In the pharmaceutical industry, blockchain can provide a unified solution for efficient, secure, end-to-end management of the global supply chain. By allowing multiple stakeholders to participate in the distributed network, it creates an incredibly secure database without the multi-homed configuration of previous supply chain technologies. A blockchain ledger can be applied to: 1) the manufacturing supply chain, 2) drug manufacturing safety, 3) inventory management, 4) public safety and drug awareness, and 5) clinical trial management. Distributed Ledger Technology (DLT) can also greatly enhance the security of domestic and foreign exchange in the financial sector, providing a “shared single version of the truth” across different balance sheets in different countries. Keep your eyes on HSBC, which recently announced settling $250 billion in trades with DLT.


Authors:

Lars Strong

Senior Engineer, Upsite Technologies

Ian Seaton

Data Center Consultant

Bill Kleyman

Director of Technology Solutions, EPAM

Bob Bolz

HPC and Data Center Business Development, Aquila
