The Cost and Ramifications of a Data Center Outage

by Bill Kleyman | Oct 11, 2017 | Blog

The ability to quickly provision a data center environment doesn’t revolve only around scale. There must also be considerations around agility, security, and resiliency. This may mean providing a new data center service focused on reducing downtime, or deploying modular data centers designed specifically for business and infrastructure agility. Beyond the rapid provisioning of resources, today’s data center faces new demands that revolve around data, cloud, and resiliency.

But what happens if and when it all comes crashing down? What if a worst-case scenario occurs and you are facing a data center outage? Obviously, it’s a scenario that none of us wants to live through. But it could very well happen, and the stakes are certainly high. Consider this: only 27% of companies received a passing grade for disaster readiness, according to a survey by the Disaster Recovery Preparedness Council. At the same time, increased dependency on the data center and cloud providers means that outages and downtime are growing costlier over time. Ponemon Institute recently released the results of its latest Cost of Data Center Outages study. Previously published in 2010 and 2013, this third study continues to analyze the cost behavior of unplanned data center outages. According to the new study, the average cost of a data center outage has steadily increased from $505,502 in 2010 to $740,357 today (a 38 percent net increase).

Across its research of 63 data center environments, the study found that:

  • The cost of data center downtime has increased 38 percent since the first study in 2010.
  • Downtime costs for the most data center-dependent businesses are rising faster than average.
  • Maximum downtime costs increased 32 percent since 2013 and 81 percent since 2010.
  • Maximum downtime costs for 2016 were $2,409,991.

In a related finding, an Avaya survey of mid-to-large companies in the US, Canada, and the UK found that 80% of companies experiencing data center outages from core network errors lost revenue. The average company lost $140,003 per incident. Financial sector enterprises lost an average of $540,358 per incident.
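
To make figures like these concrete, a back-of-the-envelope estimate can be built from a few business inputs. The sketch below is a minimal illustration, not the Ponemon or Avaya methodology; the duration, hourly revenue, and recovery figures are hypothetical placeholders.

```python
# Rough, illustrative outage-cost estimate. All input figures below are
# hypothetical placeholders -- substitute your own business numbers.

def outage_cost(duration_hours, revenue_per_hour, productivity_per_hour, recovery_costs):
    """Estimate the direct cost of a single outage."""
    lost_revenue = duration_hours * revenue_per_hour
    lost_productivity = duration_hours * productivity_per_hour
    return lost_revenue + lost_productivity + recovery_costs

# Example: a 90-minute outage at a mid-sized online business (hypothetical values).
cost = outage_cost(
    duration_hours=1.5,
    revenue_per_hour=60_000,       # revenue that cannot be captured during the outage
    productivity_per_hour=15_000,  # idle staff and delayed work
    recovery_costs=25_000,         # remediation, overtime, customer credits
)
print(f"Estimated outage cost: ${cost:,.0f}")  # Estimated outage cost: $137,500
```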

In understanding these new demands, it’s important to look at the main challenges facing the modern data center as it becomes the central hub for all emerging technologies.

Overload Challenge

What happens when data and usage spike?

There is more data coming online, and organizations are moving toward digitizing as much as they can. This involves everything from big data to more cloud computing services. Systems that support advanced levels of utilization and high-density computing create environments for true user multi-tenancy. But what happens when these systems spike? Is there enough visibility into all the layers to intelligently provision new resources? In working with the modern data center, it’s important to have complete visibility into both the physical and logical layers. This isn’t only about security. By having the ability to control resources at the application layer, administrators have many more proactive options for keeping their data centers resilient.
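
As a minimal illustration of that kind of proactive, cross-layer visibility, the hypothetical sketch below checks utilization readings against provisioning thresholds and flags anything approaching a spike before capacity runs out. The metric names and thresholds are assumptions, not a reference to any specific DCIM or monitoring product.

```python
# Minimal sketch of threshold-based capacity alerting across physical and
# logical layers. Metric names and thresholds are hypothetical.

SPIKE_THRESHOLDS = {
    "rack_power": 0.90,           # fraction of rated power capacity
    "cpu_utilization": 0.85,
    "storage_used": 0.80,
    "app_requests_per_sec": 0.75, # fraction of tested application throughput
}

def check_for_spikes(metrics: dict) -> list:
    """Return the metrics that have crossed their provisioning thresholds."""
    return [
        name for name, value in metrics.items()
        if value >= SPIKE_THRESHOLDS.get(name, 1.0)
    ]

# Example readings pulled from monitoring (hypothetical values).
current = {
    "rack_power": 0.78,
    "cpu_utilization": 0.91,
    "storage_used": 0.62,
    "app_requests_per_sec": 0.81,
}

for metric in check_for_spikes(current):
    print(f"Provision more capacity: {metric} is above its threshold")
```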

Physical and Logical Challenges

The disconnect between logical and physical convergence.

With so much reliance on the modern data center, administrators and managers must see the entire environment as one logical unit. This means that there can no longer be a separation between the hardware and the services hosted on top of that hardware. Data center operating systems create direct visibility into both the logical and physical layers. This is the logical progression for data center management. Not only are administrators given an extra layer of security, they are also able to automate and build a platform that is ready to scale. By joining these two data center functions, more standardization can occur immediately. Furthermore, this single pane of glass ties all data center operations together and connects them with the necessary business units.
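
One way to picture what that single pane of glass means in practice is a data model that ties every logical service back to the physical hardware it runs on. The sketch below is a simplified, hypothetical illustration of that link; real data center operating systems and DCIM tools expose far richer models.

```python
# Simplified sketch of a unified physical/logical inventory. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    rack: str                                            # physical location
    services: list = field(default_factory=list)         # logical workloads

hosts = [
    Host("host-01", rack="A1", services=["billing-api", "postgres-primary"]),
    Host("host-02", rack="A2", services=["billing-api"]),
    Host("host-03", rack="B1", services=["postgres-replica"]),
]

def racks_supporting(service: str) -> set:
    """Map a logical service down to the racks it physically depends on."""
    return {h.rack for h in hosts if service in h.services}

# Before taking rack A1 offline, see where each service it hosts also runs.
print(sorted(racks_supporting("billing-api")))       # ['A1', 'A2'] -> survives losing A1
print(sorted(racks_supporting("postgres-primary")))  # ['A1']       -> single point of failure
```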

Deployment Challenges

In deploying a new data center (or when renovating an older one), it’s vital to approach the project from a new perspective. Although there are individual components to the data center deployment process, it’s crucial to see the data center as one logical and physical unit. In many cases, using modular or pre-fabricated systems is the right move. The Uptime Institute survey already shows that 60% of its respondents either have an entire data center built from pre-fabricated components or are supplementing their existing environment with pre-fabricated systems. In working with the next-gen data center model, modularity and component-based deployments can save money and ease management of the overall infrastructure.
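
A simple sizing exercise shows why phased, modular deployment is attractive: you commission only the capacity the load forecast actually needs. The module size, safety margin, and load figures below are hypothetical assumptions for illustration.

```python
# Illustrative modular capacity plan: add prefabricated modules only when the
# forecast IT load approaches deployed capacity. All figures are hypothetical.
import math

MODULE_CAPACITY_KW = 300   # usable IT capacity per prefabricated module
SAFETY_MARGIN = 0.80       # expand before exceeding 80% of deployed capacity

forecast_kw = [450, 600, 780, 950]  # projected IT load for each coming year

deployed_modules = 0
for year, load in enumerate(forecast_kw, start=1):
    required = math.ceil(load / (MODULE_CAPACITY_KW * SAFETY_MARGIN))
    if required > deployed_modules:
        print(f"Year {year}: add {required - deployed_modules} module(s) "
              f"for a forecast load of {load} kW")
        deployed_modules = required
```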

Data Center Environmental Design from a “Bottom Up” Perspective

Creating architecture around environmental controls must be a truly scientific approach. This means understanding the requirements for your data center and delivering the appropriate mechanisms to manage the environment. Let me give you an example: for modular data centers, a modular containment system might make a lot of sense. It can be installed directly out of the box without any tools, allows for quick and easy mounting with minimal disruption to the data center, and fits common rack widths, allowing for easy, off-the-shelf ordering. These types of modular containment solutions and line-of-aisle airflow management products are specifically designed to block airflow between aisles, ensuring hot and cold air separation. Most of all, they can work with next-generation data center designs and make the deployment process a lot easier.
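
The scientific approach starts with basic heat-removal arithmetic. As a hedged illustration, the sketch below uses the standard sensible-heat approximation (roughly CFM ≈ 3.16 × watts ÷ ΔT°F) to estimate how much airflow a cabinet needs; containment matters because recirculation effectively shrinks the usable temperature difference. The cabinet load and temperature values are hypothetical.

```python
# Estimate the airflow a cabinet needs using the sensible-heat approximation:
#   CFM ~= 3.16 * IT load (watts) / delta-T (deg F)
# Cabinet load and temperature values below are hypothetical.

def required_cfm(it_load_watts: float, delta_t_f: float) -> float:
    """Approximate cooling airflow (CFM) for a given IT load and air delta-T."""
    return 3.16 * it_load_watts / delta_t_f

cabinet_load_w = 8_000   # an 8 kW cabinet

# Well-separated hot and cold aisles: the full 25 F delta-T is available.
print(round(required_cfm(cabinet_load_w, delta_t_f=25)))   # ~1011 CFM

# Recirculation from poor containment narrows the usable delta-T to 15 F,
# so the same load demands far more airflow from the cooling units.
print(round(required_cfm(cabinet_load_w, delta_t_f=15)))   # ~1685 CFM
```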

Too often we see data center designs done in fragmented steps. This methodology of design actually increases the risk of a data center outage or downtime. No one wants to experience this type of stress or loss in revenue. The good news is that you can design a data center platform that is flexible and architecturally very resilient. Remember, your data center architecture will involve your workloads, the design of the data center, and the environmental variables therein. Working with the right solutions not only makes the data center design process easier, it also reduces the chances of catastrophic downtime.

Bill Kleyman

Industry Analyst | Board Advisory Member | Writer/Blogger/Speaker | Contributing Editor | Executive | Millennial

Bill Kleyman is an award-winning data center, cloud, and digital infrastructure leader. He was ranked globally by an Onalytica Study as one of the leading executives in cloud computing and data security. He has spent more than 15 years specializing in the cybersecurity, virtualization, cloud, and data center industry. As an award-winning technologist, his most recent efforts with the Infrastructure Masons were recognized when he received the 2020 IM100 Award and the 2021 iMasons Education Champion Award for his work with numerous HBCUs and for helping diversify the digital infrastructure talent pool.

As an industry analyst, speaker, and author, Bill helps digital infrastructure teams develop new ways to impact data center design, cloud architecture, security models (both physical and software), and how to work with new and emerging technologies.
