Data Center Security Means Preventing Breaches and Outages
We’re going to take a couple of interesting approaches when it comes to keeping our data centers healthy. Throughout my experience working with a variety of data center and cloud environments, I’ve found that outages and data center security events (such as a breach) can have very similar impacts on the business and the data center: both are very costly.
In the 2017 Ponemon Cost of Data Breach Study, researchers found that the global average cost of a data breach was down 10 percent from the previous year, to $3.62 million. The average cost for each lost or stolen record containing sensitive and confidential information also decreased significantly, from $158 in 2016 to $141 in 2017. Healthcare, because of the particularly sensitive nature of its data, was hit the hardest.
Ponemon Institute calculated the average healthcare data breach cost at $380 per record. While the average global cost per record across all industries is $141, healthcare data breach costs are more than 2.5 times that global average. Financial services came in second at $336 per record.
In another study, conducted in 2016, Ponemon Institute released the results of its latest Cost of Data Center Outages survey. Previously published in 2010 and 2013, this third study continued to analyze the cost behavior of unplanned data center outages. According to the new study, the average cost of a data center outage has steadily increased from $505,502 in 2010 to $740,357 today (a 38 percent net change, as the study reports it).
Across the 63 data center environments studied, the maximum downtime cost recorded for 2016 was $2,409,991.
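As a quick sanity check on the per-record figures cited above, the multiples are easy to verify. This is just illustrative arithmetic using the numbers quoted from the 2017 Ponemon study, not any calculation from the study itself:

```python
# Per-record breach costs (USD) as cited from the 2017 Ponemon study.
GLOBAL_AVG = 141   # global average cost per lost/stolen record
HEALTHCARE = 380   # healthcare cost per record
FINANCIAL = 336    # financial services cost per record

# Healthcare comes out to more than 2.5x the global average,
# and financial services to roughly 2.4x.
print(f"Healthcare multiple: {HEALTHCARE / GLOBAL_AVG:.2f}x")
print(f"Financial multiple:  {FINANCIAL / GLOBAL_AVG:.2f}x")
```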
So we know two truths: we really don’t want to experience either a data center outage or a data center breach. Fortunately, data center monitoring solutions, security tooling, and DCIM have become a lot smarter over the years. In working to create a healthier and more secure data center, consider the following.
- Get rid of legacy gear. Legacy gear is one of the few places in a data center where both security and efficiency are directly impacted. Too often we’ll find older servers or even networking gear that hasn’t been patched. Although it continues to “work,” it’s really not providing any value to the business; in fact, it introduces unnecessary risk. I’ve seen both security incidents and data center failures happen because of older gear. So check your environment, look over your remote locations, and scan your closets for older gear. Make sure it’s either patched or ready to be on its way out. This is a great way to improve both efficiency and security.
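One way to put this sweep into practice is to flag aging or unpatched assets from an inventory export. The sketch below is a minimal, hypothetical example: the CSV column names (`hostname`, `deployed`, `last_patched`) and the thresholds are assumptions, not any particular DCIM or CMDB format, so adapt it to whatever your tooling exports.

```python
import csv
from datetime import date, datetime

MAX_AGE_YEARS = 7        # flag hardware older than this (assumed policy)
MAX_PATCH_AGE_DAYS = 90  # flag systems not patched within this window

def flag_legacy_gear(inventory_path, today=None):
    """Return hostnames that look overdue for patching or retirement.

    Expects a CSV with (hypothetical) columns: hostname, deployed,
    last_patched, where the dates are formatted YYYY-MM-DD.
    """
    today = today or date.today()
    flagged = []
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            deployed = datetime.strptime(row["deployed"], "%Y-%m-%d").date()
            patched = datetime.strptime(row["last_patched"], "%Y-%m-%d").date()
            too_old = (today - deployed).days > MAX_AGE_YEARS * 365
            stale = (today - patched).days > MAX_PATCH_AGE_DAYS
            if too_old or stale:
                flagged.append(row["hostname"])
    return flagged
```

A report like this won’t replace a real audit, but it gives you a repeatable starting list for the closet sweep described above.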
- Invest in good monitoring systems. Modern DCIM and data center management platforms can integrate physical and environmental monitoring. You can check on doors, locked cages, secure areas, isolated locations, and much more. Not only can you tell whether your data center is operating optimally, you can also see if a cage door was left unlocked and who last accessed a secure portion of your data center. Let me be clear: laptops, hard drives, and even entire servers have been known to “walk off.” Similarly, a poorly monitored data center will run into serious environmental challenges. Investing in a good monitoring platform will help alleviate both of those problems.
- Understand your workloads, and design around what you require. Your data center isn’t just one giant block of resources. Rather, it’s a machine that can be carved up as needed for various use cases. Within your data center walls you may well have a section requiring high-performance compute resources. You may have that area isolated, leveraging solutions like modular containment. You may also have independent monitoring of that area for sensitive data or information processing. Finally, you may require different airflow and power management because of the types of workloads being processed. The point here is to make sure your workloads run on the right type of gear with the right level of monitoring. This helps with the efficiency of the overall data center, and it also helps isolate sensitive workloads.
- Test, audit, and report. The data center isn’t a stationary entity. It will constantly evolve and change with business requirements, which is why you’ll need to make sure it stays flexible. Whether you’re testing security or efficiency, the point is that both are important, and both need to be done regularly. Efficiency must become a science: constantly measure the performance of your data center and find ways to improve. Similarly, security is a measured process that ensures both physical and virtual infrastructure security. It’s important to work with good systems that can help you test your airflow and power utilization, run CFD analysis, and improve overall data center efficiency. It’s just as critical to work with good security auditing solutions, ones that can look into physical access, asset locations, contextual user permissions, and more.
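Measuring efficiency starts with a baseline metric, and the most widely used one for the power side is Power Usage Effectiveness (PUE): total facility power divided by the power delivered to IT equipment, where 1.0 is the theoretical ideal. A minimal calculation, with illustrative numbers of my own choosing:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT load.

    A PUE of 1.0 would mean every watt goes to IT gear; real data
    centers typically land somewhere between roughly 1.1 and 2.0,
    with cooling and power distribution making up the overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: 1,500 kW total draw with 1,000 kW reaching IT equipment.
print(pue(1500, 1000))  # 1.5
```

Tracking a number like this over time (alongside airflow and CFD results) is what turns “efficiency” from a feeling into the measured, repeatable process described above.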
The big point is that losing data and experiencing data center outages are both very costly. It’s important to work with good solutions that increase visibility into your most critical resources, from both a security and an operational perspective. Although there are great solutions for securing cloud and virtualization layers, we must never forget the physical side of security. So, to keep your data centers more secure in 2018, make sure to invest in good technologies that keep your physical data center safe and your environment running optimally.
Executive Vice President of Digital Solutions, Switch | Industry Analyst | Board Advisory Member | Writer/Blogger/Speaker | Executive | Millennial | Techie
Bill Kleyman brings more than 15 years of experience to his role as Executive Vice President of Digital Solutions at Switch. Using the latest innovations, such as AI, machine learning, data center design, DevOps, cloud and advanced technologies, Mr. Kleyman delivers solutions to customers that help them achieve their business goals and remain competitive in their market. An active member in the technology industry, he was ranked #16 globally in the Onalytica study that reviewed the top 100 most influential individuals in the cloud landscape; and #4 in another Onalytica study that reviewed the industry’s top Data Security Experts.
Mr. Kleyman enjoys writing, blogging and educating colleagues about everything related to technology. His published and referenced work can be found on WindowsITPro, Data Center Knowledge, InformationWeek, NetworkComputing, AFCOM, TechTarget, DarkReading, Forbes, CBS Interactive, Slashdot and more.