Are You Prepared for Hyperscale Data Centers?
More organizations are looking for better ways to scale their businesses, and a big part of that strategy revolves around the data center. With that in mind, organizations have only a few options:
- Leverage a colo
- Build your own
- Move to the cloud
However, not every cloud or colocation partner can offer data center capabilities at hyperscale levels. Let me give you an example from a case study I recently did with Groupon. As a growing organization, Groupon knew it needed to partner with the right type of data center provider to align technology strategy with its evolving business goals. The provider had to be customer-focused, with capabilities around scale, security, agility, and support. By leveraging the right data center partner, Groupon went from deployment to service delivery in an incredibly short period: the provider's hyperscale platform delivered the service and infrastructure that allowed Groupon to stand up its entire ecosystem, serving users and delivering the application, in just 41 days.
If you are looking to leverage hyperscale data centers, there are some excellent options out there. However, you need to look for the right technologies and partnerships to support you.
The need for data center partners capable of delivering rapid growth
Remember, it is not just about providing resources and access to data center space. Hyperscale data centers offer unique competitive advantages in their ability to support advanced delivery mechanisms and resource controls.
If you are working on a cloud or big data solution that requires a data center ecosystem, make sure you understand the difference between traditional and hyperscale capabilities. First of all, an excellent hyperscale data center partner will appreciate your technology requirements and business model. Companies that use cloud-like concepts to solve big data problems need hyperscale data centers capable of keeping up.
Consider this: back in the day, with simple apps, you would have a tiny team working to solve an algorithm problem within an app whose output could be reused for everyone in a region. Now, you have problems that consume massive amounts of data, affect everyone differently, and rely on data that is only good for that moment in real time. Take the Uber app as an example. It must update continuously with your position and your driver's, and it is different for every user. All of this requires vast amounts of data and highly scalable resources. Can your traditional data center keep up? Realistically, for your bigger cloud and data requirements, it is probably time to look for a data center that brings hyperscale capabilities.
The future of the data center and why hyperscale will disrupt the market
Cisco recently pointed out that hyperscale data centers represent a significant portion of overall data, traffic, and processing power in data centers. Traffic within hyperscale data centers will quintuple by 2020. Hyperscale data centers already account for 34% of total traffic within all data centers and will account for 53% by 2020. Hyperscale data centers will also represent 57% of all data stored in data centers and 68% of total data center processing power.
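To put those numbers in perspective, a quintupling of traffic implies a steep compound growth rate. This minimal sketch (assuming, for illustration, that the quintupling spans a five-year window ending in 2020; Cisco's exact baseline year may differ) shows the implied annual growth:

```python
def cagr(growth_factor: float, years: int) -> float:
    """Annual growth rate that compounds to `growth_factor` over `years`."""
    return growth_factor ** (1 / years) - 1

# Traffic quintupling (5x) over an assumed five-year window.
rate = cagr(5.0, 5)
print(f"Implied annual growth: {rate:.1%}")  # roughly 38% per year
```

In other words, sustaining that forecast means traffic growing by well over a third every single year, which is exactly the kind of curve traditional facilities struggle to absorb.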
From server closets to large hyperscale deployments, data centers are at the crux of delivering IT services and providing storage, communications, and networking to the growing number of networked devices, users, and business processes. The future of the data center will revolve around growing trends and genuinely unique use-cases. However, one fact remains: we will continue to see more data, more cloud utilization, and a lot more requirements around data processing. Traditional data centers will undoubtedly have their place, but the rise of cloud and big data will also mean new business requirements and higher utilization of hyperscale systems.
Managing Hyperscale Design
Hyperscale data centers are designed to be massively scalable computer architectures. To accomplish this level of scale and density, hyperscale data centers create optimizations around server utilization, energy efficiency, cooling, and their space footprint. One way they do this is by automating and controlling the delivery of critical resources, all the way from servers down to the racks.
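Energy efficiency in these facilities is commonly summarized with Power Usage Effectiveness (PUE), the ratio of total facility power to IT equipment power. Here is a minimal sketch using hypothetical load figures (the function name and numbers are illustrative, not from any particular facility):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt goes to IT gear); cooling,
    power distribution losses, and lighting push the ratio higher.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1,200 kW total draw supporting a 1,000 kW IT load.
print(f"PUE = {pue(1200, 1000):.2f}")  # PUE = 1.20
```

The closer a facility drives that ratio toward 1.0, the less power is being spent on overhead like cooling, which is precisely where hyperscale optimizations around airflow and density pay off.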
Within the hyperscale data center, look for new solutions regarding infrastructure segmentation, improved airflow controls, and the ability to scale rapidly. Let me give you a specific example: modular containment. Within the hyperscale data center, modular containment solutions and aisle airflow management technologies are specifically designed to block bypass airflow, ensuring hot- and cold-aisle separation. The cool part here (pun very much intended) is the ease with which you can apply a variety of modular containment configurations to the hyperscale framework.
Modular containment allows your hyperscale data center to continue to perform optimally while taking on new use-cases and business initiatives.
Remember, the entire idea behind hyperscale capabilities revolves around rapid, but very efficient, resource provisioning. Efficient hyperscale data centers will control power, floor space, airflow, and much more. It is crucial to leverage sound technologies that will evolve your ecosystem from traditional to hyperscale.
Furthermore, if you are working with a hyperscale data center, make sure to ask the all-important questions about how they manage their critical environmental variables. Quickly deploying an application is great, but what does the environment underneath look like? Is it efficient? Is it truly hyperscale-enabled? Whether you are building, partnering, or buying into the hyperscale market, know that this segment continues to grow. It will be up to you to find the best technologies and partnerships to enable your business.
Learn the importance of calculating your computer room’s CCF by downloading our free Cooling Capacity Factor white paper.
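For context, CCF compares a computer room's running cooling capacity to its heat load. The white paper gives the authoritative definition; the sketch below uses an assumed formulation (running cooling capacity divided by IT load plus a ~10% allowance for other room heat sources) with hypothetical numbers:

```python
def cooling_capacity_factor(running_cooling_kw: float,
                            it_load_kw: float,
                            overhead_factor: float = 1.1) -> float:
    """CCF sketch: running cooling capacity relative to room heat load.

    The 1.1 overhead factor is an assumption here, approximating lights
    and other non-IT heat sources; see the white paper for the exact
    definition.
    """
    return running_cooling_kw / (it_load_kw * overhead_factor)

# Hypothetical room: 800 kW of running cooling for a 300 kW IT load.
ccf = cooling_capacity_factor(800, 300)
print(f"CCF = {ccf:.2f}")  # a high CCF suggests stranded cooling capacity
```

A CCF far above 1.0 often points to airflow management problems rather than a genuine need for more cooling, which ties back to the containment discussion above.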
About the Author
Bill Kleyman, CTO, MTM Technologies
Bill is an enthusiastic technologist with experience in data center design, management, and deployment. His architecture work includes large virtualization and cloud deployments as well as business network design and implementation. Bill enjoys writing, blogging, and educating colleagues on everything that is technology. During the day, Bill is the CTO at MTM Technologies, where he interacts with enterprise organizations and helps align IT strategies with direct business goals. Bill's white papers, articles, video blogs, and podcasts have been published on InformationWeek, NetworkComputing, TechTarget, Wall Street Journal, ZDNet, Slashdot, and many others.