The Power of a Private Cloud: Understanding Deployment Requirements

by Bill Kleyman | Apr 13, 2016 | Blog

The ability to use the Internet to distribute data over vast distances has been around for some time, but cloud computing has only become a practical reality over the past few years. Organizations can now build infrastructure capable of great scale. By deploying a cloud infrastructure, many environments are supporting more users, taking on more functions, and adding more business value. In some cases, a public or hybrid cloud platform is the right way to go. Still, many organizations want to retain control of their data and their cloud environment, and this is where the private cloud can show serious benefit. To create a robust and agile cloud infrastructure, however, administrators must surround their IT platform with technologies capable of supporting this type of solution.

Public vs. Private

The two biggest cloud models in the industry each offer very specific benefits. While some organizations want to outsource their cloud environment, many others want to retain control. A private cloud model has the direct benefit that your organization can continue to expand while controlling the data that flows through the data center. Private cloud technologies are being used to deliver many different types of services, including:

  • Virtual desktops and applications.
  • Files and data services.
  • Private cloud portals and collaboration spaces.
  • Disaster recovery functions.
  • Branch office extensions.
  • Compliance or regulatory-based data delivery.

Whatever the reason to work with a private cloud model, this design has helped evolve the data center into what it is today. Remember, with public cloud computing, there are certain elements to be aware of:

  • Unknown cost structures.
  • Relinquishing control.
  • Public data center lock-in.
  • Regional site resiliency issues.

In selecting the right model, always make sure you understand what you are trying to deliver. Then plan around growing your environment and ensure that your platform can scale with your organizational needs. In a public cloud environment, such scale can be very costly; when sized properly, however, a private cloud model leaves plenty of room for growth.

Three Big Considerations around Private Cloud Deployment

  • Sizing your cloud. In creating a robust cloud environment, you must first understand what you are trying to deliver. Simple application virtualization may not require much horsepower; virtual desktop infrastructure (VDI) does. Depending on your workload, where your users are located, and the types of servers you use, your cloud model can be capable of great scale and agility. Remember, each type of workload brings resource requirements that must be met. For example, VDI requires a lot of dedicated RAM and a solid shared storage infrastructure. Furthermore, for the highest density, it's highly recommended that you work with a scalable blade infrastructure. These blades must be able to handle user loads, run applications, and maintain high amounts of uptime. This requirement shows just how much need there is for blade systems to be highly resilient.
  • High-density computing. As mentioned earlier, a good private cloud design will fit numerous users, efficiently, across a set of high-density servers. For almost any cloud deployment looking to house a large number of users and workloads, a blade environment is usually the right way to go. The important part here is choosing the server technology that best aligns with your organization's IT needs. Any blade server that will house a private cloud component must be built around the latest multi-core processor platform and support a large amount of RAM. Furthermore, these machines must be easy to manage and administer. Why? With such a large number of users, blades must be able to be swapped out quickly or repaired on the spot. This is why using a blade environment with a "tool-free" design is a critical consideration. This means using technologies that allow tool-free access to system components, deliver power-reduction software to limit power utilization, and provide remote login support directly from the manufacturer.
  • Powering and cooling your cloud. In the latest State of the Data Center Survey, respondents were asked about their data centers and how they were managing their most critical components. The growth of data within the data center has created far greater requirements around power and cooling, and density is increasing as well. Consider this: 70% of respondents indicated that power density (per rack) has increased over the past three years.
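The sizing arithmetic behind the first point above can be sketched out numerically. The per-desktop and per-blade figures below are purely illustrative assumptions (not recommendations from this article); plug in your own workload profile:

```python
# Rough, RAM-bound VDI host-sizing sketch. All figures (4 GB per desktop,
# 512 GB blades, 32 GB hypervisor overhead) are illustrative assumptions.
import math

def hosts_needed(users, ram_per_desktop_gb=4, host_ram_gb=512,
                 hypervisor_overhead_gb=32, n_plus_1=True):
    """Estimate how many blade hosts a VDI workload needs, assuming RAM
    is the binding constraint (CPU and storage sized separately)."""
    usable_ram = host_ram_gb - hypervisor_overhead_gb
    desktops_per_host = usable_ram // ram_per_desktop_gb
    hosts = math.ceil(users / desktops_per_host)
    # N+1: one spare blade so a failed unit can be swapped out
    # without losing user capacity.
    return hosts + 1 if n_plus_1 else hosts

print(hosts_needed(1000))  # 1000 desktops on 512 GB blades -> 10 hosts
```

A sketch like this also makes the resiliency argument concrete: at 120 desktops per blade, every blade you lose takes a large user population with it, which is exactly why tool-free, hot-swappable blade designs matter.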

Because of the increasing dependency on data center services, redundancy and uptime are big concerns. We saw fairly steady trends in redundant power levels spanning today and the next three years. For example, at least 55% already have, and will continue to have, N+1 redundancy levels. Meanwhile, no more than 5% of respondents either currently have, or will have, 2(N+1) redundant power systems. For the most part, data center managers are using at least one level of power redundancy. Like power, cooling must be a big consideration in the cloud and digital age. Data centers are increasing density, and cooling is critical to keep operations running efficiently. On the cooling side, more than 58% indicated that they currently run, and will continue to run, at least N+1 redundant cooling systems. Both today and three years from now, 18% will operate an N+2 cooling redundancy architecture.
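The redundancy schemes cited in the survey differ simply in how many units of power or cooling capacity are installed beyond the N units needed to carry the full load. A minimal sketch (the example of 4 CRAC units is illustrative, not from the survey):

```python
# How many power/cooling units get installed under common redundancy
# schemes. `n` is the number of units required to carry the full load.
def installed_units(n, scheme):
    schemes = {
        "N":      n,            # no redundancy: any failure drops capacity
        "N+1":    n + 1,        # one spare unit
        "N+2":    n + 2,        # two spare units
        "2N":     2 * n,        # fully mirrored system
        "2(N+1)": 2 * (n + 1),  # two mirrored systems, each with a spare
    }
    return schemes[scheme]

# A room needing 4 cooling units at full load:
for scheme in ("N", "N+1", "N+2", "2(N+1)"):
    print(scheme, installed_units(4, scheme))  # 4, 5, 6, 10 units
```

The jump from 5 units at N+1 to 10 at 2(N+1) for the same load shows why so few respondents run the highest redundancy tiers: capital and operating cost roughly double.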

The cloud revolution will only continue to expand. As more organizations jump on the cloud computing bandwagon, they'll be able to leverage even more benefits of a widely distributed, highly connected environment. The key point to understand is this: build around intelligent cloud control and scalability. By planning for the needs of both today and the future, your organization can continue to leverage the full power of the cloud. As mentioned earlier, working with efficient and highly scalable computing systems can only help your environment better meet the needs of your organization.

Bill Kleyman


CTO, MTM Technologies

