
Data center analytics to play central role in the future of operations
February 12, 2016

This article was originally published by techrepublic.com and can be viewed in full here

One of the holy grails of data center operations is to take metadata from the physical data center components and provide application-level integration. Data center operations teams keep massive amounts of performance data, hoping to improve problem resolution and increase data center efficiency.

In concept, metadata from the data center is fed into a big data platform, which analyzes the data and makes recommendations to application or resource-clustering software. In this blog post, I'll take a look at two different approaches from companies that appeared on a podcast I host.

CloudPhysics

CloudPhysics CTO Irfan Ahmad recently joined me to discuss the company's SaaS-based solution focused on VMware vSphere. Even with all of the advanced resource management features in vSphere, customers are sometimes slow to implement automated resource placement. On paper, vSphere Distributed Resource Scheduler (DRS) can ensure workloads get the physical resources needed to maintain application performance.

When implemented according to best practices, these features work fairly well, but setting them up so that they don't compound issues is a challenge. CloudPhysics uses detailed analysis of vCenter data to identify performance trends. Comparing anonymized performance data against CloudPhysics best practices simplifies the task of configuring DRS. CloudPhysics can also provide custom reports that help customers understand the performance of their physical infrastructure based on data from CloudPhysics' global client base.
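To give a flavor of the raw material this kind of analysis starts from, here's a minimal sketch (mine, not CloudPhysics' code) that pulls basic host utilization data out of vCenter with the pyVmomi SDK. The vCenter address and credentials are placeholders.

```python
# Minimal sketch: pull basic host utilization data from vCenter with pyVmomi.
# Hostname and credentials are placeholders; this is not CloudPhysics' code.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="analyst", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        stats = host.summary.quickStats
        print(host.name, stats.overallCpuUsage, "MHz CPU,", stats.overallMemoryUsage, "MB RAM")
    view.Destroy()
finally:
    Disconnect(si)
```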

For example, CloudPhysics can identify VM I/O outliers on Cisco UCS blades for one customer by comparing against its global customer base. Based on that data, customers can create DRS rules or make manual changes to their VMware cluster.
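To show what "create DRS rules" can look like in practice, here's a hedged sketch that adds a VM anti-affinity rule to a cluster with pyVmomi. The cluster and VM objects are assumed to have been looked up already, and the rule name is invented for illustration.

```python
from pyVmomi import vim

def add_anti_affinity_rule(cluster, vms, rule_name="separate-io-heavy-vms"):
    """Sketch: keep I/O-heavy VMs on different hosts via a DRS anti-affinity rule.

    `cluster` is a vim.ClusterComputeResource and `vms` is a list of
    vim.VirtualMachine objects already retrieved from vCenter.
    """
    rule = vim.cluster.AntiAffinityRuleSpec(name=rule_name, enabled=True, vm=vms)
    rule_spec = vim.cluster.RuleSpec(operation="add", info=rule)
    config_spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec])
    # Returns a vCenter task; a real script would wait for it to complete.
    return cluster.ReconfigureComputeResource_Task(config_spec, modify=True)
```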

Data center telemetry

CloudPhysics focuses on the cluster provider, and currently the only supported provider is vCenter. Intel recently announced a new open source project named Snap with a broader focus: Intel would like to provide telemetry data to cluster controllers such as Kubernetes or Mesos, or to an application directly.

I spoke with Matthew Brender, a developer advocate in Intel's data center practice. Brender describes Snap as a telemetry framework more than an analytics platform: Snap is a universal framework for exposing low-level raw data to be analyzed.
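Snap itself is written in Go and has its own plugin API, so purely to illustrate the collect-and-publish idea Brender describes, here's a toy Python sketch of a pipeline that gathers raw counters and hands them to a publisher. None of the names below come from Snap's actual API.

```python
# Illustrative only: a toy collect/publish pipeline in the spirit of a telemetry
# framework like Snap. None of these names come from Snap's real API.
import json
import time

def collect_cpu_raw():
    """Collect raw per-CPU counters from /proc/stat (Linux)."""
    samples = {}
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("cpu") and line[3].isdigit():
                fields = line.split()
                samples[fields[0]] = [int(v) for v in fields[1:]]
    return {"timestamp": time.time(), "cpu_raw": samples}

def publish_to_file(metrics, path="/tmp/telemetry.jsonl"):
    """Append one batch of metrics as a JSON line; a real publisher might forward
    the data to a cluster controller or a time-series store instead."""
    with open(path, "a") as f:
        f.write(json.dumps(metrics) + "\n")

if __name__ == "__main__":
    publish_to_file(collect_cpu_raw())
```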

An example of telemetry data is the amount of L2/L3 cache consumed on a processor. Cache consumption is far more granular than a metric such as CPU utilization. This data is fed to a cluster management solution and, based on the telemetry, the cluster manager can decide where to place workloads for the best performance.
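To make the placement idea concrete, here's an illustrative sketch (not any real cluster manager's logic) that scores nodes on cache headroom rather than CPU utilization alone. The weights, thresholds, and field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class NodeTelemetry:
    name: str
    cpu_util: float           # fraction 0..1
    l3_cache_occupied: float  # fraction of L3 cache in use, 0..1 (e.g. from cache monitoring counters)

def pick_node(nodes, cpu_weight=0.4, cache_weight=0.6):
    """Prefer nodes with the most free L3 cache, then the most free CPU.
    The weights and scoring formula are arbitrary illustration values."""
    def headroom(n):
        return cache_weight * (1 - n.l3_cache_occupied) + cpu_weight * (1 - n.cpu_util)
    return max(nodes, key=headroom)

nodes = [
    NodeTelemetry("node-a", cpu_util=0.30, l3_cache_occupied=0.90),
    NodeTelemetry("node-b", cpu_util=0.55, l3_cache_occupied=0.20),
]
print(pick_node(nodes).name)  # node-b: busier CPU, but far more cache headroom
```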

Intel is hoping that Snap takes off as the primary framework. For Snap to add value to cloud infrastructures, broad vendor contribution is required. Interfaces to systems such as networking and storage require contributions from technical teams with in-depth knowledge of how to expose telemetry from those systems.

CloudPhysics and Intel are just two examples of the wide array of vendors aggressively going after this space. As data centers move to the cloud, analytics will play a significant role in maintaining data center efficiency. Next-generation infrastructures require data to reliably place workloads with a guaranteed level of performance, and data center analytics will be an important trend going into 2016.
