
IT teams hone Hadoop monitoring, governance to boost big data value
July 8, 2016

The IT team at Comcast Corp. leaves nothing to chance when it comes to managing the performance of its Hadoop data lake.

The data lake is a big body of information, spanning tens of thousands of CPUs and more than 30 petabytes of storage capacity. To keep it all running smoothly, IT has implemented proactive Hadoop monitoring and governance processes and a mix of cluster management tools.

Ensuring Comcast’s ranks of Hadoop users can run their applications in harmony “all starts with governance,” said Michael Fagan, principal big data architect at the Philadelphia-based TV and movie conglomerate. The management effort includes service-level agreements that set limits on Hadoop resource utilization by business units, plus automated enforcement mechanisms and monthly review meetings to assess performance and identify where improvements could be made.
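
Fagan didn’t detail how Comcast’s automated enforcement works, but the general pattern is simple to sketch. The short Python example below — a minimal illustration, not Comcast’s implementation — polls the YARN ResourceManager’s documented scheduler REST endpoint and flags any queue that exceeds an agreed utilization limit; the host name, queue names and thresholds are hypothetical.

```python
# Minimal sketch, not Comcast's implementation: poll the YARN
# ResourceManager's scheduler REST endpoint and flag any queue whose
# utilization exceeds an agreed SLA limit. Assumes the CapacityScheduler;
# the host, queue names and thresholds are hypothetical examples.
import requests

RM_URL = "http://resourcemanager.example.com:8088"      # hypothetical host
SLA_LIMITS = {"marketing": 25.0, "data_science": 40.0}  # % of cluster capacity

def check_queue_usage():
    resp = requests.get(f"{RM_URL}/ws/v1/cluster/scheduler", timeout=10)
    resp.raise_for_status()
    info = resp.json()["scheduler"]["schedulerInfo"]
    for queue in info.get("queues", {}).get("queue", []):
        name = queue["queueName"]
        used = queue.get("absoluteUsedCapacity", 0.0)
        limit = SLA_LIMITS.get(name)
        if limit is not None and used > limit:
            print(f"SLA breach: queue '{name}' at {used:.1f}% of cluster "
                  f"(limit {limit:.1f}%)")

if __name__ == "__main__":
    check_queue_usage()
```

In practice, a check like this would run on a schedule and feed the kind of enforcement actions and monthly reviews Fagan described.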

The importance of properly managing Hadoop clusters and governing both their usage and the data stored in them was a big topic of discussion at Hadoop Summit 2016 in San Jose, Calif., last week. Fagan and other speakers described effective Hadoop management as a must to get the kind of big data benefits organizations are looking for. And several vendors released new technologies designed to help automate cluster monitoring, management and governance tasks.

For example, conference co-organizer Hortonworks released a technical preview of an updated Hadoop distribution that integrates Atlas and Ranger, Apache open source technologies that can be used in tandem to assign metadata tags to data and enforce user-access policies. Hortonworks Data Platform (HDP) 2.5, slated for general release later this month, also adds capabilities for searching system logs and setting role-based access controls via Apache Ambari, an open source Hadoop management tool.
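
To illustrate the Atlas-plus-Ranger pattern, the sketch below uses Atlas’s REST API to define a classification (tag) and attach it to a data asset; a Ranger tag-based policy can then allow or deny access to anything carrying that tag. This assumes an Atlas release that exposes the v2 API; the host, credentials and entity GUID are placeholders.

```python
# Minimal sketch of the Atlas/Ranger tandem, assuming an Atlas release
# that exposes the v2 REST API: define a "PII" classification (tag) and
# attach it to a data asset by GUID. A Ranger tag-based policy can then
# allow or deny access to anything carrying the tag. The host, credentials
# and entity GUID below are hypothetical placeholders.
import requests

ATLAS_URL = "http://atlas.example.com:21000"  # hypothetical host
AUTH = ("admin", "admin")                     # placeholder credentials

# 1. Register the classification type.
typedefs = {"classificationDefs": [{
    "name": "PII",
    "description": "Personally identifiable information",
    "superTypes": [],
    "attributeDefs": [],
}]}
requests.post(f"{ATLAS_URL}/api/atlas/v2/types/typedefs",
              json=typedefs, auth=AUTH).raise_for_status()

# 2. Attach the tag to an existing entity (e.g., a Hive column).
entity_guid = "REPLACE-WITH-ENTITY-GUID"      # placeholder
requests.post(
    f"{ATLAS_URL}/api/atlas/v2/entity/guid/{entity_guid}/classifications",
    json=[{"typeName": "PII"}], auth=AUTH).raise_for_status()
```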

Hortonworks rival MapR Technologies launched the first component of its Spyglass Initiative — a project to create customizable dashboards for monitoring clusters built on its big data technology platform. In addition, MapR will now release updates of various open source tools it offers as part of the platform in quarterly “packs” to ease deployment; the first MapR Ecosystem Pack and the MapR Monitoring dashboards are both due this quarter. Meanwhile, data integration and analytics software vendor Pentaho published a reference-architecture blueprint for setting up connections to pull data into Hadoop data lakes.

Multiple sides to Hadoop management

Comcast runs both HDP and Cloudera’s Hadoop distribution in its data lake — for cluster management, it uses a combination of the Hortonworks-fueled Ambari, Cloudera Manager and software from Hadoop performance management startup Pepperdata. To pull together Hadoop monitoring data at a higher level, though, the company also built a homegrown management console called the Comcast Command Center.
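
The Command Center itself is proprietary, but the basic idea behind a unified console can be sketched. The hypothetical Python example below pulls HDFS capacity figures from an Ambari-managed cluster and from Cloudera Manager’s timeseries API and reports them side by side; hosts, cluster names and credentials are placeholders.

```python
# Minimal sketch of the idea behind a unified console, not Comcast's
# Command Center (which is proprietary): pull HDFS capacity figures from
# an Ambari-managed cluster and a Cloudera Manager-managed cluster through
# their REST APIs and report them side by side. Hosts, cluster names and
# credentials are hypothetical placeholders.
import requests

def ambari_hdfs_capacity(base_url, cluster, auth):
    # Ambari exposes NameNode JMX metrics under the HDFS service component.
    url = (f"{base_url}/api/v1/clusters/{cluster}/services/HDFS/"
           f"components/NAMENODE?fields=metrics/dfs/FSNamesystem")
    metrics = requests.get(url, auth=auth, timeout=10).json()
    fs = metrics["metrics"]["dfs"]["FSNamesystem"]
    return fs["CapacityUsed"], fs["CapacityTotal"]

def cm_hdfs_capacity(base_url, auth):
    # Cloudera Manager's timeseries endpoint accepts a tsquery string.
    params = {"query": "select dfs_capacity_used, dfs_capacity"}
    resp = requests.get(f"{base_url}/api/v12/timeseries",
                        params=params, auth=auth, timeout=10)
    resp.raise_for_status()
    return resp.json()  # raw timeseries; a real console would normalize it

used, total = ambari_hdfs_capacity("http://ambari.example.com:8080",
                                   "prod_hdp", ("admin", "admin"))
print(f"HDP cluster: {used / total:.1%} of HDFS capacity used")
print(cm_hdfs_capacity("http://cm.example.com:7180", ("admin", "admin")))
```

Normalizing answers like these into one consistent view is exactly the gap Harrison described the tools leaving behind.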

“While we could get answers from the different tools, it was very difficult to get consistent answers,” said Ray Harrison, a member of Comcast’s Hadoop platform team.

The data lake is a multi-tenant environment, with various users “coming together to play nicely in the same sandbox,” Harrison said. But that approach presents some performance management challenges. The Hadoop team this year deployed a 500-node cluster dedicated to advanced analytics applications for the company’s data scientists — an addition that became necessary because their efforts to find “unknown unknowns” in large data sets were driving hard-to-handle spikes in use of the existing cluster resources, according to Harrison.

To keep up with changes in cluster usage, Comcast typically updates its governance policies on resource utilization “many times over the course of a year,” Fagan said. He added that data governance is the next step: The Hadoop team is starting to move forward on a data governance program that will lean on the Atlas technology to help ensure different users work with consistent information.

Governance first, tech glory later

Data governance was the first priority for Blue Cross Blue Shield of Michigan in its development of a big data platform that went live in May. In another session at the conference, Beata Puncevic, director of analytics, data engineering and data management at the Detroit-based medical insurer, said her team began working on new data governance procedures and policies at the outset of the project in April 2015 — five months before it started implementing any technologies.

“If you don’t have a strong data governance process before deploying big data tools, it’s hard to catch up,” Puncevic said. The effort involved steps such as creating a business glossary with common data definitions, devising new rules on data usage, and addressing data quality and metadata management issues. “All the boring stuff,” she joked. “What we did at first had nothing to do with technology.”

Raw data is pulled into a Hortonworks-based Hadoop cluster and then refined for analysis through the data governance mechanisms. The system is initially being used to support analytics applications that involve pharmacy and clinical medical records, Puncevic said, adding that it likely will take another three to five years to fully build out the big data architecture.

Hadoop monitoring and governance are also high on the big data to-do list at the University of Texas MD Anderson Cancer Center, which put a Hadoop cluster running HDP into production in March. The Houston-based cancer treatment and research organization uses the cluster to store vital-statistics data on patients collected from bedside sensors; additional uses being considered include integrating data from different laboratory systems that aren’t connected to one another.

Traditional IT management, governance and security practices still apply in the big data environment, according to Vamshi Punugoti, associate director of research information systems at MD Anderson.

“From our perspective, there’s no reason to do anything differently,” he said. “Just because we’re starting the [big data] journey doesn’t mean we have to do it in a haphazard manner.”

This article was originally published on www.searchdatamanagement.techtarget.com, where it can be viewed in full.

