
Big data projects shake up the storage status quo
May 20, 2016 News

The entry of big data into enterprise data centers and business units changes traditional storage assumptions: a single big data file can be measured in terabytes or even petabytes. Big data on analytics platforms like Hadoop is processed in parallel, a distinct departure from the sequential processing of transactional data. Unsurprisingly, the storage considerations for big data change as well.
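
As a rough illustration of that difference, the sketch below splits one large file into chunks and analyzes them in parallel with worker processes. The file path, chunk size, and record-counting step are hypothetical stand-ins, and a real Hadoop job would distribute the chunks across cluster nodes rather than local processes.

```python
# Minimal sketch: analyzing one large file in parallel, chunk by chunk,
# instead of scanning it sequentially. Path and chunk size are hypothetical.
from multiprocessing import Pool
import os

PATH = "/data/big-dataset.csv"   # hypothetical input file
CHUNK = 128 * 1024 * 1024        # 128 MB, roughly an HDFS block size

def analyze(offset):
    """Stand-in analytic step: count newline-delimited records in one chunk."""
    with open(PATH, "rb") as f:
        f.seek(offset)
        return f.read(CHUNK).count(b"\n")

if __name__ == "__main__":
    offsets = range(0, os.path.getsize(PATH), CHUNK)
    with Pool() as pool:
        # Each worker reads and analyzes its own slice of the file in parallel,
        # rather than one process reading from start to finish.
        print("records:", sum(pool.map(analyze, offsets)))
```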

Nowhere is the change more noticeable than in the data analytics/high-performance computing (HPC) space dominated by Hadoop applications, which process petabytes of big data in parallel against analytic algorithms for data science and other complex inquiries. For HPC applications, it is difficult to consider concepts like virtualized or cloud-based storage, because physical processors and storage platforms need to be right there in the data center to process and store the data and the query results directly.

Consequently, the compute- and storage-intensive nature of the work precludes the economics of virtualization and the cloud that data center managers, including storage professionals, have pursued so keenly for the past decade. So, too, do the large single data sets characteristic of big data object storage, which uses metadata tags to describe non-traditional data such as photos, videos, audio recordings, and document images.
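
To make the metadata-tag idea concrete, here is a minimal sketch of storing a non-traditional object with descriptive tags in an S3-compatible object store via the boto3 client; the bucket, key, and tag names are hypothetical, not taken from the article.

```python
# Minimal sketch: uploading a video file with user-defined metadata tags
# to an S3-compatible object store. Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")  # assumes credentials are already configured

with open("training-session.mp4", "rb") as video:
    s3.put_object(
        Bucket="media-archive",                   # hypothetical bucket
        Key="videos/2016/training-session.mp4",
        Body=video,
        Metadata={                                # descriptive metadata tags
            "content-kind": "video",
            "department": "marketing",
            "recorded": "2016-05-01",
        },
    )
```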

In addition, the ownership of big data projects changes the storage calculus. If business units within the company are running big data projects, the move will be toward pockets of distributed, physical, network-attached storage (NAS) that can be scaled out across multiple storage devices as workloads dictate. Distributed, scale-out NAS is an alternative to cloud-based or virtual storage, so it runs counter to these popular IT trends.

Given these developments, what role does cloud play in big data?

The answer is cold storage, an area that enterprises still underexploit. Cold storage is extremely cheap, very slow, disk-resident storage for data that is filed off into archives and kept for safekeeping. In daily IT operations, there is little chance that this data will be needed, so it is convenient to move it to an offsite repository where it no longer takes up space in the data center or its operations. If that repository is in the cloud, the data can be accessed remotely from the data center, without anyone having to physically travel to an offsite facility to pick up disks or tapes.
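
As a hedged illustration, the sketch below shows how an administrator might push rarely used data into a cloud cold-storage tier with a lifecycle rule on an S3-compatible store; the bucket name, prefix, and 90-day threshold are hypothetical choices, not part of the original article.

```python
# Minimal sketch: a lifecycle rule that moves rarely accessed archive data
# to a cold-storage tier after 90 days. Bucket name and prefix are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="corporate-archive",                  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-cold-storage",
                "Filter": {"Prefix": "closed-projects/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```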

Cloud and virtual storage also have a potential role in the data marts that many departments and business units now use to run batch queries. The data in most of these data marts is batch created, and it is the same traditional data that departments have queried in the past. What is different is that users now have more analytics report creation tools and query options than before, and data administrators have more ability to generate data aggregated from different sources. In this batch environment, disk storage solutions work as effectively as they always have.
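
For illustration only, the following Python sketch shows the kind of nightly batch job such a data mart might run to aggregate two departmental sources; the file names, columns, and pandas-based approach are hypothetical rather than anything described in the article.

```python
# Minimal sketch: a batch job that joins two departmental extracts and
# aggregates them into a data-mart table. Files and columns are hypothetical.
import pandas as pd

orders = pd.read_csv("orders.csv")      # e.g. extract from the sales system
returns = pd.read_csv("returns.csv")    # e.g. extract from the support system

mart = (
    orders.merge(returns, on="order_id", how="left")
          .groupby("region", as_index=False)
          .agg(total_sales=("amount", "sum"),
               returned_orders=("return_id", "count"))
)

mart.to_csv("sales_mart.csv", index=False)  # consumed by reporting tools
```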

As storage administrators react to the changes brought on by big data, the most significant impact is accommodating the sheer size of extremely large big data files. This requires specialized disk and processing and, in most cases, on-premises storage that runs counter to cloud and virtualization initiatives. At the other end of the spectrum, commercial cold storage solutions can finally put an end to the dilemma of seldom-used data sitting unattended on outdated disk and tape drives in back rooms.

This article was originally published on www.techrepublic.com, where it can be viewed in full.
