Is Apache Hadoop the ‘Holy Grail’ for unstructured data?

Big Data will mean big year for Hadoop

FRAMINGHAM, MASS – You can’t have a conversation in today’s business technology world without touching on the topic of big data.

Simply put, it's about data sets whose volume, velocity and variety make them impossible to manage with conventional database tools. In 2011, our global output of data was estimated at 1.8 zettabytes (each zettabyte equals 1 billion terabytes). Even more staggering is the widely quoted estimate that 90 percent of the data in the world was created within the past two years.

Behind this explosive growth in data, of course, is the world of unstructured data. At last year’s HP Discover Conference, Mike Lynch, executive vice president of information management and CEO of Autonomy, talked about the huge spike in the generation of unstructured data. He said the IT world is moving away from structured, machine-friendly information (managed in rows and columns) and toward the more human-friendly, unstructured data that originates from sources as varied as e-mail and social media and that includes not just words and numbers but also video, audio and images.

Given the rise of big data, I'm sure you're hearing the buzz around Apache Hadoop, the software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes (each a thousand terabytes) of data. It certainly looks like the Holy Grail for organizing unstructured data, so it's no wonder everyone is jumping on this bandwagon. A quick Web search will show you that in just the past few months, companies including EMC, Microsoft, IBM, Oracle, Informatica, HP, Dell and Cloudera (to name a few) have adopted this software framework.
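How does Hadoop spread a job across thousands of nodes? Its core programming model, MapReduce, breaks work into a map phase that runs independently on each data split, a shuffle that groups intermediate results by key, and a reduce phase that merges them. As an illustration only (plain Python standing in for the model, not Hadoop's actual Java API; the document names and word-count task are made up), the classic word-count job can be sketched as:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document.
    On a real cluster, each node runs this over only its own data split."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as Hadoop does
    between the map and reduce phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: collapse each word's list of counts into a total."""
    return {word: sum(counts) for word, counts in grouped.items()}

# Two hypothetical "documents" standing in for files in HDFS.
docs = ["big data means big clusters", "big data needs hadoop"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # e.g. counts["big"] is 3 across both documents
```

Because each map call touches only its own slice of the data, the same three-step structure scales from this toy example to petabytes spread over a cluster.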

What I find even more notable is that companies such as Yahoo, Amazon, comScore and AOL have turned to Hadoop to both scale their businesses and lower storage costs.

According to some recent research from Infineta Systems, a WAN optimization startup, traditional data storage runs $5 per gigabyte, but storing the same data costs about 25 cents per gigabyte using Hadoop.
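Taken at face value, those per-gigabyte figures imply a 20x cost difference, and at the petabyte scale Hadoop targets, the absolute gap is what gets attention. A quick back-of-the-envelope check using the quoted numbers (and decimal units, 1 PB = 1,000,000 GB):

```python
GB_PER_PB = 1_000_000            # decimal units: 1 petabyte = one million gigabytes

traditional_cost_per_gb = 5.00   # per-GB figures as quoted from Infineta Systems
hadoop_cost_per_gb = 0.25

traditional = traditional_cost_per_gb * GB_PER_PB  # cost to store 1 PB traditionally
hadoop = hadoop_cost_per_gb * GB_PER_PB            # cost to store 1 PB on Hadoop

print(f"Traditional: ${traditional:,.0f} per PB")      # $5,000,000
print(f"Hadoop:      ${hadoop:,.0f} per PB")           # $250,000
print(f"Savings factor: {traditional / hadoop:.0f}x")  # 20x
```

At a single petabyte, the quoted rates work out to roughly $5 million versus $250,000.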

That’s one number any CEO will remember.
