Cloudera expands Hadoop ecosystem

With the new release of its Hadoop distribution, Cloudera has radically expanded the set of supporting tools for the data processing framework.

“What we saw was that most organizations deploy quite a bit more than just Hadoop. There is this whole ecosystem of other open source components that collectively make up the whole Hadoop ecosystem that people run in production today,” said Cloudera vice president of products Charles Zedlewski.

With version 3 of its Hadoop package, CDH3, Cloudera has added and integrated seven additional programs, all of which should help smooth the process of setting up and running Hadoop jobs, Zedlewski asserted.

“People will want to consume a complete system that has all been tested and integrated together,” Zedlewski said.

The prior version of Cloudera’s package included the core Hadoop programs, the Hive data warehouse software and the Pig data flow scripting language. The core Hadoop package itself contains the MapReduce distributed processing engine, the Hadoop Distributed File System (HDFS), and a set of assorted tools called Hadoop Common.
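
That division of labour is easiest to see in a small example. Below is a minimal sketch of the classic word-count job written against the standard Hadoop Java API, in which HDFS holds the input and output files and MapReduce runs the map and reduce steps; the class names, the paths and the word-count task itself are illustrative assumptions rather than anything specific to Cloudera’s distribution.

```java
// Minimal sketch of a MapReduce job using the standard Hadoop Java API.
// Class names, paths and the word-count task are illustrative assumptions.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map step: for every line read from HDFS, emit (word, 1) per word.
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce step: sum the counts emitted for each distinct word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```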

The new package includes additional programs such as a log collection and aggregation tool called Flume, a tool for moving data between Hadoop and relational databases called Sqoop, a Hadoop graphical user interface called Hue, and a distributed coordination and configuration service called ZooKeeper. All of the tools are open source, released under the Apache license.

First developed as an offshoot of the Apache Lucene search engine, Hadoop is a framework for processing large amounts of data scattered across multiple nodes. It is particularly well-suited for processing and analyzing the vast amounts of machine-generated data that can’t fit into standard relational databases.

The new distribution can streamline a lot of the work required to set up Hadoop jobs, Zedlewski said. He offered an example of how these additional tools could help speed clickstream analysis, which involves building records of how users click through different Web sites.

The source data for clickstream tracking comes from the activity logs of multiple servers. “Collecting clickstream data from 2,000 servers is not trivial,” he said. The data must be put into the Hadoop file system and then reorganized by each individual’s session. This “sessionization” process can involve 40 or more steps. After the material is organized, it must then be exported back out to a data warehouse or database in an easily accessible format.

This new version eliminates much of that scripting work by providing tools to get the data into Hadoop, to reorganize it once it is there, and to export the resulting data set back out again.
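
As an illustration of the first of those steps, the kind of hand-rolled copy that a collector such as Flume is meant to replace looks roughly like the minimal sketch below, written against the standard HDFS FileSystem API; the NameNode address, the local log path and the HDFS target directory are assumed values.

```java
// Minimal sketch of manually loading a server log into HDFS.
// The cluster address and file paths are assumptions for illustration only;
// this is the per-server copy step that a collector such as Flume automates.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LogLoader {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Connect to the distributed file system (assumed NameNode address).
        FileSystem fs = FileSystem.get(new URI("hdfs://namenode:8020"), conf);

        Path local = new Path("/var/log/httpd/access_log");          // assumed source log
        Path remote = new Path("/data/clickstream/raw/access_log");  // assumed HDFS target

        // Copy the local web server log into HDFS for later sessionization.
        fs.copyFromLocalFile(local, remote);
        fs.close();
    }
}
```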

The freely downloadable CDH3 package is compatible with the Red Hat, CentOS, SuSE and Ubuntu Linux distributions. It can also be run on the Amazon and Rackspace cloud services, and has been integrated with business intelligence and ETL (extract, transform and load) vendor tools, such as those offered by Informatica, Jaspersoft, MicroStrategy, Netezza and Teradata.
