Cluster computing rocks … when used to analyze them

Analyzing rocks may not be everybody’s idea of a fascinating avocation. But such analysis can translate into huge practical benefits, especially in the area of seismology – the science of earthquakes and their phenomena.

And now, thanks to high-performance computing clusters (HPCC), rock analysis just got a whole lot easier and faster for researchers at the University of Toronto’s civil engineering department.

This technology has sped up and simplified tasks such as listening for tremors in granite samples and collecting and analyzing that data.

The department’s rock fracture dynamics facility conducts research into how rock behaves when subjected to pressure, liquids, and changes in temperature.

The research is applied to understanding the structure and stability of buildings, bridges, dams, and mines during an earthquake, explained Paul Ruppert, director of strategic research systems at the University’s civil engineering department.

“The more we can understand the nature of the rock that you’re digging through, or drilling through, the better we can predict whether or not that structure is going to fail.”

Ruppert said the facility collects an enormous amount of data – 400 megabytes per second. “So we fill up our 6-terabyte array of data in about four hours.”
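As a rough sanity check of those figures (a sketch only, assuming decimal units, i.e. 1 terabyte = 1,000,000 megabytes), the quoted rate does fill a 6-terabyte array in roughly four hours:

    # Sanity check of the quoted acquisition figures (assumes decimal units: 1 TB = 1,000,000 MB)
    rate_mb_per_s = 400                       # quoted acquisition rate, in MB/s
    capacity_mb = 6 * 1_000_000               # quoted 6 TB array, in MB
    hours_to_fill = capacity_mb / rate_mb_per_s / 3600
    print(f"Array fills in about {hours_to_fill:.1f} hours")  # ~4.2 hours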

The problem arises when researchers want to run observations over a period of weeks to see the impact of prolonged pressure, said Ruppert.

They could perform such observations before installing the HPCC, he said; however, the observed events were limited in number – given the amount of data that had to be collected – and the data couldn’t be analyzed in real time.

“Until now, the best we’ve been able to do is grab the data stream and then do post processing because we didn’t have that kind of processing power available,” said Ruppert.

The facility’s HPCC is configured with 64 Dell PowerEdge 1950 two-socket servers equipped with 64-bit dual-core Intel Xeon processors, for a total of 256 processing cores. The cluster runs both Red Hat Linux and a Microsoft operating system based on Windows Server 2003, and provides 18.9 terabytes of disk storage and 320 gigabytes of total memory.
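The quoted core count follows directly from that configuration (a sketch based only on the figures above):

    # Core count implied by the quoted cluster configuration
    servers = 64
    sockets_per_server = 2
    cores_per_socket = 2       # dual-core Xeon
    total_cores = servers * sockets_per_server * cores_per_socket
    print(total_cores)         # 256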

Dell Canada provided in-kind funding that was matched financially by Ottawa, Ont.-based Canada Foundation for Innovation (CFI), a government agency that invests in research infrastructure.

Besides assisting the customer, Dell’s contribution to this research also benefits the community as a whole, said Debora Jensen, vice-president of Canadian advanced systems group at Dell.

With cluster computing, researchers can do near-real-time testing on samples (about six seconds behind real time) and actually visualize what’s going on inside the rock, said Ruppert. “The better model we have, the better we can predict what’s going on.”

He said graduate students are “running code that takes one day to execute on a regular desktop computer. That can cut almost a year off their master’s program, for example, by doing it 256 times on the cluster.”

Dell’s involvement in the HPCC space goes beyond merely providing customers with technology and services, said the company’s director of enterprise solutions, Reza Rooholamini. “We are using HPCC to assess technologies and architectures to help us realize this vision of a data centre of the future.”

That data centre of the future, he said, will be one where scalability, manageability, and efficient utilization are non-issues – and those desired attributes can be tested in an HPCC environment.

Dell promotes scaling out over scaling up when it comes to expanding IT infrastructure to meet mounting demands, Rooholamini said. With scaling out, a company can capitalize on existing infrastructure investments and not fear interrupted service if one machine goes down, because “with the scale-out model comes redundancy.”

HPCC may give the University of Toronto better real-time data analysis, but it’s not the end of the road just yet, said Ruppert. “Now we’re starting to see that we do have the computing capability to monitor in real time, but still can’t do it economically because we can’t put a supercomputer on every bridge in the world.”
