IBM Canada and the University of British Columbia (UBC) recently unveiled a new supercomputer – code-named “Monster” – that will help researchers predict and plan for avalanches, forest fires, earthquakes, cyclones and other natural disasters.
Located at UBC’s Geophysical Disaster Computational Fluid Dynamics (GeoDisaster) Centre in Vancouver, Monster is an IBM eServer xSeries-based Linux cluster that will deliver the most detailed weather mapping forecasts available to Canadian academic researchers. With a peak speed of 170 billion calculations per second, the installation ranks as the fourth most powerful computer in Canada, and 255th in the world, on the Top 500 list – the ranking of the world’s supercomputers.
Rather than following the more conventional supercomputing architecture of a small number of extremely powerful processors, Monster is powered by 264 Intel Pentium 1GHz processors, connected by a high-speed Myrinet 2000 interconnect and running Red Hat Linux 6.2. The system also employs an IBM FAStT500 storage server that provides 1 TB of fibre-attached storage. In addition, Monster’s 264 pizza box-shaped servers are controlled with networking technology that eliminates the complex wiring found in many supercomputer installations, allowing entire server farms and clusters to be administered from a single console connection and managed as one unit.
IBM Canada’s Dennis Staples, client manager for B.C. higher education and research, said moving to a Linux-based supercomputer was a natural extension of UBC’s existing technology.
“Most recently (the GeoDisaster Centre) had some smaller clusters to cut their teeth in this space and make sure that their applications ran the way they wanted them to in the Linux-Intel world. And they believed that (this configuration) was a cost-effective alternative for growing into this larger machine,” Staples said.
“The trend of supercomputing systems built on Intel or other commodity processors wired together and managed as one big machine is a growing one, so there are lots of other examples of these types of systems that have sprung up. If you take a look, for example, at the (last few) Top 500 lists there is a huge increase in the number of Intel-based Linux clusters on the list,” he said.
The GeoDisaster Centre has a very tight timeframe in which it must compile and analyze vast amounts of climatic and geographic information, and Staples said the Centre’s previous system offered a less-than-complete picture of evolving phenomena.
“Before Monster, with only 15 hours to run weather benchmarks based on daily data from Environment Canada and other sources (the researchers) were only able to look at B.C. from very high in the sky. That is, they could only do a forecast of B.C. at a five or 10 km grid level. So they needed heavy supercomputing to get down to much smaller grid points every day – a small enough grid that it actually gives them something useful about a region,” Staples said.
The biggest supercomputer in Canada is run by the Meteorological Service of Canada (MSC) in Dorval, Que., and ranks in the global top 100.