
Amazon unveils cluster computing service for HPC apps

Amazon Web Services says the cluster computing service it announced Tuesday can deliver the same results as custom-built infrastructure, giving organizations that run high-performance applications an alternative to building their own.

The service, called Cluster Compute Instances, runs on Amazon’s EC2 cloud computing infrastructure and can deliver up to 10 times the network throughput of current EC2 instance types, depending on the usage pattern, the company said.

EC2 is already used for large computing jobs such as genomics sequence analysis, vehicle design and financial modeling. But Amazon said its customers were asking for better network performance.

The service is aimed at tightly coupled parallel workloads and other applications sensitive to network performance that rely on node-to-node communication, typically high-performance computing applications, Amazon said. Cluster Compute Instances are billed by the hour.
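For readers less familiar with this class of workload, the sketch below is a hypothetical illustration, not Amazon code, of the node-to-node message passing such tightly coupled applications depend on. It uses MPI, a common HPC message-passing library, and assumes an MPI toolchain (mpicc, mpirun) is installed on the cluster; each process exchanges a value with its neighbour in a ring, the kind of latency-sensitive exchange that benefits from faster networking.

    /*
     * Hypothetical sketch of tightly coupled node-to-node communication:
     * each MPI rank exchanges a value with its neighbour in a ring.
     *
     * Example build and run (commands are illustrative):
     *   mpicc -O3 ring.c -o ring
     *   mpirun -np 8 ./ring
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;          /* neighbour to send to   */
        int left  = (rank - 1 + size) % size;   /* neighbour to recv from */
        int sendbuf = rank, recvbuf = -1;

        /* Each exchange is latency- and bandwidth-sensitive, which is why
           network throughput matters for this class of application. */
        MPI_Sendrecv(&sendbuf, 1, MPI_INT, right, 0,
                     &recvbuf, 1, MPI_INT, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d received %d from rank %d\n", rank, recvbuf, left);
        MPI_Finalize();
        return 0;
    }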

Cluster Compute Instances also specify the underlying processor architecture, which lets developers fine-tune their applications by compiling them for that specific processor for better performance, according to additional information on an Amazon Web Services Web page.
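As a rough, hypothetical illustration of that fine-tuning, the snippet below shows a simple C kernel and two example GCC builds: a generic one and one compiled with -march=native so the compiler can target the host’s actual processor. The file name, flags and kernel are illustrative choices, not Amazon’s published guidance.

    /*
     * Hypothetical illustration of compiling for a known processor;
     * flags and file name are examples, not Amazon's guidance.
     *
     *   gcc -O3               dot.c -o dot_generic   # baseline build
     *   gcc -O3 -march=native dot.c -o dot_tuned     # tuned to the host CPU
     *
     * With -march=native the compiler may vectorize the loop using the
     * SIMD instructions of the instance's specific processor.
     */
    #include <stdio.h>

    #define N 1000000

    static double a[N], b[N];

    int main(void)
    {
        for (int i = 0; i < N; i++) {
            a[i] = i * 0.5;
            b[i] = i * 0.25;
        }

        double sum = 0.0;
        for (int i = 0; i < N; i++)   /* hot loop the compiler can tune */
            sum += a[i] * b[i];

        printf("dot product: %f\n", sum);
        return 0;
    }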

On the specifications front, Amazon said its Cluster Compute Quadruple Extra Large instance is a 64-bit platform with 23GB of memory, 1,690GB of instance storage and 10 Gigabit Ethernet input/output performance, and it provides 33.5 EC2 Compute Units.

The default usage limit for the Cluster Compute instance type is eight instances, or 64 cores, although customers can request more.

Amazon has tested Cluster Compute Instances with the National Energy Research Scientific Computing Center (NERSC) at the Lawrence Berkeley National Laboratory. The lab found its HPC applications ran 8.5 times faster than on previous Amazon EC2 instance types, according to an Amazon news release.
