IT World Canada

Big data: Will your processors perform when everything’s on the line?

The reliability of critical systems can literally be a matter of life or death.

At Memorial University in Newfoundland, doctors use high-performance servers and analytics to speed up the turnaround time for testing and diagnosis of genetic diseases in high-risk patients.

It’s reduced the time to get test results from one year to less than 12 weeks, an improvement of 83 per cent. It means that patients receive the right treatment earlier, increasing their chances of recovery.

This type of detailed data analysis depends on high capacity processing power that is always available. As a recent IDC white paper points out, organizations must ensure their systems are prepared to handle the rapidly growing and increasingly mission-critical demands of the future.

The backbone of digital transformation
Data is often described as the new natural resource, one that renews at an exponential rate. According to IBM estimates, over 2.5 quintillion bytes of data are generated globally every day. The data-driven nature of digital transformation is creating demand for systems that can take in massive amounts of data quickly and reliably.
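For scale, that daily figure can be converted into more familiar units. The 2.5-quintillion-byte estimate comes from the article; the arithmetic below is simply unit conversion.

```python
# Convert IBM's estimate of global daily data generation into
# more familiar units. The 2.5 quintillion figure is the article's;
# the rest is straightforward unit conversion.
BYTES_PER_DAY = 2.5e18                            # 2.5 quintillion bytes

exabytes_per_day = BYTES_PER_DAY / 1e18           # 1 EB = 10^18 bytes
terabytes_per_second = BYTES_PER_DAY / 1e12 / 86_400  # spread over 24 h

print(f"{exabytes_per_day:.1f} EB per day")
print(f"~{terabytes_per_second:.0f} TB per second")
```

In other words, sustaining the global rate would mean ingesting on the order of tens of terabytes every second.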

Organizations that can effectively mine their data resources to improve service will have an advantage in the digital era. A Gartner study shows that 89 per cent of companies expect to compete primarily on the basis of customer experience. But, according to the IDC paper, it’s not enough to be able to react in real time.

“The value has now migrated to the ability to rapidly gather large amounts of data, quickly crunch it, and predict what’s likely to happen next,” says Peter Rutten, research manager, server solutions, IDC. “This is the start of the insight economy.”

Servers will do a lot of the heavy lifting to support digital transformation by intelligently leveraging massive amounts of data, says Rutten. The servers must also support the next-generation applications that will be the engines of the organization’s transformation. These applications require extremely flexible compute capacity, scalability, redundancy and high availability. The challenge for enterprises is that it’s difficult to predict how much processing power will be needed to manage peak requirements at crucial moments.


Planning ahead

As an organization starts its digital transformation, Rutten says there are key considerations to building a server backbone that will be strong enough to support your evolution over the next three to five years.

As you move workloads to the cloud, keep in mind that you may need the ability to run multiple operating systems. It will be necessary to operate in an open and hybrid environment as part of the journey. It’s equally important to have the ability to shift workloads between the cloud and on-premises equipment to manage spikes in traffic. In a fast-changing environment, the service needs to be elastic to easily scale up or down as needed.
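The scale-up-or-down behaviour described above can be sketched as a simple autoscaling decision. This is a minimal illustration with made-up thresholds and bounds, not logic prescribed by the article or by any particular cloud provider.

```python
def desired_instances(current: int, cpu_utilization: float,
                      low: float = 0.30, high: float = 0.75,
                      min_n: int = 2, max_n: int = 100) -> int:
    """Return the instance count an elastic service might target.

    The thresholds and bounds here are illustrative assumptions,
    not values from the article or any cloud provider.
    """
    if cpu_utilization > high:          # traffic spike: scale out
        target = current * 2
    elif cpu_utilization < low:         # quiet period: scale in
        target = max(current // 2, 1)
    else:
        target = current                # within the comfortable band
    return max(min_n, min(target, max_n))
```

For example, a service running 4 instances at 90 per cent CPU would double to 8, while the same fleet idling at 10 per cent would shrink by half, never dropping below the configured floor.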

To counter runaway expenses, organizations should deploy scale-out systems with very high utilization rates. “The servers need to be easy to manage and have to perform extremely well with data-intense workloads,” says Rutten. This approach means fewer physical systems in the data centre and reduced maintenance costs.
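The link between utilization and fleet size is simple arithmetic, sketched below with illustrative numbers of our own choosing (the article and the IDC paper do not give specific figures).

```python
import math

def servers_needed(total_load_cores: float, cores_per_server: int,
                   target_utilization: float) -> int:
    """Servers required to carry a load at a given target utilization."""
    usable_cores = cores_per_server * target_utilization
    return math.ceil(total_load_cores / usable_cores)

# Hypothetical example: a 1,000-core workload on 24-core servers.
lightly_used = servers_needed(1000, 24, 0.20)  # fleet run at 20% utilization
highly_used = servers_needed(1000, 24, 0.70)   # scale-out fleet at 70%
```

Under these assumed numbers, raising utilization from 20 to 70 per cent shrinks the fleet from 209 servers to 60, which is where the maintenance savings come from.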

Finally, the systems need to be secure and provide high availability across the stack “because downtime is not an option,” says Rutten. When it comes to mission-critical operations, the stakes are too high.

To learn more about IBM’s high-performance processor systems, download its ebook, “What’s possible with POWER8: Take advantage of world-leading performance, openness, and cost efficiency.”
