Hospital centralizes its data

As officials debate creating a national database of medical records, The Credit Valley Hospital in Mississauga, Ont., is already taking advantage of its new centralized storage solution.

Though medical professionals at Credit Valley Hospital have long been able to access patient data at their workstations, the hospital’s previous solution was both slow and difficult to manage. Data was pulled from the storage drives on each application’s server, and there was no centralized repository for all medical and corporate data. The result was disparate servers, slow access to patients’ charts and a data management nightmare.

Individual application servers needed to be monitored to make sure there was enough storage space, doctors were finding it took too long to access patient information stored on digital charts, and backups were taking longer than the available window.

By designing a solution around IBM’s Shark storage server, the hospital has solved these problems.

“What this is doing is bringing all of the data storage of the hospital into a single storage device,” said Dan Germain, vice-president and chief financial officer at the Credit Valley Hospital. This includes everything from electronic medical records to office application data. By next year it will also include other diagnostic information, such as CAT scans.

The data is also accessible more quickly, a key requisite for the medical profession. “When you have a paper chart and you flip pages, you can flip them really fast,” Germain explained. Before the Shark solution it was taking medical professionals up to 45 seconds to access each page. Now it is down to 10 seconds, he said. The increased access speed is due, in part, to the fact that most files are sitting in cache and not being pulled from disk. About 85 per cent of the hits are directly to the 8GB Shark cache, explained Leigh Popov, manager of technical services and telecommunications at the hospital.
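The effect of a high cache hit rate on average access time can be sketched with a standard weighted-average calculation. The 85 per cent hit rate is from the article; the per-request service times below are illustrative assumptions, not measured figures from the hospital.

```python
# Illustrative sketch: why a high cache hit rate cuts average access time.
# The 0.85 hit ratio is from the article; the service times are assumed.

def effective_access_time(hit_ratio, cache_time, disk_time):
    """Weighted average access time given a cache hit ratio."""
    return hit_ratio * cache_time + (1 - hit_ratio) * disk_time

# Assumed service times (milliseconds), for illustration only:
# 1 ms for a cache hit, 10 ms for a read that goes to disk.
avg = effective_access_time(0.85, cache_time=1.0, disk_time=10.0)
print(avg)  # → 2.35
```

Even with disk reads assumed ten times slower than cache hits, serving 85 per cent of requests from cache keeps the average close to the cache speed.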

Easier to manage

The management of hospital data has also been dramatically simplified. Prior to the Shark solution each application server had to be closely monitored to make sure there was enough storage space.

“While it seems to be relatively simple to deploy that way (individual servers for each application), it becomes very expensive to manage because you need people to manage each disk pool behind each server,” said Kyle Foster, general manager of storage sales with IBM Canada Ltd. in Markham, Ont. Fifty servers means 50 disk assets to manage, he said.

“Your manpower requirements to manage them goes up linearly, you don’t get economies of scale.”

Popov agreed. Now when a new application server is added, it is simply “pointed” to the Shark, and a system administrator allocates the amount of disk space it is entitled to. A network card in the server connects it, via Fibre Channel, to the Shark, and the server then seamlessly treats the Shark as its own storage disk.
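The allocation model described above can be sketched as a toy program. This is not IBM software, and the server names and sizes are hypothetical; it only illustrates the idea of carving slices for new servers out of one central pool instead of managing a separate disk pool behind each server.

```python
# Toy model (hypothetical, not IBM's software) of a single central
# storage pool from which each new application server gets a slice.

class StoragePool:
    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.allocations = {}          # server name -> TB allocated

    def free_tb(self):
        """Capacity remaining after all current allocations."""
        return self.capacity_tb - sum(self.allocations.values())

    def allocate(self, server, size_tb):
        """Carve out a slice of the pool for an application server."""
        if size_tb > self.free_tb():
            raise ValueError("not enough free space in the pool")
        self.allocations[server] = self.allocations.get(server, 0) + size_tb

# Article figures: 8 TB installed today, expandable to 56 TB.
shark = StoragePool(capacity_tb=8)
shark.allocate("email-server", 2)      # hypothetical servers and sizes
shark.allocate("charts-server", 3)
print(shark.free_tb())                 # → 3
```

The point of the design is visible in the sketch: one pool to monitor and one free-space number to watch, however many servers draw from it.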

As it stands the Shark has 8TB of storage. This can be upgraded to 56TB, all on the fly – something that’s important in the health care field.

“The zero downtime, that is huge in this field…it is a 24-hour profession [where] any downtime is viewed very negatively,” Popov said. “Being able to service a piece of equipment such as this, without having to disrupt services is a big thing.”

New storage is just slid into the rack. Backup speeds have also been improved. What used to take 18 hours is now down to seven. “It is a big drop…and that certainly helps things…such as mail and file and print that wouldn’t be finished backing up by the time morning came around,” he said. Users were hitting the system with requests when the system was still busy with the backup so queues would build up and people would have to wait, he added. In the morning “IT would get the complaints.”

Popov was also pleased with the ease and speed of the implementation. “It was actually pretty simple, the whole thing was in and up and running within a two week time frame,” he said. “It was one of the easiest ones I have ever done.”

“It has done everything it was supposed to do,” he added.

The implementation included IBM’s Enterprise Storage Server (Shark) Model 800 with 8TB of disk capacity, SAN Fibre Channel switches, an IBM NAS 300G model G26 gateway and a 3584 Ultrium LTO UltraScalable Tape Library for data backup. For implementations of this size, hardware costs traditionally run over $1 million, Foster said.