If you are a baby boomer, you might appreciate the following thought experiment: Imagine the difficulty in creating a comprehensive archive of all the music you ever listened to over the course of your entire life.
For starters, consider all the various formats your favourite albums have come in over the decades: vinyl records, cassette tapes, CDs, MP3 players and now iTunes. Then picture these items scattered in locked rooms in various parts of the house. It would be quite a feat to aggregate all this material into a single searchable platform, not to mention have the ability to play a hundred songs at a time while analyzing the collection and categorizing patterns of notes.
This analogy does not even begin to describe the daunting logistical challenges involved in mapping out the new frontier of personalized medicine.
Every single person has their own sheet music written in their DNA, and different cohorts of patients will respond better to certain cues or conductors than others. Scientific advances have recently made it possible to tailor treatment and prevention of disease to individuals’ molecular profiles, behavioural characteristics and environmental exposures. Not only that, but clinical outcomes can now be predicted more accurately and more quickly.
However, many biomedical research and clinical organizations still have a lot of orchestration to do when it comes to accessing, gathering, storing and analyzing the data they require to compile your profile. Not only does health information come from remote and disparate sources — whole genome sequences, biomedical images, electronic medical records, wearable sensors and clinical literature — it also grows exponentially. Furthermore, digesting the data is a taxing exercise, requiring intensive computations and the running of hundreds of applications at any given time.
So if someone told you there existed a platform that could cost-effectively address all of these current issues while anticipating future demands, it would probably be music to your ears. The good news is that this is what IBM Systems has done with the creation of a reference architecture for health care and life sciences.
Among its features is IBM Spectrum Computing, which allows you to draw on underutilized resources to optimize the data-crunching workload and to distribute it seamlessly across multiple servers and cloud environments. Other keys on the IBM pipe organ include software that supports the running of hundreds of genomic workloads and that simplifies the writing and sharing of genomic workflow scripts.
Also noteworthy is the development of PowerAI, a deep learning toolkit. A growing number of organizations conducting biomedical research are applying such techniques to uncover predictive patterns within vast sets of complex, unstructured data.
The accumulation of Big Data is like a cathedral of knowledge under construction. It’s complex, but IBM’s reference architecture can provide the scaffolding, storage space, growth capacity and processing power you need to succeed. It’s highly flexible, scalable — and economical. Simply put, it closes the gap between the scientifically possible, the technically feasible and the financially affordable.
If you are interested in learning more about how IBM Systems can apply savvy solutions to the cutting-edge field of personalized health care, please download this white paper.