Bullet-point brief: Tim Vincent, IBM Canada DB2 chief architect

Tim Vincent, IBM Fellow and chief architect for DB2 for Linux, UNIX and Windows, spoke to ComputerWorld Canada about the evolution of DB2 going back to 1992.

Vincent also touched on a number of other topics concerning databases, including the latest version of DB2, the impact of emerging technologies, the supposed death of databases, and the future of databases.

-On DB2 version 9.7:

“We focused on several areas. Some of them are continuations of the evolution, like resource optimization is a key thing. If you look at cloud computing, one of the big emphases is how do I actually utilize my underlying physical resources as optimally as possible. So enhancements in compression are one good example. The other aspect is the optimization of the storage bandwidth. We built in some technologies that are really optimizing the I/O so we actually drive less I/O. The next theme that we talk about is around flexibility, making sure schema evolution is really easy to do, making sure your data lifecycle management is as optimal as possible … and that ties to storage costs as well. The next theme that I like to think about is service level confidence … so we put a lot of emphasis there on resilience features, performance, and monitoring feedback. From an XML perspective, you can evolve your schema without touching the static nature of your relational tables.”
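The link Vincent draws between compression and I/O reduction can be sketched in a few lines. This is an illustrative analogy only, using Python's standard `zlib` rather than DB2's actual compression algorithm, and the 4 KB page size is an assumption: when repetitive row data compresses well, the same logical data fits in fewer physical pages, so fewer page reads are driven.

```python
import zlib

# Hypothetical, repetitive row data of the kind that compresses well.
# (Illustrative only -- not DB2's compression format or algorithm.)
rows = b"".join(b"order-%05d|CA|shipped|" % i for i in range(1000))
compressed = zlib.compress(rows)

PAGE = 4096                                  # assumed 4 KB page size
pages_before = len(rows) // PAGE + 1         # page reads, uncompressed
pages_after = len(compressed) // PAGE + 1    # page reads, compressed

print(f"ratio: {len(compressed) / len(rows):.2f}, "
      f"page reads: {pages_before} -> {pages_after}")
```

Fewer physical pages per logical row is exactly the "drive less I/O" effect: the compression work happens in CPU, which is usually cheaper than the disk reads it avoids.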

-On databases and new technologies, like Web 2.0, new storage devices, mashups:

“In the area of application development and new interfaces, Web 2.0 is obviously an important area. Now we are evolving more into new Web 2.0 paradigms, so we have interfaces for Ruby on Rails, PHP and Perl, as well as some frameworks and object-relational mapping technologies.
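The object-relational mapping idea Vincent mentions can be shown with a minimal sketch: a class maps to a table and an instance maps to a row. This is a toy illustration in Python against an in-memory SQLite database, not the behaviour of any of the frameworks named above; the `Customer` class and `save` helper are hypothetical.

```python
import sqlite3
from dataclasses import dataclass, fields, astuple

@dataclass
class Customer:          # hypothetical mapped class: one class = one table
    id: int
    name: str

def save(conn, obj):
    """Insert a dataclass instance as a row, deriving table and columns."""
    table = type(obj).__name__.lower()
    cols = [f.name for f in fields(obj)]
    placeholders = ", ".join("?" for _ in cols)
    conn.execute(
        f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})",
        astuple(obj),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
save(conn, Customer(1, "Acme"))
row = conn.execute("SELECT id, name FROM customer").fetchone()
print(row)
```

Real O/R mappers add far more (identity maps, lazy loading, migrations), but the core contract, objects in application code backed by rows in the database, is the one sketched here.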

I think there’s a very exciting change happening in the storage world. Today everybody is still using traditional magnetic spinning disks … in the database world this is a really important aspect because when you are sizing out your system, whether it be transactional or warehouse, you have to make sure you have very good I/O bandwidth, and the way you get good bandwidth is to make sure you have lots of spindles. So people spend a lot of time worrying about the storage layer, the spindle layout and what’s happening in the storage world. (Flash) is moving into the enterprise world where it’s being used as a storage medium. The benefit of Flash is that it gives very good sequential I/O bandwidth improvements.
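The spindle-count point is back-of-envelope arithmetic: divide the bandwidth the workload needs by the throughput of one device. The figures below are illustrative assumptions, not vendor specifications or numbers from the interview.

```python
# Assumed, illustrative throughput figures -- not vendor specs.
required_mb_per_s = 2000   # scan bandwidth a warehouse workload needs
disk_mb_per_s = 50         # one magnetic spinning disk
flash_mb_per_s = 250       # one flash device

# Ceiling division: devices needed to supply the required bandwidth.
spindles_needed = -(-required_mb_per_s // disk_mb_per_s)
flash_devices_needed = -(-required_mb_per_s // flash_mb_per_s)

print(spindles_needed, flash_devices_needed)   # 40 disks vs 8 flash devices
```

Under these assumptions, hitting the same bandwidth target takes far fewer flash devices than spindles, which is why the shift matters for database sizing.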

Mashup technologies – situational applications – are an emerging area. People are trying to build and pull together multiple data sources into a simple application. What’s important from the database perspective is that it could actually put a real explosion of workload onto the system. What’s important is how do you actually deal with the introduction of these types of workloads and manage them while maintaining your service level agreements with all your other business users.”
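One common way to keep an unpredictable new workload from breaking existing service level agreements is admission control: cap how many of the new queries run concurrently while leaving SLA-covered work unthrottled. The toy sketch below illustrates that idea with a semaphore; the names and the limit are hypothetical, and this is not how DB2's workload manager is implemented.

```python
import threading
import queue

MAX_CONCURRENT_MASHUP = 2                     # assumed concurrency cap
mashup_slots = threading.Semaphore(MAX_CONCURRENT_MASHUP)
completed = queue.Queue()

def run_query(name, is_mashup):
    if is_mashup:
        with mashup_slots:                    # mashup queries wait for a slot
            completed.put(name)
    else:
        completed.put(name)                   # SLA queries are never throttled

threads = [
    threading.Thread(target=run_query, args=(f"q{i}", i % 2 == 0))
    for i in range(6)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(completed.qsize())                      # all 6 queries complete
```

The point is the policy, not the mechanism: new workloads are admitted, but at a rate that protects the bandwidth promised to existing users.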

-On the supposed death of databases:

“You can draw an analogy with the death of mainframes. We’ve heard of the death of mainframes many times. With databases, we’ve heard this before back in the late 90s when object databases were going to kill the relational database. That didn’t happen. There is so much legacy these days. The concept of legacy is important. We can draw an analogy with COBOL: there are probably still as many lines of active COBOL code in the enterprise as there are of Java code, and it still drives a lot of systems. So, yes, there’s going to be emerging technologies. Yes, there’s going to be open source databases that will commoditize some aspects of this technology. But the reality is, (databases are) still critically important to the enterprise. There is so much of the operational workload sitting on top of the databases, and as people are dealing with things like regulatory compliance and business optimization, there are new and powerful workloads, especially in the analytical space, that are going to continue to drive database technology very hard. So I think the death of the database is still some way in the distance.”

-On the future of databases:

“In the future I think the transactional world will continue to look somewhat similar to what it does today. The concept of rapid change is going to be important. XML and other technologies are going to really come into play in the area of allowing people to build applications and rapidly change those applications. Cost is going to always be a factor. But the analytical space is where we are going to see a lot of interesting changes; people are going to have more and more data. We’re going to see databases that are going to be growing up into the petabyte range, and I’m talking not only about overall data but petabytes of active data. People are going to be doing lots of complex workloads on there.

All these technologies and all these trends around compliance and data growth are going to push the envelope from a size perspective and usage patterns around databases.”
