Software comes of age

Without software, life as we now know it would grind to a halt. There would be no Web, no e-commerce, and no way to manage today’s incredibly complex business and manufacturing environments. In biological terms, software has become a keystone species: everything else benefits from it; everything else depends on it.

As a software designer for over 25 years, I find the growing importance of software immensely gratifying. I also find it an awesome responsibility — a responsibility the development community is failing to embrace.

Software is like a teenager on the cusp of adulthood: it has grown immensely in recent years, but we are only beginning to glimpse its true potential. And to achieve that potential, it must clean up its act. In practical terms, software developers and managers must ramp up their efforts to ensure the products they unleash on the world aren’t compromised by poor design, bad code or malevolent hackers. Developers must assume that, in today’s increasingly complex and highly connected environments, the unexpected will occur. From day one, they must embed the appropriate safeguards into their applications. The world, with its profound reliance on software, will demand no less.

The situation today is reminiscent of the 1930s, when France completed a masterpiece of military engineering called the Maginot Line. Bristling with over 50 forts, it provided the country’s eastern frontier with a virtually impregnable line of defense, until, one day, the German army simply walked around it.

Sadly, when it comes to software reliability and security, the “Maginot mindset” reigns supreme. Applications, even operating systems, are still being designed with the tacit — and erroneous — assumption that bugs and malware won’t get around the verification efforts, authentication protocols and other protective measures that constitute software’s Maginot Line. (Though, admittedly, even these measures are often employed halfheartedly, if at all. Think log fences rather than stone fortresses.)

The reality on the ground is very different. Hard-to-detect programming errors make their way past test and verification teams and into the final product, as anyone who has experienced the blue screen of death will attest. Viruses and hackers, meanwhile, can infiltrate a networked system using tactics that the system’s designers didn’t, or perhaps couldn’t, anticipate. As systems everywhere become more software-intensive and more connected, the potential for such vulnerabilities will only increase. And not just on desktops and servers, but in billions of embedded devices as well.

What’s at stake here isn’t simply the protection of applications or data. The very ability of software to usher in the next wave of innovation hangs in the balance.

Take the automotive industry, for instance. A new generation of in-car telematics and infotainment devices is hitting the streets, offering everything from CD ripping to 9-1-1 emergency dialing to realtime traffic updates. To succeed, such devices must connect to the outside world, whether to download updated Bluetooth stacks or to access new multimedia codecs. Moreover, consumers will expect these systems to interact with a variety of other personal devices, including MP3 players, USB storage keys and digital media cards.

The question is, how can critical software components in such an environment be updated safely? And how can the existing behavior of the telematics device be guaranteed, even when it downloads software or data from a potentially untrusted source?
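To make the first question concrete, here is a minimal sketch of the kind of integrity check a device could run before installing a downloaded component. It assumes OpenSSL for the hashing, and the file name and expected digest are placeholders; a production system would verify a full public-key signature from the vendor, not just a bare hash:

```c
/* Minimal sketch of an update integrity check. Assumes OpenSSL
 * (link with -lcrypto); "codec_update.bin" and the expected digest
 * are placeholders, not any real platform's update format. */
#include <openssl/evp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Compute the SHA-256 digest of a downloaded component. */
static int sha256_file(const char *path, unsigned char md[EVP_MAX_MD_SIZE],
                       unsigned int *md_len)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);

    EVP_DigestFinal_ex(ctx, md, md_len);
    EVP_MD_CTX_free(ctx);
    fclose(f);
    return 0;
}

int main(void)
{
    /* In practice the trusted digest (or better, a signature) would
     * arrive from the vendor over an authenticated channel. */
    static const unsigned char expected[32] = { 0 /* trusted digest */ };

    unsigned char actual[EVP_MAX_MD_SIZE];
    unsigned int len = 0;

    if (sha256_file("codec_update.bin", actual, &len) != 0) {
        fprintf(stderr, "cannot read update\n");
        return EXIT_FAILURE;
    }
    if (len != sizeof expected || memcmp(actual, expected, len) != 0) {
        fprintf(stderr, "digest mismatch: refusing to install\n");
        return EXIT_FAILURE;    /* untrusted code never runs */
    }

    puts("update verified; safe to install");
    return EXIT_SUCCESS;
}
```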

Similar challenges are being posed by Web services. On the one hand, they hold immense potential for simplifying the task of monitoring, configuring, and provisioning remote devices, from industrial controllers to telematics systems to HVAC control units. On the other hand, this connectivity opens the possibility that such devices will be infiltrated by malevolent parties or applications.

Fortunately, solutions are at hand. There are approaches to persistent storage, for instance, that can place “bubbles” around files and memory, thereby preventing unauthorized access by rogue processes. Likewise, there are approaches to partitioned scheduling that can prevent poorly written or malicious code from starving critical tasks of CPU time. Using such approaches, a device can continue to behave correctly, even if it has downloaded code that is trying to launch a denial-of-service attack.
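To illustrate the scheduling idea, here is a toy simulation, not any particular kernel’s API, of two partitions sharing a fixed 100 ms window. The budget figures are invented for the example; the point is that even when downloaded code demands the entire window, it never runs past its allotted share, so critical tasks cannot be starved:

```c
/* Toy simulation of partitioned (budget-based) scheduling, written to
 * illustrate the concept rather than any real kernel's API. Two
 * partitions share a repeating 100 ms window; neither may run past
 * its budget, so a rogue partition cannot starve its neighbour. */
#include <stdio.h>

#define WINDOW_MS 100

struct partition {
    const char *name;
    int budget_ms;   /* guaranteed share of each window    */
    int demand_ms;   /* CPU time the code tries to consume */
};

int main(void)
{
    struct partition parts[] = {
        { "critical tasks",  70, 40 },        /* well-behaved     */
        { "downloaded code", 30, WINDOW_MS }, /* tries to hog CPU */
    };

    for (int window = 1; window <= 3; window++) {
        printf("window %d:\n", window);
        for (int i = 0; i < 2; i++) {
            /* The scheduler grants what is demanded, capped at
             * the partition's budget. */
            int granted = parts[i].demand_ms < parts[i].budget_ms
                            ? parts[i].demand_ms
                            : parts[i].budget_ms;
            printf("  %-15s demanded %3d ms, ran %3d ms\n",
                   parts[i].name, parts[i].demand_ms, granted);
        }
    }
    return 0;
}
```

A real partitioned scheduler enforces the same cap inside the kernel, on every timeslice, rather than in a loop like this one.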

And let’s not forget protected-mode computing. It’s a critical first step to ensuring the reliability of virtually any software-rich device. Yet many device designers and application developers, especially those in the embedded space, still fail to embrace memory protection, even though it can contain faults and prevent errant processes from corrupting the code or data of other processes. With the proliferation of low-cost embedded processors equipped with a memory management unit (MMU), such protection is becoming increasingly affordable. In fact, developers of connected devices must seriously consider whether they can afford not to use it.
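What does that protection buy in practice? The small POSIX sketch below, which should behave the same on any Unix-like system with an MMU, forks an errant process that writes through a wild pointer. The hardware traps the write, and the operating system kills only the offending process; the other process’s data remains intact:

```c
/* Small POSIX demonstration of fault containment via memory
 * protection. The wild address is arbitrary; any Unix-like system
 * with an MMU should trap the write and kill only the offender. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char critical_data[] = "device state: OK";

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Errant "process": scribble through a bogus pointer.
         * The MMU traps this write; the next line never runs. */
        volatile char *wild = (char *)0xdeadbeef;
        *wild = 42;
        _exit(EXIT_SUCCESS);
    }

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        printf("errant process killed by signal %d (SIGSEGV = %d)\n",
               WTERMSIG(status), SIGSEGV);

    /* The fault was contained: our memory is intact. */
    printf("%s\n", critical_data);
    return 0;
}
```

Without an MMU, that wild write would land wherever the address happened to fall, possibly in another task’s code or data, and the failure would surface later, somewhere else entirely.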

Of course, I don’t think for one minute that the above techniques serve as a substitute for best development practices. Developers must also employ every tool and methodology at their disposal to ensure their code is clean, modular, efficient, thoroughly tested and well-protected. The problem is, no one has developed a method to create 100% bug-free code. And no test suite can possibly exhaust every scenario that a complex software system may encounter, partly because the number of potential scenarios in such systems is almost limitless.

Thus, despite all reasonable precautions, faulty code or disgruntled hackers can find their way into our systems. Rather than pretend this won’t happen, software developers, designers and managers should adopt a “mission-critical” mindset and build systems to contain, and intelligently recover from, such problems. Never assume the fortifications will hold.

In short, we must adopt a split personality. First, do everything possible to ensure problems won’t occur. Then, assume they will occur anyway, and take appropriate measures. As a keystone species, software is too important to be created any other way.

Until recently, advancements in computing have been riding on the back of hardware and chip design. But as Moore’s Law slowly loses steam, software is moving to center stage. From here on, it is software that will drive innovation — provided it’s designed with the rigor, forethought and safeguards commensurate with its burgeoning importance.

Goodbye bloatware and blue screen of death. Hello best practices and self-healing systems.

Dan Dodge is the CEO of QNX Software Systems.
