Colleges and companies fail computer security

The security holes exploited by Code Red and Nimda, worms that experts said had the potential to knock the entire Internet offline, were long-standing vulnerabilities in Microsoft Corp.’s IIS (Internet Information Services) Web server software, all caused by a single class of coding error: the buffer overflow.

A buffer overflow occurs when a program writes more data into a fixed-size block of memory, a buffer, than the block was allocated to hold, overwriting adjacent memory with often unpredictable results. Frequently, however, buffer overflows allow attackers to run any code they choose on a target machine.

When Code Red and Nimda struck last year, many security experts were left to wonder why the vulnerabilities hadn’t been patched. A better question to ask might be why buffer overflows, a class of error that has been known and avoidable for at least 30 years, are still cropping up with great regularity in modern software.

Buffer overflows have already made their presence widely felt in 2002. Microsoft issued patches to fix them in February. Database maker Oracle Corp. had multiple buffer overflows identified in its products in early February. Sun Microsystems Inc.’s Solaris operating system had one in January. America Online Inc. had buffer overflows in two of its chat programs in January. Though few of the vulnerabilities amounted to anything serious, they easily could have.

A combination of pressures exerted by companies and consumers, educators and students, merges to create a situation in which the techniques that can stop persistent security holes, such as buffer overflows, are known but aren’t used or taught nearly enough. Consumers say they want security, but instead buy cheaper products with more features. As a result, vendors have less incentive to create more secure products, and in turn colleges and universities, the bodies that supply vendors with talent, see little demand from companies for more security skills in their students.

This matrix of factors, one that touches on nearly every aspect of the computer industry, makes buffer overflows a common problem. The Software Engineering Institute (SEI), an organization that studies the processes used to build software, maintains a database of security vulnerabilities in which buffer overflows account for more than 15 percent of all vulnerabilities and for 75 percent of the top 10 most serious vulnerabilities, according to Shawn Hernan, SEI team leader for vulnerability handling.

The question of how to create more secure products is one that vexes even the largest software companies. In January, prodded by a memo from co-founder Bill Gates, Microsoft unveiled its Trustworthy Computing initiative designed to make its products more secure.

“We’ve done a terrific job at (adding features), but all those great features won’t matter unless customers trust our software,” Gates wrote in his memo. “So now, when we face a choice between adding features and resolving security issues, we need to choose security.”

One place to look into the question of why buffer overflows are still so common is where programmers are trained — colleges.

“One of the problems is that the educational establishment generally (doesn’t) teach secure programming at the undergraduate, or even graduate, level,” said the Software Engineering Institute’s Hernan.

A number of colleges known for their computer science programs offer, at best, only the most basic security classes, and few on writing secure code.

At Carnegie Mellon University (CMU), “I don’t think security (as) defense from malicious attacks is ever explicitly covered” in the first two years of an undergraduate degree, said Jim Morris, the dean of the School of Computer Science at Pittsburgh’s CMU.

Though CMU offers a “fairly rigorous, but broad” undergraduate computer education, there are currently no security courses at that level, Morris said. Students spend the first three to four semesters working on foundational issues such as data structures and functional programming, he said.

Carnegie Mellon is not alone. Syracuse University offers no specific security courses to undergraduates, though security is a topic covered in four to five courses at that level, according to Steve Chapin, associate professor of electrical engineering and computer science and director of the Center for Systems Assurance at Syracuse, located in Syracuse, New York.

Stanford University is one school that stands out in this regard. Stanford, in Palo Alto, California, offers a program called the Security Lab, in which nine or 10 faculty members work with undergraduates on security issues, said Dan Boneh, assistant professor of computer science and electrical engineering. Part of the lab’s mission is to teach students about secure code and how to maintain it, he said.

Despite the lack of focus on security at the undergraduate level, graduate programs at all three schools address security. In addition, the U.S. National Security Agency runs a program called Centers of Academic Excellence in Information Assurance in Education that helps colleges and universities bolster security curricula through training. Both Stanford and Syracuse are members of that program, along with more than 20 other schools including Princeton University in Princeton, New Jersey, and Purdue University in West Lafayette, Indiana.

The paucity of secure coding education for undergraduates means that colleges and universities bear some responsibility for the persistence of security holes, said Syracuse’s Chapin.

Despite this, students are becoming increasingly drawn to security, said Stanford’s Boneh. Not only are his students starting to see security as an important subject, they are also being asked about security in job interviews, he said.

Shawn Hernan of the Software Engineering Institute sees the matter differently, however. Industry, he said, is pushing colleges to make sure their students can write code to correctly achieve a goal the first time, without a concern for security.

“Rarely are programs examined for their quality,” he said, adding that they are instead judged on what a program does and whether it does it to specification.

The Software Engineering Institute has published a list of software development techniques that, Hernan said, can help developers avoid these mistakes and write more secure code. A more widespread adoption of those techniques will help cut down on common vulnerabilities, he said.

But that may not be enough.

“Technology companies are responding to consumer demand,” he said. “What people say they want is security. What they buy is something that works right out of the box,” whether it’s secure or not.

Security tests and audits that might otherwise be performed are skipped because of market pressures: the drive to ship and the need to provide features that the competition doesn’t, said Stanford’s Boneh.

Such an auditing process is a crucial step in the development of secure software, according to Izhar Bar-Gad, chief technology officer of application security firm Sanctum Inc., based in Santa Clara, California. Developers ought to go through three security audit phases, he said: developer audits, quality assurance audits and external audits.

The three steps are “a good process for reducing the number of bugs, but this will not eliminate them,” Bar-Gad said.

Developers and the companies they work for aren’t the only groups that have a say in how many security holes, especially well-known ones, make it through development to the finished product, according to Arthur Wong, chief executive officer of SecurityFocus Inc.

“The important people that need to get involved here are consumers,” Wong said.

“Consumers are very ill-informed … when it pertains to the software that they buy (and security),” he said. “I’ve never heard anyone ask a vendor, ‘is your software secure,’ unless you’re selling a firewall.”

“If consumers ask for it, there will be vendors out there who fulfill that need,” he said. “What we shouldn’t accept as consumers … are the standard vulnerabilities being found in software.”

Carnegie Mellon’s Jim Morris agreed, pointing to other industries, such as the automobile industry, in which consumer action led to the government imposing stricter safety standards.

But the government may not solve all the problems that persistent flaws in code pose. There have to be changes to the software development process, Hernan said. The software development industry needs to create feedback loops for developers, so that they learn about and learn from their mistakes and so that they don’t feel the need to constantly invent new ways to perform old tasks, he said.

Civil engineering is one “real world” discipline Hernan suggests software engineers could turn to for these lessons. Civil engineers serve apprenticeships and take legal responsibility for their products, he said.

Many security mistakes made in software are “the computer equivalent of forgetting to put the last screw in the hinge (on a door),” he said. Such mistakes would never be acceptable in the physical world and applying disciplines and standards from the physical world would help to cut down on them in software, he added. But the problems run deep, he said.

“There are so many things that need to change about software in order for this problem (of secure software) to substantially improve, real solutions are years off,” he said. “These are deep, systemic problems that don’t lend themselves to trivial solutions.”