The future isn’t what it used to be. In The Jetsons vision of yesteryear, humans lived harmoniously with gleaming technology that worked perfectly and solved every problem. Today, movies like The Matrix portray humans stripped of all individuality, reduced to batteries plugged into a malevolent techno-infrastructure. These evolving visions reveal deeply rooted hopes and fears people have about controlling technology, and human factors like these will shape the information security of the future.
Cybercrime in general and identity theft in particular tap into the nameless fears people have about technology. Yet identity theft has existed since biblical times, and deceit, exploitation and betrayal of trust are even older. The wildcard technology has added is scale. Today, information systems are so pervasive that verification mechanisms must be integrated into almost every everyday interaction.
“We’re great at handling scale in the digital world in a technical sense, but not in human networks. Social networks don’t scale,” says Michael Schrage, senior adviser to MIT’s Security Studies Program. “Most people don’t want to go through life mistrusting others. But the harsh fact is that security systems work only if you institutionalize a certain degree of distrust.” Schrage points out that institutionalized distrust is already the norm in many scenarios. No one questions showing a passport at the airport or a driver’s licence at the video store. Contract law now governs agreements that a man’s word and a handshake were once enough to seal.
Such social schemes have evolved as an acceptable part of life in many areas. “How to cost-effectively balance digital and social network security is the real question for information security going forward,” says Schrage.
So what is technology’s proper role in this equation? The everyday decisions a system’s users make have an impact on security. People make subjective decisions about a piece of information: whether it should be treated as sensitive, and whether it can be shared.
“In the future, a user’s ability to make these decisions will be restricted with technologies such as digital watermarking and meta-data filters,” says Aron Feuer, President of Cygnos IT Security. “We can’t depend on humans and must provide supplementary controls where subjective decisions are made by individuals. Technology is part of the solution, as there’s a large portion of information that it allows us to address. We need to ensure controls exist so some of the decision points that people make are verified either electronically or manually.”
Industry experts say security will be designed right into systems in the future instead of retrofitted as it is today. But how can security be built into new, unknown technologies without observed patterns of use and abuse as a guide? “Security should be control-driven,” says Feuer.
“We should not ask if technology is capable of meeting security requirements, but instead flip the question around: Does the system have the preventative, detective and corrective controls needed so that when the system is compromised – and we assume it can and will be – we can at least stop the breach, detect it and know how to correct it afterwards?”
Feuer points out that there are some novel approaches underway that capitalize on human strengths instead of fighting weaknesses. The core of the pesky password problem, for example, is that there are cognitive limitations to people’s ability to remember random alpha-numeric strings. However, people excel at recognizing and remembering faces.
Real User, a Baltimore-based software company, has developed a system of pictorial passwords that assigns a series of faces to each user as a password. At logon, the familiar faces are displayed alongside eight other randomly generated faces; if the correct sequence of faces is chosen, the user is granted access.
The beauty of this approach is that users can easily remember their pass-pictures, yet a series of faces cannot readily be written down or communicated to others. “If I were to do a penetration audit, I’d say that’s a pretty good model compared to, say, a strong password system we haven’t been able to crack but where users can easily divulge their passwords if they choose,” says Feuer.
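The scheme described above can be sketched in a few lines of code. This is a minimal illustration only, not Real User’s actual implementation: the face pool, the four-face password length and the helper names are all assumptions made for the sake of the example. A user is enrolled with a random sequence of faces; each logon round shows one familiar face among eight decoys, and access is granted only if the full sequence is selected correctly.

```python
import random

# Stand-in for a library of face images (identifiers only, for illustration).
FACE_POOL = [f"face_{i:03d}" for i in range(200)]

def enroll(num_faces=4, rng=random):
    """Assign a user a random sequence of distinct faces as their password."""
    return rng.sample(FACE_POOL, num_faces)

def challenge_round(correct_face, rng=random):
    """Build one logon screen: the familiar face plus eight random decoys, shuffled."""
    decoys = rng.sample([f for f in FACE_POOL if f != correct_face], 8)
    grid = decoys + [correct_face]
    rng.shuffle(grid)
    return grid

def authenticate(pass_faces, selections):
    """Grant access only if the user picked the familiar face in every round."""
    return len(selections) == len(pass_faces) and all(
        picked == expected for picked, expected in zip(selections, pass_faces)
    )

# Simulated logon: each round presents nine faces, one of which is familiar.
secret = enroll()
grids = [challenge_round(face) for face in secret]
print(authenticate(secret, secret))          # correct sequence -> True
print(authenticate(secret, secret[::-1]))    # wrong order -> False
```

As Feuer notes, the security of such a scheme rests on the asymmetry it exploits: the sequence is easy for its owner to recognize but hard to transcribe or share, unlike an alphanumeric string.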
Some industry experts say the biggest driver for better security in the future will come not from technology but economics, which might be described as human factors writ large. Economics ultimately determine the types of investments organizations will make in security. But at present, there is no agreed methodology for valuing the information assets organizations need to secure – and thus no way to hold organizations accountable when there is a breach, or otherwise influence their behaviour.
“Most corporate social responsibility surveys look like something out of the industrial age, not the information society,” says Paul K. Wing, co-author of Protecting Your Money, Privacy & Identity. “The impact of a product is assessed based on something chemical or mechanical, not information. Banks aren’t questioned about their ATM security.”
Hard cash, rather than soft persuasion, can force the question. Case in point: in Europe, consumers were originally liable for any ATM fraud unless they could prove their bank was at fault.
As a result, ATM security in Europe was lax compared with North America, where regulators made banks liable for losses. When similar regulation was adopted in Europe, banks had a powerful incentive to improve. “The biggest innovation that will happen in security is assessing the value of information assets and including them in the bottom line,” says Robert Garigue, CISO at the Bank of Montreal. “Regulations like Basel II and SOX are making an impact. Just as subjective evaluations of intangibles like goodwill became part of financial statements over time, the same will happen with information.”
Security experts agree that the current situation – shoddy software, badly designed systems, untrained users, perverse economic incentives – is simply unsustainable in the long run.
“I believe that the rate of technology innovation and social innovation can be managed in such a way that we may not have the best of both worlds but we don’t have to worry about the worst of both worlds,” says Schrage.