Since the recent Cutter IT Journal issue on outsourcing (“Offshore Outsourcing: No Pain, No Gain?” Vol. 17, No. 10, October 2004), there has been a steady stream of articles on outsourcing, an indicator that this topic is very much on our minds.
The main themes of these articles have not changed much. When outsourcing is under discussion, the issues that surface typically include concerns about employment for US engineers, lower salaries in developing countries, the adequacy of management, cultural differences, and communications problems. Recent data shows that median salaries for US engineers are down relative to prior years, a decline that is partly attributed to outsourcing.
The steady flow of articles suggests that outsourcing continues to grow. These business issues are indeed important, but we continue to see little attention paid to the information security problems that outsourcing can introduce. Some of the issues include:
- How to determine whether the software developed, maintained, or enhanced offshore is trustworthy
- Whether certification of the developers or the companies is an appropriate method for assessing trust
- Whether there are types of software that should not be outsourced (For example, should software that will be used in critical infrastructure or the financial markets ever be outsourced? Or consider widely used COTS software, such as Windows: if its development were outsourced, could a time bomb be embedded in the software that would evade detection and cause many systems worldwide to crash?)
- How to assure the privacy of data handled by outsourced applications
Can outsourced software be trusted? There are many opportunities to insert malicious code: during the transfer of the software from the subcontractor to the contracting organization, as the result of a successful attack on the subcontractor's systems, or through a subcontractor employee with malicious intent. This is a continuing theme in my correspondence with staff members in other organizations.
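Of the three vectors above, tampering during transfer is the most mechanically checkable. The article does not prescribe a mechanism, but a minimal sketch, assuming the subcontractor communicates a cryptographic digest of each delivery over a separate, trusted channel, might look like this in Python (the function names here are illustrative, not from any particular tool):

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large deliveries do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_delivery(artifact_path: str, expected_digest: str) -> bool:
    """Return True only if the delivered artifact matches the digest
    the subcontractor sent out of band (e.g., by phone or signed mail).
    A mismatch means the artifact was corrupted or altered in transit."""
    return sha256_of(artifact_path) == expected_digest.lower()
```

Note what this does and does not buy: it detects alteration between the subcontractor and the contracting organization, but it offers no protection against the other two vectors, since an attacker who compromises the subcontractor's systems, or an insider, can simply compute the digest over the already-malicious code.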
In my earlier Cutter IT Journal article (“Outsourcing and Information Security: What Are the Risks?”), I said that we need to consider the following special situations when outsourcing:
- A cyber attack by terrorists whose aim is to disrupt critical systems
- A financial attack by gangsters who wish to transfer funds to their own accounts
- Attempts to set up software to launch targeted distributed denial-of-service (DDoS) attacks from large numbers of systems
- Theft of intellectual property (industrial espionage)
Coincidentally, there has been a recent news account of cyber thieves attempting to transfer hundreds of millions of dollars from Sumitomo Mitsui Bank to 10 different bank accounts, using spyware to collect bank account information. Details are sketchy, but this appears to have been a hacking/spyware attack rather than one involving malicious code planted in the bank's software; in other words, it looks like an attack of the type described above as a financial attack by gangsters. Nevertheless, planted malicious code also seems like a plausible scenario. Is the malicious code scenario more likely in outsourced software, or is this a red herring?
At a recent panel session of corporate security officers, questions about outsourcing were raised. The panelists described practices such as giving overseas developers read-only access to source code, or no source code access at all. Even where such practices are in place, are they sufficient protection? You can still learn a great deal about a piece of code with read access alone. What mechanisms exist to ensure that malicious code is not being inserted into outsourced software? Of course, the counterpart question is: what mechanisms exist to ensure that malicious code is not being inserted into software that is developed locally? Do we really have more control over code developed locally than over code developed elsewhere? It seems that if we take security measures to assure software under development as well as software in the field, it shouldn't matter all that much where it is developed.
— Nancy R. Mead