
What happens when an algorithm is sexist? New guidelines seek accountability

As algorithms make more decisions affecting individuals, the computer industry is taking notice. Last week, an industry association published a list of principles designed to help prevent bias in computer algorithms – and hopefully protect companies from legal action in the process.

The Association for Computing Machinery’s U.S. Public Policy Council has published a set of seven principles to help ensure that decision-making and analytics algorithms remain transparent and accountable. The organization said it made the move in response to the growing role of automated processing in institutional decision-making.

As machine learning and other branches of artificial intelligence make their way into everyday business, algorithms are increasingly discovering, interpreting and communicating meaningful patterns in data to help make more efficient decisions.

“There is also growing evidence that some algorithms and analytics can be opaque, making it impossible to determine when the outputs may be biased or erroneous,” the Association said in a statement.

There have been past examples of what appears, on the surface, to be algorithmic bias. A Carnegie Mellon research study found that Google served an ad promoting high-paying jobs to women far less often than to men. Harvard University researchers found that online ads suggesting an arrest record appeared more often in searches for names culturally identified with black people.
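Detecting such disparities is, at bottom, a statistical exercise. The sketch below uses purely hypothetical counts, not figures from either study, to show one common approach: a chi-squared test of whether an ad is shown to two audience groups at different rates.

```python
# A minimal sketch of testing whether an ad is served at different rates to
# two groups. The counts are purely illustrative, not data from the
# Carnegie Mellon or Harvard studies.
from scipy.stats import chi2_contingency

# Rows: two audience groups; columns: [ad shown, ad not shown]
observed = [
    [1852, 8148],  # group A (hypothetical)
    [318, 9682],   # group B (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.1f}, p-value = {p_value:.2e}")

# A tiny p-value says the gap is unlikely to be random noise. It says
# nothing about why the gap exists, which is the harder question.
```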

As researchers push the envelope further, algorithmic processing is being applied to everything from predicting recidivism in felons to inferring criminal behaviour from facial images. The potential for abuse, or for reinforcing social prejudices in code, becomes truly alarming.

Apart from being injurious to certain groups, algorithms that unwittingly discriminate also risk legal blowback. If a company’s credit scoring, resident-matching or career planning application is found to discriminate against someone based on personal characteristics such as gender, race or sexual orientation, the company could be vulnerable to a lawsuit. This is a business risk stemming directly from new technology developments that CIOs should be aware of.

There are seven principles in the ACM USPCC’s new algorithmic transparency and accountability guidelines:

1. Awareness: those who design, implement and use analytic systems should be aware of the possible biases and potential harms involved.
2. Access and redress: people adversely affected by algorithmic decisions should be able to question them and seek redress.
3. Accountability: institutions are responsible for the decisions their algorithms make, even when they cannot explain how those decisions were produced.
4. Explanation: institutions should be able to explain both the procedures their algorithms follow and the specific decisions they make.
5. Data provenance: builders should describe how training data was collected and explore its potential biases.
6. Auditability: models, algorithms, data and decisions should be recorded so they can be audited when harm is suspected.
7. Validation and testing: institutions should routinely test their models for discriminatory harm and document the methods and results.
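What those principles look like in code is up to each organization. As a purely hypothetical illustration of auditability and explanation, and not anything prescribed by the ACM statement, the sketch below logs every automated decision with its inputs, model version and a human-readable reason so it can be reviewed or contested later.

```python
# A minimal sketch of auditability and explanation in practice: every
# automated decision is recorded with enough context to be reviewed or
# contested later. All names and fields are illustrative.
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    decision_id: str
    timestamp: float
    model_version: str  # which model produced the decision
    inputs: dict        # the features the model actually saw
    score: float        # raw model output
    outcome: str        # the action taken (e.g. "approved", "declined")
    explanation: str    # human-readable reason attached at decision time

def log_decision(record: DecisionRecord, path: str = "decision_audit.log") -> None:
    """Append the decision to an append-only audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a hypothetical credit-scoring decision
log_decision(DecisionRecord(
    decision_id=str(uuid.uuid4()),
    timestamp=time.time(),
    model_version="credit-model-2.3.1",
    inputs={"income": 54000, "debt_ratio": 0.31, "months_employed": 48},
    score=0.62,
    outcome="declined",
    explanation="Score below approval threshold of 0.70",
))
```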

We have already seen similar principles emerging in legal frameworks. In Europe, for example, the forthcoming General Data Protection Regulation (GDPR) includes a section on automated decision-making.

Legal analysis suggests that individuals have a right under the GDPR not to be subject to such decisions, that appropriate statistical techniques must be used, and that transparency must be ensured. Measures must be in place to correct inaccuracies and prevent discriminatory effects, and the person affected by the algorithm must have a right to human intervention so that they can contest the decision.
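In engineering terms, "preventing discriminatory effects" usually starts with routine fairness checks on model outputs. The sketch below uses entirely hypothetical approval data to compute a simple disparate impact ratio; the 0.8 threshold is the informal "four-fifths rule" borrowed from U.S. employment practice, an assumption for illustration rather than anything the GDPR specifies.

```python
# A minimal sketch of one common check for discriminatory effects: the
# disparate impact ratio, comparing favourable-outcome rates between a
# protected group and a reference group. Data and threshold are illustrative.

def disparate_impact(outcomes_protected: list, outcomes_reference: list) -> float:
    """Ratio of favourable-outcome rates (1 = favourable, 0 = unfavourable)."""
    rate_protected = sum(outcomes_protected) / len(outcomes_protected)
    rate_reference = sum(outcomes_reference) / len(outcomes_reference)
    return rate_protected / rate_reference

# Hypothetical loan approvals for two groups of applicants
ratio = disparate_impact(
    outcomes_protected=[1, 0, 0, 1, 0, 1, 0, 0, 0, 1],  # 40% approved
    outcomes_reference=[1, 1, 0, 1, 1, 0, 1, 1, 0, 1],  # 70% approved
)
print(f"Disparate impact ratio: {ratio:.2f}")

# A ratio below 0.8 (the informal "four-fifths rule") is often treated as a
# red flag worth investigating, not as proof of unlawful discrimination.
```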

The big question is: will CIOs be able to explain how their algorithms work? One of the central features of machine learning is that, after analysing mounds of empirical data, a system eventually produces a statistical model that lets it make decisions on its own. In many cases, even its operators may not understand how it reached a particular decision.
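A small example makes the opacity concrete. The sketch below uses synthetic data and a generic model rather than any real production system: the trained model predicts readily but exposes no human-readable rule, and permutation importance is used as one rough, model-agnostic probe of which inputs mattered.

```python
# A minimal sketch of the opacity problem: a model trained on synthetic data
# makes predictions, but its internal logic is not directly readable.
# Permutation importance is one crude, model-agnostic probe, not a full
# explanation of any individual decision.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# The model can decide, but there is no single human-readable rule to show.
print("Prediction for one record:", model.predict(X[:1])[0])

# Probe the black box: how much does accuracy drop when each feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")
```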

What’s needed, in addition to such principles, is technology to help enforce them by unpicking machine decisions. Research, including a project at MIT, is currently underway to help tackle that issue.
