What happens when an algorithm is sexist? New guidelines seek accountability

As algorithms make more decisions affecting individuals, the computer industry is taking notice. Last week, an industry association published a list of principles designed to help prevent bias in computer algorithms – and hopefully protect companies from legal action in the process.

The Association for Computing Machinery’s U.S. Public Policy Council has published a set of seven principles to help ensure that decision-making and analytics algorithms remain transparent and accountable. The organization said it made the move in response to the growing role of automated processes in institutional decision-making.

As machine learning and other branches of artificial intelligence make their way into everyday business, algorithms are increasingly discovering, interpreting and communicating meaningful patterns in data to help make more efficient decisions.

“There is also growing evidence that some algorithms and analytics can be opaque, making it impossible to determine when the outputs may be biased or erroneous,” the Association said in a statement.

There have been apparent examples of algorithmic bias in the past. A Carnegie Mellon research study found that Google served an ad promoting high-paying jobs to women less often than to men. Harvard University researchers found that online ads suggesting arrest records appeared more often in searches for names culturally associated with black people.

As researchers push the envelope further, they are applying algorithmic processing to everything from predicting recidivism among felons to inferring criminal behaviour from facial images. The potential for abuse, or for reinforcing social prejudices in code, becomes truly alarming.

Apart from being injurious to certain groups, algorithms that unwittingly discriminate also expose their operators to legal blowback. If a company’s credit scoring, resident-matching or career planning application is found to discriminate against someone on the basis of personal characteristics such as gender, race or sexual preference, the company could be vulnerable to a lawsuit. This is a business risk stemming directly from new technology developments, and one that CIOs should be aware of.

There are seven principles in the ACM U.S. Public Policy Council’s new guidelines on algorithmic transparency and accountability:

  • Awareness. Those designing analytics systems should be aware of possible biases.
  • Access and redress. Regulators should adopt mechanisms that give affected parties a way to question perceived algorithmic bias and remediate the problem.
  • Accountability. Even if an institution can’t explain how an algorithm achieved its results, it should still be held responsible for any decisions the code makes.
  • Explanation. Institutions should be able to explain both the procedures an algorithm follows and the specific decisions it makes.
  • Data provenance. The developers of the algorithms should describe how training data was collected and document potential biases that may emerge from that process.
  • Auditability. Models, algorithms, data and decisions should be recorded so that there is a paper trail for auditors to follow in the future.
  • Validation and testing. Algorithm designers should validate their models, document the methods and results, and check the output for discrimination; those results should also be made public (a minimal sketch of one such check follows this list).
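
To make that validation and testing step concrete, here is a minimal sketch, in Python, of one common check: comparing an algorithm’s approval rate across demographic groups on a held-out audit set. The record format, the `score_applicant` model and the group labels are all hypothetical; the ACM guidelines do not prescribe any particular test.

```python
from collections import defaultdict

def approval_rates_by_group(records, score_fn, threshold=0.5):
    """Share of applicants approved (score >= threshold), per group.

    records: iterable of (group_label, features) pairs (hypothetical format).
    score_fn: the model under test; assumed to return a score in [0, 1].
    """
    approved, total = defaultdict(int), defaultdict(int)
    for group, features in records:
        total[group] += 1
        if score_fn(features) >= threshold:
            approved[group] += 1
    return {g: approved[g] / total[g] for g in total}

def demographic_parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Toy stand-in for a real scoring model and real audit data.
    def score_applicant(features):
        return features["income"] / 100_000

    audit_set = [
        ("group_a", {"income": 80_000}),
        ("group_a", {"income": 30_000}),
        ("group_b", {"income": 45_000}),
        ("group_b", {"income": 20_000}),
    ]
    rates = approval_rates_by_group(audit_set, score_applicant)
    print(rates, "gap:", demographic_parity_gap(rates))
```

A large gap is not proof of unlawful discrimination, but documenting and publishing numbers like these is exactly the kind of evidence the principle calls for.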

We have already seen similar principles emerging in legal frameworks. For example, in Europe the forthcoming General Data Protection Regulation (GDPR) includes a section on automated decision-making.

Legal analysis suggests that individuals have a right under the GDPR not to be subject to such decisions. It also notes that appropriate statistical techniques must be used and transparency must be ensured, that measures must be in place to correct inaccuracies and prevent discriminatory effects, and that the person affected by the algorithm must have the right to human intervention so that they can contest the decision.

The big question is, will CIOs be able to explain how their algorithms work? A central feature of machine learning is that, after analysing mounds of empirical data, a system eventually produces a statistical model that lets it make decisions on its own. In many cases, operators may not be able to explain how that model reached a particular decision.

What’s needed, in addition to such principles, is technology to help enforce them by unpicking machine decisions. Research, such as one project from MIT, is currently underway to help tackle that issue.
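
By way of illustration only (this is not the MIT project referenced above), one crude way to start unpicking a single decision is a local sensitivity check: nudge each input feature and see how much the model’s output moves. The `credit_model` and feature names below are hypothetical.

```python
def local_sensitivity(model, instance, delta=0.05):
    """Crude per-feature sensitivity for a single prediction.

    model: callable mapping a dict of numeric features to a score (assumed interface).
    instance: dict of feature name -> value for the decision being examined.
    Returns the absolute change in score when each feature is nudged by +delta (relative).
    """
    baseline = model(instance)
    impacts = {}
    for name, value in instance.items():
        perturbed = dict(instance)
        perturbed[name] = value * (1 + delta)
        impacts[name] = abs(model(perturbed) - baseline)
    # Features with the largest impact are the ones driving this particular decision.
    return dict(sorted(impacts.items(), key=lambda kv: kv[1], reverse=True))

if __name__ == "__main__":
    # Toy stand-in for an opaque scoring model.
    def credit_model(features):
        return 0.6 * features["income"] / 100_000 + 0.4 * (1 - features["debt_ratio"])

    applicant = {"income": 48_000, "debt_ratio": 0.35}
    print(local_sensitivity(credit_model, applicant))
```

Approaches like this only scratch the surface, since they explain one decision at a time rather than the model as a whole, which is why dedicated explanation research matters.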

Danny Bradbury is a technology journalist with over 20 years' experience writing about security, software development, and networking.
