BEST OF THE WEB

Movement to Hold AI Accountable Gains Ground

Efforts to better understand how AI works and to hold those who use it accountable are rapidly gaining ground, and a number of initiatives are beginning to unfold.

Last month, the New York City Council passed a new law requiring the testing of algorithms used by employers in hiring or promotion.

The law, the first of its kind in the nation, requires employers to bring in outside assessors to determine whether an algorithm shows bias based on sex, race, or ethnicity. Employers must also inform applicants who live in New York when artificial intelligence plays a critical role in deciding whether to hire or promote them.
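
To make concrete what such a bias audit might examine, here is a minimal, purely illustrative sketch in Python: it compares the rate at which a hiring algorithm selects candidates from each demographic group and computes each group’s ratio to the highest-selected group. The metric, group labels, and data are assumptions for illustration only, not anything specified by the New York law.

from collections import defaultdict

def selection_rates(decisions):
    # decisions: list of (group, was_selected) pairs
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def impact_ratios(rates):
    # Ratio of each group's selection rate to the highest group's rate;
    # values well below 1.0 suggest the model favours some groups over others.
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical audit data: (demographic group, did the algorithm select the candidate?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
for group, ratio in impact_ratios(rates).items():
    print(group, round(rates[group], 2), round(ratio, 2))

In practice an auditor would use real applicant data and whatever fairness criteria the regulator or employer adopts; the point of the sketch is simply that such audits reduce to comparing outcomes across protected groups.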

In Washington, DC, members of Congress are drafting a bill that would require companies to evaluate the automated decision-making systems they use in health care, housing, employment, or education, and to report the findings to the Federal Trade Commission. Three of the five members of the FTC favour stricter regulation of algorithms.

Last month, the White House proposed an AI Bill of Rights that would require disclosure when AI makes decisions affecting a person’s civil rights, and called, among other things, for greater scrutiny of AI systems to rid them of bias.

A forthcoming report by the Algorithmic Justice League (AJL), a private non-profit organization, advocates requiring disclosure when an AI model is in use and creating a public repository of incidents where AI has caused harm.

Such a repository would help auditors identify potential problems with algorithms and help regulators sanction repeat offenders. AJL co-founder Joy Buolamwini co-authored a major 2018 audit that found facial-recognition algorithms work best on white men and worst on women with darker skin.

The report underlined the importance of independent auditors and of making audit results publicly available.

Deb Raji, an Audit Evaluator Fellow at AJL who participated in the 2018 audit of facial-recognition algorithms, calls for the creation of an audit oversight body within a federal agency to enforce standards or mediate disputes between companies and auditors, similar to the Financial Accounting Standards Board or the Food and Drug Administration’s standards for evaluating medical devices.

Cathy O’Neil founded O’Neil Risk Consulting & Algorithmic Auditing (Orcaa) to assess artificial intelligence that is inaccessible to the public. One example is Orcaa’s work with the attorneys general of several US states to assess financial and consumer algorithms. O’Neil says she has lost potential clients because companies prefer to maintain deniability and choose not to learn how their artificial intelligence might harm people.

In a forthcoming paper in the Harvard Journal of Law & Technology, UCLA professor Andrew Selbst argues for documentation that helps people fully understand how AI harms them. Documentation from impact assessments, he notes, will be vital for people who want to file a lawsuit.

First introduced in 2019, a revised version of the Algorithmic Accountability Act is now being deliberated in Congress. The bill would require companies that use automated decision-making systems in health care, housing, employment, or education to conduct regular impact assessments and report their findings to the FTC.

In August this year, the Center for Long-Term Cybersecurity at the University of California, Berkeley, argued that a tool being developed by the federal government to assess AI risk should include factors such as a system’s carbon footprint and its potential to perpetuate inequality, and called on the government to take stronger action on AI than it has on cybersecurity.

In 2020, users uncovered bias against people with darker skin in algorithms used by Twitter and Zoom. The findings led Zoom to tweak its algorithm and Twitter to stop using AI to crop photos.

Another report, published in June this year by Data & Society’s AI on the Ground team, explains why community activists, critical scholars, politicians, and technologists working in the public interest should be included in assessing algorithms. The report argues that what counts as an impact is often a reflection of the wants and needs of people in power, and that, when done poorly, impact assessments can perpetuate existing power structures and make businesses and governments appear accountable, rather than enabling ordinary people to act when things go wrong.

IT World Canada Staff