What tech leaders need to know about the Algorithmic Accountability Act

The Algorithmic Accountability Act would require companies that use AI to conduct critical impact assessments of the automated systems they use and sell, under rules administered by the Federal Trade Commission (FTC).

In particular, the draft law aims to eliminate biases introduced intentionally by companies, as well as biases arising from incomplete data that is not drawn from sufficiently diverse sources.

The Algorithmic Accountability Act, reintroduced in April 2022 after several amendments in both the House and Senate, is likely to trigger vendor-level reviews of AI systems once it is passed, along with similar reviews within companies that use AI in their decision-making.

Once it becomes law, the FTC will also have the authority to assess the impact of AI bias within two years of the law's adoption.

While AI bias remains part of society, organizations can take steps to minimize it, including building diverse AI teams that bring a range of views and perspectives on AI and data, and developing internal methods to test AI systems for bias.

Other steps include requesting bias assessment results from the third-party AI system and data providers they source services from, and emphasizing data quality and processing in their day-to-day AI work.

The sources for this piece include an article in TechRepublic.

IT World Canada Staff
The online resource for Canadian Information Technology professionals.
