IT World Canada

Law needed requiring AI to be fair, accountable and transparent

Source: Irina_Strelnikova | Getty Images

Artificial Intelligence (AI) is already being used as an excuse to avoid accountability. In the UK, Prime Minister Boris Johnson tried to blame a ‘mutant algorithm’ for exam chaos. When exams were cancelled due to coronavirus, the government used a computer algorithm to assign scores, downgrading many of the teachers’ predicted A-level grades, especially for pupils in poorer areas. The result was chaos and outrage.
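To see how that can happen, consider a deliberately simplified sketch in Python. This is not the actual algorithm the UK government used; the capping rule, the grades, and the school history below are invented purely to illustrate how standardizing predictions against a school’s past results can pull down strong students at historically weaker schools.

```python
# Hypothetical illustration only -- NOT the real UK grading algorithm.
# Rule: a school may award no more top grades than its historical average;
# any surplus students predicted an "A" get downgraded.

def standardize(teacher_predictions, school_historical_top_grades):
    """Cap the number of top grades at the school's historical average."""
    capped = []
    top_awarded = 0
    for grade in sorted(teacher_predictions):  # "A" sorts first
        if grade == "A" and top_awarded >= school_historical_top_grades:
            grade = "B"  # surplus top grades are pulled down
        if grade == "A":
            top_awarded += 1
        capped.append(grade)
    return capped

# A school in a poorer area that historically averaged one A per year:
predictions = ["A", "A", "A", "B", "C"]  # this year's teacher predictions
print(standardize(predictions, school_historical_top_grades=1))
# -> ['A', 'B', 'B', 'B', 'C']
```

Two of the three students their teachers rated at the top lose a grade, not because of anything about them as individuals, but because of where they went to school.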

Understanding how AI works, and what concerns we should have about its use, is quite complicated. Sabrina Cruz of the YouTube channel “Answer in Progress” offers a very entertaining video about the ethical trolley problem and how an AI can get things wrong. In it, she also talks to Dr. Tom Williams of the Colorado School of Mines, who outlines different ways of thinking about the ethics of artificial intelligence. He says the current thinking is that we need to ensure those creating AI are fair, accountable, and transparent. He even goes so far as to suggest we step back each time and ask whether we should be automating the task at all.

In Canada we have CIPS (the Canadian Information Processing Society), an organization that advances Canada’s IT profession by fostering standards, best practices, and integrity for the benefit of IT professionals and the public. I was disappointed when, in a position paper submitted to the Canadian government, CIPS said that “CIPS does not support the concept of technology-specific safeguards” and that “there should be no rules that apply to AI only”.

CIPS was addressing privacy rules, but perhaps there need to be rules for other kinds of computing concerns as well. The principles Dr. Williams cited certainly do not have to apply only to AI.

All software should be:

- Fair
- Accountable
- Transparent

I think some laws should address the issues at this level as well.
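Those three words can also be made testable. As a rough sketch of one way “fair” is often probed in practice (the groups and decisions below are hypothetical, not drawn from any real system), an auditor can compare a system’s approval rates across groups:

```python
# A minimal, hypothetical fairness check: compare positive-outcome
# rates across groups (a "demographic parity" style audit).
from collections import defaultdict

def outcome_rates(decisions):
    """decisions: (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    ("area_rich", True), ("area_rich", True), ("area_rich", False),
    ("area_poor", True), ("area_poor", False), ("area_poor", False),
]
print(outcome_rates(decisions))
# -> roughly {'area_rich': 0.67, 'area_poor': 0.33}
```

A large gap between groups is a signal that the system needs auditing, whether or not the software behind it is labelled “AI”.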
