Holding AI accountable: public leaders organize to ensure algorithms influencing government are ethical

In the 2002 movie Minority Report, three “Precogs” with the God-given ability to see into the future predict murders before they happen, and law enforcement arrests the supposed perpetrator before they even have a chance to act – or perhaps even consider the act. Sixteen years later, in the real world, we still haven’t found any gifted soothsayers to aid the criminal justice system, but it’s possible that artificial intelligence (AI) might play a similar role.

That’s what Richard Zemel, research director at the Toronto-based Vector Institute for AI, imagines. Speaking at an event hosted by Accenture, Zemel laid out a scenario where a judge might use an AI system as part of a risk-and-reward analysis predicting whether a convict is likely to re-offend over the next several years. If the program reported a high probability of another violent crime, the perceived reward to society of denying bail and delivering a harsh sentence would be high.

“So the idea would be that ideally, the program would be very good, right? It would be well-calibrated, it would accurately report the probability,” Zemel said. “But programs often aren’t that well calibrated and it’s not clear how they should be used.”
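To make “well-calibrated” concrete: among all cases where a model predicts, say, a 70 per cent risk of re-offending, roughly 70 per cent should actually re-offend. Here is a minimal sketch of that check in Python, run on synthetic scores rather than any real system’s output:

```python
# A minimal sketch of a calibration check, using synthetic data rather
# than any real system's output. For a well-calibrated model, cases
# scored around 70 per cent risk should re-offend about 70 per cent
# of the time.
import numpy as np

def calibration_table(scores, outcomes, n_bins=5):
    """Compare mean predicted risk to the observed re-offence rate per bin."""
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (scores >= lo) & (scores < hi)
        if in_bin.any():
            print(f"scores {lo:.1f}-{hi:.1f}: predicted {scores[in_bin].mean():.2f}, "
                  f"observed {outcomes[in_bin].mean():.2f} (n={in_bin.sum()})")

# Toy example: outcomes are drawn so the model is calibrated by construction,
# so predicted and observed rates should roughly match in every bin.
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 1.0, 10_000)
outcomes = rng.uniform(0.0, 1.0, 10_000) < scores
calibration_table(scores, outcomes)
```

A poorly calibrated model shows up in this table as a gap between the predicted and observed columns, which is exactly the failure mode Zemel is warning about.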

It’s not a hypothetical situation that Zemel is considering. A 2016 ProPublica investigation revealed ingrained racial bias in a machine-learning risk assessment tool used by judges in the U.S. The tool was twice as likely to wrongly predict that African American defendants were high risks to re-offend as it was for white defendants. Conversely, it was twice as likely to predict that a white person would not re-offend when they went on to commit another crime.
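The disparity ProPublica measured comes down to a gap in error rates between groups. Below is a hedged sketch of that comparison, on invented labels rather than real case records:

```python
# A sketch of the error-rate comparison behind findings like ProPublica's:
# per group, the false positive rate (labelled high risk but did not
# re-offend) and false negative rate (labelled low risk but did re-offend).
# All data here is invented for illustration.
import numpy as np

def error_rates_by_group(high_risk, reoffended, group):
    high_risk, reoffended, group = map(np.asarray, (high_risk, reoffended, group))
    for g in np.unique(group):
        members = group == g
        fpr = high_risk[members & ~reoffended].mean()
        fnr = (~high_risk)[members & reoffended].mean()
        print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")

# Hypothetical labels: group B is flagged more often without re-offending.
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
high_risk = np.array([1, 0, 0, 1, 1, 1, 0, 1], dtype=bool)
reoffended = np.array([1, 0, 0, 1, 0, 1, 0, 1], dtype=bool)
error_rates_by_group(high_risk, reoffended, group)
```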

Now that Canada is ramping up its investment in AI and tapping into the deep brain trust of researchers across the country with special expertise in the area, Zemel and other public policy leaders feel it’s the right time to hold a magnifying glass up to the algorithms used. By discussing ethical principles for AI, Canada might avoid some of the negative impacts of inherent biases seen south of the border.

Zemel’s own employer, the Vector Institute, is one of three recipients of $125 million in federal funding planned through to 2022 to develop a Pan-Canadian Artificial Intelligence Strategy. Announced in 2017, the funding is also being directed to the Alberta Machine Intelligence Institute and the Montreal Institute for Learning Algorithms. Meanwhile, the CIO of the Government of Canada is leading a conversation among public and private sector leaders on the ethics of AI, and the former Information and Privacy Commissioner of Ontario is putting forward a framework for ethics in AI.

Canada lacks strategic regulation around AI

Considering the impact that technology could have on a citizen’s life, Alex Benay, the CIO of the federal government, is scared by the current lack of regulation.

“We don’t have the strategic regulation covered yet in this country,” he says. “We’re fragmented.”

Benay hopes the CIO Strategy Council can play a role in pulling those fragments together. Co-founded by Benay and former BlackBerry leader Jim Balsillie in the fall of 2017, the not-for-profit group brings together public and private sector CIOs to discuss digital transformation issues and to help set industry standards. At an early April meeting, that standards discussion turned to AI ethics. That same week, Benay was working on an RFP for vendors to provide AI services to the federal government.

At the Treasury Board Secretariat, the federal department where Benay is embedded, two AI ethics researchers were hired exclusively for this issue. Benay wants to ensure the right data governance is in place while the government architects a service that is likely to plug in to many different platforms.

In what he makes clear is his own opinion – not the government’s – Benay stresses the importance of transparency.

“If we’re going to use an algorithm to service a citizen in Canada, it has to be transparent, it can’t be a black box,” he says. “I don’t want to see algorithms that aren’t representative of our greater society.”

Benay isn’t the only one who thinks this is the right approach. Ann Cavoukian, the head of Ryerson University’s Privacy by Design Centre of Excellence, is adamant that transparency is needed around AI algorithms.

“We have to avoid what will be the tyranny of the algorithms,” she said at the Future Technologies Conference in Vancouver at the end of 2017. “What are the algorithms actually doing? We have to look under the hood.”

AI Ethics by Design Framework

Cavoukian is evolving her Privacy by Design framework, developed while she was the Information and Privacy Commissioner of Ontario and since adopted around the world with translations into 40 languages, to include AI ethics. Unveiled at the end of July 2017, her seven principles of AI Ethics By Design are as follows:

  1. Transparency and accountability of algorithms essential
  2. Ethical principles applied to the treatment of personal data
  3. Algorithmic oversight and responsibility must be assured
  4. Respect for privacy as a fundamental right
  5. Data protection/personal control via privacy as the default
  6. Proactively identify the security risks, thereby minimizing the harms
  7. Strong documentation to facilitate ethical design and data symmetry

Cavoukian is using the principles as a conversation starter with others interested in developing AI ethics standards.

As public policy wonks organize to take on the topic, AI researchers themselves are approaching it from a different angle. There’s an idea that AI itself could be used to remove biases from algorithms. Zemel says the Vector Institute is working on it.

“We’re not yet at the point to come out with standards,” he says. “It’s more about developing metrics… metrics that indicate how biased something is, how fair it is, how private it is. Then there has to be a public discussion around what’s the level of bias that’s acceptable and what’s the level we are worried about?”
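For a flavour of what such a metric could look like, here is a minimal sketch of one of the simplest textbook measures, the demographic parity gap; it is illustrative only, not one of the Vector Institute’s actual metrics:

```python
# A minimal illustration of one simple fairness metric: the demographic
# parity gap, i.e. the spread in favourable-prediction rates across groups.
# This is a generic textbook measure, not the Vector Institute's own metric.
import numpy as np

def demographic_parity_gap(predictions, group):
    predictions, group = np.asarray(predictions, dtype=float), np.asarray(group)
    rates = [predictions[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy example: group A receives the favourable prediction 75% of the time,
# group B only 25% of the time, so the gap is 0.5.
predictions = np.array([1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(f"demographic parity gap: {demographic_parity_gap(predictions, group):.2f}")
```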

With the right metrics, the right algorithm could correct for bias. It would only require a user to first indicate which bias they want to eliminate, whether it’s based on race, gender, or some other attribute.
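One established technique along these lines is reweighing, from Kamiran and Calders; the sketch below is a generic illustration of that idea, not necessarily the approach Vector is developing. It rebalances training examples so the chosen attribute carries no statistical information about the outcome:

```python
# A sketch of one established debiasing technique, reweighing (Kamiran &
# Calders): weight each example by P(group) * P(label) / P(group, label)
# so the chosen protected attribute becomes independent of the label.
# Assumes every (group, label) combination appears at least once.
import numpy as np

def reweighing_weights(group, label):
    group, label = np.asarray(group), np.asarray(label)
    weights = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()
            weights[cell] = expected / cell.mean()  # up-weight under-represented cells
    return weights

# Toy example: positive labels are skewed toward group A, so positive
# examples in group B get weights above 1 and those in group A below 1.
group = np.array(["A", "A", "A", "B", "B", "B"])
label = np.array([1, 1, 0, 1, 0, 0])
print(reweighing_weights(group, label).round(2))
```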

At the end of Minority Report, the Precog program is shut down and all previous convictions based on its predictions are tossed out. It’s a hopeful ending for a movie that put forward a dystopian vision, but if Canada’s leading thinkers on AI ethics have their way, we’ll avoid having to make similar repairs to our own society.
