White House sets five principles for responsible AI design and use

Concerned about the potential abuse of artificial intelligence-driven applications, the White House has announced what it calls a Blueprint for an AI Bill of Rights, with five principles to guide the public and private sectors’ design, use, and deployment of automated systems.

The U.S. guidelines would apply to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”

The five principles are:

You should be protected from unsafe or ineffective systems.

You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.

You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.

You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

Automated systems have brought about extraordinary benefits, the U.S. administration said in a statement, from technology that helps farmers grow food more efficiently and computers that predict storm paths to algorithms that can identify diseases in patients.

However, algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent, the statement says.

“The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values,” the statement says. It is a guide and not legislation or regulation.

The administration also announced that the National Science Foundation is adding US$140 million in funding to launch seven new National AI Research Institutes.

The Blueprint was released ahead of a meeting between U.S. Vice President Kamala Harris, senior administration officials, and the CEOs of Alphabet, Anthropic, Microsoft, and ChatGPT developer OpenAI. Those companies, along with other firms, will participate in a public evaluation of AI systems in the AI Village at August’s DEF CON 31 cybersecurity conference, allowing the models to be evaluated thoroughly by thousands of community partners and AI experts to explore how they align with the principles in the Blueprint.

After that meeting, Harris issued a statement saying, “the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products. And every company must comply with existing laws to protect the American people.”

U.K. and Canada also scrutinizing AI

Also on Thursday, the U.K. government asked regulators to think about how the innovative development and deployment of AI can be supported against five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

As part of that process, the U.K. Competition and Markets Authority (CMA) will review how AI foundation models are developing. It will assess the conditions and principles that will best guide the development of foundation models and their use in the future.


Meanwhile in Canada, Parliament is still dealing with the proposed Artificial Intelligence and Data Act (AIDA), which would govern the use of AI in “high-impact” systems. However, many critics complain the proposed act lacks details, which would only be filled in by regulations after it is passed. Critics also object to the fact that the proposed AI data commissioner would report to the Minister of Innovation rather than being independent. Second reading of Bill C-27, which includes AIDA as well as new consumer privacy legislation, finished on Apr. 24, and the bill has been sent to the Industry committee for detailed examination. There are calls for the two pieces of legislation to be separated.


Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
