Biden issues Executive Order on AI for U.S. government departments and application developers

U.S. President Joe Biden today issued an Executive Order establishing new standards for the use of artificial intelligence applications in the U.S. federal government, government-funded programs and critical infrastructure sectors.

The goal, the White House said in a statement, is to “protect Americans from the potential risks of AI systems.”

The order comes in the absence of congressional legislation on AI, and builds on voluntary commitments from 15 companies to develop AI systems with certain safeguards.

Today’s Executive Order:

— requires developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. government.

Invoking the Defense Production Act, the order requires companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety to notify the U.S. government when training the model, and to share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public, the White House statement says.

— requires government agencies to develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.

The National Institute of Standards and Technology (NIST) will set the standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.

— promises protection against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.

— promises protection to Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.

The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic (a minimal signing sketch follows this list).

— promises to establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.

— orders the National Security Council and White House Chief of Staff to develop a National Security Memorandum directing further actions on AI and security. This document will ensure that the United States military and intelligence communities use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.

— directs the civil service to provide clear guidance to landlords, federal benefits programs and federal contractors to keep AI algorithms from being used to exacerbate discrimination.

— promises to address algorithmic discrimination through training, technical assistance and co-ordination between the Justice Department and federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.

— directs the civil service to issue guidance to federal agencies on their use of AI, including clear standards to protect rights and safety, improve government purchases of AI applications and strengthen AI deployment.
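The order doesn’t spell out how official government content would be authenticated, but a common building block for this kind of assurance is a digital signature: an agency signs a message with a private key it controls, and anyone holding the matching public key can confirm the message wasn’t altered or forged. The sketch below is a minimal, hypothetical illustration using Ed25519 signatures from the Python cryptography package; it is not drawn from the forthcoming Commerce guidance, and the message text is invented.

```python
# Minimal sketch of content authentication via digital signatures.
# Hypothetical illustration only; the Executive Order does not
# prescribe a mechanism, and the Commerce guidance may differ.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The agency holds the private key; the public key is published.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign an outgoing communication (message text is invented).
message = b"Official notice: your federal benefits statement is ready."
signature = private_key.sign(message)  # distributed alongside the content

# A recipient with the public key verifies authenticity and integrity.
try:
    public_key.verify(signature, message)
    print("Content verified as authentic.")
except InvalidSignature:
    print("Verification failed: content may be altered or forged.")
```

Note that signatures only establish that content came from a given key holder, which covers the authenticating-official-content half of the mandate; reliably watermarking and labelling AI-generated media is a harder, still-evolving problem.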

The Executive Order comes as governments around the world try to get ahead of the potential abuse of AI systems.

In Canada, the Trudeau government has put forward the Artificial Intelligence and Data Act (AIDA) to regulate what it calls “high-impact” AI systems. The legislation is part of a package of data and privacy laws currently being examined by the House of Commons’ Industry committee.

AIDA has been criticized for being too vague and relying on yet-to-be-set regulations, and for creating an AI and Data Commissioner who won’t be independent, but will report to the Innovation Minister. Others call for swift passage of the legislation, saying it’s better than nothing.

So far the committee has focused on the proposed Consumer Privacy Protection Act (CPPA).

On Tuesday, the committee will hear testimony from the Canadian Chamber of Commerce, the Canadian Bankers’ Association, the Canadian Labour Congress, the Canadian Marketing Association, the Centre for Digital Rights and others. It isn’t clear whether these witnesses will focus on the CPPA or will also comment on AIDA.

Last month, Innovation Minister François-Philippe Champagne announced a new voluntary code of conduct with measures for the responsible development and management of advanced generative AI systems in Canada.

Meanwhile, this week, the U.K. is hosting an international conference called the AI Safety Summit for world leaders, technology companies, academics and AI researchers.

In a commentary after the White House announcement, Jake Williams, a former U.S. National Security Agency (NSA) hacker and faculty member at IANS Research, noted that the Executive Order regulates AI foundation models. Most organizations, however, won’t be training foundation models, he said. “This provision is meant to protect society at large and will have minimal direct impact to most organizations.

“The EO places emphasis on detection of AI-generated content and creating measures to ensure the authenticity of content. While this will likely appease many in government who are profoundly concerned about deepfake content, as a practical matter, generation technologies will always outpace those used for detection. Furthermore, many AI detection systems would require levels of privacy intrusion that most would find unacceptable.”

On the other hand, the risk of using generative AI for biological material synthesis is very real, he said. Early ChatGPT boosters were quick to note the possibility of using the tool for “brainstorming” new drug compounds — as if this could replace pharmaceutical researchers (or imply that they weren’t already using more specialized AI tools). “The impact of using generative AI for synthesizing new biological mutations, without any understanding of the impacts, is a real risk and it’s great to see federal funding being tied to the newly proposed AI safety standards,” Williams said.

“Perhaps the most significant contribution of the EO is dedicating funding for research into privacy-preserving technologies with AI,” he added. “The emphasis on privacy and civil rights in AI use permeates the EO. At a societal level, the largest near-term risk of AI technologies is how they are used and what tasks they are entrusted with. The Biden EO makes it clear: privacy, equity, and civil rights in AI will be regulated. In the startup world of ‘move fast and break things’, where technology often outpaces regulation, this EO sends a clear message on the areas startups should expect more regulation in the AI space.”

Ian Swanson, CEO of Protect AI and former worldwide leader for AI & ML at Amazon and former VP of machine learning at Oracle, said he believes AI needs protection commensurate with the immense value it can deliver. “In order to build and ship AI that is secure and trusted, organizations must rigorously test (‘red team’) their AI and understand the total composition of elements used to create that AI. These include knowledge of data sets, models, and other code assets – not just those related to the AI model itself. AI is a new technical domain, and like any new domain, it comes with new risks and vulnerabilities.”
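Swanson’s call to understand the “total composition” of an AI system is often discussed in terms of an AI bill of materials: an inventory of every dataset, model and code asset, each fingerprinted so later changes are detectable. The following is a hypothetical sketch of that idea using only the Python standard library; the asset names and paths are invented, and no specific standard from the order is implied.

```python
# Hypothetical sketch of an "AI bill of materials": fingerprint every
# dataset, model, and code asset so any later change is detectable.
# Asset names and paths are invented for illustration.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(assets: dict[str, Path]) -> dict[str, dict]:
    """Map each named asset to its path and content hash."""
    return {
        name: {"path": str(path), "sha256": sha256_of(path)}
        for name, path in assets.items()
    }

if __name__ == "__main__":
    manifest = build_manifest({
        "training_data": Path("data/train.csv"),
        "model_weights": Path("models/classifier.bin"),
        "preprocessing": Path("src/preprocess.py"),
    })
    # Persist the manifest; re-hashing later reveals tampering or drift.
    print(json.dumps(manifest, indent=2))
```

Re-computing the hashes before deployment and comparing them against the stored manifest is one simple way to catch the kind of unnoticed changes to datasets, models or code that red-team reviews are meant to surface.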
