On October 30, 2023, President Biden signed the 53-page Executive Order on the Safe, Secure, and
Trustworthy Development and Use of Artificial Intelligence that
significantly advances the United States’ policy framework
regarding artificial intelligence (AI). This directive builds upon
the Biden Administration’s previous Blueprint for an AI Bill of Rights and outlines
a comprehensive strategy aimed at positioning the United States as
a leader in guiding the responsible development and use of
AI.
While recognizing the potential of responsible AI systems to
make the world more prosperous, productive, innovative, and secure,
the Executive Order also acknowledges that irresponsible use could
worsen societal issues such as fraud, discrimination, bias, and
disinformation; displace and disempower workers; stifle
competition; and pose risks to national security.
The Executive Order also recognizes that addressing these issues
requires the coordination of the government, private sector,
academia, and civil society. While most of the requirements in the
Executive Order apply only to the federal government, private
businesses may be impacted by a few requirements that directly
apply to private enterprises, requirements that may apply to
businesses that contract with the federal government for the use of
AI, and any resulting statutes, regulations, or guidance that are
required to be developed under the Executive Order.
Guiding Principles
The Executive Order sets out eight guiding principles and
priorities regarding the responsible development and use of AI:
- Safe and Secure AI. AI systems must undergo
robust, reliable, repeatable, and standardized evaluations to
ensure their safety and security. Policies, institutions, and
mechanisms should be in place to test, understand, and mitigate
risks from these systems before they are used. Developers training
AI systems that may pose a risk to national security, economic
stability, or public health and safety must notify the federal
government and share the results of their safety tests.
- Leadership by the United States. The Executive
Order recognizes that by promoting responsible innovation,
competition, and collaboration, the United States can lead in AI
and unlock its potential. The United States should invest in
education, training, R&D, and capacity to attract the best AI
talent and to promote responsible innovation, competition, and
collaboration. The novel intellectual property rights questions
that arise from the development and use of AI should be addressed,
and unlawful collusion and monopoly over key assets and
technologies should be stopped. The Executive Order also states
that the federal government will promote a fair, open, and
competitive ecosystem and marketplace for AI and related
technologies to allow small developers and entrepreneurs to drive
innovation in this area and prevent unlawful collusion that could
arise in semiconductors, computing power, cloud storage, and the
availability and use of data.
- Support of American Workers. The Executive
Order recognizes that the responsible development and use of AI
requires a commitment to supporting American workers through
education and job training. The impact of AI on the labor force and
workers’ rights should be understood. The Executive Order
therefore states that the administration will seek to adapt job
training and education to prepare American workers for the
deployment of AI while attempting to make sure that AI is not
deployed in ways that undermine rights, worsen job quality,
encourage undue worker surveillance, lessen market competition,
introduce new health and safety risks, or cause harmful labor-force
disruptions. This is in line with the goal of supporting
responsible uses of AI that improve American workers’ lives,
positively augment their work, and help people safely enjoy the
gains from this technology.
- Equity and Civil Rights. The Executive Order
recognizes that AI policies must be consistent with the advancement
of equity and civil rights, including addressing “algorithmic
discrimination.” To further this goal, the Executive Order
promises that the administration will build on steps that have
already been taken by the federal government (such as the NIST AI Risk Management Framework and Executive Order No. 14091) to ensure that AI
complies with all federal laws and promotes appropriate evaluation
and oversight. This includes holding those responsible for
developing and deploying AI accountable to standards that protect
against discrimination and abuse.
- Consumer Protection. Another guiding principle
outlined in the Executive Order is that the interests of Americans,
especially minors, who increasingly use, interact with, or purchase
AI and AI-enabled products in their daily lives must be protected,
and that the use of AI cannot excuse organizations from their legal
consumer protection obligations. As a result, the Executive Order
promises that the federal government will continue to enforce
existing consumer protection laws and enact appropriate safeguards
against fraud, unintended bias, discrimination, invasions of
privacy, and other similar harms related to the use of AI –
especially in the areas of healthcare, financial services,
education, housing, and transportation where errors can be
disastrous. At the same time, the federal government will promote
responsible uses of AI to protect consumers, improve the quality of
goods and services, lower prices, and expand selection and
availability.
- Privacy and Civil Liberties. The Executive
Order outlines that Americans’ privacy and civil liberties must
be protected by ensuring that the collection, use, and retention of
data is lawful, secure, and promotes privacy. Recognizing that AI
makes it significantly easier to extract, re-identify, link, infer,
and act on sensitive information about people and their lives, the
Executive Order directs federal agencies to use available policy
and technical safeguards (including privacy-enhancing technologies)
to protect individual privacy and mitigate legal and societal
risks.
- Responsible Use by Federal Government. The
Executive Order recognizes that the federal government should
manage the risks from its own use of AI and increase its internal
capacity to regulate, govern, and support the responsible use of AI
to deliver better results for Americans. The Executive Order
promises that the federal government will work to make sure that
federal employees will get adequate training to understand the
benefits, risks, and limitations of AI while attempting to
modernize and streamline its operations.
- International Coordination. The Executive
Order recognizes that the federal government should lead the way to
global societal, economic, and technological progress by engaging
with international partners to develop a framework to manage AI
risks, unlock AI’s potential for good, and promote a common
approach to shared challenges. To do this, the Executive Order
promises that the administration will engage with the international
community to develop a responsible framework for the use of
AI.
Regulatory Requirements
The Executive Order sets forth a number of specific requirements
designed to realize these principles. While much of the Executive
Order creates obligations for cabinet members and agency heads, the
results of many of these activities are likely to impact private
businesses. Specifically, the Executive Order requires cabinet
members and agency heads, generally in cooperation with private
industry, to enact policies and procedures and take other actions
that may impact private businesses in the following ways:
- Setting new standards for AI safety and security, including:
  - Requiring that developers of powerful AI share their safety test results and other critical information with U.S. regulators.
  - Developing standards, tools, and tests to evaluate the safety, security, and trustworthiness of AI.
  - Developing standards to protect against the risk of using AI to develop dangerous biological materials.
  - Protecting against fraud and deception by establishing standards and best practices designed to detect AI-generated content and authenticate legitimate content.
  - Developing AI tools to find and fix vulnerabilities in critical software.
- Calling for comprehensive privacy legislation to protect Americans, especially minors, by:
  - Prioritizing federal support for privacy-enhancing technologies, including ones that use AI and that let AI systems be trained while preserving privacy.
  - Funding a research coordination network to strengthen privacy-preserving research and technologies.
  - Developing guidelines for federal agencies to evaluate privacy-enhancing technologies, which may affect private-sector businesses that provide AI and such technologies.
- Advancing equity and civil rights by:
  - Providing clear guidance to landlords, administrators, and federal benefits programs on the proper and responsible use of AI and how to avoid using AI in ways that would increase discrimination.
  - Addressing algorithmic discrimination through training, technical assistance, and coordination among federal agencies investigating civil rights violations related to AI.
  - Ensuring fairness throughout the criminal justice system by developing best practices on the use of AI in criminal justice activities.
- Standing up for consumers, patients, and students by:
  - Advancing the responsible use of AI in the health care and pharmaceutical industries.
  - Creating resources to support educators deploying AI-enabled education tools.
- Mitigating risks to workers from increased workplace surveillance, bias, and job displacement by:
  - Developing principles and practices that address job displacement; labor standards; workplace equity, health, and safety; and data collection, to minimize the harms of AI for workers.
  - Developing reports on the impact of AI on labor markets and identifying options for supporting workers who may face labor disruptions from AI.
- Promoting innovation and competition by:
  - Piloting a tool (the National AI Research Resource) that will provide AI researchers and students with access to AI resources and data, and expanding grants for AI research in vital areas such as health care and climate change.
  - Providing small developers and entrepreneurs with access to technical assistance and resources and helping small businesses commercialize AI technology.
  - Growing the AI workforce by modernizing and streamlining visa criteria, interviews, and reviews for highly skilled immigrants and nonimmigrants.
- Taking a leadership role in supporting the safe, secure, and trustworthy development and use of AI by:
  - Expanding international agreements and frameworks to collaborate on the responsible use of AI, maximizing its benefits and managing its risks.
  - Accelerating the development of international standards.
  - Promoting the safe, responsible, and rights-affirming development and deployment of AI.
- Ensuring responsible and effective government use of AI by:
  - Issuing guidance on the use of AI by federal agencies that includes clear standards to protect the rights of individuals and public safety while improving procurement and strengthening deployment.
  - Helping agencies acquire AI products and services more quickly and cheaply.
  - Accelerating the hiring of AI professionals in the U.S. government and requiring agencies to provide AI training for government workers at all levels.
Key Takeaways
The definition, and therefore the potential impact, is broad.
The Executive Order defines an “AI system” to include any
data system, software, hardware, application, tool, or utility that
operates in whole or in part with AI. The current definition does
not narrow what specific products, software, or geographic
locations are included in an “AI system.” This leaves it
open to interpretation as to how far the new rules and regulations
will reach.
This is the start of potentially rigorous
regulation. Many agencies have been tasked with directives
to develop national standards to ensure the safety and security of
AI. It is possible that some agencies will develop standards or
regulations that will mirror or even conflict with each other.
However, until the agencies begin promulgating regulations, it is
unclear how this will affect AI in the long term.
The administration is not shy about its desire to
promote competition. Not only has the Executive Order
called for small developers and entrepreneurs to be given access to
technical assistance and resources to help commercialize AI
breakthroughs, but it calls for streamlining the process for
noncitizens to conduct research in AI and other critical and
emerging technologies. Further, a pilot of the National AI Research
Resource will be launched as a tool to provide AI researchers and
students access to key AI resources and data.
Concern for safety and security is at the
forefront. The Executive Order focuses on protecting
individuals’ rights and safety, with hopes to mitigate the risk
of discrimination and bias from AI systems. Specifically, there are
directions to advance equity and civil rights, reduce the potential
for misleading the public, and protect workers facing increased
workplace surveillance, bias, and job displacement.
Looking Ahead
Much of the Executive Order directs cabinet members and heads of
agencies to develop guidance or regulations regarding the use of AI
that has yet to be developed. That being said, it also imposes
deadlines on federal agencies to issue reports and draft guidelines
that address key concerns echoed by the administration. As agencies
issue these rules, and if Congress enacts further legislation in
response to the Executive Order, businesses should be attentive to
the new requirements and take the steps necessary to comply.
Companies should ensure they have assistance from counsel to
help guide them through this increasingly complex legal landscape.
Continuing to monitor for such guidance or regulations and
preparing to implement them as appropriate when they are finally
released will go a long way toward maximizing the benefits of AI while
minimizing disruption to operations. In the meantime, organizations
may wish to begin self-audits and monitor systems currently under
development or already deployed for potential data privacy and
cybersecurity risks, erroneous outputs, and bias.
Go Deeper
Foley is here to help explore this new terrain – over the
next several weeks we will examine the Executive Order through the
lens of different industries and areas of the law.
The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.