On October 30, 2023, the Biden Administration issued the groundbreaking Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence[1] (Order), which sets in motion a comprehensive US strategy for the responsible development and use of artificial intelligence (AI).
The Order goes beyond prior Administration actions on AI in numerous respects. The broad-ranging and robust Order directs US executive departments and agencies (agencies), and encourages independent agencies, to develop standards, frameworks, guidelines, and best practices in anticipation of using their existing authorities to regulate AI. Agencies must take specific steps on virtually every federal law, regulation, and policy that bears on the responsible use of AI.
While recognizing the benefits that can be derived from the use of AI, the Order highlights the numerous known risks associated with AI’s potential misuse: damage to national security, critical infrastructure, and privacy; fraud, discrimination, bias, and disinformation; and workforce displacement and the stifling of competition.
The Order places urgency on advancing a set of principles, standards, and priorities designed to strike a balance between the need to encourage innovation and the need to build effective guardrails to protect against societal harms and ensure the safe and secure development and use of AI.
Perhaps the most important element of the Order in the short term is the requirement that the Commerce Department put in place, by January 29, 2024 (i.e., within 90 days of the October 30, 2023 Order), binding requirements that private sector developers of the most powerful AI models report to Commerce the results of those models’ performance in AI red-team testing.[2] Commerce also must issue proposed regulations on certain transactions with foreign persons with respect to AI models that have potential capabilities that could be used in malicious cyber-enabled activity.
Significantly, one of the core principles of the Order recognizes that AI is a global technology and that there is a strong need to develop, with international allies, a framework to both manage AI’s risks and unleash its benefits. In effect, the US is taking a leadership role by pioneering its own initial standards and safeguards while engaging with other countries in efforts to structure a more global framework over time. Whether a more global approach will gain traction remains to be seen. By adopting certain domestic rules and standards now, the US is seeking to encourage and give shape to global AI rules, while assuming some risk that certain parties in the AI ecosystem will move their activities offshore to avoid the reach of US AI governance.
Ensuring Safe and Reliable AI: Dual-Use Foundation Models. One of the most notable, and binding, components of the Order is the imposition of reporting requirements on private companies developing “dual-use foundation models,” which the Order generally defines as powerful, self-supervising AI trained on broad data with the capacity to perform tasks that pose serious risks to US national defense and critical infrastructure.[3] (Sec. 3(k))
More specifically, under the Order, by January 29, 2024, the Commerce Department must require companies that develop or intend to develop dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records, regarding the following:
- ongoing or planned activities related to training, developing, or producing such dual-use foundation models, including the physical and cyber security protections taken to assure the integrity of such training against sophisticated threats;
- the ownership and possession of the model weights of such dual-use foundation models, and physical and cyber security measures taken to protect those model weights; and
- the results of any such model’s performance in relevant AI red-team testing based on guidance developed by NIST and, prior to the development of such NIST guidance, the results of any red-team testing that the company has conducted relating to certain types of specified risks (e.g., lowering the barriers to entry for development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities; the use of software or tools to influence real or virtual events; the possibility of self-replication or propagation; and associated measures to meet safety objectives) (see the illustrative schema following this list). (Sec. 4.2(a))
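To make the scope of this red-team reporting concrete, the following is a minimal, hypothetical schema for how a developer might organize results for submission. The class and field names are our own illustration; neither the Order nor Commerce prescribes any submission format.

```python
# Hypothetical schema for organizing red-team results for submission.
# Field names and structure are illustrative only; neither the Order nor
# Commerce prescribes a reporting format.
from dataclasses import dataclass, field


@dataclass
class RedTeamFinding:
    risk_category: str     # e.g., "biological weapons barriers to entry",
                           # "software vulnerability discovery",
                           # "influencing real or virtual events",
                           # "self-replication or propagation"
    test_description: str  # what the red team attempted
    outcome: str           # observed model behavior
    mitigations: str       # associated measures taken to meet safety objectives


@dataclass
class RedTeamReport:
    model_name: str
    nist_guidance_applied: bool  # pre-guidance testing may still be reportable
    findings: list[RedTeamFinding] = field(default_factory=list)
```

However Commerce ultimately structures the requirement, developers will want internal records organized along these lines so that reportable results can be extracted on an ongoing basis.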
Commerce also must require reporting by companies, individuals, or other organizations or entities with respect to the acquisition, development, or possession of a potential large-scale “computing cluster,” including the existence and location of such clusters and the amount of computing power available in each cluster.
Additionally, the Order requires Commerce, by January 29, 2024, to propose rules that impose a number of reporting and related obligations on US Infrastructure as a Service (IaaS) Providers (i.e., major US cloud providers) with respect to certain of their dealings with foreign persons, and in particular foreign resellers of their IaaS Products. (Sec. 4.2(c)) Among other things, the proposed regulations would require US IaaS Providers to: 1) submit a report to the Secretary of Commerce when a foreign person transacts with the Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity; and 2) prohibit any foreign reseller of their US IaaS product from providing those products unless the reseller submits such a report to the US IaaS Provider, which the Provider must in turn submit to Commerce. It remains to be seen when the proposed regulations would become binding, but the fact that the Order requires them merely to be proposed indicates that they will in all likelihood not be in force as of the January 29, 2024 deadline.
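The two-step reporting chain for resellers is easier to see in a sketch than in prose. The following minimal illustration, with entirely hypothetical types and field names, shows the relay the proposed rules contemplate: the foreign reseller reports to the US IaaS Provider, which in turn forwards the report to Commerce.

```python
# Minimal sketch of the reporting relay contemplated by Sec. 4.2(c).
# All class and field names are hypothetical illustrations; the proposed
# rules will define the actual contents and format of these reports.
from dataclasses import dataclass


@dataclass
class ForeignTransactionReport:
    foreign_person: str     # identity of the foreign customer
    model_description: str  # large AI model with potential malicious
                            # cyber-enabled capabilities
    via_reseller: bool = False


class USIaaSProvider:
    """A US cloud provider with two distinct reporting duties."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.reports_to_commerce: list[ForeignTransactionReport] = []

    def report_direct_transaction(self, report: ForeignTransactionReport) -> None:
        # Duty 1: report the Provider's own transactions with foreign persons.
        self.reports_to_commerce.append(report)

    def relay_reseller_report(self, report: ForeignTransactionReport) -> None:
        # Duty 2: a foreign reseller may offer the product only if it reports
        # to the Provider, which must forward that report to Commerce.
        report.via_reseller = True
        self.reports_to_commerce.append(report)
```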
Undoubtedly, the pending Commerce rules will be groundbreaking in nature. Several things can be noted at this juncture:
- The Order invokes the Defense Production Act (DPA), which affords the President certain authorities related to national defense and the protection of critical infrastructure. The DPA has historically been invoked in wartime, and sporadically in peacetime, to establish defense priorities and resource allocations. It was more broadly utilized during the recent COVID-19 crisis to afford priority to contracts for the development of vaccines and personal protective equipment and to address supply chain issues. Its use here to create reporting requirements for AI is novel: the DPA generally is used to create priorities in government contract performance, whereas the companies involved here are primarily developing AI for the private sector rather than for government use. Nevertheless, the DPA is broad in scope, and federal courts are loath to interpret such national security statutes narrowly in practice. Moreover, other federal statutes also could support such reporting requirements, and Congress is currently working on creating a legislative framework for AI.
- The reporting requirements, once issued by Commerce, will need to be closely reviewed by companies for their full scope and application. Numerous definitions relating to the coverage of “companies” and various foreign entities will determine such questions as whether the requirements apply to offshore AI development by US firms, whether foreign firms developing AI models in the US are subject to the requirements, and so on.
- Significantly, the Order does include some key initial definitions of technical conditions for AI models and clusters subject to the new reporting requirements (Sec. 4.2(b)), and directs Commerce to use these initial de facto standards until it defines and periodically updates its own set of such standards. The definition of a specified quantity of computing power as a reference point for AI models is particularly noteworthy, and can be expected to evolve over time as AI models become widely available globally and more powerful. This type of quantitative standard is reminiscent of the standards Commerce has employed for many years for the adoption of export controls on computers; an illustrative application of the compute thresholds appears after this list.
- Finally, the requirement that companies turn over the “results” of a broad range of red-team testing undoubtedly will raise sensitivities among companies that view such materials as highly proprietary. Commerce will need to consider putting in place measures to maintain the confidentiality of such information and limit its transmission within the Federal Government. (Sec. 4.2(a)(i)(C))
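For a sense of how the quantitative triggers noted above operate, the sketch below applies the Order’s initial technical thresholds in Sec. 4.2(b): roughly 10^26 integer or floating-point operations for a model’s training run (10^23 where training primarily uses biological sequence data), and a theoretical peak of 10^20 operations per second for a computing cluster. The 6 × parameters × tokens estimate of training compute is a common rule of thumb from the machine-learning literature, not something the Order specifies.

```python
# Illustrative only: a rough screen against the Order's initial technical
# thresholds (Sec. 4.2(b)). The 6 * params * tokens estimate of training
# compute is a community rule of thumb, not part of the Order.

MODEL_THRESHOLD_OPS = 1e26             # training-compute trigger for models
BIO_THRESHOLD_OPS = 1e23               # lower trigger for biological sequence data
CLUSTER_THRESHOLD_OPS_PER_SEC = 1e20   # theoretical peak for a computing cluster


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate total training operations as ~6 * N * D."""
    return 6.0 * parameters * training_tokens


def model_may_be_covered(parameters: float, training_tokens: float,
                         biological_sequence_data: bool = False) -> bool:
    threshold = BIO_THRESHOLD_OPS if biological_sequence_data else MODEL_THRESHOLD_OPS
    return estimated_training_ops(parameters, training_tokens) >= threshold


def cluster_may_be_covered(peak_ops_per_second: float) -> bool:
    return peak_ops_per_second >= CLUSTER_THRESHOLD_OPS_PER_SEC


if __name__ == "__main__":
    # A hypothetical 70B-parameter model trained on 2 trillion tokens:
    # ~8.4e23 operations, well under the 1e26 trigger.
    print(model_may_be_covered(70e9, 2e12))   # False
    # A hypothetical 1-trillion-parameter model on 20 trillion tokens:
    # ~1.2e26 operations, over the trigger.
    print(model_may_be_covered(1e12, 20e12))  # True
```

Because the Order contemplates that Commerce will define and periodically update its own thresholds, any internal screening logic along these lines should treat the constants as configurable rather than fixed.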
Ensuring the Safety and Security of AI Technology: With the goal of protecting Americans’ safety and security, this section of the Order sets out a broad range of requirements that will engage dozens of departments and agencies in safeguarding AI’s use and development.
- Guidelines and Standards – The Order directs the Secretary of Commerce, acting through NIST, to establish guidelines and best practices for safe, secure, and trustworthy AI systems. Commerce must also develop guidance and benchmarks for auditing and evaluating AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity. As part of this effort, NIST is also instructed to develop companion resources to its AI Risk Management Framework, which we summarized here, and to the Secure Software Development Framework.[4] (Sec. 4.1)
- Chemical, Biological, Radiological, or Nuclear (CBRN) Risks – To better understand and mitigate the risk of AI being misused to promote CBRN threats, such as biological weapons, the Department of Energy is instructed to consult with a broad range of experts within the Federal Government and in private AI laboratories, academia, and third parties to evaluate the potential for AI to be misused to develop CBRN threats, while also considering AI’s application in countering those threats, and to provide a report to the President within 180 days. (Sec. 4.4)
- Cybersecurity and Critical Infrastructure – The Order directs the heads of each agency with authority over critical infrastructure to provide DHS with an assessment of potential risks related to the use of AI in critical infrastructure, including whether the use of AI makes critical infrastructure more vulnerable to critical failures, physical attacks, and cyber attacks. Independent agencies are encouraged to contribute to this effort. DHS must develop security guidelines for use by infrastructure owners and operators and, with the heads of relevant agencies, DHS must take steps to mandate such guidelines through regulatory or other action, as appropriate. And the Department of Defense and DHS must conduct an operational pilot project to test AI systems, such as large language models, to discover and remediate vulnerabilities in critical US government software, systems, and networks. (Sec. 4.3)
- Synthetic Content Created or Modified by AI – To improve transparency and increase public trust in synthetic content produced by AI systems, and to establish the authenticity and provenance of digital content, the Order requires the Department of Commerce to identify standards, tools, methods, and practices for authenticating content and tracking its provenance, and for detecting and labeling synthetic content, for example through watermarking. (Sec. 4.5)
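The standards Commerce will identify do not yet exist in final form, but a toy example conveys the basic idea of provenance labeling: attach a verifiable tag to content at creation so that it can be authenticated later. The HMAC-based scheme below is a deliberately simplified sketch of our own; real provenance standards rely on far richer, cryptographically signed manifests.

```python
# Toy illustration of provenance labeling for synthetic content:
# tag content with a keyed hash at creation, then verify the tag later.
# Purely illustrative; real standards use signed provenance manifests.
import hashlib
import hmac

SECRET_KEY = b"demo-key-not-for-production"


def label_synthetic_content(content: bytes) -> str:
    """Produce a provenance tag asserting 'this content is AI-generated'."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()


def verify_label(content: bytes, tag: str) -> bool:
    """Check that a tag matches the content (i.e., the label is authentic)."""
    expected = label_synthetic_content(content)
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    image_bytes = b"...synthetic image bytes..."
    tag = label_synthetic_content(image_bytes)
    print(verify_label(image_bytes, tag))        # True
    print(verify_label(b"tampered bytes", tag))  # False
```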
Protecting Privacy: To mitigate privacy risks potentially exacerbated by AI and to protect against the misuse of personal information and data, the Order tasks the Director of OMB with evaluating the types of commercially available information (CAI) that agencies procure, particularly CAI procured from data brokers, and with assessing how CAI is collected, used, disseminated, and disposed of, in order to inform potential guidance to agencies. A request for information (RFI) must be issued seeking input on revisions to current guidance on how agencies implement privacy provisions in the E-Government Act of 2002. Federal agencies must evaluate the effectiveness of their privacy enhancing technologies, and the Secretary of Energy is directed to create a Research Coordination Network committed to privacy research and privacy enhancing technologies. (Sec. 9)
Supporting Workers: With the evolving capabilities of AI, there are growing concerns about AI-related workforce disruptions. The Order directs the Council of Economic Advisers to prepare a report to the President within 180 days on the effects of AI on the labor market. The Secretary of Labor is directed to evaluate the steps necessary for the Federal Government to address AI-related workforce disruptions and to submit a report to the President analyzing the ability of agencies to support workers displaced by the adoption of AI. The report must assess how current and former federal programs designed to assist workers facing job disruptions, such as unemployment insurance, could be used to address possible future AI-related disruptions, and must address potential legislative measures.
The Order further requires that the Secretary of Labor, in consultation with labor unions and workers, develop and publish principles and best practices for employers to use to mitigate AI’s potential harms to an employee’s well-being. Among other topics, the principles and best practices must cover the implications for workers of employers’ AI-related collection and use of data about the workers, including transparency, engagement, management, and activity protected under worker-protection laws. The Order also requires the Secretary of Labor to support employees whose work is monitored or augmented by AI by ensuring that employees are compensated for their time worked. (Sec. 6)
Advancing Equity and Civil Rights: In response to strong evidence showing how the irresponsible use of AI can lead to unlawful discrimination and other harms, the Order directs the Attorney General to submit a report to the President on the use of AI in the criminal justice system and to recommend best practices, safeguards, and appropriate limits on AI use in such areas as sentencing, parole, bail, police surveillance, prison-management tools, and forensic analysis. Agencies are directed to use their civil rights and civil liberties offices and authorities to prevent and address discrimination in the use of automated systems, including algorithmic discrimination. The Order calls on HHS to publish a plan addressing the use of algorithmic systems by states and localities in the distribution of public benefits, in order to assess unjust denials, processes for appealing denials to human reviewers, and whether algorithmic systems achieve equitable and just outcomes. (Sec. 7)
Promoting Innovation and Competition: To attract AI talent to the US, the Order instructs the Secretary of State and DHS to streamline the visa process, create a program to identify talent abroad, and initiate policy changes that modernize pathways to immigration for experts in AI and other critical and emerging technologies. The Order directs the NSF to launch a pilot program implementing the National AI Research Resource by creating and distributing AI-related research resources and tools. The Secretary of Labor is directed to publish an RFI requesting information on AI and STEM jobs that need qualified candidates. (Sec. 5.2(a)(i)) Other provisions include:
- Creating Institutes and Engines – NSF must establish one NSF Regional Innovation Engine dedicated to AI-related work and at least four new National AI Research Institutes and, with the Department of Energy, enhance training programs for scientists, with the goal of training 500 new AI researchers by 2025. (Sec. 5.2(a)(ii)-(iii), (b))
- Mitigating Climate Change – The Secretary of Energy is directed to publish a report on ways that AI can improve planning, investment, and operations for the electric grid. (Sec. 5.2(g))
- Patent and Trademark – The US Patent and Trademark Office is directed to publish guidance for patent examiners and applicants addressing inventorship and the use of AI, including generative AI, in the inventive process. (Sec. 5.2(c)(i))
- Copyright – The US Copyright Office is directed to prepare recommendations to the President on potential executive actions relating to copyright and AI that will address the scope of protection for works produced using AI and the treatment of copyrighted works in AI training. (Sec. 5.2(c)(iii))
Advancing Federal Government Use of AI: AI has the potential to improve agencies’ ability to deliver results. The Order advances the coordinated use of AI across the Federal Government by directing the Director of OMB to assemble an interagency council to develop guidance that strengthens the effective and appropriate use of AI by agencies and manages risks from AI. Each agency must designate a Chief Artificial Intelligence Officer to coordinate the agency’s use of AI and implement required risk management practices for uses of AI that impact people’s rights or safety. To advance the responsible and secure use of generative AI, agencies must put appropriate safeguards in place, including limiting access, as necessary, to specific generative AI services based on specific risk assessments and guidelines, training, and the negotiation of appropriate terms of service with vendors. The Order also directs the Federal Government to increase its hiring of top AI talent at federal agencies. (Sec. 10)
Consumers, Patients, and Students: The Order mandates that AI be developed and used in the human-services, healthcare, and education sectors in ways that efficiently enhance access to and the affordability of resources while also protecting citizens from fraud, discrimination, and threats. At their discretion, independent regulatory agencies are encouraged to take additional steps to protect consumers from fraud and discrimination. (Sec. 8)
Strengthening American Leadership Abroad: To strengthen US leadership of global efforts to meet AI’s challenges and potential, the Secretary of State is directed to lead efforts to establish a strong international framework for managing the risks and harnessing the benefits of AI, including by encouraging international allies and partners to support voluntary commitments similar to those that US companies have made. The Secretary of Commerce is directed to advance responsible global technical standards for AI development and to establish a plan for global engagement. To address global AI risks to critical infrastructure, DHS is ordered to lead efforts with international allies and partners to enhance their ability to respond to, and recover from, potential critical infrastructure disruptions resulting from the incorporation of AI into critical infrastructure systems or the malicious use of AI. (Sec. 11)
What’s Next
- Companies developing highly sophisticated dual-use foundation models and US IaaS providers should be on the lookout for rules from Commerce implementing the private sector reporting requirements under the DPA, and should consider providing their views to Commerce informally as it writes its new rules.
- Expect the various departments and agencies to roll out mandated guidelines, standards, and best practices for responsible AI over the next year, with the strong implication that regulation may follow.
- Companies developing, using, or selling AI-related technology should consider adding flexible terms to their contracts that will accommodate the expected wave of new regulations, particularly with regard to content authentication and guardrails.
- Consider whether Congress, in the not-too-distant future, will enact a law mandating that the private sector adopt new safeguards to ensure the safe, secure, and trustworthy development and use of AI.
We will continue to monitor developments under the Order and related policies.
________
If you have any questions about this Legal Alert, please feel free to contact any of the attorneys listed or the Eversheds Sutherland attorney with whom you regularly work.
[1] The White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
[2] AI red-team testing must be based on guidelines developed by the National Institute of Standards and Technology (NIST), in coordination with the Department of Energy and the Department of Homeland Security (DHS), and include guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models. NIST, the National Science Foundation (NSF), and the Department of Energy shall develop and help to ensure the availability of testing environments, such as testbeds, to support these goals, as well as to support the design, development, and deployment of privacy enhancing technologies (PETs). (Sec. 4.1(ii))
[3] The Order defines a dual-use foundation model as an “AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters…”
[4] NIST, Secure Software Development Framework, https://csrc.nist.gov/Projects/ssdf.