“Regulation of AI is essential,” Sam Altman, chief executive of technology firm OpenAI, told US senators this May during a hearing on artificial intelligence (AI). Many tech experts and non-experts agree, and the clamour for legal guard rails around AI is rising. This year, the European Union is expected to pass its first broad AI laws after more than two years of debate. China already has AI regulations in place.
But in practice, people still dispute how risky AI is and precisely what needs reining in. Even as California-based OpenAI and other firms have publicly called for more oversight, these companies have resisted some of the EU’s proposed controls and have advocated for international guidance bodies and voluntary commitments, rather than new laws. Meanwhile, the technology is a constantly moving target.
Three key players — the United States, the EU and China — have so far taken different approaches, says Matthias Spielkamp, executive director of AlgorithmWatch, a Berlin-based non-profit organization that studies the effects of automation on society. The EU is highly precautionary — its forthcoming Artificial Intelligence Act focuses on banning some uses and allowing others, while laying out due diligence for AI firms to follow. The United States, where many leading AI firms are based, has so far been the most hands-off. In China, the government is trying to balance innovation with retaining its tight control over corporations and free speech. And everyone is trying to work out to what degree regulation is needed specifically for AI, because existing laws might already address some of its risks.
“Many people are saying this is the most important innovation humanity has ever produced,” says David Wang, chief innovation officer at Wilson Sonsini, a large law firm in Silicon Valley, California. “It’s easy to say ‘Stop’, but so much harder to say, ‘Go in this direction’.”
In a sense, we’re witnessing a grand regulatory experiment.
The EU: regulate by risk
This June, the EU’s parliament passed its draft of the AI Act — a giant piece of legislation that would categorize AI tools on the basis of their potential risk. Although the act might yet change, because its final text must be negotiated by the parliament, the European Commission and the Council of the EU, the current draft would ban the use of software that creates an unacceptable risk. The AI Act defines that as covering most uses in predictive policing, emotion recognition and real-time facial recognition.
Many other uses of AI software would be permitted, but with different requirements depending on their risk. This includes tools that guide decisions in social welfare and criminal justice, as well as those that help firms to choose which prospective employees to hire. Here, the EU act requires developers to show that their systems are safe, effective, privacy-compliant, transparent, explainable to users and non-discriminatory.
For ‘high-risk’ uses, which include software in law enforcement and education, the act requires detailed documentation, automatic logging of every use of the AI system, and testing of the systems for accuracy, security and fairness.
Companies that violate the rules could be fined up to 7% of their annual global turnover; they would have about two years to comply after the act comes into force, which might not be until 2025.
Questions remain about what counts as high risk. Last year, OpenAI presented a white paper to the EU arguing that its large language models (LLMs, such as those behind ChatGPT) and image-generation models should not be considered in this category. That advice is reflected in the current act, which places ‘foundation’ models (general-purpose AI systems, as opposed to those intended for a specific application) in their own category. This includes generative AI tools that can automate the production of realistic text, images and video.
The risks here are different from those for the AI classification systems that might be used in law enforcement. Image-generating tools and LLMs, for instance, can lead to a proliferation of harmful content such as ‘revenge porn’, malware, scams and misinformation, and might ultimately undermine people’s trust in society. What kind of transparency should be required for such tools — and whether it’s possible to enforce it — is a major concern. And because these systems are trained on immense amounts of human-generated text and art, copyright violation is also an unresolved issue.
The EU would require providers of foundation models to compile and publish a summary of the copyright-protected material used in their training data, and to train their models to avoid generating content that breaks the law. The current text of the act also requires disclosure when content has been generated by AI, but this applies only to a particular kind of ‘deepfake’ content that non-consensually depicts real people doing or saying things they didn’t.
A ‘good start’
Whether the EU’s approach is too strong or too weak depends on whom you ask, Spielkamp says.
Daniel Leufer, a policy analyst at Access Now in Brussels, an international organization that defends digital rights, is unmoved by industry complaints that the rules go too far. “I think there’s a lot of bluster from industry about how it’s going to kill all the innovation, and they’ll never be able to comply with it,” he says. “But it’s the usual showboating.”
Joanna Bryson, who researches AI and its regulation at the Hertie School in Berlin, says the companies she has heard from welcome the laws, because compliance isn’t a heavy burden and will improve their products. A spokesperson for Microsoft, for instance, pointed to company blog posts stating that it supports the need for regulation, including the EU’s AI Act.
One critique of the EU approach is that, as long as companies adhere to the rules associated with their application’s risk category, they have a strong defence against liability for harm that could come from their system, Spielkamp says. What’s more, one company might build on a tool from another, which builds on a tool from a third firm, so it is unclear who would be liable for any harm caused.
The AI Act will keep evolving before it passes, says Lilian Edwards, who specializes in Internet law at Newcastle University, UK, and warns that it shouldn’t be overanalysed at this point. But she considers it “a good start”, with some useful technical detail, such as a mention of providers needing to be wary of data ‘poisoning’, in which people hack AI systems by messing with their training data.
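To make that term concrete, the toy sketch below is one simple form of poisoning: an attacker who can tamper with the training set flips a fraction of the labels, quietly degrading the resulting classifier. It is not drawn from the act itself, and the model and dataset are entirely hypothetical stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical data and model, standing in for a real AI system's training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 'Poisoning': an attacker with access to the training data flips 30% of the labels.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.3
y_poisoned = np.where(flip, 1 - y_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy, clean training data:   ", clean_model.score(X_test, y_test))
print("accuracy, poisoned training data:", poisoned_model.score(X_test, y_test))
```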
Edwards would prefer, however, that the act defined high-risk AI by a set of criteria rather than a list of existing use cases, so as to future-proof the legislation.
The EU already has regulations that apply to AI. Its GDPR (General Data Protection Regulation) legislation has put restrictions on the collection of personally identifying data since 2018, for instance. And EU citizens already had the right, through the GDPR, to ‘meaningful information’ about the logic involved in automated decisions (sometimes referred to as the right to explanation), as well as a right to opt out. In practice, however, these rights are currently of limited use: only a few processes are fully automated, such as the placement of adverts, says Michael Birtwistle, who directs law and policy on AI and data at the Ada Lovelace Institute, a London-based research organization that studies issues of technology ethics.
Finally, for recommendation and content-moderation AI algorithms in particular, the EU last year adopted the Digital Services Act, which aims to stem the flow of dangerous content online. Companies must explain to users how their algorithms work and provide alternatives. The act will officially apply from February 2024, although large online platforms — including Google, Facebook, X (formerly known as Twitter) and TikTok — must comply from the end of this month.
The US: ‘the appearance of activity’
In contrast to the EU, the United States has no broad, federal AI-related laws — nor significant data-protection rules.
In October 2022, the White House Office of Science and Technology Policy (OSTP) did release a Blueprint for an AI Bill of Rights, a white paper describing five principles meant to guide the use of AI, as well as potential regulations. The paper says that automated systems should be safe and effective, non-discriminatory, protective of people’s privacy and transparent: people should be notified when a system makes a decision for or about them, be told how the system operates and be able to opt out or have a human intervene.
“Philosophically, [the blueprint and the EU’s AI Act] are very similar in identifying the goals of AI regulation: ensuring that systems are safe and effective, non-discriminatory and transparent,” says Suresh Venkatasubramanian, a computer scientist at Brown University in Providence, Rhode Island, who co-authored the blueprint when he was assistant director for science and justice at the OSTP. Although the US ideas about implementation differ a little from those of the EU, “I’d say they agree on far more than they disagree on,” he adds.
It can be helpful when a country outlines its vision, says Sarah Kreps, director of the Tech Policy Institute at Cornell University in Ithaca, New York, “but there’s a yawning gap between a blueprint and an implementable piece of legislation”.
The United States has also held congressional hearings and presidential meetings related to AI regulation. In July, seven US companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — met with President Joe Biden and announced that they would implement safeguards such as testing their products, reporting limitations and working on watermarks that might help to identify AI-generated material. However, the promises are vague and unenforceable. In a Senate hearing that month, Dario Amodei, the head of Anthropic in San Francisco, California, called for US legislation that mandates auditing and safety testing for AI models; he also stated that he was most worried about bad actors misusing AI systems.
“There’s the appearance of activity,” says Ryan Calo, a founding co-director of the University of Washington Tech Policy Lab in Seattle, “but nothing substantive and binding.”
Last October, one law did make it through Congress. It requires that officials at federal agencies who procure AI products and services be trained on how AI works. This February, Biden also signed an executive order that briefly mentions a requirement to “prevent and remedy … algorithmic discrimination” — but again, it applies only to federal agencies.
Venkatasubramanian says the blueprint is detailed enough that agencies and states are starting to implement its principles in their proposals. For instance, a bill proposed in the California State Assembly (known as AB 331) would require deployers of automated decision tools to register their tool’s purpose with the state and explain how it would be used.
He has also urged the White House to issue an executive order based on the blueprint and on a voluntary AI risk-management framework issued by the US National Institute of Standards and Technology. This would insist that federal agencies using AI comply with certain practices, such as disclosure when AI systems are used, and that they provide intelligible explanations of their decisions.
Federal legislation has been put forward. Lawmakers have previously considered a bill aimed at algorithmic accountability that would ask firms using automation to present impact assessments to the Federal Trade Commission (FTC), for instance. But this did not pass, and it is unclear whether it or other bills would get through Congress in its current state of political division.
And existing rules, enforced by federal agencies, could be extended to cover AI-related products. This April, the US Department of Health and Human Services proposed updating its regulations on electronic health records to give patients access to the factors that influence predictive models. Last year, the Consumer Financial Protection Bureau clarified that firms must explain why they are denying someone credit, even if the decision is made by an algorithm. The FTC has also reminded firms that consumer protection laws that prohibit “unfair or deceptive acts or practices in or affecting commerce” apply equally to AI. In July, it opened an investigation into OpenAI’s data-security practices, and asked the firm to provide details of any complaints that its LLMs had made false or harmful statements about people.
It’s “a challenging space we’re in right now, trying to figure out what we can do with existing regulation”, Venkatasubramanian says. In some cases, new federal rules might be useful, he says. For example, regulations might need to set required levels of transparency in automated systems, or specify how to limit an algorithm’s bias before it can be deployed.
Some US states and cities already have their own AI-related rules. In Illinois, a 2020 act requires firms to announce and explain the use of AI to analyse employment interviews, and the state has long had a law that lets citizens sue over the abuse of biometric data, including scans used for facial recognition. (Facebook paid US$650 million to settle a class-action case under this rule in 2021.) Other states have banned law enforcement from using facial recognition, and some protect personal data and limit automated decisions that are based on that data. “At the state level, you end up with kind of a patchwork of rules,” Kreps says.
As for generative AI, lawsuits about copyright are currently the most important US developments, says James Grimmelmann, director of the Cornell Tech Research Lab in Applied Law and Technology in New York City. The stock-photo company Getty Images sued the firm Stability AI for training its image-generation software, Stable Diffusion, using Getty’s content. And Microsoft and OpenAI have been sued by anonymous litigants for training the code-writing software GitHub Copilot on people’s code. The plaintiffs might be looking only for royalties, but it is possible that a victory could see copyright concerns being used to drive broader legislation on issues such as bias, misinformation and privacy, Grimmelmann says.
Some firms have fought the blueprint, arguing that the industry can easily address concerns with self-regulation, Venkatasubramanian says. But other companies have told him that they support it to prevent a race to the bottom in AI ethics, in which firms undercut each other for competitive advantage. When Altman made his US Senate committee appearance in May, he suggested issuing licences for large models. But he and others have also articulated the risk of large companies guiding regulators towards rules that give them advantages over smaller firms.
Big tech hasn’t yet had to put up much of a fight over AI regulation, Kreps says. “I don’t think that there’s a sense right now that meaningful legislation is on the horizon.”
“A common quip among lawyers is that the Americans innovate on the technology front, and the Europeans innovate on the regulatory front,” Wang says. “Some people say it’s not a coincidence that Europe is so ahead on regulating big tech, because there are fewer hyper-scale tech companies in Europe” and therefore less lobbying.
China: keeping societal control
China has so far issued the most AI legislation — although it applies to AI systems used by companies, not by government. A 2021 law requires firms to be transparent and unbiased when using personal data in automated decisions, and to let people opt out of such decisions. And a 2022 set of rules on recommendation algorithms from the Cyberspace Administration of China (CAC) says that these must not spread fake news, get users addicted to content or foster social unrest.
In January, the CAC began enforcing rules issued in 2022 to tackle deepfakes and other AI-created content. Providers of services that synthesize images, video, audio or text must verify users’ identities, obtain consent from deepfake targets, watermark and log outputs, and counter any misinformation produced.
And the CAC will this month begin enforcing other regulations aimed at generative tools such as ChatGPT and DALL-E. These say that firms must prevent the spread of false, private, discriminatory or violent content, or anything that undermines Chinese socialist values.
“On the one hand, [China’s government] is very motivated to impose social control. China is one of the most censored countries on the planet. On the other hand, there are genuine desires to protect individual privacy” from corporate invasion, says Kendra Schaefer, head of tech policy research at Trivium China, a Beijing-based consultancy that briefs clients on Chinese policy. The CAC did not respond to Nature’s request for comment for this article.
Global uncertainties
Some other countries have made clear their aims for AI regulation. Canada’s government has introduced an Artificial Intelligence and Data Act, which promises to require transparency, non-discrimination and safety measures for what it calls ‘high-impact’ AI systems (these are yet to be defined). The United Kingdom, which is hosting a summit on AI safety later this year, published a white paper in March describing a “pro-innovation” approach, in which it planned no new regulations. The EU’s AI Act, however, could affect firms worldwide, just as the GDPR has affected how global tech firms operate. Some of China’s AI rules could affect how businesses operate elsewhere — although Grimmelmann says companies might amend their AI services for different markets.
There are also discussions over potential international agreements. The Council of Europe (a human-rights organization that is distinct from the Council of the EU) is drafting a treaty that would govern the effect of AI on human rights, but countries might be able to opt out of some of its rules. United Nations Secretary-General António Guterres has also suggested that a new UN body might be needed to govern AI.
AI companies have generally suggested that intergovernmental agreements will be necessary, but are vaguer on what needs to be agreed and how it might be enforced. In July, for instance, London-based Google DeepMind and some of its academic collaborators proposed a global Advanced AI Governance Organization that would set standards and might monitor compliance, although the firm made limited reference to enforcement.
A DeepMind spokesperson said that, where the proposed organization establishes guidelines for domestic governance, it would be up to governments to “incentivize” developers to follow standards. (She also noted that when it came to creating new policies, regulation should focus on applications of AI that could cause physical harm, such as in medical settings or the energy grid, and not be applied indiscriminately to all systems.) Microsoft has said that it endorses various efforts to develop international voluntary codes, arguing that “principle-level guardrails” would help even if they are non-binding. OpenAI declined to comment to Nature on regulation, and instead pointed to blog posts about its voluntary efforts.
Hard to enforce?
Whatever the regulations on AI, countries might find it difficult to enforce them. That applies particularly to rules around explainability, because of the black-box nature of many machine-learning systems, which find their own patterns in data. For those that make classification decisions, it is possible to bombard them with a range of inputs and see how different factors affect what the algorithm decides. But these methods don’t work as well for LLMs, such as ChatGPT. For these, governments might need to use auditing to force companies to be transparent about when they are using generative AI. Still, Venkatasubramanian thinks that “any direct, aggressive enforcement, even for a few entities, will start getting people to stop and think a little bit”.
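As a loose illustration of that input-probing approach, the sketch below treats a trained classifier as a black box and checks how often its decisions flip when each input factor is scrambled in turn. The model and data are hypothetical stand-ins, not any regulator’s actual audit procedure, and a real audit would be far more involved.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in for a deployed classifier that an auditor can only query for predictions.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=1)
black_box = RandomForestClassifier(random_state=1).fit(X, y)

baseline = black_box.predict(X)
rng = np.random.default_rng(1)

# Scramble one input factor at a time and count how many decisions change.
for feature in range(X.shape[1]):
    X_probe = X.copy()
    X_probe[:, feature] = rng.permutation(X_probe[:, feature])
    changed = np.mean(black_box.predict(X_probe) != baseline)
    print(f"factor {feature}: decisions changed for {changed:.0%} of inputs")
```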
It is unlikely that audits would be targeted at non-professional use of generative AI, so, whatever transparency regulations are in place, individuals might secretly use LLMs without being detected.
Some AI developers are more worried about long-term risks, such as dystopias of AIs that escape humanity’s control or that are used by bad actors to create widespread havoc. They have proposed regulations to cover how powerful AI models are developed.
Yet a March letter signed by tech leaders, which called for a six-month pause in the development of powerful AI, seems to have had little effect. Luke Muehlhauser, a senior programme officer at Open Philanthropy, a research and grant-making foundation in San Francisco, has laid out other ideas, including licences for large AI models, remotely operated kill switches on large computing clusters and a reporting system for harms and close calls. The foundation has funded an effort to build an AI incident database.
The use of AI to guide weapons is also a concern, and dozens of countries have called for the UN to regulate lethal autonomous weapons systems. (Military AI isn’t in the scope of the EU’s AI Act.) But the United States is not yet on board. Ahead of a UN meeting on the issue in March, it argued that states don’t yet agree on what counts as autonomous weapons and that it would be better, for now, to have guidelines that are not legally binding.
It is another example of how global coordination on AI regulation seems unlikely, with different societies having starkly contrasting visions of what’s necessary.