The ‘World Cup’ of AI policy kicks off

Happy Halloween and welcome to Eye on AI.

The Center for AI and Digital Policy said it best when it wrote on LinkedIn over the weekend that “This week is truly the World Cup of AI policy.” 

First, the Biden administration yesterday announced its long-awaited executive order on AI, the U.S. government’s first official action targeting the technology. The order will require some AI companies to share safety test results with the federal government before releasing their models and calls on Congress to pass bipartisan data privacy legislation. It also directs several federal agencies to create a sweeping set of security standards and guidance around consumer privacy, authenticating official content, the use of AI in government, preventing potentially harmful AI practices in areas like healthcare, and preventing the exacerbation of discrimination in criminal justice, housing, and other areas. Additionally, it orders a report on the potential labor market implications of AI and will make it easier for highly skilled tech workers to study, work, and immigrate to the U.S.

That same day, the G7, made up of Canada, France, Germany, Italy, Japan, Britain, the U.S., and the EU, also agreed to a voluntary code of conduct for companies developing advanced AI systems. The 11-point code urges companies to identify, evaluate, and mitigate risks across AI systems and to tackle misuse discovered in AI products that are already on the market. It also says companies should invest in robust security controls and share public reports outlining the capabilities, limitations, uses, and misuses of their systems.

On Wednesday and Thursday, the U.K. will host the AI Safety Summit, bringing together international governments, research experts, civil society groups, and leading AI companies to discuss the risks of AI and how they can be mitigated through internationally coordinated action. While we’re still a day out, there’s already been quite a bit of action that reveals how the U.K. is thinking about all of this. OpenAI, Google DeepMind, Meta, Anthropic, Microsoft, and Amazon published their responses to U.K. officials’ request to outline their policies across nine areas of concern (from security controls to model evaluations and red teaming) ahead of the summit, and the U.K. followed up with a paper it’s framing as a “potential menu” of AI safety policies it would like these “frontier” AI organizations to consider. 

“The request for companies to publish their AI Safety Policies and the publication of this supporting document demonstrates this flexible approach by focusing on frontier AI organisations with the highest risks, while recognising that—with technology progressing very quickly—processes and practice are still evolving,” reads the paper. 

It’s not exactly the entire globe putting a spotlight on AI safety this week—and it’s important to note the already-apparent “AI divide,” wherein the Global North is dominating discussions and positioned to reap the economic benefits of AI as workers largely in the Global South perform the low-paid labor making it all possible. But it does feel like an international scramble to keep up with what are undeniably rapid AI advancements.  

Just looking at OpenAI’s products, there’s wide consensus that the improvements from GPT-3.5 to GPT-4 are truly significant. The latest model also outperformed its predecessor on a battery of simulated exams including the bar exam, the LSAT, the SAT, and several AP tests, according to OpenAI. And it’s the same story for the company’s image generators.

“You can see the progress that we have made in the last 18 months, and it is extraordinary,” said tech journalist Casey Newton when describing the jump between DALL-E 2 and DALL-E 3 on the most recent episode of Hard Fork.

And there are no signs of slowing down as AI companies rake in venture capital, race to market, and break through technical hurdles with lightning speed. 

Just this past week, a paper published in Nature described an “AI breakthrough” in which an AI system outperformed ChatGPT and performed about as well as humans at folding newly learned words into its vocabulary and using them in fresh contexts. The neural-network-based system was trained with what the authors call a meta-learning for compositionality (MLC) approach, which essentially has the model practice on a stream of episodes and learn from its mistakes as it goes.

“MLC shows much stronger systematicity than neural networks trained in standard ways, and shows more nuanced behavior than pristine symbolic models,” they wrote in the paper.

While it was a narrow experiment, paper author Brenden Lake told Eye on AI that it “helps to clarify exactly what is needed to achieve systematic generalization.”

“With LLMs, the hope is that systematic generalization will just emerge, but weaknesses remain in current LLMs,” he said. “In this article, we showed how MLC unlocks the powers of systematic generalization through practice.”
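
For readers who want a more concrete feel for the approach, here is a minimal sketch, in Python, of what an MLC-style training episode might look like. The nonce words, color meanings, and the “twice” modifier below are invented for illustration and are not the paper’s actual benchmark; the point is simply that word meanings are re-sampled every episode, so a model can only answer the queries by binding the study examples in context and composing them.

```python
# Illustrative sketch of a meta-learning-for-compositionality (MLC) episode.
# Everything here (the nonce words, the color "meanings," the "twice" rule)
# is a made-up toy, not the paper's benchmark; it only shows the episode
# structure: meanings change every episode, so they must be inferred in context.
import random

PRIMITIVES = ["dax", "wif", "lug", "zup"]      # nonce words
MEANINGS = ["RED", "GREEN", "BLUE", "YELLOW"]  # output symbols

def sample_episode(n_study=3, n_query=2):
    """Build one episode with a freshly sampled word -> meaning mapping."""
    mapping = dict(zip(PRIMITIVES, random.sample(MEANINGS, len(PRIMITIVES))))

    def interpret(phrase):
        out = []
        for word in phrase.split():
            if word == "twice":        # fixed compositional rule: repeat the last symbol
                out.append(out[-1])
            else:
                out.append(mapping[word])
        return " ".join(out)

    study = [(w, interpret(w)) for w in random.sample(PRIMITIVES, n_study)]
    queries = [(f"{w} twice", interpret(f"{w} twice"))
               for w in random.sample(PRIMITIVES, n_query)]
    return study, queries

if __name__ == "__main__":
    study, queries = sample_episode()
    print("Study examples:", study)
    print("Held-out compositional queries:", queries)
    # In MLC, a sequence model is optimized across many such episodes, so it
    # learns how to bind novel words from the study set and reuse them
    # compositionally, rather than memorizing any fixed vocabulary.
```

In the paper’s framing, it is this episode-level practice, rather than any fixed dataset, that pushes the network toward systematic generalization.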

Especially against the backdrop of such rapid progress, how these international efforts play out could be critical. Only time will tell if the AI companies make good on any of these voluntary commitments or order off the U.K.’s “menu” of suggested safety policies, but there’s no doubt that the whole world is watching.


Programming note: Gain vital insights on how the most powerful and far-reaching technology of our time is changing businesses, transforming society, and impacting our future. Join us in San Francisco on Dec. 11–12 for Fortune’s third annual Brainstorm A.I. conference. Confirmed speakers include such A.I. luminaries as Google Assistant and Google Bard GM Sissie Hsiao, IBM Chief Privacy and Trust Officer Christina Montgomery, Walmart International SVP and CTO Sravana Karnati, Pfizer Chief Digital and Technology Officer Lidia Fonseca, and many more. Apply to attend today!

And with that, here’s the rest of this week’s AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

AI IN THE NEWS

OpenAI announces a new team to study ‘catastrophic’ AI risks. The team, called Preparedness, “will help track, evaluate, forecast and protect against catastrophic risks spanning multiple categories” including cybersecurity, individualized persuasion, CBRN threats (chemical, biological, radiological, and nuclear), and ARA (autonomous replication and adaptation). It will also be charged with formulating a “risk-informed development policy” for the company, according to a blog post.

Google expands its bug bounty program to include generative AI-specific threats. Acknowledging that generative AI raises concerns that are new and different from those of traditional digital security, Google announced in a blog post that it’s expanding its Vulnerability Rewards Program to include generative AI-specific threats and shared guidelines around what’s in scope for reward collection. Discovering training data extraction that leaks private, sensitive information is within the scope of the bounty program, for example, while using an AI product to do something potentially harmful that is already possible with other tools is not.

Researchers from MIT and a dozen other institutions launch platform to tackle data transparency in AI. That’s according to VentureBeat. The researchers audited around 2,000 of the most widely used fine-tuning data sets, resulting in what the authors say is “the single largest audit to date of AI datasets.” They created the Data Provenance Explorer as an interactive platform to make the results accessible, allowing developers, researchers, and journalists to filter through the datasets for legal and ethical considerations. 

Grammarly’s new generative AI feature learns your personal voice and style. A new feature in Grammarly, called “personalized voice detection and application,” can automatically detect a person’s unique writing style, create a “voice profile,” and then rewrite any text in the person’s style, according to TechCrunch. It’s expected to launch by the end of the year for business-tier subscribers, and it raises a lot of questions about the potential harms and labor issues of being able to replicate someone’s written voice.

AI-created child sexual abuse images ‘threaten to overwhelm internet.’ That’s according to the Guardian, which reported on the Internet Watch Foundation’s findings of nearly 3,000 AI-made abuse images that broke U.K. law. The organization said existing images of real-life abuse victims are being built into AI models, specifically Stable Diffusion, to produce new AI-generated depictions. It also found instances of AI being used to “nudify” pictures of clothed children found online and to “de-age” celebrities and depict them in abuse scenarios.

EYE ON AI RESEARCH

Evaluating AI governance. Speaking of AI governance and voluntary commitments, a paper set to be published this Wednesday that examines companies’ AI governance found a “low volume of AI ethics implementation as well as a lack of any significant improvements in 2022,” according to a preview shared by paper author Ravit Dotan on LinkedIn.

“One of the main conclusions in the paper is that we found no evidence that such commitments impact companies’ implementation of risk management practices,” she wrote in her post.

The study further found that this discrepancy between governance signals and actual implementation can contribute to “ethics washing” and urges that it’s “crucial to incentivize, ideally require,” companies to publicly provide evidence of their risk mitigation efforts for external review. You can check back on Dotan’s LinkedIn for the full paper.

FORTUNE ON AI

Microsoft gains $70 billion and Alphabet loses more than twice as much as Satya Nadella steals Sundar Pichai’s thunder on AI —Paolo Confino

What Microsoft’s earnings tell us about AI and cloud trends —Sheryl Estrada

Amazon’s big Bedrock bet —Rachyl Jones

Wall Street is demanding financial results that support all the AI hype. Microsoft’s latest earnings finally delivered —Rachyl Jones

Zoom meetings make you anxious? This company is using AI to make you feel more confident on your next video call —Megan Arnold and Rachyl Jones

CEOs rank HR as the lowest AI investment priority —Paige McGlauflin and Joseph Abrams

BRAINFOOD

Raging against the AI. Here’s the most punk rock AI story of the week. A research team out of the University of Chicago created a tool that enables artists to “poison” AI models trained on their work without permission. Called Nightshade, the tool lets artists add invisible changes to the pixels in their art before uploading it online. If the art is scraped into a training set for image generators along the lines of DALL-E, Stable Diffusion, or Midjourney, it can break the resulting model in unpredictable ways, such as making images of dogs come out as cats, according to MIT Technology Review, which got an early look at the research.

“The researchers tested the attack on Stable Diffusion’s latest models and on an AI model they trained themselves from scratch. When they fed Stable Diffusion just 50 poisoned images of dogs and then prompted it to create images of dogs itself, the output started looking weird—creatures with too many limbs and cartoonish faces. With 300 poisoned samples, an attacker can manipulate Stable Diffusion to generate images of dogs to look like cats,” reads the article. (We already know about AI hallucinations; this is perhaps akin to intentionally dosing an AI with a hallucinogen.)
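
To make the mechanics a bit more concrete, below is a heavily simplified Python sketch of the general idea behind pixel-level poisoning: nudging an image within a small, hard-to-notice per-pixel budget before it is uploaded. This is not the actual Nightshade algorithm, which optimizes its perturbations against specific models and target concepts; the random noise, the epsilon budget, and the file names here are placeholders for illustration only.

```python
# Simplified illustration of bounded, "invisible" pixel perturbation.
# NOT the Nightshade attack: Nightshade optimizes its changes to mislead a
# specific model toward a target concept, whereas this sketch only shows the
# budget-and-clip mechanics with random noise as a stand-in.
import numpy as np
from PIL import Image

def perturb_within_budget(img_path: str, out_path: str, epsilon: int = 4) -> None:
    """Shift each RGB channel of each pixel by at most +/- epsilon."""
    img = np.asarray(Image.open(img_path).convert("RGB"), dtype=np.int16)

    # A real poisoning attack would compute this perturbation deliberately;
    # random noise is used here purely to demonstrate the bounded edit.
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)

    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    # Save losslessly so the tiny perturbation survives (JPEG compression would smear it).
    Image.fromarray(poisoned).save(out_path, format="PNG")

if __name__ == "__main__":
    # Hypothetical file names for illustration.
    perturb_within_budget("artwork.png", "artwork_shaded.png")
```

The small per-pixel budget is what keeps the change imperceptible to a human viewer while still riding along into any dataset that scrapes the file.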

Between the lawsuits and now technical counterattacks, it’s clear artists are going to exhaust every resource possible to stop exploitation of their work and labor for AI. And can you blame them? Even their efforts to opt out have proved fruitless, with more than a dozen alleging in a Wired article this past week that Meta’s AI data deletion request process is just a “fake PR stunt” and it’s actually impossible to opt out of having their work used in the company’s training data. 

When Wired asked Meta about mounting frustration with this process, the company responded that the data deletion request form is not an opt-out tool and emphasized that it has no intention of deleting information that’s on its own platforms. 

“I think there is some confusion about what that form is and the controls we offer,” the spokesperson said.
