Cops are falling in love with AI, and it’s much deeper than facial recognition

Hello and welcome to Eye on AI.

If you’ve spent any time on social media in the past month or so, chances are you’ve seen videos of New Yorkers trolling the new NYPD “robocop” that’s now patrolling the Times Square subway station. Standing 5-foot-3 and roaming around like a goofy sidekick in Star Wars, the Nvidia AI-powered K5 robot from Knightscope is certainly an easy target for mockery. 

But while it’s arguably the most in-your-face example of how law enforcement agencies are tapping AI for policing, the K5 is just the tip of a much more discreet iceberg. Police across the country—and the world—are increasingly adopting AI systems that, lacking meme appeal, are likely to fly under the radar yet may be far more consequential. This past week, reports in both The Markup and 404 Media revealed more about these tools and how police are using them. Police interest in AI is nothing new, but it’s clearly ramping up with the recent AI boom.

Ese Olumhense’s dispatch from “Cop Con” in The Markup was particularly enlightening. She attended the San Diego conference, officially called the International Association of Chiefs of Police (IACP) conference, where nearly 700 vendors demonstrated their new tech for policing to over 16,000 attendees. The attendees were largely law enforcement officials not only from the U.S., but also from countries including Saudi Arabia, Japan, Brazil, Canada, Jamaica, Indonesia, Ireland, the Dominican Republic, the U.K., and many more. 

Olumhense reports that the technology she saw largely fell into three buckets. There were robotic tools like drones and police robots, including a drone that can be equipped with a window-breaking tool. There were also emerging and enhanced surveillance technologies, like automatic license plate readers. And lastly, a whole lot of AI.

“Artificial intelligence and algorithmic products were, predictably, among the tools I saw the most,” she wrote in The Markup. For example, she saw companies advertising voice analysis algorithms that claim to detect fraud risks based on speech, as well as various tools that purportedly aggregate and analyze data to generate insights, drawing on sources like social media, jail systems, video feeds, and court records.

Facial recognition technology, however, was noticeably missing from the conference. 

“I didn’t really see it brought up that often in panels. People just weren’t touching it,” Dave Maass, director of investigations at the nonprofit Electronic Frontier Foundation, who has attended three IACP conferences, told The Markup in another version of the story. “And I’m not sure that’s because the technology has become pretty toxic in public discourse, if it’s just not as useful and hasn’t lived up to promises—who knows, but people were not into it.” 

Indeed, facial recognition technology has been shown to be both racially biased and frequently wrong. Several Black Americans have already been wrongly arrested, and even jailed, based solely on inaccurate facial recognition matches, according to the Associated Press, the New York Times, and many other recent reports.

Beyond the emerging tech police are eyeing and buying, a new report from 404 Media uncovered an “AI-powered system that is rapidly springing up across small town America and major cities alike.” Essentially, the system allows law enforcement to link together all of a town’s security cameras—including those that are government-owned as well as privately owned cameras at businesses and homes—into one central hub.

“Fusus’ product not only funnels live feeds from usually siloed cameras into one central location, but also adds the ability to scan for people wearing certain clothes, carrying a particular bag, or look for a certain vehicle,” reads the article, which also discusses the absence of clear policies, auditable access logs, and community transparency about the capabilities and costs of the system. 

Especially given AI’s documented racial biases, the use and possible misuse of the technology in policing has been a driving part of the AI conversation for years. And as the tech community, and now governments and the public at large, debate whether to prioritize near-term or long-term risks of AI, surveillance concerns and the potential for AI to exacerbate the racial biases already rampant in policing are not just near-term worries; they are playing out right now.

In his AI executive order last week, President Joe Biden included a direction to “ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.” While this is just a directive and doesn’t carry the force of legislation, its inclusion in the order is significant. It’s also worth noting that among the “surge” of state-level AI laws passed, proposed, and going into effect this year, none explicitly address policing or law enforcement. But clearly, governments are just getting started.


Programming note: Gain vital insights on how the most powerful and far-reaching technology of our time is changing businesses, transforming society, and impacting our future. Join us in San Francisco on Dec. 11–12 for Fortune’s third annual Brainstorm A.I. conference. Confirmed speakers include such A.I. luminaries as Google Assistant and Google Bard GM Sissie Hsiao, IBM Chief Privacy and Trust Officer Christina Montgomery, Walmart International SVP and CTO Sravana Karnati, Pfizer Chief Digital and Technology Officer Lidia Fonseca, and many more. Apply to attend today!

And with that, here’s the rest of this week’s AI news.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

AI IN THE NEWS

OpenAI unveils GPT-4 Turbo, the ability to create custom models, and more new features. The company announced a ton of new models, capabilities, and developer products Monday at its Dev Day conference. Highlights include: a new GPT-4 Turbo model the company says is more capable and less expensive; GPTs, a way for anyone to create custom versions of ChatGPT for their own purposes; and Assistants API, designed to help developers build “assistants” into their own applications. 
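
For a sense of what the developer-facing piece of these announcements looks like in practice, here’s a minimal sketch, not from OpenAI’s announcement itself, of calling the new GPT-4 Turbo model through the OpenAI Python SDK. The model identifier shown, “gpt-4-1106-preview,” was the preview name used at launch; the prompt and everything else here is purely illustrative.

```python
# Minimal sketch: calling the GPT-4 Turbo preview model announced at Dev Day.
# Assumes the OpenAI Python SDK (v1+) is installed and OPENAI_API_KEY is set;
# the example prompt is illustrative, not taken from the newsletter.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview identifier at launch
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "Summarize this week's AI policy news in two sentences."},
    ],
)

print(response.choices[0].message.content)
```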

U.S. District Judge dismisses artists’ copyright lawsuit against Midjourney and DeviantArt, but advances the case against Stability AI. That’s according to The Hollywood Reporter. The judge concluded the claims are “defective in numerous respects,” with one of the main issues being whether the AI systems actually contain copies of the copyrighted images. The claim for direct infringement against Stability AI, however, was allowed to proceed based on allegations the company actually used and stored the copyrighted images—not just data pertaining to the images—without permission to create Stable Diffusion.

Kai-Fu Lee’s 01.AI launches its first LLM, an open model operating in both English and Chinese. The prominent computer scientist and investor told TechCrunch the decision to introduce an open LLM for the Chinese market is a way to “‘give back’ to society” and that it’s providing “a compelling alternative” to Meta’s LLaMA. He noted that China doesn’t have access to OpenAI and Google. Lee is hoping the company, valued at $1 billion, will be China’s equivalent of OpenAI. 

Microsoft makes its Phi 1.5 LLM ‘multimodal,’ proving the capabilities of small models. That’s according to Semafor. The new capability means Phi 1.5 can now view and interpret images, giving it some of the same features as the massive GPT-4 but with a fraction of the energy usage, processing power, and parameters—1.3 billion, as opposed to the 1.7 trillion parameters of GPT-4. Microsoft researchers say this is a big step that shows smaller LLMs like Phi 1.5 can have the same features as their massive counterparts and offers a roadmap to democratizing access and easing shortages of GPUs.

Snowflake announces a suite of GenAI tools for its platform. Called Snowflake Cortex, the fully managed service lets the data management platform’s users tap its custom LLMs to more easily analyze their data and build applications, according to the company’s blog post. Specific features include Snowflake Copilot (an assistant users can interact with in natural language), Universal Search (LLM-powered search to make it easier to interact with data stored in Snowflake), and Document AI (a data extraction capability that lets users process any document and get answers to their inquiries). 

LinkedIn announces an AI ‘job seeker coach’ to help you land a new role. Powered by GPT-4, the Microsoft-owned company’s new chatbot started rolling out to premium users last week and boasts a range of capabilities meant to help candidates determine if they’re the right fit for a job and how to best position themselves. Users can launch the chatbot from a job listing and prompt it to analyze their profile and offer suggestions, according to the company’s blog post.

EYE ON AI RESEARCH

AI for the heart. The American Heart Association yesterday publicized two preliminary studies that showed AI may accurately detect heart valve disease and predict cardiovascular risk. One study, conducted at three different primary care clinics in the U.S., found that an AI program evaluating sound data from a stethoscope detected 94.1% of cases of valvular heart disease, compared to just a 41.2% detection rate by medical professionals using a standard stethoscope.

In a second study, conducted by a different research group using data from the U.K. Biobank, a deep learning program proved effective at analyzing retinal imaging as an early detection tool for heart disease in high-risk groups, such as people who have prediabetes and Type 2 diabetes.

These studies are preliminary and have limitations, but they could indicate a path toward earlier and more accurate diagnoses. The findings from both will be presented next week at the American Heart Association’s Scientific Sessions 2023 in Philadelphia. You can read more about the studies here.

FORTUNE ON AI

Elon Musk’s AI startup is about to release its tech to a ‘select group’—and he says it’s ‘the best that exists’ on the market —Chloe Taylor

Elon Musk touts ‘real-time access to X’ as a ‘massive advantage’ for his ChatGPT rival Grok—after threatening to sue Microsoft over using Twitter data for AI training —Steve Mollman

Meet Grok: Elon Musk’s AI bot has ‘a bit of wit and a rebellious streak’ as it seeks to take on ChatGPT —Eleanor Pringle

Tim Cook keeps giving the same answer about Apple’s generative AI plans, and it’s falling flat with investors —Rachyl Jones

In the AI race, Canva cofounder and CEO Melanie Perkins doesn’t believe in ‘reinventing the wheel’ —Emma Hinchliffe and Joseph Abrams

An AI bot performed insider trading and deceived its users after deciding helping a company was worth the risk —Ryan Hogg

BRAINFOOD

AI.gov. We couldn’t close out this week’s newsletter without circling back to the happenings in AI policy. Last week’s “World Cup of AI policy”—made up of the Biden administration’s executive order, the G7 meeting, and the U.K. AI Safety Summit—packed a real punch, and we now have even more specifics about the outcomes, plus additional AI policy movement.

At the U.K. summit, U.S. Vice President Kamala Harris followed up on the Biden executive order by announcing an AI Safety Institute to protect American consumers. This came after 21 U.S. lawmakers published a letter urging Harris to specifically promote civil rights at the summit. Overall, the U.K. AI Safety Summit culminated in 28 countries signing the “Bletchley Declaration,” an agreement establishing a shared idea of the urgent need to understand and collectively manage the potential risks of AI. 

The First Committee of the UN General Assembly approved a new resolution expressing concern about lethal autonomous weapons systems. 

Back in the States, the U.S. Office of Management and Budget released for comment a draft policy on how federal agencies adopt, procure, and use AI. The U.S. Department of Commerce’s National Institute of Standards and Technology additionally called for participants to join a new consortium to support the development of methods for evaluating AI systems.

What’s more, the U.S. House Subcommittee on Cybersecurity, Information Technology, and Government Innovation announced a hearing on the harms and potential damages of advancing deepfake technology, which will be livestreamed for the public tomorrow at 2 p.m. ET. 

Lastly, the U.S. government launched AI.gov, a website for keeping up with all things government and AI.
