“Where is it most likely for somebody to get hurt or killed? It’s drinking and driving. And in that one area (Phoenix), I welcome our new artificial intelligence (AI) […] in terms of taking away the most dangerous part of human activity on a daily basis.”
That’s what panelist Erik Luna, associate dean for applied research at the Sandra Day O’Connor College of Law, shared when emphasizing how AI can have positive outcomes and increasingly affect every part of our daily lives. In this example, he says autonomous rideshare vehicles may offer a safer alternative for those who would otherwise drive under the influence.
“We already see it in Phoenix, and in Washington D.C., it is Waymo,” he said. “If you were thinking about the system as it currently exists, big volumes of cases could be removed.”
In a thought-provoking panel, a distinguished group of experts gathered at ASU’s Barrett & O’Connor Center in Washington, D.C. to explore one of the most critical aspects of the AI revolution: the implications of artificial intelligence for the criminal justice system.
As AI tools continue to gain momentum—from predictive policing to forensic analysis—panelists offered insights into both the promises and pitfalls of this rapidly emerging technology.
Educating the Front Line and Being an AI Early Adopter
There is an urgent need for comprehensive AI education and training across the criminal justice system. Law enforcement officers, attorneys, judges, and government officials must develop a foundational understanding of AI tools to effectively and ethically implement them.
For Daniela Nogueira, chief counsel to the New Jersey Attorney General, a key concern is how AI can amplify discrimination in the justice system, especially in ways that are hard to detect. New Jersey has invested in technology to monitor AI for bias since 2023, when Governor Phil Murphy (D-NJ) created an AI Task Force to study the technology’s potential impacts on society and to recommend government actions encouraging its ethical use.
“The state of New Jersey was the first state to do a comprehensive survey of its public employees to try to understand what they know about AI and how they anticipate using it in the future,” Nogueira said. “For example, our Division on Civil Rights has launched a Civil Rights Innovation Lab that is exploring different ways technology can help review incidents and complaints of bias, harassment, or discrimination, and do things like review online advertisements for employment or housing for discrimination.”
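Ad screening of this kind can be implemented at many levels of sophistication. The sketch below is a deliberately minimal, hypothetical illustration in Python, not the Innovation Lab’s actual system: it scans ad text against a small list of phrases that fair-housing and employment guidance commonly flags.

```python
import re

# Hypothetical phrase list for illustration only. A production screening
# system would use far richer models, context, and legal review.
FLAGGED_PATTERNS = {
    "familial status": r"\bno (kids|children)\b",
    "age": r"\byoung professionals?\b|\bunder \d{2}\b",
    "disability": r"\bable[- ]bodied\b",
}

def screen_ad(text: str) -> list[tuple[str, str]]:
    """Return (category, matched phrase) pairs for language that may
    signal discriminatory intent in a housing or employment ad."""
    hits = []
    for category, pattern in FLAGGED_PATTERNS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((category, match.group(0)))
    return hits

if __name__ == "__main__":
    ad = "Cozy apartment, perfect for young professionals. No kids, please."
    for category, phrase in screen_ad(ad):
        print(f"Possible {category} issue: {phrase!r}")
```

Even this toy version makes the trade-off visible: a keyword filter is transparent and auditable, while a learned model may catch subtler signals but is harder to explain, which is precisely the tension regulators are now weighing.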
Double-Edged Sword: AI in Investigations
When asked how AI might be leveraged not just by prosecutors and law enforcement but also by defense attorneys, Jay Stanley, senior policy analyst at the ACLU, explained its dual potential.
On one hand, AI offers powerful investigative capabilities—such as identifying suspects, analyzing massive datasets, and detecting patterns that human analysts might miss. On the other hand, without proper oversight, these tools can perpetuate bias, produce flawed outputs, and mislead investigators.
“Apparently, they [a state agency in Idaho] had integrated a new system—a computer system for evaluating who deserved to get paid and who didn’t. There were transparency issues and so forth,” Stanley explained. “It turned out that they were using a formula and a spreadsheet to evaluate—to use different variables—to cut a bunch of people off.”
These same vulnerabilities, Stanley noted, can be used by defense teams to challenge the integrity of AI-generated evidence.
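To see why opacity matters, consider a deliberately simplified, hypothetical scoring formula of the kind Stanley describes. This is not Idaho’s actual formula; it simply shows how an undisclosed change to a few weights can cut a person’s benefits with no visible explanation.

```python
# Hypothetical illustration: an undisclosed weighted score decides weekly
# benefit hours, and a quiet revision to the weights flips the outcome.

def assessed_hours(needs: dict[str, float], weights: dict[str, float]) -> float:
    """Weekly care hours computed as a weighted sum of assessed needs."""
    return sum(weights[k] * needs[k] for k in weights)

person = {"mobility": 3.0, "self_care": 4.0, "supervision": 2.0}

weights_v1 = {"mobility": 2.0, "self_care": 2.5, "supervision": 3.0}
weights_v2 = {"mobility": 2.0, "self_care": 1.5, "supervision": 3.0}  # quiet revision

print(assessed_hours(person, weights_v1))  # 22.0 hours/week
print(assessed_hours(person, weights_v2))  # 18.0 hours/week, a 4-hour cut
# Without access to the weights, the person affected cannot contest the change,
# which is the transparency gap defense teams can probe in discovery.
```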
Fragmented Regulations: A Call for Cohesion
As AI use accelerates, so too does the conversation around how it should be regulated. Today’s rules form a fragmented landscape of local, state, and international requirements with varying degrees of oversight, enforcement, and ethical standards, leaving AI governed inconsistently across jurisdictions.
There’s a patchwork of guidelines, according to Nema Milaninia, partner at the Atlanta-based multinational law firm King & Spalding.
“There’s no federal legislation on AI,” Milaninia said.
Milaninia, a member of the firm’s Special Matters & Government Investigations team, focuses largely on defense and compliance for tech companies working in emerging technologies, including AI. Before joining the firm, he spent nine years as a prosecutor at the International Criminal Court and served as Head of Regulatory Investigations at Google.
“I think the landscape, for better and for worse depending on how you view that, is deeply, deeply fragmented,” emphasized Milaninia. “In terms of who’s doing [oversight] better and who’s doing it worse—it’s hard to say. It fundamentally comes down to what is the benchmark by which you are judging these regulations.”
Practical Applications of AI in Justice
For Ashwini Jarral, director at the Integrated Justice Information Systems (IJIS) Institute, AI is already proving to be a valuable tool for enhancing public safety and helping justice agencies operate more efficiently, though the technology still requires strong human oversight.
“We have created AI translation and interpretation services that can handle over 120 languages. That’s been a very successful use case—because translation is what you hear, but interpretation is what matters. It ensures responders understand the true context and can dispatch the right resources quickly,” he said.
Jarral, who has led international efforts to create interoperability programs, says human validation is still required to ensure high accuracy. AI is being applied meaningfully at various stages of the justice process, from emergency dispatch and courtroom document processing to corrections and community supervision.
“The Georgia Department of Corrections has a great success story of how some illegal enterprises were smuggling contraband, drugs and other items, using drones,” he said. “They showcased actual use of AI in a very small scenario where they had some intel. There is still [a] need to be validated with others, because if something works in one jurisdiction, it doesn’t mean it’s going to work in another.”
Navigating the Ethical Dilemma of Dual-Use Technology
A question from the audience prompted a broader conversation about the darker potential of artificial intelligence. The attendee was grappling with how AI can be used not just for innovation, but also for destruction and deception.
In response, Lindsay Freeman, director of technology, law, and policy at the Human Rights Center at UC Berkeley School of Law, addressed the concern by exploring the dual-use nature of AI—how it can be harnessed for both investigative progress and malicious intent.
“We’re looking at AI both in terms of how we can use it in our own investigations—and my team experiments with that quite a bit—but we also increasingly have been looking at how perpetrators are using it to facilitate or enable their crimes,” Freeman explained.
She and her team at UC Berkeley are involved in many related ongoing projects, one of which uses AI and satellite imagery to identify mass graves by detecting disturbances in the earth—an innovative approach to documenting atrocities and seeking justice in post-conflict zones.
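The underlying technique is change detection: comparing before-and-after images of the same area and flagging pixels that differ sharply. The sketch below is a minimal illustration using NumPy on toy grayscale images; it is not the Berkeley team’s actual pipeline, which would rely on multispectral satellite data and trained models.

```python
import numpy as np

def change_mask(before: np.ndarray, after: np.ndarray,
                threshold: float = 0.25) -> np.ndarray:
    """Flag pixels whose normalized intensity changed by more than
    `threshold` between two co-registered grayscale images."""
    b = before.astype(float) / 255.0
    a = after.astype(float) / 255.0
    return np.abs(a - b) > threshold

# Toy "images": a bright disturbed patch appears in the after image,
# standing in for freshly turned earth.
rng = np.random.default_rng(0)
before = rng.integers(90, 110, size=(64, 64)).astype(np.uint8)
after = before.copy()
after[20:30, 20:30] = 200  # simulated ground disturbance

mask = change_mask(before, after)
print(f"{mask.sum()} changed pixels")  # 100: the 10x10 disturbed patch
```

Real deployments must also handle seasonal vegetation change, cloud cover, and sensor differences, which is why human analysts remain in the loop to review what the model flags.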
Later, Freeman added, “I think with any new technology, actually, criminals tend to be the ones who use it first and are the most innovative, and police and law enforcement are catching up.”
As AI continues to evolve, its integration into the criminal justice system will demand ongoing collaboration, thoughtful regulation, and a steadfast commitment to balancing innovation with ethical responsibility.
“I hope this is the first of many of these joint projects that we’re going to be doing,” said Andrea Cayley, executive director of Washington, D.C. Programs at the Sandra Day O’Connor College of Law.
This important conversation was part of a panel discussion hosted by Arizona State University’s Center for Law, Science and Innovation, in partnership with the American Bar Association.