Friday, March 20, 2026
Courthouse News Service

Generative justice: AI courtroom content could change how Americans think about the law

AI is being used to create fake scenes from courthouses. What does that mean for public trust in institutions?

(CN) — Bound to a chair, an unnaturally large defendant struggles to free himself. A squad of armed sheriff’s deputies surrounds him as a judge reads his sentence: 6,700 years in federal prison.

“Take these chains off and say it again!” the convict rumbles.

There are many videos like it. A courtroom erupts in cheers as a judge dismisses a case against a woman who saved her neighbor’s children. A teenage offender laughs as her victim’s family weeps. Sometimes, there’s a narrator who describes these salacious legal dramas to viewers on YouTube, Instagram and TikTok. 

Many of these courtroom videos are racially tinged. Some of them are more convincing than others. All of them are fake, stitched together with out-of-context clips or generated wholesale with artificial intelligence.

AI, we were told, would change the world, solving age-old problems and freeing humans from the drudgery of menial computer tasks. And yet so far at least, one of the main uses of this groundbreaking technology has been to produce an endless stream of online content.

Experts warn that this content, commonly known as slop, is eroding public trust and a shared sense of reality.

“We’re going to have a real problem in our society, because a civilized society is built on trust mechanisms,” said Edward Delp, a professor at Purdue University. “If we can’t trust each other or things we’re getting from the government, I think it's going to cause lots of problems.”

Some slop might seem benign. Take videos of celebrities hugging younger versions of themselves, or of animals doing impossibly cute things. 

These endearing videos can have a dark side: Research shows that even when fake content isn’t full-blown fake news, regular exposure to it can still lead to cynicism and disengagement, a phenomenon sometimes called “reality apathy.”

A computer-generated convict pulls against his restraints in court. (FearFeedUSA/YouTube via Courthouse News)

Then there’s the less cute side of AI content. President Donald Trump has become a prolific poster of it, using generative content to smear and embarrass his opponents. In his second term, the president has used the technology to create a racist caricature of Democratic House Minority Leader Hakeem Jeffries as a mustachioed Mexican, a video of himself as a pilot-king dumping feces on protesters and even a mocking altered photo of a private-citizen activist arrested in Minnesota.

Tools capable of producing realistic-looking content are now widely available to the public, with few guardrails on their use. 

In 2023, Elon Musk unveiled his AI service Grok, allowing users on his social-media site X to create fake images of real people. This month, after ICE killed Renee Nicole Good in Minnesota, Grok was used to digitally undress her.

For the past 25 years, Delp has studied synthetic media manipulation and media forensics. In that time, he’s seen fake online media that ranges widely in both quality and harmfulness.

“Some of them are funny,” Delp said of the many AI slop videos he’s watched. “Some of them are not good.”

When AI is used to generate content like purported police bodycam videos or courtroom CCTV footage, it has the potential to cause real social problems, Delp said.

“People will believe what they see, even if it’s labeled AI-generated,” he said. “What may have to happen is all of these content providers — even police body-worn camera videos — are going to have to be protected or labeled.”

Guardrails are possible. A wide range of companies are experimenting with tools like invisible watermarking, which embeds data about origin into media, and cryptographic hashes, which generate unique identifiers that can help establish provenance. Delp’s doctoral students are working on this problem, helping insurance companies figure out how to authenticate claims.
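As a minimal illustration of the hashing idea — not any particular company’s system — a cryptographic digest of a media file acts as a tamper-evident fingerprint: if even one byte of the file changes, the digest changes completely, so a digest published by the original source lets anyone verify a copy is unaltered. The function name below is a hypothetical example for illustration.

```python
import hashlib

def media_fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest usable as a provenance identifier.

    Any change to the media bytes yields a completely different
    digest, so comparing against a published fingerprint reveals
    whether a file has been altered since it was released.
    """
    return hashlib.sha256(data).hexdigest()

original = b"frame data from a bodycam clip"
tampered = b"frame data from a bodycam clip."  # one byte appended

# The two fingerprints differ, exposing the alteration.
assert media_fingerprint(original) != media_fingerprint(tampered)
```

A hash alone only proves a file matches a known original; watermarking goes further by embedding origin data inside the media itself, so it survives re-encoding and sharing.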

Unlike some observers, Delp does not believe AI is a bubble that might burst. 

Instead, he believes that the technology will continue to receive buy-in and financial support in new and unforeseen ways. At the same time, AI will make distinguishing between real and fake videos increasingly difficult.

“I’m interested in looking at where this is all going,” Delp said. “We’re going to get to a position where these things are getting better and better.” For now, he says “the detection tools are still reasonable” but that “the problem is most of the really good detection tools are not available to the public.”

A woman gives a courtroom statement in an emotional but entirely computer-generated scene. (Kleinkunstbühne Zehntscheuer Amorbach/Facebook via Courthouse News)

Despite these issues, Delp believes education — not legislation — should be used to safeguard the public from harm. 

That’s something Tori Noble, an attorney with the digital civil-rights group Electronic Frontier Foundation, agrees with. 

In an interview, Noble stressed there’s a difference between “a problem for society to tackle versus something that there should be a law about.” 

“I think that the content people consume online can of course affect their worldview,” said Noble, who specializes in copyright law, artificial intelligence and free speech. “I think there is evidence of that. I think that raises the question: What do we do?”

Speech created with AI is still speech, Noble says, and she’s cautious about prohibitions on the artificially generated kind. Instead, “I think social media or media training in general and media literacy are really important,” she said. “Trying to control what algorithms are able to show people is a bad idea, because whenever you put the government in the position of deciding what they can or cannot show, you end up limiting First Amendment rights.”

One of the first legal thrillers was the “Oresteia” trilogy, written by Greek playwright Aeschylus around 458 B.C. 

Parts of it read like a heavy-handed TV procedural: In the third and final play, a man kills his mother in revenge for her murder of his father, King Agamemnon. The man is granted a trial by the goddess Athena. His defense attorney is the god Apollo.

From ancient plays to modern television, courtroom dramas have long shaped how the public understands justice, even when those portrayals are fictionalized — and even when they involve figures from Greek mythology.

Powerbrokers know this: In an amusing example from 1963, California Governor Edmund Brown criticized the popular legal procedural “Perry Mason” for portraying the Los Angeles district attorney as a hapless and bungling lawyer who regularly loses cases.

A recent study from the University of Tennessee at Chattanooga found that media portrayals can negatively influence how people view policing and criminal justice, particularly for people of color. Likewise, a 2006 study in the William & Mary Law Review found that TV news — which has a profit incentive to hook viewers with titillating accounts of monstrous crimes — can skew Americans’ perceptions of law and justice, ultimately leading to more punitive, tough-on-crime policies. Still, Americans love a good legal drama when it’s not their own: A 2025 YouGov poll found that half of Americans enjoy true crime content.

On platforms like YouTube, the proliferation of fake courtroom content can lean into the disturbing and bizarre — like this video claiming a person was "executed in court." (mixedphenomenon/YouTube via Courthouse News)

In other words, even without generative AI, courthouse and true-crime media can get into tricky ethical territory.

Nate Eaton, an Idaho-based YouTuber and journalist, said that he regularly grapples with the responsibilities of online content in a world where clicks translate to cash. 

On his channel East Idaho News, Eaton produces a series entitled “Courtroom Insider” that discusses the ins and outs of courtroom sagas. It began with coverage of the viral Lori Vallow Daybell murder trial, when he would produce a special recap at the end of the day. 

“The numbers were through the roof,” he said. 

Since then, he’s continued delving into other well-known criminal cases. The AI-generated videos are a nuisance, he said.

“Some of them have millions of views, [and] the headline is so catchy,” he said, giving a hypothetical example: You won’t believe what Bryan Kohberger whispered to his attorney!

“It looks real and the pictures look real, but then it’s an AI voice reading it,” Eaton said. “If I was just trying to churn as many videos out a day, there’s a million court cases I could be covering. I’m specific about the ones I choose.”

Eaton says he’s tried to distinguish himself as a responsible journalist. That means not focusing exclusively on cases that drive web traffic, as well as building up trust not just with his audience but with attorneys.

“I want to be able to educate the public on how the court system works, why it works, and what it does, but also use terms that are easy to understand and make it compelling content,” Eaton said. “I don't want to be boring. A lot of times, court terminology is boring and hard to understand. I want to be able to explain [things] clearly to the audience, along with giving victims and family members a place to talk.”

A computer-generated defendant bound to a chair. (minute.history8/TikTok via Courthouse News)

Eaton may be doing his best — but not all content creators are. And AI is making it easier than ever to manipulate, or even entirely fabricate, authentic-seeming media. So far, there are few legislative or corporate protections to prevent these tools from being misused.

Late last year, OpenAI unveiled a social-media platform that seamlessly produces and shares videos based on only a text prompt. The project quickly took heat for creating fake content of dead celebrities, as well as media that infringed on copyrights.

In January, YouTube CEO Neal Mohan wrote in the company’s blog that it would crack down on what he deemed “low quality, repetitive” AI content. More than 1 million channels use AI daily, he wrote in the post. 

These days, anyone with a free account on an AI generator can generate a 30-second video of a fake courtroom, Delp said. And with a little money, those videos get longer and more convincing.

The tendency to believe these videos can be strong, Delp said. That’s especially true when they confirm existing assumptions and beliefs — for example, that criminal defendants are inhuman monsters who must be chained in court as they’re sentenced to thousands of years in prison.

“We're going to come to a point where the average person … seeing content online is not going to know whether it’s real or not,” he said. “I think a lot of people who watch a video and it confirms what they already believe will have a tendency to believe the video. That's going to happen all the time.”
