Despite pushback from judges, the use of generative AI tools in legal proceedings continues to grow. Generative AI first drew courtroom attention for producing fabricated case citations; now it is evolving with advanced video and audio technology.

In a recent Arizona case, the family of a murder victim presented a video featuring an AI version of Chris Pelkey, who was killed in 2021. This AI-generated “clone” addressed Gabriel Horcasitas, the man convicted of killing him, in court, marking the first known use of a deepfake in a victim impact statement.

In the video, the generated version of Pelkey spoke directly to Horcasitas, expressing regret over their encounter. The judge sentenced Horcasitas to 10.5 years in prison, noting that the AI-generated statement influenced his decision. The Pelkey family created the video by training an AI model on clips of him and applying an “old age” filter to show what he might look like today.

Gary Marchant, a law professor at Arizona State University who studies the ethics of emerging technologies such as AI, credited Pelkey’s family for producing a statement that seemed to cut against their own goal of securing the toughest punishment for Horcasitas. Still, he expressed concern about the precedent it sets.

While prosecutors and defense attorneys have long used visual aids, charts, and other illustrations to support their arguments, Marchant noted that artificial intelligence introduces new ethical challenges. The situation, he remarked, is complicated: the court watches a person who appears to be speaking but is not, someone who is in fact deceased and saying nothing at all. He believes this adds a layer of complexity that could lead to risky situations.

In another instance, Jerome Dewald, a man in New York, used a deepfake video to support his side of a contract dispute. The judge was initially confused, mistaking the computer-generated figure for Dewald’s attorney. Dewald later clarified that he had created the video to explain his case more clearly, not to mislead the court.

These examples highlight the growing use of generative AI in courtrooms, a trend that began gaining traction with the popularity of chatbots such as ChatGPT. Lawyers have used AI to draft legal documents, but this has led to problems, including filings that cite nonexistent cases invented by the AI. Some lawyers have faced sanctions for such misuse, raising questions about the rules governing AI in legal settings.

Ethical challenges

The main ethical concern with using artificial intelligence in legal cases is the risk of biased and unfair results stemming from biased training data. An AI system learns from the information it is given; if that information reflects historical biases, the system is likely to perpetuate and even amplify them, producing unfair outcomes.

The lack of clarity around how an AI system reaches its outputs can also erode trust in the legal system and make it harder for lawyers to explain the basis of their arguments. There are further concerns about data privacy and security, since AI tools often require access to sensitive client information.

The future courtroom

While courts have punished the misuse of AI, the guidelines for acceptable use remain unclear. Recently, a federal judicial panel voted to seek public input on rules to ensure AI-assisted evidence meets the same reliability standards as evidence presented by human witnesses.

Supreme Court Chief Justice John Roberts has acknowledged both the potential benefits and the risks of AI in the courtroom, emphasizing the need for careful consideration as the technology becomes more prevalent. One thing is clear: AI deepfakes are likely to keep appearing in legal settings.