Voltaire, the French writer and philosopher, wrote: “It is better to risk saving a guilty person than to condemn an innocent one”, a principle that underpins today’s criminal justice system (CJS). Modern law does not guarantee the conviction of the guilty, nor the acquittal of the innocent. Instead, it requires criminal prosecutors to prove that a defendant is “guilty beyond reasonable doubt”, the standard of proof in criminal cases, with the burden of proof resting on the prosecution. 

Legal professionals are constantly seeking new ways to find this proof, and while lawyers take an oath to be faithful to the law, AI does not. Modern AI has already led to some alarming developments within the CJS, specifically in the judicial system. Examples include wrongful arrests and convictions, breaches of ethics, and biased judgments and opinions.  

How is AI used in court proceedings, and what are the pitfalls?  

Law practitioners have started experimenting with AI to assist with reviewing legal documents and making decisions. The House of Lords Justice and Home Affairs Committee considered the use of AI in the UK judicial system in 2022 and concluded that it poses a risk to an individual’s right to a fair trial. Its findings warned that AI can fabricate and distort legal material, creating false claims. For example, in Mata versus Avianca [2023], Mr Mata’s representative, Mr Steven Schwartz, breached his duty to prepare accurate legal documents by submitting a brief containing a non-existent judicial opinion and fabricated case citations produced by OpenAI’s ChatGPT, a large language model capable of generating text and code. 

In Woodruff versus City of Detroit [2023], Porcha Woodruff, a woman who was eight months pregnant, was wrongly arrested for robbery and carjacking after a facial recognition system falsely identified her; such systems are known to misidentify people of colour at higher rates. Furthermore, in Williams versus City of Chicago [2022], Michael Williams was wrongly jailed for 11 months, accused of the first-degree murder of Safarian Herring, after prosecutors relied on evidence from an AI gunshot-detection algorithm that was later withdrawn. Instances such as these highlight the potential risks of using AI in criminal trials. 

While AI is far from 100% accurate, its output has raised legal and ethical concerns about the right to a fair trial. AI models trained on historical legal data learn that ethnic minorities, such as Black and Asian people, have higher conviction rates and are disproportionately stopped and searched by the police. As a result, when such models are used to draft legal documents or inform decisions, they risk prejudging defendants from ethnic minorities on the basis of past cases. 

The future role of AI in the judicial system 

Jurisdictions including the UAE, China, Canada, parts of the US (such as Texas and Illinois) and the UK Supreme Court have allowed the use of AI in court proceedings to improve efficiency, productivity, problem-solving, and the speed at which the guilty can be prosecuted. Yet this may prove a reckless decision, as AI has been found to fabricate evidence, breach confidentiality, and produce biased decisions based on pre-existing judgments. As a result, AI could increase the number of wrongful convictions. 


In defence of AI in court proceedings 

In March 2023, however, OpenAI’s GPT-4 passed the Uniform Bar Exam, the exam prospective lawyers must pass to demonstrate the knowledge and skills required for a licence to practise law. The large language model scored 75%, higher than the 68% average and enough to place it in the 90th percentile. 

Yet by relying on AI in the justice system, the public’s fundamental human rights and civil liberties are put at risk. Ultimately, it is crucial to understand that AI is far from perfect, and the CJS must acknowledge the risk that it could help prosecute the innocent.