At a time of low trust in the legal system, AI deepfakes are set to appear in court proceedings.

  • Experts say the use of deepfakes in courtrooms is not just possible but probable.
  • There is concern that deepfakes could be used to fabricate evidence to justify actions or exonerate individuals.
  • The use of AI in court reporting increases the possibility of manipulated evidence.

The legal profession is using AI and machine learning to streamline its work, both inside and outside the courtroom. But what happens when the same technology is put to unethical use?

Recent advances in the technology, such as OpenAI's text-to-video tool Sora, have made the appearance of deepfakes in courtrooms more likely than ever.

"Jay Madheswaran, CEO and co-founder of AI legal case assistant Eve, stated that the likelihood of someone abusing this technology is already occurring," said Jay Madheswaran.

Sarah Thompson, BlueStar's chief product officer, is concerned that deepfakes could be used in criminal proceedings to fabricate evidence for alibis or to prove innocence or guilt.

The threat extends to judicial systems worldwide, but the U.S. in particular depends on "equally enforced legal standards and principles" that are not swayed by the personal whims of powerful corporations, individuals, governments, or other entities, according to a National Court Reporters Association (NCRA) whitepaper on AI cloning in legal proceedings.

Once that established sense of truth is challenged, a host of problems follows.

The risk of alteration in the judicial process

AI-driven court reporting also opens the record to alteration. According to Kristin Anderson, president of the NCRA and an official court reporter in the Judicial District Court of Denton County, Texas, the justice system takes on significant risk when no certified person has care, custody, and control of the record.

Even without bad actors, AI reporting errors can undermine the accuracy and impartiality that traditional court reporting provides. In a column for the Los Angeles and San Francisco Daily Journal, Melissa Buchman described a record in which "entire chunks of testimony, including descriptive statements of a horrible event that had transpired, were missing."

A Stanford study found that speech recognition systems had nearly twice the error rate for Black speakers as for white speakers.

Several states have enacted laws against AI-altered audio and video, primarily targeting deepfake pornography. California's bill, which criminalizes altered depictions of sexually explicit content, was the first of its kind.

"The legislative body is slow in implementing legislation and regulations on digital evidence and court reporting due to a lack of expertise in technology," said Thompson.

Thompson said the judicial system must establish procedures for verifying digital evidence and incorporate them into the Federal Rules of Evidence and the Federal Rules of Civil Procedure.

Challenge to 'gold standard' of audio, video evidence

Madheswaran said steps can be taken to combat the risk of deepfakes in the courtroom. Audio and video evidence have historically been treated as the gold standard, he said, but everyone now needs to think more critically about how much weight to give such evidence.

Judges can alert juries to the possibility of digitally falsified evidence in their instructions and begin building precedent from cases involving deepfakes, Thompson said, so there will at least be a pathway to some kind of justice.

Institutions such as MIT, Northwestern, and OpenAI are developing deepfake detection technology, but the tug-of-war between generation and detection is likely to continue. Meanwhile, much legal AI development is aimed at benefiting society by reducing attorney workloads and expanding access to representation for businesses and individuals with limited resources.

Digital forensic experts can authenticate evidence suspected of being deepfaked, but their cost puts that option out of reach for many litigants.

The most proactive approach to ensuring data trustworthiness, Madheswaran suggested, is to embed supporting evidence directly into data collection devices. Techniques available today, such as time stamps and geo-location data, can help authenticate original files or flag fabricated ones.
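
As a rough illustration of that idea, the sketch below shows how a capture device could bind a time stamp and geo-location to a recording's hash and sign the result, so that any later alteration of the file fails verification. This is a minimal Python example using assumed names (sign_capture, verify_capture, DEVICE_KEY); it is not how Eve, BlueStar, or any particular vendor implements the technique.

    import hashlib
    import hmac
    import json
    import time

    DEVICE_KEY = b"example-device-secret"  # hypothetical per-device signing key

    def sign_capture(file_bytes, latitude, longitude):
        # Bind the file's hash to a time stamp and geo-location at capture time.
        record = {
            "sha256": hashlib.sha256(file_bytes).hexdigest(),
            "captured_at": time.time(),
            "geo": {"lat": latitude, "lon": longitude},
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify_capture(file_bytes, record):
        # Recompute the signature and the hash; any edit to the file breaks the match.
        claimed = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, record["signature"])
                and claimed["sha256"] == hashlib.sha256(file_bytes).hexdigest())

    original = b"courtroom audio bytes"
    rec = sign_capture(original, 33.21, -97.13)
    print(verify_capture(original, rec))         # True: file matches the signed record
    print(verify_capture(original + b"x", rec))  # False: alteration is detected

In a real device the signing key would live in tamper-resistant hardware, but the basic pattern is the same: provenance metadata is captured once, sealed to the file, and checked whenever the evidence is offered.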

SynthID, a new Google tool, takes a related approach, using watermarks to identify AI-generated images.

Techniques like these are easy and cost-effective to implement, Thompson suggested.

For official court proceedings, trained and certified humans are still needed to guard against intentional or unintentional misrepresentation, since no AI currently operates under the kind of regulatory and licensing oversight that binds an official court reporter.

The National Artificial Intelligence Initiative Act of 2020 defines AI as technology that can make predictions, recommendations, or decisions influencing real or virtual environments. That is a serious capability, and one that must be handled with caution.

In the courtroom, Madheswaran said, deepfakes ultimately come down to a trust issue.

With public trust in the U.S. justice system at a historic low of 44%, according to a 2023 Pew Research Center survey, the American judicial system has good reason to be careful about how it implements and monitors AI.

by Rachel Curry
