AI-generated images are becoming more persuasive and widespread, and they could lead to more complicated court cases if the synthetic media is presented as evidence, legal experts say.
“Deepfakes” typically involve using deep learning AI to edit videos or photos of people so that they appear to be someone else. The technology broadly hit the public radar in 2017, after a Reddit user posted realistic celebrity pornography on the platform.
The pornography turned out to be fabricated, but the underlying technology only became more realistic and easier to use in the years that followed.
For legal experts, deepfakes and other AI-generated images and videos are likely to cause major headaches in courts, which have been bracing for the technology for years. The ABA Journal, the American Bar Association’s flagship publication, published an article in 2020 warning that courts around the world were grappling with the proliferation of deepfake images and videos presented as evidence.
“If a picture is worth a thousand words, a video or audio clip could be worth a million,” Jason Lichter, a member of the ABA’s E-Discovery and Digital Evidence committee, told the ABA Journal at the time. “Because of the weight an investigator gives to video, the risks associated with deepfakes are all the greater.”
Deepfakes presented as evidence in court have already surfaced. In the UK, a woman who accused her husband of violence during a custody battle was found to have doctored audio she submitted as evidence of the alleged abuse.
Two men accused of participating in the January 6, 2021, Capitol riot have claimed that video of the incident may have been created by AI. And lawyers for Elon Musk’s Tesla recently argued that a 2016 video of Musk, which appeared to show him touting the cars’ self-driving features, may have been a deepfake, after the family of a man who died in a Tesla crash sued the company.
A professor at Newcastle University in the UK argued in an essay last year that deepfakes could become a problem for low-level offenses making their way through the courts.
“While political deepfakes are in the news, tampering with evidence in fairly low-level legal cases, such as parking appeals, insurance claims or family feuds, could become a problem very quickly,” law professor Lillian Edwards wrote in December. “We are now at the beginning of living in this future.”
Edwards highlighted how deepfakes used in criminal cases are likely to become more widespread, citing a BBC report on how cybercriminals used deepfake audio in 2019 to scam three unsuspecting business executives into transferring millions of dollars.

In the United States this year, a handful of local police departments have warned about criminals using AI to try to extract ransom payments from families. In Arizona, a mother said last month that criminals used AI to clone her daughter’s voice and demand a $1 million ransom.
The ABA Journal reported in 2020 that as the technology becomes more powerful and convincing, forensic experts will face the difficult task of deciphering what is real. That can involve analyzing frames from videos for inconsistencies, such as misplaced shadows, or authenticating footage directly from a camera source.
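For readers curious about the simplest layer of that forensic work, below is a minimal sketch, assuming Python with the Pillow imaging library installed; it is an illustration added here, not a method attributed to the ABA Journal or any expert quoted in this article. It reads a photo’s embedded EXIF metadata, whose absence or inconsistency is one weak signal examiners weigh; the file name "photo.jpg" is hypothetical.

from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return human-readable EXIF tags for an image, or {} if none exist."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = summarize_exif("photo.jpg")  # hypothetical file name
if not tags:
    print("No EXIF metadata found; common in AI-generated or re-encoded images.")
else:
    for name in ("Make", "Model", "DateTime", "Software"):
        if name in tags:
            print(f"{name}: {tags[name]}")

Because metadata can be stripped or forged, a check like this is only a starting point, which is why examiners pair it with the frame-level analysis and camera-source authentication described above.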
With the rise of the new technology, there are also fears that people will find it easier to dismiss genuine evidence by claiming it was generated by AI.
“That’s exactly what we were concerned about: that when we entered this era of deepfakes, anybody could deny reality,” Hany Farid, a digital forensics expert and professor at the University of California, Berkeley, told NPR this week.
Earlier this year, thousands of experts and tech leaders signed an open letter calling for a pause on research at AI labs so that policymakers and industry leaders could develop safety measures for the technology. The letter claimed that powerful AI systems could pose a threat to humanity and warned that such systems could be used to spread “propaganda.”
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter states. It was signed by tech leaders including Musk and Apple co-founder Steve Wozniak.