
Deepfake Voice Scandal at School Shows Widespread AI Risks

The latest criminal case involving artificial intelligence (AI) emerged last week from a Maryland high school, where police say a principal was framed as racist by a fake recording of his voice. This case is another reason why everyone should be concerned about increasingly powerful deepfake technology, experts say.

"Everybody is vulnerable to attack, and anyone can do the attacking," said Hany Farid, a professor at the University of California, Berkeley, who focuses on digital forensics and misinformation.

AI has become very accessible in recent years, making it easier for anyone with an internet connection to manipulate recorded sounds and images. The fake audio clip that impersonated the principal is an example of a subset of AI known as generative AI, which can create hyper-realistic new images, videos, and audio clips. "Particularly over the last year, anybody can go to an online service and either for free or for a few bucks a month, they can upload 30 seconds of someone's voice," said Farid.

In the Maryland case, authorities said Dazhon Darien, the athletic director at Pikesville High, cloned Principal Eric Eiswert's voice. The fake recording contained racist and antisemitic comments and appeared in some teachers' inboxes before spreading on social media. The bogus audio forced Eiswert to go on leave while police guarded his house. Detectives asked outside experts to analyze the recording, and one expert found that "multiple recordings were spliced together."

Many cases of AI-generated disinformation have been audio because the technology has improved so quickly. Human ears cannot always identify telltale signs of manipulation, while discrepancies in videos and images are easier to spot. Some people have cloned the voices of purportedly kidnapped children over the phone to get ransom money from parents.
Others have pretended to be company executives who urgently needed funds.

Experts warn of a surge in AI-generated disinformation targeting elections this year. But disturbing trends go beyond audio, such as programs that create fake nude images of clothed people without their consent, including minors. Singer Taylor Swift was recently targeted.

Most providers of AI voice-generating technology say they prohibit harmful use of their tools, but self-enforcement varies. Some vendors require a voice signature or ask users to recite a unique set of sentences before a voice can be cloned. Bigger tech companies such as Facebook parent Meta and ChatGPT maker OpenAI allow only a small group of trusted users to experiment with the technology because of the risk of abuse.

Farid suggests that more needs to be done, such as requiring users to submit phone numbers and credit cards so that files can be traced back to those who misuse the technology. Another idea is requiring recordings and images to carry a digital watermark.

Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, said law enforcement action against criminal use of AI is the most effective intervention. More consumer education is also needed, she said, as is urging responsible conduct among AI companies and social media platforms.

Yet another challenge is finding international agreement on ethics and guidelines, said Christian Mattmann, director of the Information Retrieval & Data Science group at the University of Southern California. "People use AI differently depending on what country they're in," Mattmann said. "And it's not just the governments; it's the people. So culture matters."
