Cybercriminals are leveraging AI-driven voice simulation and deepfake video technology to deceive individuals and organizations, Bloomberg reported. In one recent incident, a CEO transferred $249,000 after receiving a call that sounded like it came from a trusted source, only to discover later that the voice had been generated by AI.
Udi Mokady, chairman of the cybersecurity firm CyberArk Software, had a firsthand encounter with such an attack. In July, he opened a Microsoft Teams video message and was taken aback to come face-to-face with an eerily convincing deepfake of himself, which was later revealed to be a prank by one of his coworkers.
“I was shocked,” Mokady told Bloomberg. “There I was, crouched over in a hoodie with my office in the background.”
While smaller companies may have tech-savvy employees who can spot deepfakes, larger organizations are more vulnerable to such attacks: close working relationships and technical understanding are in shorter supply, making it harder to tell whether someone is, well, real.
“If we were the size of an IBM or a Walmart or almost any Fortune 500 company there’d be legitimate cause for concern,” Gal Zror, research manager at CyberArk who carried out the stunt on Mokady, told Bloomberg. “Maybe Employee No. 30,005 could be tricked.”
Cybersecurity experts have warned about what a convincing, human-like AI copy of an executive could do, such as unearthing vital company data and credentials like passwords.
Related: A Deepfake Phone Call Dupes An Employee Into Giving Away $35 Million
In August, Mandiant, a Google-owned cybersecurity company, disclosed the first instances of deepfake video technology explicitly designed and sold for phishing scams, per Bloomberg. The offerings, advertised on hacker forums and Telegram channels in English and Russian, promise to replicate individuals’ appearances, boosting the effectiveness of extortion, fraud, or social engineering schemes with a personalized touch.
Deepfakes impersonating well-known public figures have also increasingly surfaced. Last week, NBC reviewed more than 50 videos across social media platforms in which deepfakes of celebrities touted sham services. The videos featured altered likenesses of prominent figures like Elon Musk, as well as media personalities such as CBS News anchor Gayle King and former Fox News host Tucker Carlson, all falsely endorsing a nonexistent investment platform.
Deepfakes, along with other rapidly expanding technologies, have contributed to an uptick in cybercrime. In 2022, $10.2 billion in losses due to cyber scams were reported to the FBI, up from $6.9 billion the year prior. As AI capabilities continue to improve and scams become more sophisticated, experts are particularly worried about the lack of attention given to deepfakes amid other cyber threats.
Related: ‘Biggest Risk of Artificial Intelligence’: Microsoft’s President Says Deepfakes Are AI’s Biggest Problem
“I talk to security leaders every day,” Jeff Pollard, an analyst at Forrester Research, told Bloomberg in April. “They are concerned about generative AI. But when it comes to something like deepfake detection, that’s not something they spend budget on. They’ve got so many other problems.”