
In a Perspective, Hany Farid highlights the risks posed by manipulated and fraudulent images and videos, known as deepfakes, and explores interventions that could mitigate the harms they can cause.
In the PNAS Nexus article, Farid explains that visually distinguishing the real from the fake has become increasingly difficult and summarizes his research on digital forensic techniques used to determine whether images and videos have been manipulated.
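Farid's forensic methods are considerably more sophisticated, but a toy sketch of one classic check, error level analysis (ELA), conveys the general idea: recompress a JPEG and look for regions that respond differently, since pasted-in or edited areas often carry a different compression history. Everything below, including the function name and quality setting, is an illustrative assumption, not code from the paper.

```python
# Toy error-level-analysis (ELA) pass: recompress a JPEG and measure where
# the image diverges from its recompressed copy. Edited or spliced regions
# often recompress differently from the rest of the frame.
# A simplified illustration, not a production forensic tool.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-save as JPEG at a known quality into an in-memory buffer.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Per-pixel absolute difference between the two versions.
    diff = ImageChops.difference(original, recompressed)

    # Rescale so faint differences become visible for inspection.
    extrema = diff.getextrema()  # per-channel (min, max) pairs
    max_diff = max(hi for _, hi in extrema) or 1
    return diff.point(lambda px: min(255, px * (255 // max_diff)))

if __name__ == "__main__":
    error_level_analysis("photo.jpg").save("photo_ela.png")
```

In the resulting difference image, uniformly dark output is consistent with a single compression history, while bright patches flag regions worth closer expert inspection; real forensic pipelines combine many such signals rather than relying on any one.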
Farid celebrates the positive uses of generative AI, including helping researchers, democratizing content creation, and, in some cases, literally giving voice to those whose voices have been silenced by disability. But he warns against harmful uses of the technology, including non-consensual intimate imagery, child sexual abuse imagery, fraud, and disinformation. The very existence of deepfake technology also means that malicious actors can cast doubt on legitimate images simply by claiming they were made with AI.
So, what is to be done? Farid highlights a range of interventions to mitigate such harms, including legal requirements to mark AI-generated content with metadata and imperceptible watermarks, limits on the prompts that services will accept, and systems that link user identities to the content they create.
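The Perspective argues for imperceptible watermarks as a policy measure without prescribing a scheme; deployed systems use robust, key-based watermarks and signed metadata (for example, C2PA-style provenance manifests). Purely as a minimal sketch of the embed-and-extract idea, here is a fragile least-significant-bit watermark; the TAG value and function names are hypothetical.

```python
# Toy imperceptible watermark: hide a provenance tag in the least
# significant bit of each pixel's red channel. Deployed AI-content
# labeling uses far more robust schemes; this only shows the basic idea.
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical provenance marker

def embed(path_in: str, path_out: str, tag: str = TAG) -> None:
    img = Image.open(path_in).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    pixels = img.load()
    w, h = img.size
    assert len(bits) <= w * h, "image too small for tag"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite red LSB
    img.save(path_out, format="PNG")  # lossless, so the bits survive

def extract(path: str, n_bytes: int = len(TAG)) -> str:
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    w, _ = img.size
    bits = [str(pixels[i % w, i // w][0] & 1) for i in range(n_bytes * 8)]
    raw = bytes(int("".join(bits[j:j + 8]), 2)
                for j in range(0, len(bits), 8))
    return raw.decode("utf-8", errors="replace")
```

This toy mark disappears under recompression or resizing, which is exactly why production watermarking research focuses on robustness to such transformations.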
In addition, Farid argues, social media content moderators should ban harmful images and videos, and digital media literacy should become part of the standard educational curriculum. He also summarizes the authentication techniques that experts can use to sort the real from the synthetic and explores the policy landscape around harmful content.
Finally, Farid asks researchers to stop and consider whether their output could be misused and, if so, whether to take steps to prevent that misuse or abandon the project altogether. Just because something can be created does not mean it should be.
More information: Hany Farid, Mitigating the harms of manipulated media: Confronting deepfakes and digital deception, PNAS Nexus (2025). academic.oup.com/pnasnexus/art … 93/pnasnexus/pgaf194