Research Stories
Deepfakes Identified as the Most Potent Source of Health-Related Misperceptions
Media & Communication
Prof. Jiyoung Lee
This study, an international collaboration between Professor Jiyoung Lee of the Department of Media and Communication at Sungkyunkwan University and Professor Michael Hameleers of the University of Amsterdam, provides empirical evidence on how AI-generated health deepfakes amplify misbeliefs and how individual characteristics shape these effects. The research was published in Media Psychology, a leading international journal at the intersection of media and psychology.
The researchers examined three central questions:
Do AI-generated health deepfakes exert stronger effects than text-based misinformation?
Do individuals exposed to deepfakes show greater or reduced intentions to verify information accuracy than those exposed to text-based misinformation?
How do personal factors—such as interest in health issues and accuracy motivation—shape these influence processes?
A key contribution of the study lies in showing that deepfakes can mimic the authority of medical experts by reproducing visual and linguistic cues that resemble professional communication. In doing so, deepfakes function not merely as vivid, persuasive videos but as technological mechanisms that simulate authority and make their messages appear more credible and expert-like.
The findings show that deepfakes produced the largest increase in health-related misperceptions among all misinformation types tested. Participants exposed to deepfakes reported substantially higher levels of misbelief than those exposed to text-based misinformation. This effect appears to stem from the combination of deepfakes’ vivid realism and their ability to imitate authoritative styles—through expert-like tone, wording, and visual presentation—making false health claims particularly convincing.
Although exposure to deepfakes did not directly alter accuracy-checking intentions, the pattern varied across individuals. In the text-based misinformation condition, participants with greater interest in health issues showed stronger intentions to verify accuracy. In the deepfake condition, however, this pattern disappeared: even participants highly interested in health topics did not show higher intentions to fact-check, suggesting that the realistic, authority-mimicking nature of deepfakes may inhibit critical evaluation even among highly involved individuals.
Another notable finding concerns accuracy motivation. Conventional wisdom suggests that individuals who prioritize accuracy should be better at recognizing misinformation. Yet the study found the opposite: those with stronger accuracy motivation exhibited greater increases in misperceptions when exposed to deepfakes. This paradox may reflect a psychological tendency toward illusory accuracy perception—the belief that one’s judgment is correct—particularly when the misinformation is delivered through a video that appears authoritative. The deepfake’s expert-like cues may have reinforced this misplaced sense of confidence.
Overall, the study provides empirical evidence of how AI-driven, multimodal misinformation affects cognition, highlighting previously understudied risks in the domain of health communication. By demonstrating how deepfakes technologically reproduce authority cues to enhance perceived credibility and persuasion, the research offers important theoretical insights into the mechanisms through which misinformation exerts its influence. It also clarifies how factors such as health-issue interest and accuracy motivation operate—sometimes counterintuitively—when individuals process deepfake content.
The findings carry significant societal implications. Deepfakes can function not only as distorted health information but as serious public health risks. Moreover, individuals who normally value accuracy and engage in careful information processing may still be vulnerable to deepfakes that convincingly imitate expert authority. These insights underscore the need for strengthened AI literacy education, improved digital risk communication strategies, and more robust institutional response systems to mitigate the threats posed by synthetic media in health contexts.
Article Information
Title: Effects of Health-related Deepfakes on Misperceptions: Moderating Effects of Issue Relevance and Accuracy Motivation
Authors: Jiyoung Lee & Michael Hameleers
Journal: Media Psychology
DOI: https://doi.org/10.1080/15213269.2024.2401539
Pure: https://pure.skku.edu/en/persons/jiyoung-lee/
[Figure: Deepfake stimuli used in the study]