RESEARCH PAPER
Deepfake Influence Tactics through the Lens of Cialdini’s Principles: Case Studies and the DEEP FRAME Tool Proposal
1 Strategy and Development of Cyberspace Security Team, NASK – National Research Institute, Poland
2 Audiovisual Analysis and Biometric Systems Department, NASK – National Research Institute, Poland
These authors contributed equally to this work.
Submission date: 2024-09-17
Final revision date: 2024-12-12
Acceptance date: 2024-12-13
Online publication date: 2024-12-30
Publication date: 2024-12-30
Corresponding author
Pawel Zegarow
Strategy and Development of Cyberspace Security Team, NASK – National Research Institute, Kolska 12, 01-045, Warsaw, Poland
ABSTRACT
The advancement of artificial intelligence (AI) has introduced both significant opportunities and challenges, with deepfake technology exemplifying the dual nature of AI’s impact. On the one hand, it enables innovative applications; on the other, it poses severe ethical and security risks. Deepfakes exploit human psychological vulnerabilities to manipulate perceptions, emotions, and behaviours, raising concerns about the public’s ability to distinguish authentic content from manipulated material. This study examines the methods of influence embedded in deepfake content through the lens of Robert Cialdini’s six principles of persuasion. By systematically analysing how these mechanisms are employed in deepfakes, the research highlights their persuasive impact on human behaviour, particularly in scenarios such as financial fraud.
To address the challenges posed by deepfake technology, this study introduces DEEP FRAME, an original tool for systematically recording and analysing deepfake content. DEEP FRAME integrates technical and psychological analysis, enabling the identification of technological characteristics and manipulation strategies embedded within deepfakes. The findings underscore the need for a holistic and interdisciplinary approach that combines technological innovation, psychological insights, and legal frameworks to counter the growing threat of deepfakes.
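To make the described integration of technical and psychological analysis more concrete, the sketch below shows one possible way such a case record could be structured in code. This is a minimal illustration only; the class and field names (DeepfakeRecord, persuasion_principles, and so on) are assumptions for explanatory purposes and do not represent the actual DEEP FRAME schema proposed in the paper.

```python
# Illustrative sketch: a hypothetical record pairing technical metadata with
# Cialdini-style persuasion tags for an analysed deepfake. Names are assumed.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class CialdiniPrinciple(Enum):
    """Cialdini's six principles of persuasion."""
    RECIPROCITY = "reciprocity"
    COMMITMENT_CONSISTENCY = "commitment_and_consistency"
    SOCIAL_PROOF = "social_proof"
    AUTHORITY = "authority"
    LIKING = "liking"
    SCARCITY = "scarcity"


@dataclass
class DeepfakeRecord:
    """One analysed deepfake item: technical traits plus persuasion tactics."""
    source_url: str                      # where the content was observed
    media_type: str                      # e.g. "video", "audio", "image"
    generation_technique: str            # e.g. "face swap", "voice cloning"
    detected_artifacts: List[str] = field(default_factory=list)
    persuasion_principles: List[CialdiniPrinciple] = field(default_factory=list)
    target_scenario: str = ""            # e.g. "financial fraud"
    notes: str = ""


# Example entry for a hypothetical financial-fraud case.
record = DeepfakeRecord(
    source_url="https://example.com/clip",
    media_type="video",
    generation_technique="face swap",
    detected_artifacts=["inconsistent blinking", "audio-lip desync"],
    persuasion_principles=[CialdiniPrinciple.AUTHORITY, CialdiniPrinciple.SCARCITY],
    target_scenario="financial fraud",
)
print([p.value for p in record.persuasion_principles])
```

A structure of this kind would let each catalogued deepfake be queried both by its technological characteristics and by the persuasion mechanisms it exploits, which is the dual perspective the abstract attributes to DEEP FRAME.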