Detecting Deepfakes: Artificial Intelligence and Anti-Jewish Hate: A Case for Regulating Generative AI

Online antisemitism has existed since the invention of the internet. However, recent political and social developments have caused a major increase in this pernicious form of racism.1 At the same time, the use of Artificial Intelligence (AI) technology, in particular the generation of fake images, has never been easier. People with negligible technical skills can, at little to no cost, create content or design systems that reach millions worldwide. Consequently, research on contemporary expressions of hate in digital communication is urgently needed to understand and counter the impact of these technologies.

This research focuses on AI-generated antisemitic fake images in digital communication (so-called deepfakes). It provides insights into, and an overview of, existing research practices. It evaluates available solutions for detecting AI-generated antisemitic deepfakes, creates a method for labelling such deepfakes across different types of online content, building on models established by our researchers for the “Decoding Antisemitism” project, and, for the first time, presents, analyses and evaluates a dataset with such labels. Our results show that further research in this area is required if a model to detect artificially created antisemitic online content is to be accurate and successful. Current algorithmic solutions struggle to account for complex, nuanced forms of imagery, which are particularly prevalent in the dissemination of hate ideologies. As online actors try to avoid automatic recognition, they often resort to implicit rather than explicit, obvious patterns, making detection even more challenging.
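To make the limitation concrete, the minimal sketch below shows what a typical algorithmic solution of this kind can look like: a binary image classifier that flags content as likely AI-generated. This is an illustrative assumption, not the method developed in the report; the backbone, class labels, threshold and function names are placeholders, and the classification head would still need fine-tuning on labelled data. A purely visual detector of this sort also says nothing about whether an image carries antisemitic meaning, which is exactly where implicit, context-dependent imagery escapes automatic recognition.

```python
# Illustrative sketch only: a generic "real vs. AI-generated" image classifier
# baseline. Backbone, labels and threshold are assumptions, not the report's method.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing; real pipelines may differ.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Small backbone with a two-class head: index 0 = "likely real", 1 = "likely AI-generated".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()  # assumes the head has already been fine-tuned on labelled examples

def flag_as_ai_generated(path: str, threshold: float = 0.8) -> bool:
    """Return True if the classifier is confident the image is AI-generated."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    return probs[1].item() >= threshold
```

Even when such a detector performs well on obvious synthetic artefacts, it classifies only how an image was made, not what it communicates; labelling antisemitic content additionally requires the kind of context-sensitive annotation scheme described above.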

Find the report here