About the DeFake Project

The DeFake Project is a research initiative dedicated to advancing digital media forensics and developing usable, interpretable solutions for detecting AI-generated and manipulated media, including face manipulation. Our mission is to empower professionals such as journalists, intelligence analysts, and law enforcement officers with innovative tools and resources to critically assess and verify the authenticity of digital content.

Our Origins

The DeFake Project is a research lab located at the ESL Global Cybersecurity Institute at Rochester Institute of Technology. Our work is conducted in collaboration with partners from the School of Journalism and New Media at the University of Mississippi and from Michigan State University. We are honored to be one of the eight winners of the Knight Foundation’s AI and the News Open Challenge, recognizing our innovative approach to addressing the challenges deepfakes pose in journalism and beyond.

The Challenge We Address

The rapid advancement of deepfake technology has significantly lowered the barriers to creating convincing, manipulated video content. This poses several critical challenges, including the potential for disinformation campaigns targeting democratic processes, risks to the reputations of individuals and organizations, and erosion of public trust in visual media and news sources.

Our Approach

We take a usability-centered approach, focusing on the needs of end-users in sensitive roles. Our work centers on:

  • Developing intuitive technologies for AI-driven identification of synthetic and manipulated media.
  • Advancing media content analysis through computer vision, metadata examination, and machine learning techniques.
  • Conducting user studies and community engagement with practitioners in the journalism and intelligence sectors to ensure real-world applicability and transparency.

Community Engagement

We maintain strong ties to professional communities through a diverse range of outreach and collaborative activities. Our team actively:

  • Organizes and participates in interdisciplinary workgroups, bringing together experts from journalism, law enforcement, cybersecurity, and digital forensics to exchange best practices in digital media verification.
  • Leads hands-on training sessions and custom workshops focused on the practical uses of AI-based face manipulation detection, forensic analysis, and generated media investigation. These sessions help participants build the skills needed to assess the provenance and integrity of digital content in real-world scenarios.
  • Delivers presentations, keynotes, and tutorials at national and international industry events, academic conferences, and practitioner forums, sharing our latest research findings and promoting responsible adoption of forensic technologies.
  • Develops and distributes open educational resources, guidance documents, case studies, and toolkits to enable practitioners and educators to better understand evolving threats in media manipulation and leverage our solutions to safeguard information integrity.

These initiatives help shape best practices and ensure our work addresses real-world needs while strengthening public trust in digital communications.

Project Features

The DeFake Project offers a suite of rigorous, user-friendly technologies and resources for digital media forensics and analysis of AI-generated and manipulated media:

  • Educational offerings, both in-person and online, are regularly updated to reflect the latest advances, equipping journalists, intelligence analysts, and law enforcement officers with the tools necessary to adapt to rapidly evolving technological threats.
  • Our forensic analysis platform integrates computer vision, metadata extraction, and machine learning for robust identification and classification of image, audio, and video manipulations, with a focus on face manipulation (a minimal sketch of such a pipeline follows this list).
  • The system is designed for transparency and interpretability, presenting clear explanations and forensic evidence to support users’ decision-making.
  • Our team prioritizes ease of integration and accessibility for users of varying technical backgrounds, developing intuitive interfaces and comprehensive documentation.
  • We contribute to the scientific and professional community by publishing peer-reviewed articles, organizing research workshops, and piloting new standards for evaluating and reporting generated media analysis results.
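
To make the platform description above concrete, here is a minimal Python sketch of how metadata examination and a vision-based face analysis step might be combined into a single, inspectable report. It is illustrative only and not the DeFake platform’s actual code: the file name `example.jpg` is a placeholder, and `score_face` is a hypothetical stand-in for a trained manipulation-detection model, while the metadata and face-detection steps use standard Pillow and OpenCV calls.

```python
# Illustrative sketch only, not the DeFake platform's code. It shows the
# general shape of a pipeline that combines metadata examination with a
# vision-based face check. `score_face` is a hypothetical placeholder.

import cv2                    # pip install opencv-python
from PIL import Image         # pip install Pillow
from PIL.ExifTags import TAGS


def examine_metadata(path: str) -> dict:
    """Collect EXIF tags; missing or inconsistent metadata is a weak signal."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


def find_faces(path: str) -> list:
    """Locate candidate face regions with OpenCV's bundled Haar cascade."""
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]


def score_face(face) -> float:
    """Hypothetical placeholder: a real system would run a trained
    manipulation-detection model here and return a probability."""
    return 0.0


def analyze(path: str) -> dict:
    """Combine both signals into one report a user can inspect."""
    return {
        "metadata": examine_metadata(path),
        "face_scores": [score_face(face) for face in find_faces(path)],
    }


if __name__ == "__main__":
    print(analyze("example.jpg"))  # "example.jpg" is a placeholder path
```

Keeping the metadata and per-face signals separate in the report reflects the transparency goal described above: each signal can be presented to the user as forensic evidence rather than collapsed into a single opaque score.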

Acknowledgements

The DeFake Project is made possible through generous support from Rochester Institute of Technology, partner institutions, the Knight Foundation, the National Science Foundation (NSF), and other research grants. We gratefully acknowledge the contributions of our academic collaborators, industry partners, and the open-source community, whose expertise and dedication drive our mission to advance trustworthy digital media forensics and generated media analysis.

Meet the Team

Our team is made up of a diverse group of individuals from around the world, united by a shared passion for preserving the integrity of digital media.