Dungeons & Deepfakes
Dungeons & Deepfakes is a research project that combines the immersive structure of role-playing games with the verification challenges journalists face in the digital age. Through scenario-based role-play, journalists step into high-stakes newsroom situations where they must verify video content under the threat of deepfakes. Inspired by the flexible, imaginative format of tabletop games such as Dungeons & Dragons, the project examines how journalists interact with AI-based verification tools and make decisions that can shape public perception and trust. This approach surfaces insights into journalists' behaviors, biases, and workflows as they confront the evolving landscape of manipulated media.
Key Features:
- Semi-structured, Scenario-Based Role-Play Exercises: Participants engage in immersive role-play that simulates high-stakes newsroom environments, allowing them to navigate complex verification challenges.
- Customizable Scenarios: Scenarios reflect real-world challenges, including high-profile events and controversial claims, enabling participants to experience various contexts.
- Integration of Deepfake Detection Tool Prototypes: Journalists use prototypes of deepfake detection tools within the simulated workflow, providing insights into their practical application.
- Cross-Cultural Comparison: The project examines the behaviors of journalists in different cultural contexts, specifically comparing practices in the United States and Bangladesh.
- Visualization of Decision-Making Processes: Participants' actions and decisions are tracked and visualized through step graphs, illustrating their verification processes before and after using detection tools.
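To make the step-graph idea concrete, below is a minimal sketch in Python (using matplotlib) of how a participant's logged actions could be plotted as cumulative verification steps before and after the detection tool is introduced. The action labels, timings, and the `step_series` helper are illustrative assumptions, not data or tooling from the study.

```python
# Minimal sketch of a step graph of verification actions over a session.
# Session logs are hypothetical: each entry is (minutes elapsed, action taken).
import matplotlib.pyplot as plt

before_tool = [(2, "watch video"), (5, "reverse image search"),
               (9, "check source account"), (14, "contact eyewitness")]
after_tool = [(16, "run deepfake detector"), (18, "re-check detector score"),
              (23, "cross-check with second outlet")]

def step_series(log, start_step=0):
    """Convert an action log into x (time) and y (cumulative step count) series."""
    times = [t for t, _ in log]
    steps = list(range(start_step + 1, start_step + len(log) + 1))
    return times, steps

x1, y1 = step_series(before_tool)
x2, y2 = step_series(after_tool, start_step=len(before_tool))

fig, ax = plt.subplots(figsize=(7, 3))
ax.step(x1, y1, where="post", label="before detection tool")
ax.step(x2, y2, where="post", label="after detection tool")
ax.set_xlabel("Minutes into session")
ax.set_ylabel("Cumulative verification steps")
ax.legend()
plt.tight_layout()
plt.show()
```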
Research Objectives:
- Examine Journalists' Perceptions: Understand how journalists perceive deepfake detection tools and their readiness to adopt them.
- Analyze Workflow Changes: Investigate how and when journalists incorporate these tools into their verification workflows.
- Investigate Overreliance on AI Tools: Explore instances of automation bias and confirmation bias that may arise from using deepfake detection tools.
Methodology:
- Participants assume the role of investigative journalists tasked with verifying potentially manipulated video content.
- Sessions can be conducted either online or in person, allowing researchers to maintain control over interactions.
Key Findings:
- While journalists generally view deepfake detection tools positively, they often rely on traditional verification methods first and may exhibit biases based on tool outputs.
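As an illustration of how overreliance might be examined from session logs, the following hypothetical sketch flags decisions where a journalist's verdict simply mirrors the detector's output without independent checks. The record structure, field names, and the 0.5 threshold are assumptions for illustration, not the study's actual analysis.

```python
# Hypothetical sketch: flag decisions that follow the detector output with no
# independent verification, as candidates for closer review of automation bias.
from dataclasses import dataclass

@dataclass
class Decision:
    clip_id: str
    detector_score: float    # hypothetical "probability of manipulation" from the prototype
    journalist_verdict: str  # "authentic" or "manipulated"
    independent_checks: int  # number of non-AI verification steps taken

def follows_detector(d: Decision, threshold: float = 0.5) -> bool:
    """True if the verdict matches what the detector alone would suggest."""
    detector_says_fake = d.detector_score >= threshold
    return (d.journalist_verdict == "manipulated") == detector_says_fake

decisions = [
    Decision("clip_A", 0.92, "manipulated", independent_checks=0),
    Decision("clip_B", 0.30, "manipulated", independent_checks=3),
    Decision("clip_C", 0.65, "authentic",   independent_checks=2),
]

flagged = [d for d in decisions if follows_detector(d) and d.independent_checks == 0]
for d in flagged:
    print(f"{d.clip_id}: verdict followed detector (score={d.detector_score}) with no independent checks")
```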
Implications:
The Dungeons & Deepfakes project provides valuable insights into improving the usability and interpretability of deepfake detection tools. It highlights the importance of training journalists to use these tools effectively while maintaining critical thinking in their verification processes. By combining elements of game design with serious research objectives, the project offers a distinctive approach to understanding how journalists interact with emerging AI technologies in combating misinformation.