“Promoting AI’s Safe Usage for Elections,” Book in production
The DeFake Project team contributed a chapter, “Verification AI in the Newsroom: A Cross-Cultural Study of Journalists’ Use of Deepfake Detection Tools,” to the book “Promoting AI’s Safe Usage for Elections.” The release date has not yet been announced; more details can be found at http://ai4ce.org/book/.
Authors
Saniat Javid Sohrawardi, Rochester Institute of Technology,
Y. Kelly Wu, Rochester Institute of Technology,
Matthew Wright, Rochester Institute of Technology
Abstract: This chapter examines how journalists in the United States and Bangladesh perceive and use deepfake detection tools in their news verification workflows. Through a semi-structured, scenario-based role-play study, we investigate the factors influencing journalists’ adoption of these tools, their placement within the verification process, and the potential biases associated with their use. Our findings show that while journalists recognize the potential of deepfake detection tools, their usage is not yet standardized or universally embraced. The tools are often employed midway through the workflow, only after initial assessments based on traditional methods prove inconclusive. Factors influencing tool usage include uncertainty about content authenticity, the perceived importance of the news story, and explicit speculation that content may be a deepfake. The study also reveals instances of automation bias and confirmation bias among journalists, underscoring the need for a balanced approach that combines technological tools with traditional journalistic methods and critical thinking skills. We conclude by discussing the implications of our findings for the development and deployment of deepfake detection tools that are effective, reliable, and ethically sound.