Trust in news is a hot topic in any conversation about AI. Many of the newsrooms I've talked to that are exploring generative AI are focused first on preserving audience trust, and they are deliberating over AI guidelines to ensure transparency. One question I'm asked again and again is how other newsrooms are disclosing their AI use, so they can model their own practices on it. It comes up so often that I keep a go-to response ready to email. At the end of this post, I'll share that list of newsrooms whose AI guidelines I've reviewed, for anyone else looking for examples.
To explore this topic further, I reached out to Lynn Walsh for insights on fostering trust in news in the AI age. What elements should we incorporate into our guidelines and transparency disclosures? Also, is there such a thing as too much disclosure?
Lynn, an Emmy Award-winning journalist with over 15 years of experience in investigative, data, and TV journalism, is the Assistant Director at Trusting News, where she focuses on helping journalists regain public trust. She also teaches at Point Loma Nazarene University and has served as a national president and Ethics Chair for the Society of Professional Journalists.
Below are the key insights from our conversation:
1️⃣ Guidelines for Transparency in AI Use: Lynn stresses the need for clear transparency guidelines in newsrooms using AI tools. As these guidelines evolve, she recommends conducting audience research to understand how readers react to AI disclosures. Crafting those disclosures carefully, in both wording and placement, is essential if the public is to understand AI's role in the newsroom.
2️⃣ Ethical Considerations in Using AI Avatars: Vidiofy.ai, a tool that recently had its public launch, turns news articles into videos and offers the option of an AI talking-head avatar as the narrator, adding personality and engagement. This raises a crucial question for news organizations: how should they approach the use of these AI avatars? Lynn believes that open conversations with the audience about using these avatars, and about their impact on trust, are essential.
3️⃣ AI's Role in Rebuilding Trust in News: Lynn shares experiments she’s been running at Trusting News that use tools like ChatGPT to analyze news content for potential bias. This helps journalists recognize and address unintended bias, leading to more balanced and objective reporting and, in turn, stronger audience engagement and trust (a rough sketch of what such a check could look like in code follows below).
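To make that last experiment concrete, here is a minimal sketch of how a bias check like Lynn describes might be scripted. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` environment variable; the prompt wording, model name, and file name are illustrative, not the exact setup Trusting News uses.

```python
# Minimal sketch: asking an LLM to flag potential bias in a draft article.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
# Prompt wording and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def review_for_bias(article_text: str) -> str:
    """Return the model's notes on loaded language, framing, and missing voices."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an editor auditing a news article for potential bias. "
                    "Point out loaded language, one-sided framing, and perspectives "
                    "that are missing. Quote the passages you flag."
                ),
            },
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # "draft_article.txt" is a hypothetical file holding the story to review.
    with open("draft_article.txt") as f:
        print(review_for_bias(f.read()))
```

The same pattern works without any code at all: paste the article and the editor-style instructions directly into the ChatGPT interface, which is closer to how a reporter on deadline would actually run the check.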
For a deeper dive into this episode, and building on last week's session with Dr. Mario Garcia, a custom GPT trained on this episode's transcript is available to interact with here. ChatGPT Plus subscribers can engage with it for additional insights. Your thoughts and experiences with this GPT are valuable, so please share them with me.
🎧 Check out this episode on your favorite podcast platform to learn how newsrooms can responsibly and ethically implement guidelines for experimenting with AI.
🔔 Course registration is now open for the Newsroom Robots X Wonder Tools Generative AI for Media Professionals Masterclass
I’ll be co-teaching this course with Jeremy Caplan, the author of Wonder Tools and the Director of Teaching and Learning at the Craig Newmark Graduate School of Journalism. We’ve designed it to empower media professionals to harness generative AI effectively, building on previous generative AI workshops I've conducted at institutions including the University of Toronto, the Craig Newmark Graduate School of Journalism at the City University of New York, and the International Center for Journalists. Sign up now.
Here are some example position statements on AI that I have been looking at:
This article by Hannes Cools in Nick Diakopoulos’ Generative AI in the Newsroom Challenge, titled "Towards Guidelines for Guidelines on the Use of Generative AI in Newsrooms," also offers insightful perspectives on establishing such guidelines.