Newsroom Robots

Why the Future of Journalism Is Still Human: In Conversation with Vilas Dhar

Can AI write, analyze, and create? Absolutely. But empathy, imagination, and care remain firmly human.

This week on Newsroom Robots, I sit down with Vilas Dhar, President of the Patrick J. McGovern Foundation, one of the world’s foremost philanthropies advancing AI for public good. Vilas leads a $1.5 billion endowment that has committed over $500 million to projects spanning climate action, public health, education, and democratic governance. He has served on the UN Secretary-General’s High-Level Advisory Body on AI, is the U.S. government’s nominated expert to the Global Partnership on AI, and was named a World Economic Forum Young Global Leader in 2022.

Across philanthropy, policy, and technology, Vilas carries one central conviction: technology may accelerate, but the future of journalism and society must remain human-centered.

In our conversation, Vilas discusses his three-part framework for ethical AI deployment (responsible data, clear boundaries, and transparency) and explains how to translate abstract principles into concrete newsroom decisions. He unpacks his LISA framework (Listen, Involve, Share, Assess) for audience-centered AI design, and tackles the hardest questions facing newsroom leaders: Should we buy or build AI tools? How do we balance innovation with environmental sustainability? What happens to human creativity when machines can create?

But perhaps most powerfully, Vilas challenges a deeply held belief in journalism: that media organizations can remain ‘just’ media companies in an AI-driven world. There is no way to be a media organization today without also being a technology organization, he argues, and that shift requires not just new tools, but a fundamental reckoning with organizational identity and purpose.

In this episode, we cover:

00:31 – Introducing Vilas Dhar and his human-centered AI vision
Why technology should serve dignity, equity, and democracy—not just profit

02:17 – The three-part framework for ethical AI
Responsible data, clear boundaries, and transparency as actionable principles

07:08 – Questions leaders must ask before deploying AI
Who’s involved? Who’s accountable? Who has editorial control over AI use?

10:16 – The LISA framework: Listen, Involve, Share, Assess
Turning AI experimentation into behind-the-scenes reporting that builds public trust

13:30 – Navigating ethical dilemmas around AI-generated content
Voice, attribution, and distinguishing between human and machine work

13:51 – The three phases of newsroom AI adoption
From individual experimentation to strategic alignment to building custom tools

18:54 – Why “we’re not a tech company” no longer works
The buy vs. build debate and restructuring organizational identity

23:12 – Organizational reckoning in an 18-month transformation cycle
Redefining purpose when change that took decades now happens in months

25:23 – Reconciling AI’s environmental costs
Why smaller, targeted models and collective action matter more than massive systems

29:14 – Fighting misinformation with AI
Provenance standards, verification tools like Gigafact, and rebuilding trust

34:13 – What journalism is missing compared to other industries
The courage to tackle operational efficiency and low-hanging fruit

37:01 – The evolving role of human creativity and agency
Why AI can find patterns but never understand vulnerability, empathy, or love

39:33 – The McGovern Foundation’s North Star
Moving from AI done “to us” to AI done “for us” to AI done “by us”

44:23 – How Vilas uses AI personally
Family storytelling and flipping AI into a Socratic questioner to sharpen his own thinking

🎧 Listen to the full conversation with Vilas Dhar on Apple Podcasts, Spotify, or your favorite podcast platform.