
Listen to Our Global Guests

Podcasts

The Latest Episodes Are Here

RegulatingAI Podcasts

In this episode of RegulatingAI, host Sanjay Puri speaks with Karin Andrea-Stephan — COO & Co-founder of Earkick, an AI-powered mental health platform redefining how technology supports emotional well-being.

With a career that spans music, psychology, and digital innovation, Karin shares how she’s building privacy-first AI tools designed to make mental health support accessible — especially for teens navigating loneliness and emotional stress.

Together, they unpack the delicate balance between AI innovation and human empathy, the ethics of AI chatbots for youth, and what it really takes to design technology that heals instead of harms.

Key Takeaways:
• AI and Empathy: Why emotional intelligence—not algorithms—must guide the future of mental health tech.
• Teens and Trust: How technology exploits belonging, and what must change to rebuild digital trust.
• Regulating Responsibly: Why the answer isn’t bans, but thoughtful, transparent policy shaped with youth input.
• Privacy by Design: How ethical AI can protect privacy without compromising impact.
• Bridging the Global Mental Health Gap: Why collaboration and compassion matter as much as code.

If this conversation made you rethink the relationship between AI and mental health, hit like, share, and subscribe to RegulatingAI for more insights on building technology that serves humanity.

#RegulatingAIpodcast
#sanjaypuri
#ResponsibleAI

Resources Mentioned:
https://www.linkedin.com/in/karinstephan/


Karin Stephan on Building Emotionally Intelligent Technology | RegulatingAI Podcast

The Human Side of Machine Intelligence: Jeff McMillan on AI at Morgan Stanley – RegulatingAI Podcast


In this episode of the RegulatingAI Podcast, we host California State Senator Scott Wiener, one of the most influential policymakers shaping the future of AI regulation, AI safety, and transparency standards in the United States.

As President Donald Trump’s new AI executive order pushes for federal control over AI regulation, Senator Wiener explains why states like California must retain the power to regulate artificial intelligence — and how California’s laws could influence global AI governance.

Senator Wiener is the author of:
• SB 1047 – California’s proposed liability bill for high-risk AI systems
• SB 53 – California’s new AI transparency law, now in effect

We dive deep into:
• The battle between federal vs. state AI regulation
• Why California remains the frontline of AI governance
• The real impact of Trump’s AI executive order
• Growing risks of AI-driven job displacement
• How governments can balance innovation with public safety
• The future of responsible and accountable AI development

🔑 KEY TAKEAWAYS
1. California’s Policy Power
California’s tech dominance allows it to shape national and global AI standards even when Congress stalls.
2. SB 1047 vs. SB 53 Explained
SB 1047 proposed legal liability for dangerous AI systems, while SB 53 — now law — requires AI companies to publicly disclose safety and risk practices.
3. Why Transparency Won
After SB 1047 was vetoed, California shifted toward transparency as a regulatory first step through SB 53.
4. AI Job Disruption Is Accelerating
Senator Wiener warns that workforce displacement from AI is happening faster than expected.
5. A Realistic Middle Path
He advocates for smart AI guardrails — avoiding both overregulation and total deregulation.

If you found this conversation valuable, don’t forget to like, subscribe, and share to stay updated on global conversations shaping the future of AI governance.

Resources Mentioned:
https://www.linkedin.com/company/ascet-center-of-excellence
https://www.linkedin.com/in/james-h-dickerson-phd

Timestamps:
00:00 – Intro: Trump’s AI order vs California
01:20 – Who is Senator Scott Wiener?
03:00 – Why AI needs proactive regulation
05:00 – Catastrophic AI risks and real-world threats
07:30 – Lessons from past tech underregulation (social media, privacy)
09:30 – Why California wants to lead on AI rules
11:30 – Inside SB 1047 & SB 53: goals and pushback
14:00 – Big-tech lobbying and accusations of “caving to industry”
16:30 – Trump’s AI executive order and Senator Cruz’s preemption idea
19:00 – Federal vs state power: when preemption is acceptable
21:00 – Balancing innovation with safety in San Francisco’s AI boom
23:00 – AI, jobs, and inequality: what keeps Wiener up at night
25:30 – Closing thoughts


Trump’s AI Executive Order vs California: Senator Scott Wiener Responds | RegulatingAI Podcast


In this episode of RegulatingAI, host Sanjay Puri sits down with Congresswoman Sarah McBride of Delaware — a member of the U.S. Congressional AI Caucus — to talk about how America can lead responsibly in the global AI race.

From finding the right balance between innovation and regulation to making sure AI truly benefits workers and small businesses, Rep. McBride shares her human-centered vision for how AI can advance democracy, fairness, and opportunity for everyone.

Here are 5 key takeaways from the conversation:
💡 Finding the “Goldilocks” Zone: How to strike that just-right balance where AI regulation protects people without holding back innovation.
🏛️ Federal vs. State Regulation: Why McBride believes the U.S. needs a unified national AI framework — but one that still values state leadership and flexibility.
👩‍💻 AI and the Workforce: What policymakers can do to make sure AI augments human talent rather than replacing it.
🌎 Democracy vs. Authoritarianism: The U.S.’s role in leading with values and shaping AI that reflects openness, ethics, and democracy.
🔔 Delaware’s Legacy of Innovation: How Delaware’s collaborative approach to growth can be a model for responsible tech leadership.

If you enjoyed this episode, don’t forget to like, comment, share, and subscribe to RegulatingAI for more conversations with global policymakers shaping the future of artificial intelligence.

Resources Mentioned:
mcbride.house.gov
https://mcbride.house.gov/about

Timestamps:
00:00 — Introduction and welcome
03:00 — Balancing AI innovation vs. regulation
07:00 — Federal vs. state roles in AI policy
10:00 — Policymaking for small businesses and equity
13:00 — Bipartisan efforts in Congress for AI legislation
16:00 — America’s global leadership in ethical AI
19:00 — Workforce impact and policies for an AI-driven economy
23:00 — The future: rethinking the American dream and social policy
26:00 — AI in government services and the need for safeguards
29:00 — Delaware’s legacy and collaborative innovation
32:00 — Closing thoughts


Inside AI Policy with Congresswoman Sarah McBride | RegulatingAI Podcast with Sanjay Puri


In this panel at the World Summit AI in Amsterdam, host Sanjay Puri, President of RegulatingAI, moderates a high-impact discussion on “Funding the Future: How to Invest in Trustworthy AI.”

Joining him are Vanessa Butera, Director of Data & Information Solutions at the European Investment Bank (EIB), and Catalina Muller, President of ALLAI and Observer at the Council of Europe.

Together, they unpack how capital, regulation, and governance are reshaping the global AI landscape — offering founders, investors, and policymakers a blueprint for responsible and sustainable AI innovation.

5 Key Takeaways 🎙️
1️⃣ Trustworthy AI must be funded first — not flashy “marketing AI.” Real value comes from systems that have strong data integrity, governance, and traceability from the start.
2️⃣ The EU AI Act is not a barrier, but a blueprint — both experts emphasise that compliance, built early, reduces liability and accelerates sustainable returns.
3️⃣ Liability and public trust drive investment decisions — unsafe or non-compliant AI can lead to fines, reputational damage, and loss of investor confidence.
4️⃣ Europe is building a unified AI ecosystem — through instruments like EIB venture debt, the TechEU initiative, and EIC programs designed specifically for early-stage startups.
5️⃣ Regulation and innovation go hand in hand — debunking the myth that rules slow progress; instead, they create more competitive, future-proof AI solutions.

Enjoyed this panel? Make sure to like, comment, share, and subscribe to RegulatingAI for more conversations shaping the global AI governance landscape.

Timestamps:
00:00 — Introduction and panel setup
03:10 — Guest introductions
06:00 — Why fund trustworthy AI
10:00 — Regulation and public trust
13:00 — The EU AI Act and compliance
16:00 — How funding decisions are made
19:00 — Infrastructure and investment
21:00 — Q&A and closing remarks


Can We Trust AI if We Don’t Fund It Right? | World Summit AI Panel | RegulatingAI


Recorded live at the OGP Global Summit 2025, this special episode of RegulatingAI explores “Power, Participation, and the Algorithm: AI Governance for the People.”

Moderated by Sanjay Puri, President of RegulatingAI, the panel features Mehdi Jomaa (Former PM of Tunisia), Yvonne Wamucii (Presidency of Kenya), Augusta Nnadi (Nigeria), Tim Hughes (OGP), and Alex Walsh (IE University).

Together, they confront a critical question: How can democracies ensure AI serves people—ethically, inclusively, and transparently?


5 Key Takeaways
~ Democratic Oversight Matters: AI governance frameworks must embed transparency, accountability, and participation from the start.
~ Inclusion from Day One: Policymakers must co-design systems with marginalised groups—especially youth, women, and rural communities.
~ Trust and Transparency: Strategic communication and accessible information are essential to counter misinformation and build civic trust.
~ Global Collaboration: Democracies need shared AI norms rooted in human rights and open governance—not just national agendas.
~ Education & Foresight: Ethical literacy, digital inclusion, and anticipatory regulation are key to balancing innovation and democratic safeguards.

If you care about the future of democracy in an algorithmic world, this episode is for you.

👉 Like, share, and subscribe for more global conversations on AI, governance, and civic power.

#AIGovernance #RegulatingAIpodcast #sanjaypuri

Timestamps:
00:00 — Podcast introduction & framing the session
03:00 — Panelist introductions
07:00 — Why democratic oversight of AI matters
14:00 — Inclusion and participation: From principles to practice
21:00 — Building trust and fighting misinformation
29:00 — Universities & ethical AI education
36:00 — International collaboration & global south voices
43:00 — Preventing concentration of power
50:00 — Procurement, accountability, and democratic adaptation
57:00 — Audience Q&A
1:07:00 — Key risks and opportunities for democracy
1:14:00 — Closing takeaways


From Principles to Practice: How Governments Can Regulate AI Responsibly | OGP Global Summit 2025


Armenia is quietly becoming one of the world's most interesting AI hubs—and you probably haven't heard about it yet.

In this episode, I sit down with Armenia's Minister of Finance to discuss:

~ Why Nvidia is building a massive AI factory in Armenia
~ How a country of 3 million is attracting Synopsys, Yandex, and major tech companies
~ The secret advantage: abundant energy + Soviet-era engineering talent
~ Is the AI investment boom a bubble or the real deal?
~ How AI is already being used in tax collection and government services
~ The peace agreement with Azerbaijan and what it means for tech investment
~ Why the "Middle Corridor" could make Armenia the next tech destination

The Minister doesn't think AI investment is a bubble—he thinks we're just getting started. He shares honest insights about job displacement, efficiency gains, and why human connection still matters in an AI-driven world.

About the Guest:
Armenia's Minister of Finance is an economist who rose from bank accounting to leading the nation's fiscal policy. He oversees Armenia's economic transformation during a pivotal era of digital ambitions and AI development.

Timestamps:
0:00 — Intro to Regulating AI Podcast & Sanjay Puri
0:31 — Armenia’s Minister of Finance: Guest Introduction
1:09 — Armenia’s history as a Soviet tech hub
2:10 — Nvidia’s AI factory in Armenia explained
3:10 — Why Armenia is attracting global tech companies
4:30 — Armenia’s talent and energy advantages
5:30 — R&D centres: Synopsys, Yandex, Nvidia in Armenia
7:20 — Is the AI investment boom a bubble?
8:00 — Impact of AI on jobs and economy in Armenia
10:00 — How Armenia’s Ministry of Finance uses AI (tax/data)
11:00 — Piloting AI in government services
12:00 — Citizen interactions: Can AI improve public services?
13:00 — Digitalisation, cybersecurity & future AI agencies
14:10 — Armenia’s peace agreement & tech cooperation
15:30 — Outro and closing remarks

🎙️ Subscribe for conversations with global leaders at the intersection of AI, policy, and innovation

💬 Leave a comment: What surprised you most about Armenia's AI strategy?
🔔 Hit the bell to catch our next episode


Small Nations & Big AI Ideas

Contact Us

Knowledge Networks brings together non-profits dedicated to domain-specific community building.