Do LLMs Comply with the First Amendment? – Video

Do LLMs Comply with the First Amendment? – Video

Do LLMs follow the First Amendment?

In the thought-provoking video “Do LLMs Follow the First Amendment?”, free speech scholar Jacob Mchangama explores the profound implications of AI as a mediator of our information landscape. As AI technologies increasingly shape how we access and engage with information, their built-in biases pose significant risks to free speech. Mchangama highlights a compelling experiment where 268 prompts were tested across popular language models (LLMs). The results revealed a troubling trend: many AIs adopted restrictive stances on controversial yet legal speech, raising concerns about censorship in the digital age.

Mchangama emphasizes the importance of ownership and ethical guidelines in the development of AI models. He argues that proprietary models can create choke points that limit the diversity of thought, hindering public discourse. Conversely, the rise of open-source models could empower individuals to tinker with AI technologies, fostering a healthier ecosystem for free expression. This conversation is crucial as we navigate an era where the frontiers of free thought are increasingly mediated by algorithms and corporate interests.

Join Mchangama on this journey to safeguard our freedoms as he advocates for a future where AI can enhance, rather than restrict, our engagement with the world of ideas.

Watch the video by Big Think

Author Video Description

This interview is an episode from ‪@The-Well, our publication about ideas that inspire a life well-lived, created with the ‪@JohnTempletonFoundation.

Subscribe to The Well on YouTube ► https://bit.ly/thewell-youtube
Watch all of Mchangama’s interviews ► https://www.youtube.com/playlist?list=PL_B7bI1QVmJBpGqPZQP1mSFrN1Rfy0CR5

What happens when the technology mediating nearly all our information begins to decide what speech is acceptable?

Free speech scholar Jacob Mchangama warns that AI’s growing role in search, email, and word processing means its hidden biases could shape freedom of thought itself. With his team at the Future of Free Speech, Mchangama ran an experiment that tested 268 prompts against popular LLMs and found that the results often reflected inconsistent standards.

According to Mchangama, this shows why ownership of AI models matters, since their values, incentives, and pressures ultimately shape public access to information.

Read the video transcript ► https://bigthink.com/the-well/when-ai-self-censors/?utm_source=youtube&utm_medium=video&utm_campaign=youtube_description_bigthink

———————————————————————————-

About Jacob Mchangama:
Jacob Mchangama founded and leads The Future of Free Speech, is a research professor at Vanderbilt, and a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE). A prolific commentator and author on free speech and human rights, he created the podcast “Clear and Present Danger” and wrote the 2022 book “Free Speech: A History From Socrates to Social Media.”

———————————————————————————-

About The Well
Do we inhabit a multiverse? Do we have free will? What is love? Is evolution directional? There are no simple answers to life’s biggest questions, and that’s why they’re the questions occupying the world’s brightest minds.

Together, let’s learn from them.

Subscribe to the weekly newsletter ► https://bit.ly/thewellemailsignup

———————————————————————————-

Join The Well on your favorite platforms:
► Facebook: https://bit.ly/thewellFB
► Instagram: https://bit.ly/thewellIG

About Big Think

Big Think is the leading source of expert-driven, actionable, educational content — with thousands of videos, featuring experts ranging from Bill Clinton to Bill Nye, we help you get smarter, faster. Get actionable lessons from the world’s greatest thinkers & doers. Our experts are either disrupting or leading their respective fields.

Video “Do LLMs follow the First Amendment?” was uploaded on 11/18/2025 to Youtube Channel Big Think