Facts don’t matter to people who believe in debunked conspiracy theories, or so the conventional wisdom goes. But that assumption itself may not hold up, according to psychology researchers.
Evidence delivered by an AI chatbot persuaded a significant number of participants to put less faith in a conspiracy theory they had previously endorsed, according to a study published today in Science. Researchers at MIT and Cornell University, led by Thomas Costello, an assistant professor of psychology at American University in Washington, D.C., concluded that chatbots excelled at delivering information that debunked the specific reasons individual participants gave for believing in conspiracy theories.
“Many people who strongly believe in seemingly fact-resistant conspiratorial beliefs can change their minds when presented with compelling evidence,” the study’s authors wrote.
Current psychological research posits that conspiracy theorists resist facts because their beliefs serve some internal need, such as belonging to a group, maintaining a sense of control over their circumstances, or feeling special. The researchers started with the hypothesis that conspiracy theorists could be swayed from their positions by clear, specific facts that refute the erroneous evidence each believer cites.
While many people may believe in a given conspiracy theory, the researchers said, the evidence they rely on varies among individuals.
“People have different versions of the conspiracy in their head,” Costello said in a press briefing.
Can a chatbot convince a conspiracy theorist?
To measure the chatbot’s effectiveness, the researchers sought out participants who endorsed theories including the belief that the 11 September 2001 attacks were an inside job and that certain governments have funneled illegal drugs into ethnic minority communities. They defined a conspiracy theory as a belief that certain events were “caused by secret, malevolent plots involving powerful conspirators.”
The chatbot reduced participants’ confidence in a conspiracy theory by an average of 20 percent, as rated by the participants themselves on a scale of 0 to 100 percent before and after the conversations. In follow-up queries, the change in belief persisted 10 days later and again after 2 months. The chatbot was powered by GPT-4 Turbo, a large language model from OpenAI that gave it a wide range of information to draw on in responding to the participants’ remarks. Participants were told the study was investigating conversations about controversial topics between AI and humans.
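For readers curious how such a debunking dialogue might be wired up, the sketch below shows a minimal conversation loop against the OpenAI chat completions API with GPT-4 Turbo. The system prompt, function name, and example message are illustrative assumptions, not the study’s actual materials.

```python
# Minimal sketch of a tailored-debunking chat loop (illustrative only; the
# study's actual prompts and survey instruments are not reproduced here).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical system prompt: address the participant's own stated reasons
# for believing the theory with specific, accurate counter-evidence.
SYSTEM_PROMPT = (
    "You are a careful, factual assistant. The user will describe a conspiracy "
    "theory they believe and the evidence they find persuasive. Respond with "
    "specific, accurate counter-evidence that addresses their particular "
    "reasons, in a respectful tone."
)

def debunking_turn(history: list[dict], user_message: str) -> str:
    """Send one participant message and return the model's evidence-based reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Example usage (hypothetical participant statement)
history: list[dict] = []
print(debunking_turn(history, "I believe the 9/11 attacks were an inside job because..."))
```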
The researchers didn’t prompt the chatbot to refute conspiracies that are real. For example, the chatbot wouldn’t discredit the well-documented MKUltra program, in which the CIA tested drugs on human subjects in the mid-20th century. Fact-checkers reviewed the evidence the chatbot provided and found it was accurate 99.2 percent of the time; the remaining 0.8 percent of claims were misleading. They didn’t find any claims…
The post “AI Chatbots Can Talk Conspiracy Theorists Out of Believing” by Laura Hautala was published on 09/12/2024 by spectrum.ieee.org