“EVEN THE DEEPEST OF RABBIT HOLES may have an exit,” says a September 12, 2024, news release from American University. Indeed, in an editorial appearing that same day, AAAS Science’s Editor-in-Chief H. Holden Thorp asks, “ChatGPT to the Rescue?”

What, “hallucinators” trying to straighten out people already down rabbit holes?
Here are tidbits gleaned primarily from Thorp’s editorial; see also the AU news release and “A.I. Chatbot Shows Promise in Talking People Out of Conspiracy Theories,” appearing in Science that same day.
Scientists as Communicators. Thorp says, “Making every scientist a communicator is a mantra that pervades academia, but not every scientist desires or is well suited to speak effectively with nonscientists…. Finding diverse science spokespeople who can ‘meet people where they are’ also would be valuable for connecting science information to specific communities, given that misunderstandings can stem from a given group’s valid distrust of scientific institutions.”

H. Holden Thorp, Editor-in-Chief, Science journals.
Failures of Communication. “These approaches,” Thorp says, “have been persistently difficult to implement and are no match for the juggernaut of nonsense unleashed by social media, podcasts, 24-hour news outlets, and online discussion forums. However, the new study reports that using a large language model (LLM, a type of machine learning program) such as ChatGPT may be a method for counteracting conspiratorial beliefs.”
Thorp continues, “That may seem a bit ironic because LLMs are notorious for inventing (‘hallucinating’) facts and spreading misinformation. But in conversations with individuals convinced of conspiracies, an LLM was found to produce a relatively large and durable reduction in beliefs in conspiracies. Conspiracy believers reduced their misinformed beliefs by 20% on average, an effect that lasted for at least 2 months.”
I wouldn’t call a 20 percent reduction a total success, but every little bit helps.
How Does an A.I. Reach These Whackos? (My word choice, not Science’s). Thorp explains, “This success is attributed to the LLM’s access to vast amounts of information. It can therefore readily generate counterevidence that is specific to the individual’s reasoning.”

Thorp continues, “David Rand, one of the study’s authors, told me after the paper was accepted that ‘all of the evidence we have suggests that it really is the counterarguments and nonconspiratorial explanations that are doing the work.’ I was curious about whether it might have something to do with the LLM not becoming emotional or showing the kind of perceived bias that might be attached to a human interlocutor, but so far, the investigators have not seen strong evidence for that.”
I guess the A.I. is just better at picking holes in know-it-all Uncle Harrumph’s B.S. [Editor: an overly vivid mixed metaphor?]
A.I. Isn’t Bothered by the ‘Gish Gallop.’ Thorp explains, “Purveyors of misinformation sometimes use a technique called the ‘Gish gallop’—named after a creationist who was particularly good at it—in which a conversation is flooded with an overwhelming number of mistruths.”
Uncle Harrumph is good at this too.
Thorp recognizes, “Humans are unable to respond to this effectively, no matter how skilled the manner of their responses may be. But the LLM cannot be overwhelmed; it can cite counterevidence indefinitely.”
Hallucinating? Researchers used independent fact-checkers to assess the A.I.’s accuracy, and it proved impressively high: 99.2 percent of its claims were rated true, 0.8 percent misleading, and none false.
A whole lot better than Uncle Harrumph at his best.
A Promising Result. “Although it is perhaps discouraging,” Thorp concludes, “that a machine might be better at countering misinformation than a human, the fact that it is ultimately the scientific information that does the persuading is a relief. It falls on human scientists to show that the AI future may not be so dystopian after all.”

I must get into the thick of these other two sources to learn more. I surely like the idea of A.I. being less dystopian. ds
© Dennis Simanaitis, SimanaitisSays.com, 2024