This study explores how large language models (LLMs) used for fact-checking affect the perception and dissemination of political news headlines. Despite the growing adoption of AI and ongoing tests of its ability to counter online misinformation, little is known about how people respond to LLM-driven fact-checking. This experiment reveals that even LLMs that accurately identify false headlines do not necessarily improve users’ ability to discern headline accuracy or promote accurate news sharing. LLM fact-checks can actually reduce belief in true news wrongly labeled as false and increase belief in dubious headlines when the AI is unsure about an article’s veracity. These findings underscore the need for research on the unintended consequences of AI fact-checking, informing policies to enhance information integrity in the digital age.
Read the full article in PNAS: doi:10.1073/pnas.2322823121