Can We Rely on Grok or Perplexity for Fact-Checking Posts on X?

The Use of AI Tools for Fact-Checking in Political Discourse
In an era where misinformation can spread rapidly through social media, users are increasingly turning to AI tools like Grok and Perplexity to verify facts related to politics, politicians, and even AI-generated content. While these tools provide quick answers, the reliability of their information sources raises important questions.
The Reliability of AI
Many users now rely on AI bot responses to fact-check news and claims. However, a recent investigation by the Columbia Journalism Review's Tow Center for Digital Journalism found that these AI-driven tools returned incorrect answers to more than 60% of the queries they were given. Users should therefore treat their responses with caution.
Grok, for instance, has openly acknowledged its limitations, stating that it cannot guarantee the complete accuracy of the information it provides. It draws on a vast array of online data, which also increases the chance of it misidentifying facts. For example, Grok mistakenly identified a scene from Disney's 101 Dalmatians as being from Lady and the Tramp, a blunder it quickly admitted.
The Importance of Fact-Checking Sources
AI bots frequently reference fact-checking websites in their answers. A notable example is a claim that Congress leader Rahul Gandhi promised farmers he could turn potatoes into gold. The soundbite was often used to mock Gandhi, but the claim misrepresented him: he was in fact recounting a promise made by Prime Minister Narendra Modi.
When Grok and Perplexity assessed this claim, they too misrepresented the context, even while citing The Quint as their source. This highlights the need to examine carefully the sources from which these AI tools draw their information.
Conflicting AI Responses
AI responses can also be inconsistent, particularly on controversial subjects. On the question of Prime Minister Modi's education, for example, the bots offered conflicting information: one response addressed the controversies surrounding his degree, while another appeared to confirm that he holds degrees in political science.
A related study found that AI chatbots are poor at declining to answer questions they cannot address properly. Instead, they tend to offer incorrect conclusions, compounding misinformation rather than clarifying it.
Analyzing Public Posts
The bots were also summoned to fact-check a recent post by comedian Kunal Kamra about activist Umar Khalid. Khalid, a controversial figure, has been jailed since 2020 on charges related to the Delhi riots conspiracy case. The prosecution has struggled with inconsistencies in its case, and Khalid's situation has sparked broad public debate.
When users sought AI verification of Kamra's statement, Grok and Perplexity offered rapid but imprecise responses. Perplexity, for instance, described Kamra's post as containing misinformation but failed to provide thorough context about Khalid's legal status.
In another instance, Grok referred to certain "fringe groups" praising historical figures such as Nathuram Godse, while also acknowledging Khalid's polarizing status. These answers exhibited a bias toward emphasizing his alleged involvement in the riots.
The Challenge of Information Integrity
As misinformation sources multiply, individuals are turning to AI tools to confirm facts in political discourse. However, the inconsistency of the bots' responses compounds the challenge: when a viral user post implicated certain influencers in spreading fake news, Grok and Perplexity offered differing views on the situation.
While these AI tools can deliver information swiftly, users should approach their findings with a critical eye, taking the time to verify facts independently or consult additional credible sources. The continuing evolution of AI-enabled technology underscores the need for greater media literacy, especially given the notable inaccuracies that fast, AI-generated content can produce.