The positive influence of large language models on fact-checking practices: A case study of Grok

Samet, Uri (2025) The positive influence of large language models on fact-checking practices: A case study of Grok. World Journal of Advanced Engineering Technology and Sciences, 15 (3). pp. 1727-1738. ISSN 2582-8266

Article PDF: WJAETS-2025-1123.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial Share Alike.


Abstract

This paper investigates the impact of Large Language Models (LLMs), with a specific focus on Grok, on the evolution of user fact-checking practices. Contrary to the simplistic view that LLMs solely exacerbate misinformation, this paper argues that their widespread adoption has contributed positively to heightened public awareness of the need for information verification and to a demonstrable increase in proactive fact-checking behaviors. This phenomenon is driven by the inherent limitations of LLMs, such as "hallucinations" and biases, which compel users to adopt a more vigilant and critical approach to digital content. Through empirical data on LLM proliferation, shifts in user trust, and evolving fact-checking habits, this study illustrates how these technologies, despite their imperfections, serve as catalysts for enhanced digital literacy. Grok's unique integration with and accessibility within the X platform are highlighted as significant factors in fostering user-initiated fact-checking, offering a compelling case study for the future of AI-driven information integrity strategies.

Item Type: Article
Official URL: https://doi.org/10.30574/wjaets.2025.15.3.1123
Uncontrolled Keywords: Large Language Models (LLMs); Fact-Checking; Artificial Intelligence; Grok; Misinformation Detection; AI-Assisted Verification
Depositing User: Editor Engineering Section
Date Deposited: 16 Aug 2025 13:16
URI: https://eprint.scholarsrepository.com/id/eprint/4813