Impact Factor (2024): 6.21  |  ISSN: 2583-4371
    Email Id: editor.ijtle@gmail.com

    Hallucinations in Large Language Models for Education: Challenges and Mitigation

    JOURNAL ARTICLE

    Authors: Kamaluddeen Usman Danyaro, Shamsu Abdullahi, Abdallah Saleh Abdallah, Haruna Chiroma

    Keywords: Large Language Models (LLMs), Hallucination, Prompt Engineering, LLM for Education


    Abstract: Large Language Models (LLMs) are increasingly being adopted in education to support teaching, learning, and assessment. While they offer benefits such as personalised learning and automated feedback, their tendency to generate hallucinations (plausible but factually incorrect or fabricated information) poses a critical challenge. In an educational context, hallucinations risk misleading students, compromising academic integrity, and eroding trust in AI-assisted learning. This paper examines hallucinations in education, highlighting their causes, risks, and implications. Unlike prior surveys that address hallucinations broadly, our work focuses specifically on education, where the consequences extend to academic honesty, critical thinking, and equitable access. We provide a domain-specific analysis of how hallucinations emerge in tutoring systems, assessment, and instructional content. Furthermore, we review technical and pedagogical mitigation strategies, such as prompt engineering, fine-tuning, dynamic course content integration, and redesigned assessment practices. The paper contributes a framework that links technical solutions with educational safeguards, emphasising that mitigating hallucinations is not limited to algorithmic advances but also requires institutional policies and critical AI literacy. By addressing these challenges, we aim to inform more reliable, equitable, and trustworthy deployment of LLMs in education.


    Article Info: Received: 29 Sep 2025, Received in revised form: 25 Oct 2025, Accepted: 03 Nov 2025, Available online: 06 Nov 2025


    DOI: 10.22161/ijtle.4.6.2

    Page No: 13-19


