Ethical Risks of Generative AI in Education: Challenges, Implications, and a Responsible Use Framework

Notice

This is an unedited manuscript accepted for publication and provided as an Article in Press for early access at the author’s request. The article will undergo copyediting, typesetting, and galley proof review before final publication. Please be aware that errors may be identified during production that could affect the content. All legal disclaimers of the journal apply.

Year : 2026 | Volume : 03 | Issue : 01 | Page : 23–30
By

  • Samyo Ranjan Jagdev,

  • Chinmay Giri,

  • G. Subhasmita Prusty,

  • Girija Nandan Das,

  1. Assistant Professor, Department of Computer Science Engineering, GIET, Bhubaneswar, Odisha, India
  2. Student, Department of Computer Science Engineering, GIET, Bhubaneswar, Odisha, India
  3. Student, Department of Computer Science Engineering, GIET, Bhubaneswar, Odisha, India
  4. Student, Department of Computer Science Engineering, GIET, Bhubaneswar, Odisha, India

Abstract

The rapid diffusion of generative artificial intelligence (AI) technologies in educational settings is reshaping how teaching, learning, and assessment are designed and enacted. Large language models and related generative systems offer powerful capabilities for content creation, personalized feedback, and instructional support, promising gains in efficiency and learner engagement. However, their growing use also introduces a complex set of ethical risks that challenge foundational educational values such as integrity, equity, transparency, and trust. This paper critically examines the ethical implications of generative AI in education through a comprehensive synthesis of existing literature and policy discussions. The analysis identifies six interrelated ethical risk dimensions that consistently emerge across educational contexts: academic integrity and authorship, cognitive dependency, bias and inequality, privacy and data protection, transparency and accountability, and assessment fairness. Rather than treating these concerns in isolation, the study demonstrates how they interact and reinforce one another, amplifying potential negative impacts on learning quality, student outcomes, and institutional credibility. In particular, the opacity of generative AI systems and uneven access to AI tools raise significant concerns regarding fairness, explainability, and the legitimacy of assessment practices. Building on this analysis, the paper proposes a four-pillar ethical framework for the responsible integration of generative AI in education, grounded in integrity, equity, transparency, and privacy. The framework translates ethical principles into actionable institutional and pedagogical strategies, including assessment redesign, AI disclosure policies, bias auditing, AI literacy initiatives, and compliance with data protection regulations.
By aligning ethical considerations with practical implementation, the framework offers a holistic approach that moves beyond purely technical solutions. The paper contributes to ongoing debates on AI ethics in education by consolidating fragmented ethical concerns into a unified structure and providing guidance for educators, administrators, and policymakers. It argues that the educational value of generative AI ultimately depends not on technological capability alone, but on the ethical frameworks that govern its use, ensuring innovation supports meaningful, equitable, and trustworthy learning experiences.

Keywords: Academic integrity, AI ethics, bias and fairness, education technology, generative artificial intelligence, responsible AI

[This article belongs to International Journal of Education Sciences]

How to cite this article:
Samyo Ranjan Jagdev, Chinmay Giri, G. Subhasmita Prusty, Girija Nandan Das. Ethical Risks of Generative AI in Education: Challenges, Implications, and a Responsible Use Framework. International Journal of Education Sciences. 2026; 03(01):23-30.
How to cite this URL:
Samyo Ranjan Jagdev, Chinmay Giri, G. Subhasmita Prusty, Girija Nandan Das. Ethical Risks of Generative AI in Education: Challenges, Implications, and a Responsible Use Framework. International Journal of Education Sciences. 2026; 03(01):23-30. Available from: https://journals.stmjournals.com/ijes/article=2026/view=236581


References

  1. Luckin R, Holmes W, Griffiths M, Forcier L. Intelligence Unleashed: An Argument for AI in Education. 1st edition. London, UK: Pearson; 2016. pp. 1–50.
  2. Holmes W, Bialik M, Fadel C. Artificial Intelligence in Education: Promise and Implications. 1st edition. Boston, US: Educational Technology; 2019. pp. 1–210.
  3. Zawacki-Richter O, Marín VI, Bond M, Gouverneur F. Systematic review of research on artificial intelligence applications in higher education. International Journal of Educational Technology in Higher Education. 2019;16(1):39. 10.1186/s41239-019-0171-0
  4. Selwyn N. Should Robots Replace Teachers?. 1st edition. Cambridge, UK: Polity Press; 2019. pp. 1–160.
  5. Kasneci E, et al. ChatGPT for good?. Learning and Individual Differences. 2023;103:102274. 10.1016/j.lindif.2023.102274
  6. Holmes W, Bialik M, Fadel C. Ethics of AI in education. Computers and Education: Artificial Intelligence. 1st edition. New York, US: Elsevier; 2022. pp. 1–35.
  7. Baker RS, Inventado PS. Educational data mining and learning analytics. In: Larusson JA, White B, editors. Learning Analytics. 1st edition. New York, US: Springer; 2014. pp. 61–94.
  8. Cotton D, et al. ChatGPT and academic integrity. Assessment & Evaluation in Higher Education. 2023;48(1):1–15.
  9. McGee P. Academic integrity in the age of AI. Educational Technology Research. 2023;71(2):1–12.
  10. Brynjolfsson E, Rock D, Syverson C. Artificial intelligence and the modern productivity paradox. Journal of Economic Perspectives. 2017;31(2):33–58.
  11. Doran J. Cognitive offloading and AI tools in learning. Computers in Human Behavior. 2023;140:107561.
  12. Bolukbasi T, et al. Man is to computer programmer as woman is to homemaker?. Advances in Neural Information Processing Systems. 2016;29:1–9.
  13. Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A. A survey on bias and fairness in machine learning. ACM Computing Surveys. 2021;54(6):1–35.
  14. Selwyn N. Digital education and inequality. British Journal of Sociology of Education. 2020;41(1):1–14.
  15. Pardo A, Siemens G. Ethical and privacy principles for learning analytics. British Journal of Educational Technology. 2014;45(3):438–450.
  16. Regan PM, Jesse J. Ethical challenges of edtech, big data and learning analytics. EDUCAUSE Review. 2019;54(3):1–10.
  17. Doshi-Velez F, Kim B. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. 2017.
  18. Arrieta AB, et al. Explainable Artificial Intelligence (XAI). Information Fusion. 2020;58:82–115.

Regular Issue | Subscription | Original Research
Received: 23/01/2026 | Accepted: 28/01/2026 | Published: 01/02/2026
Publication Time: 9 days

