AI Evaluator – Automated Examination Evaluation

Open Access

Published: June 14, 2024 | Volume: 14 | Issue: 01 | Pages: 1-9


By


Rahul Raj, Srijani Mondal


  1. Student, Department of Computer Science and Engineering, Cambridge Institute of Technology, North Campus, Bengaluru, India
  2. Student, Department of Cyber Security, Cambridge Institute of Technology, North Campus, Bengaluru, India


Abstract

An AI system for automated exam grading is proposed that tackles inefficiencies in human evaluation. The system uses TrOCR for accurate handwritten text recognition and a GPT model trained on graded responses for evaluation. This approach offers efficiency and reduced bias, but challenges remain: evaluating open-ended questions and ensuring explainability require further development. The paper begins by examining how AI technologies such as machine learning, deep learning, and natural language processing have developed and how they are being used to automate different parts of exam evaluation, such as grading, providing feedback, and detecting plagiarism. It also considers how AI-driven assessment systems might improve learning outcomes, lessen the burden on teachers, and give students tailored feedback. However, the research also points to several difficulties, including resolving privacy and data-security concerns and guaranteeing impartiality, accountability, and transparency in AI-based assessments. Careful curation of training data is necessary to mitigate bias. The paper concludes by highlighting the need for the system to handle various question formats, address ambiguities, and integrate human review. This research presents a promising step towards a future of efficient, fair, AI-powered exam grading.
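The pipeline the abstract describes has two stages: transcribe the handwritten script (TrOCR), then score the transcription against graded exemplars (a fine-tuned GPT model). As a minimal, runnable sketch of that shape, the stand-in below replaces both learned components with a simple lexical similarity from the Python standard library; the function name, signature, and scoring rule are illustrative assumptions, not the authors' implementation.

```python
from difflib import SequenceMatcher


def grade_answer(student_answer: str, model_answer: str, max_marks: int) -> float:
    """Score a transcribed answer against a model answer.

    In the system described above, the script would first be transcribed
    with TrOCR and then scored by a GPT model trained on graded responses;
    here word-level lexical similarity stands in for the learned scorer so
    the pipeline shape is executable.
    """
    similarity = SequenceMatcher(
        None,
        student_answer.lower().split(),
        model_answer.lower().split(),
    ).ratio()  # 0.0 (no overlap) .. 1.0 (identical)
    return round(similarity * max_marks, 1)


reference = "TrOCR is a transformer-based model for handwritten text recognition"
print(grade_answer(reference, reference, 5))  # identical answer earns full marks
```

A real deployment would swap `SequenceMatcher` for the fine-tuned evaluator and attach a rationale to each score, which is where the explainability challenge noted above arises.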


Keywords: Autograding, TrOCR, GPT, Explainability, Debias

[This article belongs to Trends in Opto-electro & Optical Communication (toeoc)]


How to cite this article: Rahul Raj, Srijani Mondal. AI Evaluator – Automated Examination Evaluation. Trends in Opto-electro & Optical Communication. June 14, 2024; 14(01):1-9.


How to cite this URL: Rahul Raj, Srijani Mondal. AI Evaluator – Automated Examination Evaluation. Trends in Opto-electro & Optical Communication. June 14, 2024; 14(01):1-9. Available from: https://journals.stmjournals.com/toeoc/article=June 14, 2024/view=0




Regular Issue | Subscription | Original Research


Trends in Opto-electro & Optical Communication


ISSN: 2231-0401


Volume 14
Issue 01
Received April 23, 2024
Accepted May 31, 2024
Published June 14, 2024
