Integrated Qualitative Response Assessment System

Year: 2024 | Volume: 11 | Issue: 03 | Page: –
By

Prasad A Lahare,

Smita K. Thakare,

  1. Assistant Professor, Department of Computer, Pune Vidyarthi Griha’s College of Engineering & S. S. Dhamankar Institute of Management, Nashik, Maharashtra, India
  2. Assistant Professor, Department of Computer, Pune Vidyarthi Griha’s College of Engineering & S. S. Dhamankar Institute of Management, Nashik, Maharashtra, India

Abstract

Examinations for universities and year boards are traditionally administered offline, with a significant number of students opting for subjective exams. This preference persists even though evaluating subjective responses is labor-intensive, demanding considerable time and effort from educators. Subjective grading can also be influenced by the evaluator's mood, leading to inconsistencies. In contrast, multiple-choice and objective questions dominate entrance and competitive exams because their grading can be automated, making it error-free and resource-efficient. At present, no widely available system automates the evaluation of descriptive (subjective) questions, which therefore still require manual grading. Automating the evaluation of descriptive responses could transform the education sector by addressing these challenges. Such a system could combine Natural Language Processing (NLP), machine learning, and Optical Character Recognition (OCR) to evaluate and rate written content, significantly reducing the time and effort required for grading, ensuring consistent and unbiased evaluation, and handling large volumes of exam papers efficiently.

Implementing such a system involves several steps: collecting a large dataset of graded papers, developing and training machine learning models, integrating OCR and NLP technologies, and rigorously testing the system for accuracy and reliability. Challenges such as the complexity and subjectivity of responses, along with the need to ensure fairness, must be overcome. Ultimately, automated evaluation of descriptive exam responses would enhance the efficiency, consistency, and scalability of the grading process, benefiting both educators and students.
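The core scoring step described above — comparing a student's written answer against a model answer and mapping the similarity to a mark — can be illustrated with a minimal bag-of-words cosine-similarity scorer. This is a hedged sketch, not the authors' implementation: the `tokenize`, `cosine_similarity`, and `score_answer` helpers and the 10-mark scale are assumptions for illustration. A production system would add an OCR front end, richer NLP preprocessing (stemming, stop-word removal, e.g. via NLTK), and a trained model such as an SVM in place of raw similarity.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase the text and split on non-alphanumeric characters.
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine_similarity(tokens_a, tokens_b):
    # Bag-of-words cosine similarity between two token lists.
    va, vb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0

def score_answer(student_answer, model_answer, max_marks=10):
    # Map similarity in [0, 1] linearly onto a mark out of max_marks.
    sim = cosine_similarity(tokenize(student_answer), tokenize(model_answer))
    return round(sim * max_marks, 1)

# Illustrative usage with hypothetical answers:
print(score_answer("OCR converts scanned text to digital form",
                   "OCR converts scanned handwriting into digital text"))
```

An identical answer scores full marks and a completely unrelated one scores zero; partially overlapping answers fall in between, which is the behaviour a similarity-based grader relies on before any learned model refines the score.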

Keywords: Machine Learning, Natural Language Processing (NLP), computer-based evaluation, Natural Language Toolkit (NLTK), Training Phase, Support Vector Machine (SVM)

[This article belongs to Journal of Multimedia Technology & Recent Advancements (jomtra)]

How to cite this article:
Prasad A Lahare, Smita K. Thakare. Integrated Qualitative Response Assessment System. Journal of Multimedia Technology & Recent Advancements. 2024; 11(03):-.
How to cite this URL:
Prasad A Lahare, Smita K. Thakare. Integrated Qualitative Response Assessment System. Journal of Multimedia Technology & Recent Advancements. 2024; 11(03):-. Available from: https://journals.stmjournals.com/jomtra/article=2024/view=0



Regular Issue | Subscription | Review Article
Volume 11
Issue 03
Received 28/06/2024
Accepted 13/09/2024
Published 18/10/2024
