Prasad A. Lahare,
Smita K. Thakare,
- Assistant Professor, Department of Computer, Pune Vidyarthi Griha’s College of Engineering & S. S. Dhamankar Institute of Management, Nashik, Maharashtra, India (affiliation of both authors)
Abstract
University and board examinations are traditionally administered offline, with a significant share of students sitting subjective (descriptive) papers. Evaluating such responses is labor-intensive, demanding considerable time and effort from educators, and grading can be influenced by the evaluator’s mood, leading to inconsistencies. In contrast, multiple-choice and objective questions dominate entrance and competitive exams because they can be graded automatically, quickly, and with few errors. Few systems exist to automate the evaluation of descriptive (subjective) questions, so manual grading remains the norm. Automating the evaluation of descriptive responses could transform the education sector by addressing these challenges. Such a system could combine natural language processing (NLP), machine learning, and optical character recognition (OCR) to evaluate and score written content. This approach would significantly reduce the time and effort required for grading, ensure consistent and unbiased evaluation, and handle large volumes of exam papers efficiently. Implementing such a system involves several steps: collecting a large dataset of graded papers, developing and training machine learning models, integrating OCR and NLP components, and rigorously testing the system for accuracy and reliability. Challenges such as the complexity and subjectivity of responses, as well as the need to ensure fairness, must be overcome. Ultimately, automated evaluation of descriptive exam responses would enhance the efficiency, consistency, and scalability of the grading process, benefiting both educators and students.
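The core of the approach described above is comparing a student’s written answer against a model answer and mapping the similarity to a mark. As a minimal sketch (not the authors’ implementation), the following assumes both answers are already extracted as plain text (e.g., via OCR) and uses a simple bag-of-words cosine similarity; the function names `tokenize`, `cosine_similarity`, and `score_answer` are illustrative, not from the paper.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on letter runs; a stand-in for fuller NLP
    # preprocessing (stemming, stop-word removal) such as NLTK provides.
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(tokens_a, tokens_b):
    # Bag-of-words cosine similarity between two token-count vectors.
    va, vb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def score_answer(model_answer, student_answer, max_marks=10):
    # Scale similarity to marks; a full system would combine this with
    # keyword coverage and an ML model (e.g., SVM) trained on graded papers.
    sim = cosine_similarity(tokenize(model_answer), tokenize(student_answer))
    return round(sim * max_marks, 1)
```

An identical answer scores full marks, a disjoint answer scores zero, and partial overlap lands in between; in practice, semantic similarity measures (word embeddings, BERT-style encoders, as surveyed in the references) replace raw word overlap to credit paraphrased answers.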
Keywords: Machine learning, natural language processing (NLP), computer-based evaluation, natural language toolkit (NLTK), training phase, support vector machine (SVM)
[This article belongs to Journal of Multimedia Technology & Recent Advancements]
Prasad A. Lahare, Smita K. Thakare. Integrated Qualitative Response Assessment System. Journal of Multimedia Technology & Recent Advancements. 2024; 11(03):18-24. Available from: https://journals.stmjournals.com/jomtra/article=2024/view=178756

| Journal | Journal of Multimedia Technology & Recent Advancements |
| Volume | 11 |
| Issue | 03 |
| Received | 28/06/2024 |
| Accepted | 13/09/2024 |
| Published | 18/10/2024 |
