Rajendra Khanal,
Raj Kumar Paneru,
Susheel Thapa,
Surendra Shrestha,
- Department of Electronics and Computer Engineering, Pulchowk Campus, IOE, TU; School of Science, Health & Technology, Nepal Open University, Lalitpur, Nepal
- Department of Electronics and Computer Engineering, Pulchowk Campus, IOE, TU; School of Science, Health & Technology, Nepal Open University, Lalitpur, Nepal
- Department of Electronics and Computer Engineering, Pulchowk Campus, IOE, TU; School of Science, Health & Technology, Nepal Open University, Lalitpur, Nepal
- Associate Professor, Department of Electronics and Computer Engineering, Pulchowk Campus, IOE, TU; School of Science, Health & Technology, Nepal Open University, Lalitpur, Nepal
Abstract
Emotion recognition from electroencephalogram (EEG) signals has gained significant attention due to its potential in human-computer interaction (HCI), mental health monitoring, and personalized content delivery. This paper presents the use of convolutional neural networks (CNNs) to classify the emotions happiness, sadness, fear, neutral, and disgust by fusing EEG signals with eye movement data. Compared with conventional methods of emotion detection that rely on body language, voice tone, or facial expressions, EEG-based recognition offers a distinct advantage: while those external cues can be intentionally altered or hidden, EEG signals directly record neuronal oscillations linked to affective and cognitive processes in the brain. EEG is therefore a more objective and trustworthy means of detecting subtle emotional changes that may not be outwardly apparent. Nevertheless, the high complexity and noise of EEG data still make effective feature extraction and pattern detection difficult tasks. The research used the SEED-V dataset, an established benchmark containing synchronized EEG and eye-tracking recordings from multiple participants exposed to emotionally evocative video clips. Each participant's data was preprocessed with standard filtering techniques to remove artifacts such as eye blinks and muscle noise. The cleaned signals were then divided into temporal windows and fed into the CNN for feature learning and classification. The CNN architecture used multiple convolutional and pooling layers to capture localized spatial features, followed by fully connected layers for the final emotion classification. Recent advances in machine learning and deep neural networks, especially CNNs, have greatly improved the capacity to represent the intricate spatial and temporal correlations within EEG signals.
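The preprocessing pipeline described above (band-pass filtering followed by segmentation into temporal windows) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sampling rate, filter band, and one-second window length are assumptions for demonstration, and only the 62-channel montage reflects the SEED-V setup.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_filter(eeg, fs=200.0, low=1.0, high=50.0, order=4):
    """Zero-phase band-pass filter applied per channel (eeg: channels x samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def segment_windows(eeg, fs=200.0, win_sec=1.0, step_sec=1.0):
    """Split a (channels x samples) recording into fixed temporal windows."""
    win = int(fs * win_sec)
    step = int(fs * step_sec)
    n = (eeg.shape[-1] - win) // step + 1
    return np.stack([eeg[:, i * step : i * step + win] for i in range(n)])

# Illustrative run: 62 channels, 10 s of synthetic data at an assumed 200 Hz.
rng = np.random.default_rng(0)
raw = rng.standard_normal((62, 2000))
windows = segment_windows(bandpass_filter(raw), win_sec=1.0)
print(windows.shape)  # (10, 62, 200): 10 one-second windows per recording
```

Each resulting window would then be treated as one training example for the convolutional layers.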
Utilizing the SEED-V dataset, our CNN model achieved a training accuracy of 97.92%, a validation accuracy of 94.44%, and a test accuracy of 93%. The integration of eye movement features with EEG signals significantly enhanced the model's performance. Experimental results validate the effectiveness of CNNs in multimodal emotion recognition, achieving an overall weighted average F1-score of 0.93. This work highlights the promise of combining EEG and eye-tracking data for enhancing interactive systems and lays a foundation for future advancements in affective computing.
Keywords: Electroencephalogram (EEG), SEED-V dataset, Human-Computer Interaction (HCI), Convolutional Neural Networks (CNNs), Fear, Disgust, Precision
[This article belongs to International Journal of Optical Innovations & Research]
Rajendra Khanal, Raj Kumar Paneru, Susheel Thapa, Surendra Shrestha. Emotion Recognition from Electroencephalogram Signal and Eye Movement Based on Deep Learning. International Journal of Optical Innovations & Research. 2025; 03(02):8-15. Available from: https://journals.stmjournals.com/ijoir/article=2025/view=235451
| Field | Value |
| --- | --- |
| Volume | 03 |
| Issue | 02 |
| Received | 05/11/2025 |
| Accepted | 10/11/2025 |
| Published | 31/12/2025 |
| Publication Time | 56 Days |