Sachin T
Komala K
M.Z. Kurian
- Student, Sri Siddhartha Institute of Technology, Tumakuru, Karnataka, India
- Assistant Professor, Sri Siddhartha Institute of Technology, Tumakuru, Karnataka, India
- Head of Department, Sri Siddhartha Institute of Technology, Tumakuru, Karnataka, India
Abstract
Human-computer interface systems for automatic face recognition and facial expression recognition have drawn growing interest from researchers in psychology, computer science, linguistics, neurology, and allied fields. This study proposes an Automatic Facial Expression Recognition System (AFERS). The proposed methodology consists of three stages: face detection, feature extraction, and facial expression recognition. The face detection stage comprises skin-color segmentation using the YCbCr color model, illumination compensation for uniformity across the face, and morphological operations to retain the required face region. The output of this stage is passed to the Active Appearance Model (AAM) method, which extracts facial features such as the mouth, nose, and eyes. The third stage performs automatic facial expression recognition with a simple Euclidean distance method: the distances between the feature points of the query image and those of each training image are compared, and the expression of the training image at the minimum distance is selected as the output. This approach achieves a true recognition rate of 90% to 95%. The method is further refined with an Adaptive Neuro-Fuzzy Inference System (ANFIS); this non-linear recognition system yields a recognition rate close to 100%, which is satisfactory in comparison to previous systems.
Keywords: Facial expression recognition (FER), multimodal sensor data, emotional expression recognition, spontaneous expression, real-world conditions
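As a rough illustration of two of the steps described in the abstract, the NumPy sketch below shows a YCbCr skin-color test and the minimum-Euclidean-distance expression match. The chrominance bounds, function names, and array shapes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def skin_mask(rgb):
    """Stage 1 sketch: flag likely skin pixels in YCbCr space.

    `rgb` is an H x W x 3 uint8 image. The Cb/Cr bounds below are a
    commonly used skin range (assumed here, not the paper's thresholds).
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b    # ITU-R BT.601
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def classify_expression(query_points, train_points, train_labels):
    """Stage 3 sketch: nearest-neighbor match on AAM feature points.

    `query_points` is a flat (K,) vector of landmark coordinates and
    `train_points` is (N, K), one row per training image. The label of
    the training image at minimum Euclidean distance is returned.
    """
    dists = np.linalg.norm(train_points - query_points, axis=1)
    return train_labels[int(np.argmin(dists))]
```

In the full system, illumination compensation, the morphological operations, and the AAM fit would sit between these two steps, and ANFIS would replace the nearest-neighbor rule.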
[This article belongs to the International Journal of Electronics Automation (IJEA)]
Volume: 01
Issue: 01
Received: May 16, 2023
Accepted: July 18, 2023
Published: December 16, 2023