Manuscript received September 5, 2022; revised October 5, 2022; accepted November 15, 2022.
Abstract—Humans communicate not only through speech and written text but also through facial expressions. In human communication, it is critical to comprehend the other party's emotions or implicit expressions. Indeed, facial expression recognition is vital for analyzing the emotions of conversation partners, which can contribute to a range of applications, including mental health consulting: it enables psychiatrists to select appropriate questions based on their patients' current emotional state. The purpose of this study was to develop a deep learning-based model for detecting and recognizing emotions on human faces. We divided the experiment into two parts: the Faster R-CNN and the mini-Xception architecture. We concentrated on four distinct emotional states: angry, sad, happy, and neutral. The models implemented using the Faster R-CNN and the mini-Xception architectures were compared during the evaluation process. The findings indicate that the mini-Xception model produced better results than the Faster R-CNN. This study will be expanded in the future to include the detection of complex emotions such as sadness.

Keywords—facial expression, emotion recognition, deep learning, mini-Xception, Faster R-CNN

Cite: Sarunya Kanjanawattana, Piyapong Kittichaiwatthana, Komsan Srivisut, and Panchalee Praneetpholkrang, "Deep Learning-Based Emotion Recognition through Facial Expressions," Journal of Image and Graphics, Vol. 11, No. 2, pp. 140-145, June 2023.

Copyright © 2023 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.