Real Time Classification of Facial Expressions for Effective and Intelligent Video Communication

Document Type


Degree Name

Master of Science (MS)


Computer Science and Info Sys

Date of Award

Spring 2023


Real-time video communication has become a larger part of everyday life as web conferencing, video conferencing, and video calls grow increasingly common, and this growth is not driven by the COVID-19 pandemic alone. For instance, the business world now relies less on physical offices and more on real-time online communication for day-to-day work, largely because advances in communication technology have made it more cost-effective. Video and web conferencing are now deployed for business meetings, training events, lectures, presentations, and conferences, as well as for important events such as medical consultations, counselling sessions, school lectures, and communication with friends and family. However, one major problem with real-time video communication, especially when communicating with more than one person, is that it becomes difficult to read and understand facial expressions, which are important for effective communication. To tackle this problem, there have been relevant studies on making computers recognize facial expressions during real-time video communication, but most of these studies consider only seven facial expressions, namely “Happy”, “Sad”, “Fear”, “Anger”, “Disgust”, “Surprise”, and “Neutral”, because datasets for these expressions are readily available. This research therefore proposes a system that recognizes not only the seven common facial expressions listed above but also four more, namely “Pain”, “Tired/Exhausted”, “Lack of Interest”, and “Showing Interest”, during real-time video communication, for a total of eleven facial expressions. To achieve this, I built two convolutional neural network (CNN) models to recognize these eleven facial expressions.
The first model (which I named CNNM9 because it has nine classes) classifies a facial expression as “Happy”, “Sad”, “Fear”, “Anger”, “Disgust”, “Surprise”, “Pain”, “Tired/Exhausted”, or “Neutral” with an accuracy of about 78%, while the second model (which I named CNNM3 because it has three classes) classifies a facial expression as “Showing Interest”, “Lack of Interest”, or “Neutral” with an accuracy of about 90%.
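As a rough illustration of how the outputs of two such classifiers could be combined into a single predicted expression, the sketch below applies a softmax to each model's logits and keeps the more confident prediction. The class lists come from the abstract; the fusion rule, function names, and logits are assumptions for illustration, not the thesis implementation:

```python
import numpy as np

# Class lists taken from the abstract; order within each list is assumed.
CNNM9_LABELS = ["Happy", "Sad", "Fear", "Anger", "Disgust",
                "Surprise", "Pain", "Tired/Exhausted", "Neutral"]
CNNM3_LABELS = ["Showing Interest", "Lack of Interest", "Neutral"]

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_expression(logits9, logits3):
    """Return (label, confidence) from whichever model is more confident.

    Hypothetical fusion rule for illustration only; the thesis may combine
    the two models differently.
    """
    p9 = softmax(np.asarray(logits9, dtype=float))
    p3 = softmax(np.asarray(logits3, dtype=float))
    label9, conf9 = CNNM9_LABELS[int(p9.argmax())], float(p9.max())
    label3, conf3 = CNNM3_LABELS[int(p3.argmax())], float(p3.max())
    return (label9, conf9) if conf9 >= conf3 else (label3, conf3)
```

For example, a strong “Happy” activation from CNNM9 alongside near-uniform CNNM3 outputs would yield “Happy”, while a strong “Lack of Interest” activation from CNNM3 would win over an undecided CNNM9.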


Omar El Ariss

Subject Categories

Computer Sciences | Physical Sciences and Mathematics