Facial Expression

Facial expression is one of the most powerful, natural and universal signals for human beings to convey their emotional states and intentions. Numerous studies have been conducted on automatic facial expression recognition because of its practical importance in sociable robotics, medical treatment, driver fatigue surveillance, and many other human-computer interaction systems.

With the transition of facial expression recognition from laboratory-controlled to challenging in-the-wild conditions, and with the recent success of deep learning techniques in various fields, deep neural networks have increasingly been leveraged to learn discriminative representations for automatic FER. To facilitate deep facial expression recognition, we constructed two real-world facial expression datasets collected from the Internet via crowdsourcing: RAF-DB, with basic and compound emotions, and RAF-ML, with blended emotions. We further provide the RAF-AU dataset, which adds action unit coding for blended facial expressions in the wild.


Real-world Affective Faces Database (RAF-DB) is a large-scale facial expression database with around 30K highly diverse facial images downloaded from the Internet. Based on crowdsourced annotation, each image has been independently labeled by about 40 annotators. Images in this database vary greatly in subjects' age, gender and ethnicity, head pose, lighting conditions, occlusions (e.g., glasses, facial hair or self-occlusion), post-processing operations, etc.

RAF-DB offers large diversity, large quantity, and rich annotations, including: a large number of real-world images; a 7-dimensional expression distribution vector for each image; two subsets, a single-label subset (basic emotions) and a two-tab subset (compound emotions); 5 accurate landmark locations, 37 automatic landmark locations, a bounding box, and race, age range and gender attribute annotations per image; and baseline classifier outputs for basic and compound emotions.
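As a minimal sketch of how the 7-dimensional expression distribution vector might be consumed, the Python snippet below parses a whitespace-separated annotation file and collapses each crowd distribution to a majority label. The file layout, file name handling, and the emotion ordering shown here are illustrative assumptions, not the official release format.

```python
import numpy as np

# Commonly reported RAF-DB basic-emotion order; verify against the release notes.
EMOTIONS = ["Surprise", "Fear", "Disgust", "Happiness", "Sadness", "Anger", "Neutral"]

def load_distributions(path):
    """Parse lines like 'train_00001.jpg 0.05 0.00 0.00 0.85 0.05 0.00 0.05'
    (a hypothetical layout) into {image_name: 7-D probability vector}."""
    records = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 8:
                continue
            probs = np.array(parts[1:8], dtype=np.float32)
            records[parts[0]] = probs / probs.sum()  # re-normalize crowd votes
    return records

def to_single_label(probs):
    """Collapse a crowd distribution to the majority emotion, as in the single-label subset."""
    return EMOTIONS[int(np.argmax(probs))]
```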


Real-world Affective Faces Multi-Label (RAF-ML) is a multi-label facial expression dataset with around 5K highly diverse facial images downloaded from the Internet, showing blended emotions and variability in subjects' identity, head pose, lighting conditions and occlusions. During annotation, 315 well-trained annotators were employed so that each image could be annotated a sufficient number of independent times. Images with multi-peak label distributions were then selected to constitute RAF-ML.

In RAF-ML, we provide 4,908 real-world images with blended emotions, a 6-dimensional expression distribution vector for each image, 5 accurate landmark locations and 37 automatic landmark locations, and baseline classifier outputs for multi-label emotion recognition.
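One simple way to turn a 6-dimensional distribution vector into multi-label targets is to flag every emotion with enough annotator support, as sketched below. The 0.2 threshold and the emotion ordering are illustrative assumptions, not the procedure used to construct the dataset.

```python
import numpy as np

# The six basic emotions behind RAF-ML's distribution vector; order assumed for illustration.
BASIC_SIX = ["Surprise", "Fear", "Disgust", "Happiness", "Sadness", "Anger"]

def to_multi_label(dist, threshold=0.2):
    """Flag every emotion whose share of annotator votes exceeds the threshold."""
    dist = np.asarray(dist, dtype=np.float32)
    dist = dist / dist.sum()
    return [emo for emo, p in zip(BASIC_SIX, dist) if p >= threshold]

# A distribution with two peaks reads as a blended expression:
print(to_multi_label([0.45, 0.00, 0.05, 0.00, 0.40, 0.10]))  # ['Surprise', 'Sadness']
```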



Real-world Affective Faces Action Unit Database (RAF-AU) is an extension of RAF-ML with manual action unit coding. It combines a sign-based (i.e., AUs) and a judgement-based (i.e., perceived emotion) approach to annotating blended facial expressions in the wild.

During annotation, two experienced coders independently FACS-coded the face images and arbitrated any disagreement. They also carefully checked and discussed whether AUs emerged as a consequence of other AUs. In RAF-AU, we provide 4,601 real-world images annotated with 26 kinds of AUs, along with baseline outputs for action unit detection.
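For illustration, AU annotations can be encoded as fixed-length binary vectors and a detector scored with per-AU F1, as in the sketch below. The '+'-separated annotation string and the particular AU vocabulary listed here are assumptions for the example; the actual 26 AUs are given in the dataset's coding files.

```python
import numpy as np

# Illustrative AU vocabulary only; RAF-AU's full list covers 26 AUs.
AU_SET = [1, 2, 4, 5, 6, 7, 9, 10, 12, 15, 17, 20, 23, 25, 26, 27]

def encode_aus(au_string):
    """Turn an annotation such as '1+2+25' into a fixed-length binary vector."""
    present = {int(a) for a in au_string.split("+") if a}
    return np.array([au in present for au in AU_SET], dtype=np.float32)

def per_au_f1(y_true, y_pred):
    """Per-AU F1 for binary arrays of shape (N, len(AU_SET))."""
    tp = (y_true * y_pred).sum(axis=0)
    fp = ((1 - y_true) * y_pred).sum(axis=0)
    fn = (y_true * (1 - y_pred)).sum(axis=0)
    return 2 * tp / np.maximum(2 * tp + fp + fn, 1e-8)
```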


† Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild
Shan Li and Weihong Deng, Computer Vision and Pattern Recognition (CVPR), 2017   PDF
‡ Reliable crowdsourcing and deep locality-preserving learning for unconstrained facial expression recognition
Shan Li and Weihong Deng, IEEE Transactions on Image Processing (TIP), 2019   PDF

Past research on facial expressions has used relatively limited datasets, which makes it unclear whether current methods can be employed in the real world. In this paper, we present a novel database, RAF-DB, which contains about 30,000 facial images from thousands of individuals. Each image has been independently labeled about 40 times, and an EM algorithm was then used to filter out unreliable labels. Based on RAF-DB, we propose a new DLP-CNN method, which aims to enhance the discriminative power of deep features by preserving locality closeness while maximizing inter-class scatter.
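The locality-preserving idea can be sketched as a mini-batch loss that pulls each deep feature toward the mean of its k nearest same-class neighbors, used jointly with the softmax cross-entropy term. The PyTorch-style snippet below is a simplified reading of DLP-CNN, not the authors' reference implementation; k and the loss weight are illustrative.

```python
import torch

def locality_preserving_loss(features, labels, k=5):
    """Pull each feature toward the mean of its k nearest same-class neighbors
    within the mini-batch (a simplified locality-preserving term)."""
    loss = features.new_zeros(())
    for i in range(features.size(0)):
        same = (labels == labels[i]).nonzero(as_tuple=True)[0]
        same = same[same != i]                      # exclude the sample itself
        if same.numel() == 0:
            continue
        dists = (features[same] - features[i]).pow(2).sum(dim=1)
        knn = same[torch.topk(dists, min(k, same.numel()), largest=False).indices]
        loss = loss + (features[i] - features[knn].mean(dim=0)).pow(2).sum()
    return loss / features.size(0)

# Typically combined with the classification term:
# total_loss = ce_loss + lambda_lp * locality_preserving_loss(deep_features, labels)
```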


Blended Emotion in-the-wild: Multi-label Facial Expression Recognition Using Crowdsourced Annotations and Deep Locality Feature Learning
Shan Li and Weihong Deng, International Journal of Computer Vision (IJCV), 2019

Comprehending different categories of facial expressions plays an important role in designing computational models that analyze human perceived and affective states. Authoritative studies have revealed that facial expressions in daily life often convey multiple or co-occurring mental states. However, due to the lack of valid datasets, most previous studies are still restricted to basic emotions with a single label. In this paper, we present a novel multi-label facial expression database, RAF-ML, along with a new deep learning algorithm, to address this problem.
PDF


Deep Facial Expression Recognition: A Survey
Shan Li and Weihong Deng, IEEE Transactions on Affective Computing (2020)

In this paper, we provide a comprehensive survey on deep FER, including datasets and algorithms that provide insights into the intrinsic problems of deep FER. First, we describe the standard pipeline of a deep FER system with the related background knowledge and suggestions of applicable implementations for each stage. We then introduce the available datasets that are widely used in the literature and provide accepted data selection and evaluation principles for these datasets. For the state of the art in deep FER, we review existing novel deep neural networks and related training strategies that are designed for FER based on both static images and dynamic image sequences, and discuss their advantages and limitations. Competitive performances on widely used benchmarks are also summarized in this section. We then extend our survey to additional related issues and application scenarios. Finally, we review the remaining challenges and corresponding opportunities in this field as well as future directions for the design of robust deep FER systems.
PDF