Utah's Foremost Platform for Undergraduate Research Presentation
2024 Abstracts

Face it! How reliable is emotional facial expression coding within and across raters?

Authors: Anna Norman, Chloe Houghton, Macall Walker, Audrey Saunders
Mentors: Tyson Harmon
Institution: Brigham Young University

Background

Emotion, described as “physiological forces, located within individuals, that bolster our sense of uniqueness....” (Katriel, 2015, p. 57), is a critical aspect of day-to-day communication. For people with acquired language disorders post-stroke (i.e., aphasia), this interaction is particularly important due to relatively spared emotional processing, which has the potential to either facilitate or interfere with language processing (see e.g., Harmon et al., 2022; Ramsberger, 1996). The present study is part of a larger project, which seeks to determine whether people with aphasia exhibit more emotional facial expressions during personal narrative discourse than adults without aphasia and whether these expressions are more emotionally arousing. Specifically, the present study investigates the reliability of facial coding by comparing the average frequency and intensity of emotional facial expressions both within and across undergraduate student coders.

Methods

To quantify the frequency and intensity of emotional facial expressions, undergraduate research assistants are trained to code facial expressions using a modified FACES protocol (Kring & Sloan, 2007). The modified protocol will be used to code emotional facial expressions in video footage obtained from participants while they told personal narratives (e.g., about an illness they experienced or an important life event). First, research assistants identify each participant's baseline facial expression. Next, they code transitions from a neutral expression to an emotional facial expression for valence (positive/negative) and intensity. Intensity is rated on a scale from 1 to 4 according to how many units of the face are involved in the corresponding expression. Research assistants will begin facial coding after they are trained and demonstrate mastery by attaining 80% agreement with a master code. Upon completing initial data coding, research assistants will recode 10% of the video samples they previously coded as well as 10% of samples previously coded by other coders. This secondary coding will be used to measure intra- and inter-rater reliability for three dependent variables: frequency of emotional facial expressions, intensity of positive facial expressions, and intensity of negative facial expressions. Average frequency will be calculated as the number of emotional facial expressions produced per minute within a given sample; intensity of positive and negative facial expressions will be calculated as the mean intensity within each valence, respectively. The average frequency and intensity of initial and reliability codes will then be compared using Pearson's correlation coefficient.
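The reliability calculations described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function names and the example ratings are hypothetical, not project data or the project's actual analysis code.

```python
from statistics import mean

def frequency_per_minute(n_expressions, sample_seconds):
    """Average frequency: emotional facial expressions per minute of sample."""
    return n_expressions / (sample_seconds / 60.0)

def mean_intensity(ratings):
    """Mean intensity (1-4 scale) within one valence (positive or negative)."""
    return mean(ratings)

def pearson_r(x, y):
    """Pearson's correlation coefficient between two sets of codes."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-sample values from an initial code and a reliability recode
initial = [4.2, 3.1, 5.0, 2.4, 3.8]
recode = [4.0, 3.3, 4.7, 2.5, 3.6]

r = pearson_r(initial, recode)  # close to 1.0 when coders agree
```

For example, a sample containing 6 emotional facial expressions in 2 minutes of footage would yield a frequency of 3 expressions per minute, and a correlation near 1.0 between the initial and reliability codes would indicate strong agreement.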

Anticipated Results

We anticipate that intra- and inter-rater reliability will exceed 0.8. Through a strict training process, research assistants will calibrate their coding to achieve 80% agreement with a master code, and we anticipate that this training will yield high intra- and inter-rater reliability. Findings will help determine the reliability of the facial coding procedures and the trustworthiness of the data for answering questions posed by the longer-term project.

References

Harmon, T.G., Jacks, A., Haley, K. L., & Bailliard, A. (2020). How responsiveness from a communication partner affects story retell in aphasia: Quantitative and qualitative findings. American Journal of Speech-Language Pathology, 29(1), 142-156. https://doi.org/10.1044/2019_AJSLP-19-0091

Harmon, T.G., Nielsen, C., Loveridge, C., & Williams, C. (2022). Effects of positive and negative emotion on picture naming for people with mild to moderate aphasia: A preliminary investigation. Journal of Speech, Language, and Hearing Research, 64(3), 1025-1043. https://doi.org/10.1044/2021_JSLHR-21-00190

Katriel, T. (2015). Exploring emotion discourse. In H. Flam & J. Kleres (eds.), Methods of exploring emotions (1st ed., pp.57-66). Taylor & Francis Group.

Kring, A.M., & Sloan, D.M. (2007). The facial expression coding system (FACES): Development, validation, and utility. Psychological Assessment, 19(2), 210-224. https://doi.org/10.1037/1040-3590.19.2.210