
Investigating Emotional Facial Expression Production in Fluent and Non-fluent Aphasia

Author(s): Anna Norman, Olivia Shepherd, Mari Merrill
Mentor(s): Tyson Harmon
Institution: BYU

Emotion, described as “physiological forces, located within individuals, that bolster our sense of uniqueness...” (Katriel, 2015, p. 57), is a critical aspect of day-to-day communication. For people with acquired language disorders following stroke (i.e., aphasia), emotion is particularly important because emotional processing is relatively spared and can either facilitate or interfere with language processing (see, e.g., Blackett et al., 2024; Harmon et al., 2022; Ramsberger, 1996). Depending on the site of the brain lesion and the resulting communication deficits, aphasia can be classified into two broad categories: fluent and nonfluent. In both categories, the communication deficits caused by aphasia heighten the importance of extralinguistic communication (i.e., gestures and facial expressions). In an early case study, Feyereisen and Seron (1982) found that an individual with fluent aphasia and an individual with nonfluent aphasia produced gestures at different rates: the individual with fluent aphasia produced more gestures in a given time period, whereas the individual with nonfluent aphasia produced more gestures per word. The present study investigates another facet of extralinguistic communication: specifically, whether individuals with fluent or nonfluent aphasia produce more frequent emotional facial expressions during discourse.

Participants were randomly selected from AphasiaBank, with 10 participants for each of five aphasia types: anomic, conduction, Wernicke’s, Broca’s, and global (N = 50). Each participant completed two narrative discourse tasks, and video and audio recordings of these tasks will be analyzed for the present study. First, recording transcripts will be analyzed to determine each participant’s words produced per sample. Second, undergraduate research assistants will code the emotional facial expressions produced during these samples according to valence, intensity, and duration, using a modified FACES protocol (Kring & Sloan, 2007). Dependent variables in the final analysis will include (1) frequency of emotional facial expressions per minute and (2) frequency of emotional facial expressions per word.

Based on Feyereisen and Seron’s gesture findings, we anticipate a higher frequency of facial expressions per minute among participants with fluent aphasia, given their ability to produce more language in a given time period. We further hypothesize that, when the number of facial expressions is compared with the number of words produced, participants with nonfluent aphasia will show a higher proportion of facial expressions per word.
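The two dependent variables reduce to simple rate computations. As an illustration only, the following minimal Python sketch computes both rates for one hypothetical coded sample; the data structure and field names are assumptions for illustration, not the study’s actual protocol or the FACES coding scheme.

# Minimal sketch (Python). All names below are hypothetical illustrations,
# not the study's actual data structures or coding protocol.
from dataclasses import dataclass

@dataclass
class CodedSample:
    participant_id: str
    aphasia_type: str        # e.g., "anomic", "Broca's", "global"
    duration_seconds: float  # length of the narrative discourse sample
    words_produced: int      # word count from the transcript
    expression_count: int    # emotional facial expressions coded in the sample

def expressions_per_minute(s: CodedSample) -> float:
    # Dependent variable 1: facial expressions per minute of discourse.
    return s.expression_count / (s.duration_seconds / 60.0)

def expressions_per_word(s: CodedSample) -> float:
    # Dependent variable 2: facial expressions per word produced.
    return s.expression_count / s.words_produced if s.words_produced else 0.0

# Hypothetical 3-minute sample: 120 words, 9 coded expressions.
sample = CodedSample("P01", "Broca's", 180.0, 120, 9)
print(expressions_per_minute(sample))  # 3.0 expressions per minute
print(expressions_per_word(sample))    # 0.075 expressions per word

Note how the two rates can dissociate: a speaker who produces few words per minute can show a low per-minute rate but a high per-word rate, which is exactly the pattern the second hypothesis predicts for nonfluent aphasia.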