Reschke, Peter; Cooper, Alexandra; Clayton, Colter K. (Brigham Young University)
Faculty Advisor: Reschke, Peter (Family, Home, and Social Sciences; Family Life)
Research has demonstrated that context plays a significant role in emotion perception (Reschke et al., 2018). However, it is unclear whether context produces differences in the visual scanning of facial expressions of emotion. The present study explored whether variability within facial expressions interacts differentially with postural context to produce changes in emotion perception.
Participants (N = 86) viewed 96 images comprising "open" (i.e., scrunched nose, open mouth, protruding tongue) and "closed" (i.e., scrunched nose, closed mouth) versions of disgust faces photoshopped onto postural expressions of emotion (disgust, anger, fear, sadness, joy, neutral). Each image was presented for 5,000 ms while participants' eye movements were recorded with an eye tracker; participants then categorized the emotion in the facial expression.
Consistent with prior research (Reschke et al., 2019), contextual influence (i.e., categorizations matching the context) differed significantly across emotion contexts as a function of face type, F(3, 5436) = 29.65, p < .001. Specifically, contextual influence increased significantly for closed and open disgust facial expressions in anger and sadness contexts (ps < .001) and decreased significantly in fear contexts (p < .001). However, these differences in behavioral responses were not reflected in participants' visual scanning. Specifically, participants fixated significantly more on the lower face region (M = 3.29) than on the upper face region (M = 0.76), t(2268) = 37.56, p < .001, regardless of face type. Participants also took significantly more time to fixate on the upper face region (M = 1046.77 ms) than on the lower face region (M = 580.07 ms), t(1652) = 10.59, p < .001, again regardless of face type.
These findings suggest that behavioral differences in context-dependent emotion categorization are not due to reliable differences in visual scanning. Future directions and alternative explanations will be discussed.