Affective synchrony

Our research shows that reproducing another individual’s emotions in the self – embodying perceived emotions – constitutes one basis for emotional information processing (see our 2016 review). That is, seeing someone’s emotion expression and making the same expression with one’s own face helps the perceiver better understand, and even empathize with, the emotion of the other. When two people begin to mimic each other’s emotion expressions, and sometimes also converge in their underlying physiology over time, we say that they have synced their emotions. This may even be what people mean when they say that they are “in sync.”

How do people come into affective synchrony? How does a precisely timed call-and-response of emotions between two people help them come together, stay together, and pursue common goals? Integrating the literature, we work within a functional framework according to which affective synchrony provides 1) an efficient means of detecting and meeting challenges and opportunities in the environment, 2) a common basis for information processing, and 3) stronger social connectedness through the generation of feelings of similarity and closeness.

 

Syncing when speaking isn’t possible

In recent research, we found that expressive synchrony increased when pairs of participants working on a joint task were prevented from using spoken language. Participants took turns completing trials of four tasks in which they could earn points; two of them, a risk-taking task and a Jenga tower-building task, were designed to elicit emotion. Pairs were assigned to a condition in which spoken language was either permitted or not permitted. As expected, in the latter condition, in which pairs could not speak, both facial expressiveness and facial expressive synchrony were higher than in the spoken-language-permitted condition. We think that pairs synchronized in order to compensate for the loss of prediction based on information in the verbal channel.
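To give an intuition for how expressive synchrony between two people can be quantified, here is a minimal sketch that scores synchrony as the mean Pearson correlation of two facial-expressiveness time series over short sliding windows. This is an illustration only, not the published analysis pipeline; the function names, window size, and step size are assumptions for the example.

```python
# Hypothetical sketch: score expressive synchrony between two people as the
# mean Pearson correlation of their facial-expressiveness time series,
# computed over short sliding windows. Window and step sizes are arbitrary
# choices for illustration, not values from the published study.

def pearson(x, y):
    """Pearson correlation of two equal-length sequences (0.0 if flat)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def windowed_synchrony(series_a, series_b, window=30, step=15):
    """Mean windowed correlation of two expressiveness traces."""
    scores = []
    for start in range(0, len(series_a) - window + 1, step):
        scores.append(pearson(series_a[start:start + window],
                              series_b[start:start + window]))
    return sum(scores) / len(scores) if scores else 0.0

# Two identical expressiveness traces yield synchrony near 1.0;
# mirror-opposite traces yield synchrony near -1.0.
import math
a = [math.sin(t / 5) for t in range(120)]
print(round(windowed_synchrony(a, a), 2))
print(round(windowed_synchrony(a, [-v for v in a]), 2))
```

A real analysis would additionally handle lagged responding (one partner trailing the other), for example by taking the maximum correlation across a small range of time offsets within each window.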

REPRESENTATIVE PUBLICATIONS

Zhao, F., Wood, A., Mutlu, B., & Niedenthal, P. (2022). Faces synchronize when communication through spoken language is prevented. Emotion, 23(1), 87–96. http://dx.doi.org/10.1037/emo0000799

Wood, A., Lipson, J., Zhao, O., & Niedenthal, P. (2021). Forms and functions of affective synchrony. In Handbook of Embodied Psychology (pp. 381–402). Springer, Cham.

Korb, S., Malsert, J., Strathearn, L., Vuilleumier, P., & Niedenthal, P. (2016). Sniff and mimic — Intranasal oxytocin increases facial mimicry in a sample of men. Hormones and Behavior, 84, 64–74.