When face-to-face communication becomes effortful due to background noise and interfering talkers, visual cues become increasingly important for communication success. While previous research has selectively investigated head or hand movements, here we explore the combined movements of the head, hands, and whole body in acoustically adverse conditions. We hypothesized that with increasing background noise level, the frequency of typical conversational movements of the hands, head, trunk, and legs increases to support the speaker's role, while listeners support their role through increased use of confirmative head gestures and of head and trunk movements that improve the signal-to-noise ratio. We conducted a dyadic conversation experiment in which normal-hearing participants (n = 8) stood freely in an audiovisual virtual environment. Conversational movements were described with a newly developed labeling system for typical conversational movements, and the frequency of each movement type was analyzed. Higher background noise levels led to increased hand-gesture complexity and to modulation of head movements without a clear pattern. Participants leaned forward slightly more and used fewer head movements while listening than while speaking. An additional analysis of hand-speech synchrony, motivated by the hypothesis that synchrony degrades with background noise, showed a modest decrease in synchrony, reflected in an increased standard deviation at moderate sound levels. The results support previous findings on gesturing frequency, and we found limited support for changes in speech-gesture synchrony. This work reveals whole-body communication patterns and exemplifies interactive communication in the context of multimodal adaptation to communication needs.
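The hand-speech synchrony measure reported here (a standard deviation of timing offsets) can be illustrated with a minimal sketch. This is not the authors' actual pipeline: the pairing of each gesture apex with its nearest speech onset, and all timestamps, are illustrative assumptions.

```python
# Hedged sketch: quantify hand-speech synchrony as the signed time offset
# between each gesture apex and its nearest speech (e.g., stressed-syllable)
# onset. A larger SD of these offsets indicates looser synchrony.
# Data and pairing rule are illustrative assumptions, not the study's method.
from statistics import mean, stdev

def synchrony_offsets(gesture_apexes, speech_onsets):
    """For each gesture apex time (s), return the signed offset to the
    nearest speech onset (s). Positive = gesture lags speech."""
    return [min((apex - onset for onset in speech_onsets), key=abs)
            for apex in gesture_apexes]

# Toy data: apex/onset times in seconds for one talker in one noise condition.
apexes = [1.02, 2.48, 3.95, 5.51]
onsets = [1.00, 2.50, 4.00, 5.40]

offsets = synchrony_offsets(apexes, onsets)
print(f"mean offset: {mean(offsets):+.3f} s, SD: {stdev(offsets):.3f} s")
```

Comparing the SD of such offsets across noise conditions (quiet vs. moderate vs. loud) is one simple way to operationalize the "loss of synchrony" hypothesis described above.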