EUROCOGSCI 2019 SUBMISSION (PAPER / POSTER)
The implication of working memory in gesture/speech integration: Validation study of iconic gesture videos among French speakers
Kendra Kandana Arachchige, Department of Cognitive Psychology & Neuropsychology, University of Mons
Isabelle Simoes Loureiro, Department of Cognitive Psychology & Neuropsychology, University of Mons
Mandy Rossignol, Department of Cognitive Psychology & Neuropsychology, University of Mons
Laurent Lefebvre, Department of Cognitive Psychology & Neuropsychology, University of Mons
Submission type: Paper or Poster
Abstract:
Gestures constitute an important part of nonverbal communication. Among them, iconic gestures occur during verbal conversation and typically convey information that is semantically and formally related to the simultaneous verbal utterance. Several studies agree on the impact of iconic gestures on language comprehension [1-5]. Because of this semantic connection, the involvement of verbal working memory (vWM) in gesture/speech integration has been suggested [6-9]. However, a clear relation between vWM and gesture/speech integration has not been observed. One explanation emerges from the methodology of previous studies. Their experimental task involved a congruency judgement of iconic gesture videos (main task) while maintaining information (letters or words) in memory (vWM loading task). The main aim of these studies was to observe an effect of vWM cognitive load on reaction times in the congruency judgement of iconic gesture videos. In the study by Wu & Coulson (2014), the low complexity of the vWM task (i.e. remembering 1 to 4 digits) could explain the absence of an effect [6]. In our previous study, although we switched from digits to words to increase the load on the phonological loop, all participants were asked to remember the same amount of information [7]. As individual span was not taken into account, this could have reduced the sensitivity of the task. In the present study, we therefore assess each participant at their own level (with word spans ranging from 4 to 8 words) rather than using a fixed span. However, for the main task, because past studies were conducted among English speakers, a validation study
was required to create a new database of video stimuli for French speakers and to ensure the congruent or incongruent nature of each video/sound pair. Thirty-four different gestures were considered and assembled into 17 pairs in order to create the incongruent condition. Each pair combined a gesture video with an accompanying sound. The sound in each video conveyed information that was either congruent (e.g. the action of 'breaking' with the sound 'break') or incongruent (e.g. the action of 'breaking' with the sound 'stir') with the gesture presented. Each gesture was enacted by a man or a woman, and each sound was spoken by either a male or a female voice. In order to validate these items, 49 healthy, native French-speaking participants (13 men; Mage = 23.7; SD = 2.7) were recruited. No participant presented any neurological or psychiatric condition, and all gave informed consent. They were asked to judge the semantic congruency of 102 videos (17 pairs x 2 gestures x 2 actor genders = 68 congruent videos, plus 34 videos for the incongruent condition) on a 5-level Likert scale ranging from 1 (totally incongruent) to 5 (totally congruent). They were also asked to judge whether the voice in each of 34 recordings belonged to a man or a woman. Participants were divided into 4 groups for condition counterbalancing. Each group first saw either (1) the congruent videos enacted by a man, (2) the congruent videos enacted by a woman, (3) the incongruent videos or (4) the recordings without video. After analysis, 16 pairs of videos were retained: congruent pairs were rated congruent at an average of 4.45/5, and incongruent pairs were rated at an average of 1.16/5. One pair (shaking-hammering) was judged too semantically similar (a mean rating of 2.7/5 in the incongruent condition) and was therefore excluded from the stimulus set.
Furthermore, participants classified the gender of the voice recordings correctly in 100% of cases. These results allowed us to create a database of 256 combinations of iconic gesture and sound (16 validated pairs x 16 assembly possibilities between enacted gesture and heard sound), ready for use in our main study.
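The stimulus counts reported above can be checked with a short sketch (variable names are ours, introduced only for illustration; they are not part of the authors' materials):

```python
# Hypothetical bookkeeping for the stimulus counts described in the abstract.
from itertools import product

n_pairs_considered = 17        # 34 gestures assembled into 17 pairs
n_gestures_per_pair = 2
n_actor_genders = 2            # each gesture enacted by a man or a woman

# Validation set: every congruent gesture/actor combination,
# plus one incongruent video per gesture.
congruent_videos = n_pairs_considered * n_gestures_per_pair * n_actor_genders  # 68
incongruent_videos = 34
validation_items = congruent_videos + incongruent_videos
print(validation_items)  # 102 videos rated on the 5-point Likert scale

# Final database: 16 validated pairs; within each pair, four binary factors
# (gesture, actor gender, sound, voice gender) give 2^4 = 16 assemblies.
n_validated_pairs = 16
assemblies_per_pair = len(list(product(range(2), repeat=4)))  # 16
print(n_validated_pairs * assemblies_per_pair)  # 256 combinations
```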
References
[1] Beattie, G., & Shovelton, H. (1999). Mapping the range of information contained in the iconic hand gestures that accompany spontaneous speech. Journal of Language and Social Psychology, 18(4), p.438-462. doi: 10.1177/0261927X99018004005
[2] Wu, Y., & Coulson, S. (2015). Iconic gestures facilitate discourse comprehension in individuals with superior immediate memory for body configurations. Psychological Science, 26(11), p.1717-1727. doi: 10.1177/0956797615597671
[3] Özyürek, A., Willems, R., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), p.605-616. doi: 10.1162/jocn.2007.19.4.605
[4] Holle, H., & Gunter, T. (2007). The role of iconic gestures in speech disambiguation: ERP evidence. Journal of Cognitive Neuroscience, 19(7), p.1175-1192. doi: 10.1162/jocn.2007.19.7.1175
[5] Dick, A., Goldin-Meadow, S., Hasson, U., Skipper, J., & Small, S. (2009). Co-speech gestures influence neural activity in brain regions associated with processing semantic information. Human Brain Mapping, 30(11), p.3509-3526. doi: 10.1002/hbm.20774
[6] Wu, Y., & Coulson, S. (2014). Co-speech iconic gestures and visuo-spatial working memory. Acta Psychologica, 153, p.39-50. doi: 10.1016/j.actpsy.2014.09.002
[7] Kandana Arachchige, K., Lefebvre, L., Simoes Loureiro, I., Blekic, W., Rossignol, M., & Holle, H. (2018). The effect of verbal working memory load in speech/gesture integration processing. Front. Neurosci. Conference Abstract: Belgian Brain Congress 2018 - Belgian Brain Council, Liège, Belgium. doi: 10.3389/conf.fnins.2018.95.00043
[8] Gillespie, M., James, A., Federmeier, K., & Watson, D. (2014). Verbal working memory predicts co-speech gesture: Evidence from individual differences. Cognition, 132(2), p.174-180. doi: 10.1016/j.cognition.2014.03.012
[9] De Ruiter, J.-P. (1998). Gesture and speech production (Doctoral dissertation). Max Planck Institute for Psycholinguistics.