What kind of research do you do?

In the Gesture and Cognition Lab, we believe gesture is part of language. That is, gesture isn't just an expressive addition to speech; it's actually part of the same system that gives rise to spoken language. Our projects ask how the vocal (speech) and manual (gesture) aspects of this multimodal human language system are coordinated.

Current project

Our current research project focuses on the encoding of perspective (or point of view) in gesture. When people describe events, they sometimes take on the point of view of a character in the scene being described. At other times, they take the point of view of someone observing the event from a distance. For example, the video stills below show two people describing the same event from a cartoon clip (shown on the far left), but from different points of view. In the first case, the speaker uses her own body as though she were the character, while in the second case, the speaker simply traces the character's upward path, as though he were seeing the event from a distance.


We're interested in finding out what factors cause participants to take a first-person or third-person perspective when describing events. (We think it might have something to do with wearing a striped shirt and having a grey circle for a face, but we're not sure about the details.) We are also interested in how such gestural phenomena relate to similar uses of the hands and body in signed languages. Finally, we're interested in relating these behaviors to embodied or simulation-based theories of language. Such theories claim that during language production or comprehension, we generate modality-specific reconstructions of whatever the talk is about, using the same motor and visual parts of the brain that are involved in perception and action. We believe gesture has the potential to offer support for such theories, because gestures appear to be physical simulations of speech content.

Relevant publications

Parrill, F. (in press). Interactions between discourse status and viewpoint in co-speech gesture. In B. Dancygier & E. Sweetser (Eds.), Viewpoint in Language: A Multimodal Perspective. Cambridge: Cambridge University Press.

Parrill, F. (2011). The relation between the encoding of motion event information and viewpoint in English-accompanying gestures. Gesture, 11(1), 61-80.

Parrill, F., Bullen, J., & Hoburg, H. (2010). Effects of input modality on speech-gesture integration. Journal of Pragmatics, 42(11), 3130-3137.

Parrill, F. (2010). Viewpoint in speech-gesture integration: Linguistic structure, discourse structure, and event structure. Language and Cognitive Processes, 25(5), 650-668.

Parrill, F. (2009). Dual viewpoint gestures. Gesture, 9(3), 271-289.