Recent research suggests that two processes may coexist in the brain's representation of the human body: one based on visuo-spatial information and the other on linguistic information. This model postulates that, in addition to relying on visual perception to understand the relationships between body parts, we also use information derived from language.

A study conducted by Luca Rinaldi and collaborators explored this idea by directly investigating whether these two forms of representation - perceptual and linguistic - truly coexist. To do so, the researchers used distributional semantic models (DSMs), computational tools that analyse texts to identify how words (and, in this case, body parts) are related in language. For instance, words like "hand" often appear associated with "arm" or "finger" in texts. These linguistic relationships were used to construct a body map based on the semantic distances between different body parts.

To test this linguistic map, the researchers conducted two behavioural experiments. In both, participants had to evaluate the proximity between body parts presented as words or as images. The results showed that both perceptual information (based on vision) and linguistic information (extracted from language) influenced participants' performance. In other words, linguistic representations complemented visual ones, suggesting that the brain uses both sources of information when mentally processing and organizing the human body.

These findings support theories arguing that mental representations combine perceptual and linguistic information. Thus, our understanding of the human body depends not only on what we see but also on how language structures that understanding.

This study was supported by the BIAL Foundation, in the scope of the research project 13/22 - RE-thinking the role of the spatial memory system in cognitive MAPs (acronym: REMAP), and published in the Journal of Cognition, in the article A Body Map Beyond Perceptual Experience.
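The general idea behind this approach can be sketched in code. The snippet below is a minimal illustration, not the authors' actual pipeline: it assumes hypothetical toy word vectors for a few body-part terms (real DSMs derive such vectors from large text corpora), computes pairwise cosine distances, and then applies classical multidimensional scaling to turn the distance matrix into 2-D "map" coordinates.

```python
import numpy as np

# Hypothetical toy vectors for body-part terms (illustrative only;
# real distributional semantic models learn these from text corpora).
vectors = {
    "hand":   np.array([0.9, 0.1, 0.2]),
    "finger": np.array([0.8, 0.2, 0.1]),
    "arm":    np.array([0.7, 0.4, 0.3]),
    "foot":   np.array([0.1, 0.9, 0.2]),
}

def cosine_distance(u, v):
    """1 - cosine similarity: smaller values mean semantically closer words."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

parts = list(vectors)
n = len(parts)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = cosine_distance(vectors[parts[i]], vectors[parts[j]])

# Classical multidimensional scaling (MDS): embed the distance matrix
# into 2-D coordinates, yielding a spatial "map" of the body parts.
sq = dist ** 2
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ sq @ J                    # double-centered Gram matrix
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1][:2]    # two largest eigenvalues
coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

for part, (x, y) in zip(parts, coords):
    print(f"{part:>6}: ({x:+.3f}, {y:+.3f})")
```

With these toy vectors, "hand" ends up closer to "finger" than to "foot" in the resulting map, mirroring the kind of semantic-distance structure the study extracted from language.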
ABSTRACT
The human body is perhaps the most ubiquitous and salient visual stimulus that we encounter in our daily lives. Given the prevalence of images of human bodies in natural scene statistics, it is no surprise that our mental representations of the body are thought to strongly originate from visual experience. Yet, little is still known about high-level cognitive representations of the body. Here, we retrieved a body map from natural language, taking this as a window into high-level cognitive processes. We first extracted a matrix of distances between body parts from natural language data and employed this matrix to extrapolate a body map. To test the effectiveness of this high-level body map, we then conducted a series of experiments in which participants were asked to classify the distance between pairs of body parts, presented either as words or images. We found that the high-level body map was systematically activated when participants were making these distance judgments. Crucially, the linguistic map explained participants' performance over and above the visual body map, indicating that the former cannot be simply conceived as a by-product of perceptual experience. These findings, therefore, establish the existence of a behaviourally relevant, high-level representation of the human body.