Spoken and Signed Language Use the Same Neural Pathways

A new study finds that ASL users and English speakers show nearly identical brain patterns when producing complex phrases.

Sign language (Credit: Drawing by R.A. Olea/Wikimedia Commons)
“The research shows for the first time that...the neural timing and localization of the planning of phrases is comparable between American Sign Language and English.”
The neural processes that underlie complex linguistic expressions in both spoken and signed languages are quite similar to one another, according to a new study involving San Diego State University professor Karen Emmorey, a leading expert in sign language.

“Although there are many reasons to believe that signed and spoken languages should be neurobiologically quite similar, evidence of overlapping computations at this level of detail is still a striking demonstration of the fundamental core of human language,” said the study’s senior author Liina Pylkkanen, a professor in New York University’s Department of Linguistics and Department of Psychology.

The study was led by Esti Blanco-Elorrieta, a doctoral student in New York University’s Department of Psychology and NYU Abu Dhabi Institute, and was reported in the latest issue of the journal Scientific Reports.

Past research has shown that signed and spoken languages engage fundamentally similar structures in the brain. Emmorey’s work has focused not only on the neural underpinnings of language but also on the differences and similarities between the syntax and grammar of sign languages and spoken languages. Last year, she and her team won the People’s Choice Award at the National Science Foundation’s Visualization Challenge for ASL-LEX, a project depicting phonological and syntactic details of hundreds of ASL signs.

What has been less clear to researchers, however, is whether the same circuitry in the brain underlies the construction of complex linguistic structures in sign and speech, such as phrases that describe objects using multiple adjectives and nouns.

To address this question, the scientists studied the production of multiple two-word phrases by deaf American Sign Language (ASL) signers residing in and around New York and by hearing English speakers living in Abu Dhabi.

Signers and speakers viewed the same pictures and named them using identical expressions—“white lamp” or “green bell,” for example. To gauge the subjects’ neural activity during this task, the researchers used a brain-imaging technique known as magnetoencephalography (MEG), which maps brain activity by measuring the magnetic fields generated by the brain’s electrical currents.

Despite the different articulators involved (the vocal tract vs. the hands), phrase-building engaged the same parts of the brain in both signers and speakers: the left anterior temporal and ventromedial cortices.

The researchers point out that this neurobiological similarity between sign and speech goes beyond basic similarities and into more intricate processes—the same parts of the brain are used at the same time for the specific computation of combining words or signs into more complex expressions.

“This research shows for the first time that despite obvious physical differences in how signed and spoken languages are produced and comprehended, the neural timing and localization of the planning of phrases is comparable between American Sign Language and English,” explained Blanco-Elorrieta.

Researching the neural commonalities between languages shines a light on the fundamental building blocks of language, Emmorey added.

“We can only discover what is universal to all human languages by studying sign languages,” she said.

The research was supported by grants from the National Science Foundation (BCS-1221723 to LP) and the National Institutes of Health (R01-DC010997 to KE), by the NYUAD Institute, and by a La Caixa Foundation Fellowship for Post-Graduate Studies (EBE).