Dr. Brown: Technology can restore speech to those unable to speak

No one knows when storytelling began, but it would have required symbolic thought and language, and the cognitive ability to imagine and literally “talk about” the past, present and future.

Modern humans were probably not the first hominins to evolve sophisticated symbolic oral language. Based on the art they created, Neanderthals surely thought and spoke symbolically too.

The same was probably true for their close cousins, the Denisovans, and perhaps, in prototypical form, as early as 800,000 years ago, about the time the Homo brain doubled in size. But however deep the evolutionary roots of symbolic language may be, acquiring it was central to who we became. Losing speech, even for a short time, makes communicating with others far harder and more isolating in humans, the most social of species.

Stephen Hawking, perhaps the most famous physicist of our time, developed slowly progressive amyotrophic lateral sclerosis (ALS) as a young adult. Eventually the disease left him unable to speak without the help of a computer, which translated small contractions of his facial muscles, recorded by an infrared camera, into words, phrases and even whole sentences. The system used algorithms capable of anticipating Hawking’s intended words based on what he had said in the past.

It worked brilliantly. He and his many fans learned to love the trademark mechanical tone of his computerized speech. And his graduate students learned to be patient with the system’s inherent delays; most even found the delays helped them keep up with Hawking’s train of thought.
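For readers curious about the mechanics, the core idea behind such word prediction, suggesting likely next words from a record of past usage, can be sketched in a few lines of Python. This is a toy illustration only; Hawking’s actual software was far more sophisticated, and the sample sentence and counts here are invented.

    from collections import Counter, defaultdict

    # Toy sketch of word prediction from past usage; not Hawking's actual software.
    # Count how often each word has followed another in previous utterances.
    history = "the universe is expanding and the universe is vast".split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        bigrams[prev][nxt] += 1

    def suggest(prev_word, k=3):
        """Return up to k of the most likely next words, given the previous word."""
        return [word for word, _ in bigrams[prev_word].most_common(k)]

    print(suggest("the"))  # ['universe'], the word most often used after 'the'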

Another way to restore speech to patients with ALS, and to others left paralyzed by strokes that spare the speech areas of the brain, is to harness the brain’s signals related to speech articulation and render the patient’s intended words, and even sentences, on a computer screen.

One case involved a young man who, 15 years earlier, had suffered a post-traumatic brainstem stroke that left him quadriplegic and unable to make any intelligible sounds. Fortunately, functional MRI (fMRI) studies of the brain regions related to his speech revealed that they behaved normally.

The method involved implanting a multielectrode array in the subdural space over the left side of his brain, in regions identified as closely related to speech articulation by fMRI studies and by direct stimulation of the brain at the time of the implantation procedure.

Through a connecting hub embedded in the skull, the electrical signals were transferred to a remote computer for analysis. There, self-learning algorithms first extracted word-specific signals from the brain’s otherwise noisy background activity, and then predicted the next word in a phrase or sentence based on the words the patient had already chosen.
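The actual decoding software is far more elaborate, but its two-stage logic can be caricatured in Python: a classifier scores each word in the vocabulary from the recorded neural features, and a simple language model rescores those candidates using the previous word. Every name and number below is invented for illustration; none of it comes from the clinical system.

    import numpy as np

    # Toy two-stage decoder: illustrative only, not the clinical system.
    VOCAB = ["bring", "my", "glasses", "please", "water"]  # stand-in vocabulary
    rng = np.random.default_rng(0)

    def classify(features, weights):
        """Stage 1: map neural features to a probability for each vocabulary word."""
        scores = weights @ features
        exp = np.exp(scores - scores.max())   # softmax over the vocabulary
        return exp / exp.sum()

    # Stage 2: a tiny made-up language model scoring word pairs, so that likely
    # sequences such as "bring my glasses" win out over near-misses.
    LM = {("bring", "my"): 0.6, ("my", "glasses"): 0.7, ("glasses", "please"): 0.8}

    def decode_word(features, weights, prev_word):
        classifier_p = classify(features, weights)
        lm_p = np.array([LM.get((prev_word, w), 0.05) for w in VOCAB])
        combined = classifier_p * lm_p        # fuse both sources of evidence
        return VOCAB[int(np.argmax(combined))]

    weights = rng.standard_normal((len(VOCAB), 8))  # stand-in for trained weights
    features = rng.standard_normal(8)               # stand-in for recorded signals
    print(decode_word(features, weights, prev_word="my"))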

In 50 sessions spread over 83 weeks, and after many hours of tiring training, the patient learned a vocabulary of 50 common words that could be rearranged into sequences of up to eight words, such as “bring my glasses please,” chosen because the patient and his caregivers considered them useful.

The subject achieved a success rate of three in four for single words, and roughly half of his sentences were decoded without error. Each word took four to five seconds to decode, for a median rate of 12.4 words per minute. Those numbers are impressive, even though the rate is much slower than ordinary conversation (120 to 150 words per minute). Even so, it was several times faster than earlier systems for translating brain signals into useful everyday speech.
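A quick back-of-the-envelope check puts those figures in perspective (the 135 below is simply the midpoint of the quoted conversational range):

    # Rough arithmetic on the reported rates.
    decoded_wpm = 12.4
    conversational_wpm = 135                 # midpoint of 120-150

    print(60 / decoded_wpm)                  # ~4.8 seconds per decoded word
    print(conversational_wpm / decoded_wpm)  # roughly 11 times slower than conversation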

I chose this example to highlight the burgeoning field of hybrid human-computer systems designed to solve challenges in neurology: in this case, the total loss of articulated speech in a man with a stroke, and in other, more common cases, the loss of speech in paralytic diseases such as ALS.

In the case cited here, only 128 electrodes were embedded in the array. That may seem like a lot, but compared with the millions of nerve cells that participate in turning an intended word into a spoken one, the system is crude. What surprised me was just how successful such a simple system was at restoring practical, workaday speech.

The international effort of which this study is a part has led to a much better understanding of how language is encoded in the brain. The study also highlights how important artificial intelligence (AI) has become in unravelling the mysteries of the brain. That’s quite an achievement in an area of neuroscience where the brain’s complexity makes useful studies very difficult to mount.

Looking back, it’s amazing that only one species, and not the one with the most brain cells, has crossed the threshold for sophisticated symbolic language. So far, at least.

Dr. William Brown is a professor of neurology at McMaster University and co-founder of the Infohealth series at the Niagara-on-the-Lake Public Library.  
