
time is it?” — on at least one of two tries. A next step, Metzger says, could be combining this spelling-based approach with a words-based approach they developed previously to enable users to communicate more quickly and with less effort.
‘Still in the early stage’
Today, close to 40 people worldwide have been implanted with microelectrode arrays, with more coming online. Many of these volunteers — people paralyzed by strokes, spinal cord injuries or ALS — spend hours hooked up to computers helping researchers develop new brain-machine interfaces to allow others, one day, to regain functions they have lost. Jun Wang, a computer and speech scientist at the University of Texas at Austin, says he is excited about recent progress in creating devices to restore speech, but cautions there is a long way to go before practical application. “At this moment, the whole field is still in the early stage.”
Wang and other experts would like to see upgrades to hardware and software that make the devices less cumbersome, more accurate and faster. For example, the device pioneered by the UCSF lab worked at a pace of about seven words per minute, whereas natural speech moves at about 150 words a minute. And even if the technology evolves to mimic human speech, it is unclear whether approaches developed in patients with some ability to move or speak will work in those who are completely locked in. “My intuition is it would scale, but I can’t say that for sure,” says Metzger. “We would have to verify that.”
Another open question is whether it is possible to design brain-machine interfaces that do not require brain surgery. Attempts to create noninvasive approaches have faltered because such devices have tried to make sense of signals that have traveled through layers of tissue and bone, like trying to follow a football game from the parking lot.
Wang has made headway using an advanced imaging technique called magnetoencephalography (MEG), which records magnetic fields on the outside of the skull that are generated by the electric currents in the brain, and then translating those signals into text. Right now, he is trying to build a device that uses MEG to recognize the 44 phonemes, or speech sounds, in the English language — like ph or oo — which could be used to construct syllables, then words, then sentences.
Ultimately, the biggest challenge to restoring speech in locked-in patients may have more to do with biology than with technology. The way speech is encoded, particularly internal speech, could vary depending on the individual or the situation. One person might imagine scrawling a word on a sheet of paper in their mind’s eye; another might hear the word, still unspoken, echoing in their ears; yet another might associate a word with its meaning, evoking a particular feeling-state. Because different brain waves could be associated with different words in different people, the technique might have to be adapted to each person’s individual nature.
“I think this multipronged approach by the different groups is our best way to cover all of our bases,” says Bjånes, “and have approaches that work in a bunch of different contexts.”
Marla Broadfoot is a freelance science writer who lives in Wendell, North Carolina. She has a PhD in genetics and molecular biology. Follow her @mvbroadfoot and see more of her work at marlabroadfoot.com.
This article was originally published in Knowable Magazine.