Facebook has released an update on its ambitious plans for a brain-reading computer interface, thanks to a team of Facebook Reality Labs-backed scientists at the University of California, San Francisco. The UCSF researchers just published the results of an experiment in decoding people's speech using implanted electrodes. Their work demonstrates a method of quickly "reading" whole words and phrases from the brain, getting Facebook slightly closer to its dream of a noninvasive thought-typing system.
People can already type with brain-computer interfaces, but those systems often ask them to spell out individual words with a virtual keyboard. In this experiment, which was published in Nature Communications today, subjects listened to multiple-choice questions and spoke the answers out loud. An electrode array recorded activity in parts of the brain associated with understanding and producing speech, looking for patterns that matched with specific words and phrases in real time.
If participants heard someone ask "Which musical instrument do you like listening to," for example, they'd respond with one of several options like "violin" or "drums" while their brain activity was recorded. The system would guess when they were hearing a question and when they were answering it, then guess the content of both speech events. The predictions were shaped by prior context: once the system determined which question subjects were hearing, it would narrow the set of likely answers. The system could produce results with 61 to 76 percent accuracy, compared with the 7 to 20 percent accuracy expected by chance.
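The context-conditioning idea described above can be sketched in a few lines of Python. Everything here is hypothetical: the question labels, answer lists, and scores are made up for illustration and are not the study's actual stimuli or model. The sketch only shows the general principle of restricting the answer vocabulary once the question has been identified.

```python
# Hypothetical mapping from identified question to its plausible answers
# (invented labels, not the study's real question set).
ANSWERS_FOR = {
    "instrument": ["violin", "drums", "piano"],
    "temperature": ["hot", "cold", "fine"],
}

def decode_answer(question, answer_scores):
    """Pick the highest-scoring answer among those valid for this question."""
    valid = ANSWERS_FOR[question]
    return max(valid, key=lambda a: answer_scores.get(a, 0.0))

# Raw classifier scores over the whole answer vocabulary (made-up numbers).
scores = {"violin": 0.4, "drums": 0.3, "hot": 0.9, "cold": 0.1}

# "hot" scores highest overall, but once the question is known to be about
# instruments, the context prior restricts the choice to instrument answers.
print(decode_answer("instrument", scores))   # -> violin
print(decode_answer("temperature", scores))  # -> hot
```

Narrowing the candidate set this way is one reason a small, fixed question-and-answer vocabulary makes real-time decoding tractable.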
"Here we show the value of decoding both sides of a conversation: both the questions someone hears and what they say in response," said lead author and UCSF neurosurgery professor Edward Chang in a statement. But Chang noted that this system recognizes only a very limited set of words so far; participants were asked just nine questions with 24 total answer options. The study's subjects, who were being prepped for epilepsy surgery, used highly invasive implants. And they were speaking answers aloud, not simply thinking them.
That's very different from the system Facebook described in 2017: a noninvasive, mass-market cap that lets people type more than 100 words per minute without manual text entry or speech-to-text transcription. Facebook also highlights a Reality Labs-backed headset that reads brain activity with near-infrared light, potentially making a noninvasive interface more likely.
As Facebook says, brain reading could be useful for virtual and augmented reality glasses even in very limited capacities. "Being able to decode even just a handful of imagined words, like 'select' or 'delete,' would provide entirely new ways of interacting with today's VR systems and tomorrow's AR glasses," the Reality Labs post reads. Facebook isn't the only big company working on brain-computer interfaces: Elon Musk's Neuralink recently revealed new work on a threadlike brain-reading implant.