


AI aids brain-computer interface research: New York University's breakthrough neural speech decoding technology, published in a Nature sub-journal
Apr 17, 2024, 08:40 AM · Author | Chen Xupeng
Aphasia caused by defects in the nervous system can lead to serious disability in daily life and may limit people's professional and social lives.
In recent years, the rapid development of deep learning and brain-computer interface (BCI) technology has made it feasible to develop neural speech prostheses that could help people with aphasia communicate. However, decoding speech from neural signals faces challenges.
Recently, researchers from the Video Lab and Flinker Lab at New York University developed a new type of differentiable speech synthesizer, together with a lightweight convolutional neural network that encodes speech into a series of interpretable speech parameters (such as pitch, loudness, and formant frequencies); the differentiable synthesizer then resynthesizes speech from these parameters.
By mapping neural signals to these speech parameters, the researchers established a neural signal decoding system that is highly interpretable and applicable to situations with small data volumes.
The research, titled "A neural speech decoding framework leveraging deep learning and speech synthesis", was published in Nature Machine Intelligence on April 8, 2024.
Paper link: https://www.nature.com/articles/s42256-024-00824-8
Research Background
Most attempts to develop neural speech decoders rely on a special kind of data: electrocorticography (ECoG) recordings from patients undergoing epilepsy surgery. Electrodes implanted in these patients collect cortical data during speech production; such data have high spatiotemporal resolution and have helped researchers achieve a series of remarkable results in speech decoding, advancing the development of the brain-computer interface field.
Speech decoding of neural signals faces two major challenges.
First, the data available to train personalized neural-to-speech decoding models is very limited in duration, usually only about ten minutes, whereas deep learning models typically require large amounts of training data.
Second, human pronunciation is highly variable: even when the same person repeats the same word, the speaking rate, intonation, and pitch change, which adds complexity to the representation space the model must build.
Early attempts to decode neural signals into speech mainly relied on linear models. These models usually did not require huge training datasets and were highly interpretable, but their accuracy was low.
More recent approaches based on deep neural networks, especially convolutional and recurrent architectures, have advanced along two key dimensions: the intermediate latent representation used to model speech and the quality of the synthesized speech. For example, some studies decode cerebral cortical activity into an articulatory (mouth-movement) space and then convert it into speech; although the decoding performance is strong, the reconstructed voice sounds unnatural.
On the other hand, some methods successfully reconstruct natural-sounding speech using WaveNet vocoders, generative adversarial networks (GANs), and the like, but their accuracy is limited. Recently, in a study of patients with implanted devices, speech waveforms that were both accurate and natural were achieved by using quantized HuBERT features as an intermediate representation space and a pretrained speech synthesizer to convert these features into speech.
However, HuBERT features cannot represent speaker-specific acoustic information and can only generate a fixed, uniform speaker voice, so an additional model is needed to convert this generic voice into a specific patient's voice. Furthermore, that study, like most previous attempts, adopted a non-causal architecture, which may limit its use in practical brain-computer interface applications that require temporally causal operation.
Main model framework
To address these challenges, the researchers introduce in this article a new framework for decoding speech from electrocorticography (ECoG) signals. They build a low-dimensional intermediate representation (low-dimensional latent representation), generated by a speech encoding-decoding model that uses only speech signals (Figure 1).
The proposed framework consists of two parts: an ECoG decoder, which converts the ECoG signal into interpretable acoustic speech parameters (such as pitch, voicing, loudness, and formant frequencies); and a speech synthesizer, which converts these speech parameters into a spectrogram.
The researchers constructed a differentiable speech synthesizer, which allows the synthesizer to take part in training while the ECoG decoder is being trained, with both jointly optimized to reduce the spectrogram reconstruction error. This low-dimensional latent space is highly interpretable, and together with a lightweight pre-trained speech encoder that produces reference speech parameters, it enabled the researchers to build an efficient neural speech decoding framework that overcomes the problem of data scarcity.
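To make the joint optimization concrete, here is a minimal PyTorch sketch of how an ECoG decoder and a differentiable synthesizer could be trained end to end against a spectrogram reconstruction loss. The module definitions, tensor shapes, and loss choice are illustrative assumptions, not the authors' implementation; the toy synthesizer below is just a linear projection standing in for the source-filter model described later.

```python
import torch
import torch.nn as nn

class EcogDecoder(nn.Module):
    """Toy ECoG decoder: maps (batch, electrodes, time) to speech parameters over time."""
    def __init__(self, n_electrodes=64, n_params=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_electrodes, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, n_params, kernel_size=5, padding=2),
        )

    def forward(self, ecog):              # (batch, electrodes, time)
        return self.net(ecog)             # (batch, n_params, time)

class ToySynthesizer(nn.Module):
    """Differentiable stand-in for the speech synthesizer: parameters -> spectrogram."""
    def __init__(self, n_params=18, n_mels=80):
        super().__init__()
        self.proj = nn.Conv1d(n_params, n_mels, kernel_size=1)

    def forward(self, params):            # (batch, n_params, time)
        return self.proj(params)          # (batch, n_mels, time)

decoder, synth = EcogDecoder(), ToySynthesizer()
optimizer = torch.optim.Adam(list(decoder.parameters()) + list(synth.parameters()), lr=1e-3)
loss_fn = nn.L1Loss()

# Dummy batch: 4 trials, 64 electrodes, 100 time frames, 80-bin target spectrograms.
ecog = torch.randn(4, 64, 100)
target_spec = torch.randn(4, 80, 100)

for step in range(3):                       # a few illustrative training steps
    pred_params = decoder(ecog)             # neural signals -> speech parameters
    pred_spec = synth(pred_params)          # speech parameters -> spectrogram
    loss = loss_fn(pred_spec, target_spec)  # spectrogram reconstruction error
    optimizer.zero_grad()
    loss.backward()                         # gradients flow back through the synthesizer
    optimizer.step()
```

Because the synthesizer is differentiable, the reconstruction error can drive the ECoG decoder directly, which is the point the paragraph above makes.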
The framework can produce natural speech that closely resembles the speaker's own voice, and the ECoG decoder component can be plugged into different deep learning architectures and also supports causal operations. The researchers collected and processed ECoG data from 48 neurosurgical patients and used multiple deep learning architectures (including convolutional networks, recurrent neural networks, and Transformers) as ECoG decoders.
The framework showed high accuracy across these models, with the convolutional (ResNet) architecture performing best: the Pearson correlation coefficient (PCC) between the original and decoded spectrograms reached 0.806. The proposed framework achieves this accuracy using only causal operations and a relatively low spatial sampling density (low-density grids with 10 mm spacing).
The researchers also showed that speech can be decoded effectively from both the left and right hemispheres of the brain, extending neural speech decoding to the right hemisphere.
The code for this research is open source: https://github.com/flinkerlab/neural_speech_decoding
An important innovation of this study is the proposed differentiable speech synthesizer, which makes the speech re-synthesis task highly efficient and can synthesize high-fidelity audio that closely matches the original voice even from very little speech data.
The differentiable speech synthesizer draws on the principles of the human vocal production system, decomposing speech into two components: Voice (used to model vowels) and Unvoice (used to model consonants):
For the Voice component, a fundamental-frequency signal first generates harmonics, which are filtered by a filter composed of formants F1-F6 to obtain the spectral features of the vowel part. For the Unvoice component, the researchers filter white noise with a corresponding filter to obtain its spectrum. A learnable parameter controls the mixing ratio of the two components at each time step; the mixture is then amplified by a loudness signal, and background noise is added to obtain the final speech spectrogram. Based on this speech synthesizer, the paper designs an efficient speech re-synthesis framework as well as the neural-to-speech decoding framework.
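As a rough numerical illustration of the source-filter idea described above, the sketch below mixes harmonics of a fundamental frequency, weighted by crude Gaussian "formant" bumps, with white noise via a learnable mixing weight, then applies loudness scaling and a little background noise. It is a toy under assumed shapes and units, not the paper's synthesizer.

```python
import math
import torch

def toy_synthesize(f0, loudness, mix, formant_freqs, sr=16000, dur=0.5, bw=150.0):
    """Crude source-filter toy: harmonics of f0 shaped by Gaussian 'formant' bumps,
    mixed with white noise, scaled by loudness, plus a little background noise."""
    t = torch.arange(int(sr * dur)) / sr
    formants = torch.tensor(formant_freqs)

    # Voiced source: harmonics of f0, each weighted by how close it sits to a formant.
    voiced = torch.zeros_like(t)
    n_harmonics = int((sr / 2) // float(f0))
    for k in range(1, n_harmonics + 1):
        fk = k * f0
        weight = torch.exp(-((fk - formants) ** 2) / (2 * bw ** 2)).sum()
        voiced = voiced + weight * torch.sin(2 * math.pi * fk * t)

    # Unvoiced source: white noise (a real synthesizer would band-pass filter it).
    unvoiced = torch.randn_like(t)

    # Learnable mix between the two sources, loudness scaling, background noise.
    signal = mix * voiced + (1.0 - mix) * unvoiced
    return loudness * signal + 0.01 * torch.randn_like(t)

wave = toy_synthesize(
    f0=torch.tensor(120.0),
    loudness=torch.tensor(0.3),
    mix=torch.tensor(0.8, requires_grad=True),   # mixing weight stays differentiable
    formant_freqs=[700.0, 1200.0, 2600.0],
)
print(wave.shape)  # torch.Size([8000])
```

Every operation here is differentiable with respect to the speech parameters, which is what lets gradients from a spectrogram loss reach the decoder in the framework described above.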
Research Results
Speech decoding with temporal causality
First, the researchers directly compared different model architectures, convolutional (ResNet), recurrent (LSTM), and Transformer (3D Swin), in terms of speech decoding performance. Notably, all of these models can perform either temporally non-causal or causal operations: a causal model generates speech using only current and past neural signals, whereas a non-causal model also uses future neural signals, which is not feasible in real-time applications. The ResNet architecture achieved the best Pearson correlation coefficient (PCC) between the original and decoded spectrograms, with average PCCs of 0.806 (non-causal) and 0.797 (causal), followed by the Swin model (average PCCs of 0.792 non-causal and 0.798 causal) (Figure 2a).
Similar findings were obtained with the STOI metric. The researchers found that even the causal version of the ResNet model was comparable to its non-causal version, with no significant difference between the two; in contrast, the causal recurrent model fell behind its non-causal version, so the researchers subsequently focused mainly on the ResNet and Swin models. The evaluation used a cross-validation scheme in which different trials of the same word never appear in both the training and test sets. The models also decoded words unseen during training well, mainly because the models built here perform speech decoding at the phoneme (or a similar) level. Further, the researchers showed the word-level performance of the causal ResNet decoder for two participants (with low-density ECoG sampling); the decoded spectrograms accurately preserved the spectro-temporal structure of the original speech (Figure 2c,d).
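The causal versus non-causal distinction can be illustrated with temporal convolutions: a causal convolution pads only on the left, so each output frame depends only on current and past inputs, while a standard convolution with symmetric padding also looks at future frames. The sketch below is a generic illustration of this idea, not the architecture used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1D convolution that only looks at current and past time steps."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.pad = kernel_size - 1                     # pad on the left only
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size)

    def forward(self, x):                              # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))                    # (left, right) padding
        return self.conv(x)

# A non-causal counterpart uses symmetric padding and therefore sees future frames.
noncausal = nn.Conv1d(8, 16, kernel_size=5, padding=2)
causal = CausalConv1d(8, 16, kernel_size=5)

x = torch.randn(1, 8, 100)                             # e.g. 8 ECoG channels, 100 frames
print(causal(x).shape, noncausal(x).shape)             # both: torch.Size([1, 16, 100])
```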
The researchers also compared the speech parameters predicted by the neural decoder with the parameters encoded by the speech encoder (used as reference values). They report the average PCC values across participants (N=48) for several key speech parameters, including voice weight (used to distinguish vowels from consonants), loudness, pitch f0, the first formant f1, and the second formant f2. Accurately reconstructing these speech parameters, especially pitch, voice weight, and the first two formants, is essential for accurate speech decoding and for reconstructing speech that naturally imitates the participant's voice.
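For reference, the Pearson correlation coefficient between a decoded parameter trajectory and its reference trajectory can be computed as in the sketch below; the pitch-track example and array names are hypothetical placeholders.

```python
import numpy as np

def pearson_cc(pred, ref):
    """Pearson correlation between two 1-D trajectories (e.g. decoded vs. reference f0)."""
    pred = np.asarray(pred, dtype=float) - np.mean(pred)
    ref = np.asarray(ref, dtype=float) - np.mean(ref)
    return float((pred * ref).sum() / (np.linalg.norm(pred) * np.linalg.norm(ref) + 1e-12))

# Hypothetical example: decoded vs. reference pitch track for one trial.
rng = np.random.default_rng(0)
ref_f0 = 120 + 10 * np.sin(np.linspace(0, 4 * np.pi, 200))
decoded_f0 = ref_f0 + rng.normal(scale=3.0, size=200)    # noisy decoder output
print(round(pearson_cc(decoded_f0, ref_f0), 3))          # close to 1.0 for good decoding
```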
These findings show that both non-causal and causal models can achieve reasonable decoding results, which provides positive guidance for future research and applications.
Speech decoding from the left and right hemispheres and the effect of spatial sampling density
The researchers further compared speech decoding results between the left and right hemispheres. Most studies have focused on the left hemisphere, which dominates speech and language functions, and little is known about decoding language information from the right hemisphere. To address this, the researchers compared decoding performance between participants' left and right hemispheres to test the feasibility of using the right hemisphere for speech restoration.
Of the 48 participants in the study, the ECoG signals of 16 were recorded from the right hemisphere. Comparing the performance of the ResNet and Swin decoders, the researchers found that the right hemisphere also supports stable speech decoding (PCC of 0.790 for ResNet and 0.798 for Swin), with only a small gap relative to left-hemisphere decoding (Figure 3a).
The same finding held for the STOI evaluation. This implies that, for patients who have lost language ability due to left-hemisphere damage, restoring speech from right-hemisphere neural signals may be a feasible option.
Next, the researchers examined the effect of electrode sampling density on speech decoding. Previous studies have mostly used higher-density electrode grids (0.4 mm), whereas the electrode grids commonly used in clinical practice have lower density (LD, 1 cm).
Five participants used hybrid (HB) electrode grids (see Figure 3b), which are mainly low-density but include additional electrodes; the remaining forty-three participants all had low-density sampling. Decoding performance with these hybrid (HB) grids was similar to that of traditional low-density (LD) sampling, but slightly better on STOI.
The researchers also compared decoding using only the low-density electrodes versus all electrodes of the hybrid grids and found no significant difference between the two (see Figure 3d). This indicates that the model can learn speech information from cortex sampled at different spatial densities, and it also suggests that the sampling density commonly used in clinical practice may be sufficient for future brain-computer interface applications.
Contribution of different brain regions in the left and right hemispheres to speech decoding

Finally, the researchers examined the contribution of the brain's speech-related regions to the speech decoding process, which provides an important reference for the future implantation of speech restoration devices in either hemisphere. They used occlusion analysis to assess the contribution of different brain regions to speech decoding.
In short, if a region is critical to decoding, then occluding the electrode signals from that region (i.e., setting the signals to zero) will reduce the accuracy (PCC) of the reconstructed speech.
Using this method, the researchers measured the drop in PCC when each region was occluded. Comparing the causal and non-causal versions of the ResNet and Swin decoders revealed that the auditory cortex contributes more in the non-causal models. This underscores the need for causal models in real-time speech decoding applications, because real-time decoding cannot exploit neural feedback signals.
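A minimal sketch of this kind of occlusion analysis, under assumed shapes: zero out the electrodes assigned to one region, re-run the decoder, and record how much the score drops. The region-to-electrode mapping and the `decode_and_score` callable are hypothetical stand-ins for the paper's decoder and PCC computation.

```python
import numpy as np

def occlusion_contribution(ecog, regions, decode_and_score):
    """Estimate each region's contribution as the drop in score when its electrodes are zeroed.

    ecog: array of shape (electrodes, time)
    regions: dict mapping region name -> list of electrode indices
    decode_and_score: hypothetical callable returning a decoding score for an ECoG input
    """
    baseline = decode_and_score(ecog)
    contributions = {}
    for name, idx in regions.items():
        occluded = ecog.copy()
        occluded[idx, :] = 0.0                      # mask this region's electrodes
        contributions[name] = baseline - decode_and_score(occluded)
    return contributions

# Toy usage with a fake scorer that just rewards overall signal energy.
rng = np.random.default_rng(1)
ecog = rng.normal(size=(64, 200))
regions = {"sensorimotor": list(range(0, 32)), "superior_temporal": list(range(32, 64))}
fake_score = lambda x: float(np.abs(x).mean())      # stand-in for decoder + PCC
print(occlusion_contribution(ecog, regions, fake_score))
```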
In addition, the contribution of the sensorimotor cortex, especially its ventral portion, was similar in the right and left hemispheres, suggesting that implanting a neural prosthesis in the right hemisphere may be feasible.
Conclusions and Outlook
The researchers developed a new differentiable speech synthesizer that uses a lightweight convolutional neural network to encode speech into a series of interpretable speech parameters (such as pitch, loudness, and formant frequencies) and resynthesizes speech through the differentiable synthesizer.
By mapping neural signals to these speech parameters, the researchers built a highly interpretable neural speech decoding system that works in small-data settings and produces natural-sounding speech. The approach was highly reproducible across participants (48 in total), and the researchers demonstrated effective causal decoding with convolutional and Transformer (3D Swin) architectures, both of which outperformed the recurrent architecture (LSTM).
The framework handles both high and low spatial sampling densities and can process neural signals from either the left or the right hemisphere, demonstrating strong potential for speech decoding.
Most previous studies have not considered the temporal causality of decoding operations required in real-time brain-computer interface applications. Many non-causal models rely on auditory (sensory) feedback signals. The researchers' analysis shows that non-causal models depend heavily on contributions from the superior temporal gyrus, whereas causal models largely eliminate this dependence. The researchers argue that, because of this over-reliance on feedback signals, non-causal models have limited generality for real-time BCI applications.
Some approaches try to avoid feedback during training, for example by decoding speech the participant only imagines. Nevertheless, most studies still adopt non-causal models and cannot rule out the influence of feedback during training and inference. In addition, the recurrent neural networks widely used in the literature are usually bidirectional, which leads to non-causal behavior and prediction latency; the researchers' experiments showed that unidirectionally trained recurrent networks performed worst.
Although the study did not test real-time decoding, the researchers achieved speech synthesis from neural signals with a latency of under 50 milliseconds, which barely affects auditory feedback delay and allows normal speech production.
The study also examined whether higher-density coverage improves decoding performance. The researchers found that both low-density and higher (hybrid)-density grid coverage achieve high decoding performance (see Figure 3c). Moreover, decoding performance using all electrodes did not differ significantly from that using only the low-density electrodes (Figure 3d).
This demonstrates that, as long as perisylvian coverage is sufficient, the proposed ECoG decoder can extract speech parameters from neural signals for speech reconstruction even in low-density participants. Another notable finding is the contribution of right-hemisphere cortical structures, in particular the right perisylvian cortex, to speech decoding. Although some previous studies have shown that the right hemisphere may contribute to decoding vowels and sentences, the researchers' results provide evidence of robust speech representations in the right hemisphere.
The researchers also note some limitations of the current model. For example, the decoding pipeline requires speech training data paired with ECoG recordings, which may not be available for patients with aphasia. In the future, the researchers hope to develop model architectures that can handle non-grid data and to make better use of multi-patient, multimodal neural data.
Co-first authors: Xupeng Chen and Ran Wang; corresponding author: Adeen Flinker.
Funding: National Science Foundation Grants IIS-1912286 and 2309057 (Y.W., A.F.) and National Institutes of Health Grants R01NS109367, R01NS115929, and R01DC018805 (A.F.).
For more discussion of causality in neural speech decoding, see the authors' related paper "Distributed feedforward and feedback cortical processing supports human speech production": https://www.pnas.org/doi/10.1073/pnas.2300255120
Source: Brain-Computer Interface Community
