Wearable silent-speech systems hold significant potential for restoring communication in patients with speech impairments. However, seamless, coherent speech remains elusive, and clinical efficacy is still unproven. Here, we present an AI-driven intelligent throat (IT) system that integrates sensors for throat-muscle vibrations and carotid pulse signals with large language model (LLM) processing to enable fluent, emotionally expressive communication. The system uses ultrasensitive textile strain sensors to capture high-quality signals from the neck and supports token-level processing for real-time, continuous speech decoding, enabling communication with minimal delay. In tests with five stroke patients with dysarthria, the IT system's LLM agents intelligently corrected token errors and enriched sentence-level emotional and logical coherence, achieving low error rates (4.2% word error rate, 2.9% sentence error rate) and a 55% increase in user satisfaction. This work establishes a portable, intuitive communication platform for patients with dysarthria, with the potential for broad application across different neurological conditions and in multi-language support systems.