Passive brain-computer interfaces (BCIs) offer a potential source of implicit feedback for aligning large language models, but most mental state decoding to date has been carried out in controlled tasks. This paper investigates whether established EEG classifiers for mental workload and implicit agreement transfer to spoken human-AI dialogue. We introduce two conversational paradigms (a Spelling Bee task and a sentence completion task) and an end-to-end pipeline for transcribing, annotating, and aligning word-level conversational events with continuous EEG classifier output. In a pilot study, workload decoding showed interpretable trends during spoken interaction, supporting cross-paradigm transfer. For implicit agreement, we demonstrate continuous application and precise temporal alignment to conversational events, while identifying limitations related to construct transfer and the asynchronous application of event-based classifiers. Overall, the results establish the feasibility of, and constraints on, integrating passive BCI signals into conversational AI systems.
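
To make the alignment step concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of how word-level conversational events from a transcript might be aligned with a continuous EEG classifier output stream, here by averaging classifier probabilities in a short window after each word onset. All names (`WordEvent`, `ClassifierSample`, `p_high_workload`, the 1-second window) are illustrative assumptions.

```python
# Hypothetical sketch: align word-level events with continuous classifier output
# by averaging the classifier probability in a window after each word onset.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class WordEvent:
    word: str
    onset_s: float           # word onset from the transcript, in seconds

@dataclass
class ClassifierSample:
    time_s: float             # timestamp of the classifier output sample
    p_high_workload: float    # assumed probability of a "high workload" class

def align_events(events: List[WordEvent],
                 stream: List[ClassifierSample],
                 window_s: float = 1.0) -> List[dict]:
    """For each word event, average classifier output over [onset, onset + window_s]."""
    aligned = []
    for ev in events:
        in_window = [s.p_high_workload for s in stream
                     if ev.onset_s <= s.time_s <= ev.onset_s + window_s]
        mean_p: Optional[float] = sum(in_window) / len(in_window) if in_window else None
        aligned.append({"word": ev.word, "onset_s": ev.onset_s,
                        "mean_p_high_workload": mean_p})
    return aligned

if __name__ == "__main__":
    # Toy data: a 4 Hz classifier stream and two transcribed words.
    stream = [ClassifierSample(t / 4.0, 0.5 + 0.1 * (t % 3)) for t in range(40)]
    events = [WordEvent("spell", 2.0), WordEvent("cat", 5.5)]
    for row in align_events(events, stream):
        print(row)
```

In practice, the window length and aggregation rule would depend on the classifier's output rate and the latency of the underlying EEG response; this sketch only illustrates the event-to-stream mapping described in the abstract.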