Passive brain-computer interfaces offer a potential source of implicit feedback for aligning large language models, but most mental state decoding has been studied in controlled tasks. This paper investigates whether established EEG classifiers for mental workload and implicit agreement can be transferred to spoken human-AI dialogue. We introduce two conversational paradigms - a Spelling Bee task and a sentence completion task - and an end-to-end pipeline for transcribing, annotating, and aligning word-level conversational events with continuous EEG classifier output. In a pilot study, workload decoding showed interpretable trends during spoken interaction, supporting cross-paradigm transfer. For implicit agreement, we demonstrate continuous application and precise temporal alignment to conversational events, while identifying limitations related to construct transfer and the asynchronous application of event-based classifiers. Overall, the results establish both the feasibility of and the constraints on integrating passive BCI signals into conversational AI systems.