We study timestamped speaker-attributed ASR for long-form, multi-party speech with overlap, where chunk-wise inference must produce timestamped, speaker-labeled transcripts while preserving meeting-level speaker identity consistency. Previous Speech-LLM systems tend to prioritize either local diarization or global labeling, and often fail to capture fine-grained temporal boundaries or to link speaker identities robustly across chunks. We propose G-STAR, an end-to-end system that couples a time-aware speaker-tracking module with a Speech-LLM transcription backbone. The tracker provides structured speaker cues with temporal grounding, and the LLM generates speaker-attributed text conditioned on these cues. G-STAR supports both component-wise optimization and joint end-to-end training, enabling flexible learning under heterogeneous supervision and domain shift. Experiments analyze cue fusion, local-versus-long-context trade-offs, and hierarchical objectives.