Self-annotation is the gold standard for collecting affective state labels in affective computing. Existing methods typically rely on full annotation, requiring users to continuously label affective states across entire sessions. While this process yields fine-grained data, it is time-consuming, cognitively demanding, and prone to fatigue and errors. To address these issues, we present PREFAB, a low-budget retrospective self-annotation method that targets affective inflection regions rather than full annotation. Grounded in the peak-end rule and ordinal representations of emotion, PREFAB employs a preference-learning model to detect relative affective changes, directing annotators to label only selected segments while interpolating the remainder of the stimulus. We further introduce a preview mechanism that provides brief contextual cues to assist annotation. We evaluate PREFAB through a technical performance study and a 25-participant user study. Results show that PREFAB outperforms baselines in modeling affective inflections while reducing workload (and, under certain conditions, temporal burden). Importantly, PREFAB improves annotator confidence without degrading annotation quality.