Data annotation is essential for supervised learning, yet producing accurate, unbiased, and scalable labels remains challenging as datasets grow in size and modality. Traditional human-centric pipelines are costly, slow, and prone to annotator variability, motivating reliability-aware automated annotation. We present AURA (Agentic AI for Unified Reliability Modeling and Annotation Aggregation), an agentic AI framework for large-scale, multi-modal data annotation. AURA coordinates multiple AI agents to generate and validate labels without requiring ground truth. At its core, AURA adapts a classical probabilistic model that jointly infers latent true labels and per-annotator reliability, represented as confusion matrices, using Expectation-Maximization to reconcile conflicting annotations and aggregate noisy predictions. Across the four benchmark datasets evaluated, AURA achieves accuracy improvements of up to 5.8% over the baseline. In more challenging settings with poor-quality annotators, the improvement reaches up to 50% over the baseline. AURA also accurately estimates annotator reliability, enabling assessment of annotator quality without any pre-validation steps.
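The classical probabilistic model referenced above is in the style of Dawid and Skene: EM alternates between estimating a posterior over each item's true label and re-estimating each annotator's confusion matrix. The sketch below is a minimal, generic illustration of that aggregation step, not AURA's actual implementation; all function and parameter names are illustrative assumptions.

```python
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50, smooth=1e-6):
    """Dawid-Skene-style EM aggregation of noisy annotations (illustrative sketch).

    labels: int array of shape (n_items, n_annotators); -1 marks a missing vote.
    Returns (posterior over true labels, shape (n_items, n_classes),
             confusion matrices, shape (n_annotators, n_classes, n_classes)).
    """
    n_items, n_annot = labels.shape
    # One-hot encode the votes; missing votes contribute all-zero rows.
    votes = np.zeros((n_items, n_annot, n_classes))
    mask = labels >= 0
    votes[np.arange(n_items)[:, None],
          np.arange(n_annot)[None, :],
          np.clip(labels, 0, None)] = mask

    # Initialize posteriors with a soft majority vote.
    T = votes.sum(axis=1)
    T = T / T.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and per-annotator confusion matrices,
        # where pi[j, k, l] = P(annotator j reports l | true class is k).
        prior = T.mean(axis=0)
        pi = np.einsum('ik,ijl->jkl', T, votes) + smooth
        pi = pi / pi.sum(axis=2, keepdims=True)

        # E-step: posterior over true labels, computed in log space for stability.
        logT = np.log(prior + smooth)[None, :] \
             + np.einsum('ijl,jkl->ik', votes, np.log(pi))
        logT -= logT.max(axis=1, keepdims=True)
        T = np.exp(logT)
        T = T / T.sum(axis=1, keepdims=True)
    return T, pi
```

The estimated confusion matrices double as reliability scores: a near-diagonal matrix marks a trustworthy annotator, while a flat or off-diagonal one marks noise, which is how this family of models can grade annotators without pre-validation.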