Pre-trained language models have profoundly impacted the field of extractive question-answering, leveraging large-scale textual corpora to enhance contextual language understanding. Despite their success, these models struggle in complex scenarios that demand nuanced interpretation or inferential reasoning beyond immediate textual cues. Furthermore, their size poses deployment challenges on resource-constrained devices. Addressing these limitations, we introduce a two-stage Learning-to-Defer mechanism, adapted to question-answering, that enhances decision-making by enabling selective deference to human experts or larger models without retraining the underlying language models. This approach not only maintains computational efficiency but also significantly improves model reliability and accuracy in ambiguous contexts. We establish the theoretical soundness of our methodology by proving the Bayes-consistency and $(\mathcal{H}, \mathcal{R})$-consistency of our surrogate loss function, guaranteeing the optimality of the final solution. Empirical evaluations on the SQuADv2 dataset demonstrate performance gains from integrating human expertise and leveraging larger models. Our results further show that deferring only a minimal number of queries allows the smaller model to achieve performance comparable to its larger counterparts while preserving computational efficiency, thus broadening the applicability of pre-trained language models in diverse operational environments.
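To make the deferral mechanism concrete, the sketch below illustrates a two-stage setup in PyTorch: the QA model stays frozen, and only a small rejector head is trained to score each candidate agent (index 0 for the model itself, higher indices for human experts or larger models) and route each query to the most promising one. The `Rejector` class, the cost weighting, and the cross-entropy-style surrogate are illustrative assumptions for exposition, not the paper's exact surrogate loss or its consistency construction.

```python
import torch
import torch.nn as nn

class Rejector(nn.Module):
    """Hypothetical rejector head: scores each agent for a given query.

    Agent 0 is the frozen small QA model; agents 1..J are deferral targets
    (human experts or larger models). Only this head is trained; the
    language models themselves are never retrained.
    """
    def __init__(self, feat_dim: int, num_agents: int):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_agents)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.head(features)  # (batch, num_agents) raw scores

def surrogate_loss(scores: torch.Tensor, agent_costs: torch.Tensor) -> torch.Tensor:
    """Illustrative cross-entropy-style surrogate (an assumption, not the
    paper's loss): agent_costs[i, j] is an observed cost for agent j on
    query i, e.g. 1 - F1 of its answer plus a consultation penalty.
    Lower-cost agents receive higher weight, pushing the rejector's
    softmax mass toward them."""
    weights = agent_costs.max(dim=1, keepdim=True).values - agent_costs
    log_probs = torch.log_softmax(scores, dim=1)
    return -(weights * log_probs).sum(dim=1).mean()

def route(scores: torch.Tensor) -> torch.Tensor:
    """At inference, send each query to its highest-scoring agent:
    index 0 -> answer with the small model, j > 0 -> defer to agent j."""
    return scores.argmax(dim=1)

# Toy usage with random features and costs.
if __name__ == "__main__":
    rejector = Rejector(feat_dim=768, num_agents=3)
    feats = torch.randn(4, 768)   # e.g., pooled encoder embeddings of queries
    costs = torch.rand(4, 3)      # per-agent costs observed on training data
    loss = surrogate_loss(rejector(feats), costs)
    loss.backward()               # updates only the rejector head
    print(route(rejector(feats))) # chosen agent index per query
```

The key design choice this sketch reflects is that training is confined to the rejector, which is what lets the approach wrap frozen, pre-trained QA models without any retraining.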