In this study, we aim to reduce generation latency for Named Entity Recognition (NER) with Large Language Models (LLMs). The main cause of high latency in LLMs is the sequential decoding process, which autoregressively generates all labels and mentions for NER, significantly increasing the sequence length. To this end, we introduce Parallel Decoding in LLM for NER (PaDeLLM-NER), an approach that integrates seamlessly into existing generative model frameworks without requiring additional modules or architectural modifications. PaDeLLM-NER allows all mentions to be decoded simultaneously, thereby reducing generation latency. Experiments show that PaDeLLM-NER significantly accelerates inference, achieving speeds 1.76 to 10.22 times faster than the autoregressive approach for both English and Chinese. At the same time, it maintains prediction quality, performing on par with the state of the art across various datasets.
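The parallel-decoding idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `decode` function below is a hypothetical stand-in for a real LLM call, and the two-step scheme (first query the number of mentions per label, then decode each mention independently so no sequence waits on another) is an assumed simplification of the approach.

```python
# Minimal sketch of parallel mention decoding, assuming a two-step scheme:
# (1) one short query per label for the mention count, (2) one independent
# query per (label, index) pair. `decode` is a hypothetical stub standing
# in for an actual model.generate() call.
from concurrent.futures import ThreadPoolExecutor

def decode(prompt: str) -> str:
    """Hypothetical LLM call; canned answers stand in for real generation."""
    canned = {
        "count PER": "2",
        "count LOC": "1",
        "PER #1": "Alice",
        "PER #2": "Bob",
        "LOC #1": "Paris",
    }
    return canned[prompt]

def parallel_ner(labels: list[str]) -> dict[str, list[str]]:
    # Step 1: one short query per label to get its number of mentions.
    with ThreadPoolExecutor() as pool:
        counts = dict(
            zip(labels, pool.map(lambda l: int(decode(f"count {l}")), labels))
        )
    # Step 2: every (label, index) pair decodes its mention independently,
    # so the longest single mention, not the sum, bounds the latency.
    tasks = [(l, i) for l in labels for i in range(1, counts[l] + 1)]
    with ThreadPoolExecutor() as pool:
        mentions = pool.map(lambda t: (t[0], decode(f"{t[0]} #{t[1]}")), tasks)
    result: dict[str, list[str]] = {l: [] for l in labels}
    for label, mention in mentions:
        result[label].append(mention)
    return result
```

For example, `parallel_ner(["PER", "LOC"])` aggregates the independently decoded mentions into `{"PER": ["Alice", "Bob"], "LOC": ["Paris"]}`. In a real deployment the per-mention queries would be batched on the accelerator rather than threaded, but the latency argument is the same: total generation time is governed by the longest mention sequence instead of the full concatenated output.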