Small language models (SLMs) have emerged as efficient alternatives to large language models for task-specific applications. However, they are often deployed in high-volume, low-latency settings, where efficiency is crucial. We propose TASC (Task-Adaptive Sequence Compression), a framework for SLM acceleration comprising two use cases. For SLM fine-tuning, we propose TASC-ft, which iteratively enriches the tokenizer vocabulary with high-frequency output n-grams and then fine-tunes the model to utilize the expanded vocabulary. For inference, we propose TASC-spec, a lightweight, training-free speculative decoding method that constructs an n-gram draft model from the task's output corpus, mixing task- and context-level n-gram information. TASC-spec requires no additional training and bypasses draft-target vocabulary alignment constraints. We demonstrate the effectiveness of both methods across multiple generation tasks with low output variability. Our methods yield consistent improvements in inference efficiency while maintaining task performance.
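To make the TASC-spec idea concrete, the following is a minimal sketch (not the paper's implementation) of a task-corpus n-gram draft model for speculative decoding: it counts continuations of (n-1)-token contexts over the task's output corpus and greedily proposes draft tokens that a target model would then verify in a single forward pass. Function names, the context length `n`, and the draft length `k` are illustrative assumptions; the paper's method additionally mixes in context-level n-gram statistics, which this sketch omits.

```python
from collections import Counter, defaultdict

def build_ngram_table(corpus_token_ids, n=3):
    """Map each (n-1)-token context to its most frequent next token,
    counted over a list of token-id sequences from the task's output corpus.
    (Illustrative sketch; not the authors' implementation.)"""
    counts = defaultdict(Counter)
    for seq in corpus_token_ids:
        for i in range(len(seq) - n + 1):
            context = tuple(seq[i : i + n - 1])
            counts[context][seq[i + n - 1]] += 1
    # Keep only the argmax continuation per context for O(1) drafting.
    return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

def draft_tokens(table, prefix, k=4, n=3):
    """Greedily propose up to k draft tokens from the n-gram table;
    in speculative decoding, the target model verifies these drafts
    and accepts the longest matching prefix."""
    drafted = []
    context = list(prefix[-(n - 1):])
    for _ in range(k):
        nxt = table.get(tuple(context))
        if nxt is None:  # unseen context: stop drafting, fall back to the target model
            break
        drafted.append(nxt)
        context = (context + [nxt])[-(n - 1):]
    return drafted
```

Because the table is built purely from token ids emitted by the target model's own tokenizer, the draft and target trivially share a vocabulary, which is how an n-gram drafter sidesteps the draft-target vocabulary alignment constraint mentioned above.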