The advancement of Large Language Models (LLMs) has significantly boosted performance in natural language processing (NLP) tasks. However, deploying high-performance LLMs incurs substantial costs, primarily due to the growing number of parameters used to enhance model performance, making state-of-the-art LLMs increasingly expensive for end-users. AI service providers such as OpenAI and Anthropic often offer multiple versions of LLMs at varying prices and performance levels, yet end-users still struggle to choose the LLM that best balances result quality with cost for their tasks. We introduce SMART (Scaling Models Adaptively for Reduced Token Fees), a novel LLM framework designed to minimize the inference costs of NLP tasks while ensuring sufficient result quality. It enables users to specify an accuracy constraint in terms of the equivalence of outputs to those of the most powerful LLM; SMART then generates results that deviate from that LLM's outputs only with a probability below a user-defined threshold. SMART employs a profiling phase that evaluates the performance of multiple LLMs to identify those that meet the user-defined accuracy level, and it optimizes the tradeoff between profiling overhead and the cost savings that profiling is expected to yield. Moreover, our approach further reduces inference costs by strategically leveraging a mix of LLMs. Experiments on three real-world datasets show that, using OpenAI models, SMART achieves significant cost savings, up to 25.6x compared to GPT-4.