In this paper, we introduce LiveMind, a novel low-latency inference framework that enables large language models (LLMs) to perform inference with incomplete user input. By reallocating computation to the input phase, the framework substantially reduces latency, significantly enhancing the interactive experience for LLM users. It manages the visibility of the streaming input to the model, allowing the model either to infer from the incomplete input or to await additional content. Compared with conventional inference on complete user input, our approach reduces response latency by an average of 84.0% on the MMLU dataset and 71.6% on the MMLU-Pro dataset, while maintaining comparable accuracy. Additionally, the framework supports collaborative inference and output across different models: by employing a large LLM for inference and a small LLM for output, we achieve an average 37% reduction in response latency, alongside a 4.30% improvement in accuracy, on the MMLU-Pro dataset compared with the baseline. The proposed LiveMind framework advances human-AI interaction by enabling more responsive and efficient communication between users and AI systems.
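The core idea of inferring on streaming, incomplete input can be illustrated with a minimal sketch. Everything here is hypothetical (the gating rule `is_complete_clause`, the function names, the clause-boundary heuristic) and is not the LiveMind implementation; it only shows the control flow of acting on a partial prompt versus awaiting more content.

```python
def is_complete_clause(text: str) -> bool:
    # Hypothetical gating rule: treat the visible prefix as actionable
    # only when it ends at a clause boundary.
    return text.rstrip().endswith((".", ",", ";", "?"))

def streaming_inference(segments):
    """Consume input segments as they arrive; trigger an inference step
    on each complete clause instead of waiting for the full prompt."""
    visible = ""
    notes = []
    for seg in segments:
        visible += seg
        if is_complete_clause(visible):
            # Placeholder for an LLM call on the partial prompt; a real
            # system would cache the intermediate reasoning for reuse.
            notes.append(f"inferred on {len(visible)} chars")
        # Otherwise: await additional content before acting.
    return notes

print(streaming_inference(
    ["Alice has 3 apples, ", "Bob gives her 2 more. ", "How many now?"]
))
```

In this toy version, latency is hidden because most reasoning happens while the user is still typing; only the final segment requires new computation after input completes.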