Deploying Large Language Models (LLMs) on edge devices enhances privacy but faces performance hurdles due to limited resources. We introduce a systematic methodology to evaluate on-device LLMs, balancing capability, efficiency, and resource constraints. Through an extensive analysis of models (0.5B-14B) and seven post-training quantization (PTQ) methods on commodity hardware, we demonstrate that: 1) Heavily quantized large models consistently outperform smaller, high-precision models, with a performance threshold at ~3.5 effective bits-per-weight (BPW); 2) Resource utilization scales linearly with BPW, though power and memory footprints vary by quantization algorithm; and 3) With a reduction in model size, the primary constraint on throughput transitions from communication overhead to computational latency. We conclude by offering guidelines for optimizing LLMs in resource-constrained edge environments. Our codebase is available at https://anonymous.4open.science/r/LLMOnDevice/.