In machine learning (ML), the inference phase is the process of applying pre-trained models to new, unseen data in order to make predictions. During inference, end-users interact with ML services to obtain insights, recommendations, or actions based on their input data. For this reason, serving strategies are crucial for deploying and managing models effectively in production environments. These strategies ensure that models are available, scalable, reliable, and performant for real-world applications such as time series forecasting, image classification, and natural language processing. In this paper, we evaluate the performance of five widely used model serving frameworks (TensorFlow Serving, TorchServe, MLServer, MLflow, and BentoML) under four different scenarios (malware detection, cryptocurrency price forecasting, image classification, and sentiment analysis). We demonstrate that TensorFlow Serving outperforms all the other frameworks at serving deep learning (DL) models. Moreover, we show that the DL-specific frameworks (TensorFlow Serving and TorchServe) display significantly lower latencies than the three general-purpose ML frameworks (BentoML, MLflow, and MLServer).