The deployment of Machine Learning models in the cloud has grown among tech companies. Hardware requirements rise when these models involve Deep Learning techniques, and cloud providers' costs may become a barrier. We explore the deployment of Deep Learning models, using the GECToR model, a Deep Learning solution for Grammatical Error Correction, as the subject of our experiments across three major cloud providers (Amazon Web Services, Google Cloud Platform, and Microsoft Azure). We evaluate real-time latency, hardware usage, and cost at each cloud provider across 7 execution environments, with 10 reproduced experiments. We found that while Graphics Processing Units (GPUs) excel in performance, their average cost was 300% higher than that of solutions without a GPU. Our analysis also suggests that processor cache size is a key variable for CPU-only deployments: setups with sufficient cache achieved a 50% cost reduction compared to GPU-based deployments. This study indicates the feasibility and affordability of cloud-based Deep Learning inference solutions without a GPU, benefiting resource-constrained users such as startups and small research groups.