In today's Function-as-a-Service offerings, a programmer is usually responsible for configuring a function's memory for successful execution, and this setting also allocates proportional resources such as CPU and network bandwidth. However, right-sizing function memory forces developers to speculate about performance and make ad-hoc configuration decisions. Recent research has highlighted that a function's input characteristics, such as input size, type, and number of inputs, significantly impact its resource demand, run-time performance, and cost under fluctuating workloads. This correlation makes memory configuration a non-trivial task. Accordingly, an input-aware function memory allocator not only improves developer productivity by completely hiding resource-related decisions but also creates an opportunity to reduce resource wastage and offer a finer-grained, cost-optimised pricing scheme. Therefore, we present MemFigLess, a serverless solution that estimates the memory requirement of a serverless function with input awareness. The framework profiles functions in an offline stage and trains a multi-output Random Forest Regression model on the collected metrics to invoke input-aware optimal configurations. We evaluate our work against state-of-the-art approaches on the AWS Lambda service and find that MemFigLess captures input-aware resource relationships, allocates up to 82% fewer resources, and saves up to 87% in run-time costs.
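To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of the multi-output Random Forest approach described above: a regressor is trained on hypothetical offline profiling data that maps input features and a candidate memory setting to resource and latency targets, and then the smallest memory configuration whose predicted peak usage fits is selected. All feature names, the synthetic data, and the selection rule are illustrative assumptions.

```python
# Hedged sketch of input-aware memory selection with a multi-output
# Random Forest; the profiling data and feature set are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical offline profiling samples:
# features = [input_size_mb, num_inputs, configured_memory_mb]
X = rng.uniform([1, 1, 128], [100, 10, 3008], size=(500, 3))
# Multi-output targets = [peak_memory_used_mb, duration_ms]
y = np.column_stack([
    0.8 * X[:, 0] * X[:, 1] + 64,       # peak memory grows with input
    50 + 2000 * X[:, 0] / X[:, 2],      # duration shrinks with memory
])

# RandomForestRegressor natively supports multi-output regression.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# For an unseen input, predict demand at each candidate memory size
# and pick the cheapest configuration whose predicted peak usage fits.
candidates = [128, 256, 512, 1024, 2048]
size_mb, n_inputs = 40.0, 4
preds = {m: model.predict([[size_mb, n_inputs, m]])[0] for m in candidates}
feasible = [m for m in candidates if preds[m][0] <= m]
chosen = min(feasible) if feasible else max(candidates)
print(chosen)
```

The selection rule here is a simple feasibility filter over discrete memory tiers; the actual framework may optimise a richer cost model over the predicted outputs.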