The Linux kernel is a huge code base with an enormous number of subsystems and possible configuration options, which makes elaborating an efficient configuration unmanageably complex. Machine Learning (ML) is an approach of learning from data, finding patterns, and making predictions without developers implementing explicit algorithms, and it can introduce a self-evolving capability into the Linux kernel. However, introducing ML approaches into the Linux kernel is not easy: floating-point operations (FPU) cannot be used directly in kernel space, and ML models can potentially cause significant performance degradation in the kernel. This paper suggests an ML infrastructure architecture for the Linux kernel that can solve the declared problem and enable employing ML models in kernel space. The suggested kernel ML library approach has been implemented as a Proof of Concept (PoC) project with the goal of demonstrating the feasibility of the suggestion and designing the interface of interaction between the kernel-space ML model proxy and the user-space ML model thread.
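To make the split between the kernel-space proxy and the user-space ML thread concrete, the following is a minimal sketch of one plausible wire protocol for such an interface. It is an assumption for illustration, not the paper's actual API: since kernel space avoids the FPU, features and predictions are assumed to cross the boundary as Q16.16 fixed-point integers, and the names (`to_fixed`, `predict_fixed`, the toy linear model and its weights) are hypothetical.

```c
/* Hypothetical sketch of a kernel<->user ML proxy data exchange.
 * Assumption: values cross the kernel/user boundary as Q16.16 fixed-point
 * integers, so the kernel-space proxy never performs FPU operations;
 * only the user-space ML thread works with floating point. */
#include <stdint.h>

#define FIX_SHIFT 16  /* Q16.16 fixed-point: an assumed wire format */

/* Convert a double to Q16.16 and back (user-space side only). */
static inline int32_t to_fixed(double x)   { return (int32_t)(x * (1 << FIX_SHIFT)); }
static inline double  to_double(int32_t q) { return (double)q / (1 << FIX_SHIFT); }

/* User-space ML thread: a toy linear model y = 0.5*x0 + 0.25*x1 stands in
 * for a real ML model.  It receives fixed-point features from the kernel
 * proxy and returns a fixed-point prediction. */
int32_t predict_fixed(const int32_t *features, int n)
{
    static const double w[2] = { 0.5, 0.25 };  /* assumed toy weights */
    double y = 0.0;
    for (int i = 0; i < n && i < 2; i++)
        y += w[i] * to_double(features[i]);
    return to_fixed(y);
}
```

In this sketch the kernel-space proxy would only marshal `int32_t` feature vectors out and predictions back, which keeps all floating-point work, and any model-induced latency, in the user-space thread.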