Motivated by the progress made by large language models (LLMs), we introduce the framework of verbalized machine learning (VML). In contrast to conventional machine learning models that are typically optimized over a continuous parameter space, VML constrains the parameter space to be human-interpretable natural language. Such a constraint leads to a new perspective on function approximation, where an LLM with a text prompt can be viewed as a function parameterized by that text prompt. Guided by this perspective, we revisit classical machine learning problems, such as regression and classification, and find that these problems can be solved by an LLM-parameterized learner and optimizer. The major advantages of VML include (1) easy encoding of inductive bias: prior knowledge about the problem and hypothesis class can be encoded in natural language and fed into the LLM-parameterized learner; (2) automatic model class selection: the optimizer can automatically select a concrete model class based on data and verbalized prior knowledge, and it can update the model class during training; and (3) interpretable learner updates: the LLM-parameterized optimizer can provide explanations for why each learner update is performed. We conduct several studies to empirically evaluate the effectiveness of VML, and hope that VML can serve as a stepping stone to stronger interpretability and trustworthiness in machine learning.
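The learner/optimizer setup described above can be sketched as a short program. This is a minimal, hypothetical illustration rather than the paper's implementation: the `llm` function below is a hard-coded stub standing in for a real LLM API call, and all prompt templates and names (`learner`, `optimizer_step`, `theta`) are assumptions made for illustration.

```python
def llm(prompt: str) -> str:
    # Stub standing in for a real LLM call, so the sketch is runnable.
    # In practice this would query an actual language model.
    if "You are the optimizer" in prompt:
        return "Predict y by doubling x."  # a verbalized parameter update
    return "0"  # the learner's (initially poor) prediction

def learner(theta: str, x: float) -> str:
    # f(x; theta): the LLM acts as a function parameterized by the
    # natural-language prompt theta, which plays the role of the model.
    return llm(f"{theta}\nInput: {x}\nOutput:")

def optimizer_step(theta: str, batch, preds) -> str:
    # The optimizer is itself an LLM: it reads the current verbalized
    # model and the prediction errors, then returns a revised prompt.
    errors = "\n".join(
        f"x={x}, target={y}, predicted={p}" for (x, y), p in zip(batch, preds)
    )
    return llm(
        f"You are the optimizer. Current model description:\n{theta}\n"
        f"Training errors:\n{errors}\nRewrite the model description:"
    )

theta = "Predict y from x."   # initial verbalized model (the "parameters")
batch = [(1, 2), (3, 6)]      # toy regression data with y = 2x
preds = [learner(theta, x) for x, _ in batch]
theta = optimizer_step(theta, batch, preds)
print(theta)  # the updated model is itself human-readable text
```

Because the updated `theta` is plain language, each optimization step doubles as an explanation of what the model now believes, which is the interpretability property the abstract highlights.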