In this paper we describe an efficient method for giving a regression model a sense of curiosity about its data. In machine learning, the standard framework for this kind of curiosity is Active Learning, which concerns the problem of automatically choosing the data points for which to query labels in the semi-supervised setting. The methods we propose are based on computing a "regularity tangent" vector that can be calculated alongside the model's parameter vector during training, at only a constant-factor slow-down. We then take the inner product of this tangent vector with the gradient of the model's loss at a given data point to obtain a measure of that point's influence on the complexity of the model. In the simplest instantiation, there is a single regularity tangent vector of the same dimension as the parameter vector. Thus, once training is complete, evaluating our "curiosity" about a potential query point is as fast as computing the model's loss gradient at that point, and storing the new vector only doubles the model's memory footprint. We show that the quantity computed by our technique is an example of an "influence function", and that it measures the expected squared change in model complexity incurred by up-weighting a given data point. Finally, we propose several ways to use this and related quantities to choose new training data points for a regression model.
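As a rough illustration of the scoring step described above, the sketch below computes a curiosity score for candidate query points as the inner product of a tangent vector with the per-point loss gradient. The abstract does not define the regularity tangent's construction, so the `reg_tangent` used here (the gradient of an L2 regularizer at the trained parameters) is a hypothetical stand-in, not the paper's actual quantity; the model, data, and all variable names are likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression: per-point loss  L_i(theta) = 0.5 * (x_i @ theta - y_i)^2
X = rng.normal(size=(50, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.normal(size=50)

# Fit ridge regression (L2-regularized least squares) in closed form.
lam = 0.1
theta = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Hypothetical stand-in for the paper's regularity tangent: the gradient of
# the regularizer 0.5*lam*||theta||^2 at the trained parameters. The actual
# construction in the paper is different and computed during training.
reg_tangent = lam * theta

def curiosity(x, y_val):
    """Inner product of the tangent vector with the per-point loss gradient."""
    grad = (x @ theta - y_val) * x  # d/dtheta of 0.5*(x@theta - y)^2
    return float(reg_tangent @ grad)

# Score candidate query points; a larger magnitude suggests a larger
# influence on model complexity, i.e. a more "interesting" point to label.
candidates = rng.normal(size=(5, 3))
scores = [abs(curiosity(x, x @ theta_true)) for x in candidates]
best = int(np.argmax(scores))
```

Note that each score costs one gradient evaluation plus a dot product, matching the abstract's claim that scoring a point is as fast as computing the loss gradient there.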