Decision Trees (DTs) are commonly used for many machine learning tasks due to their high degree of interpretability. However, learning a DT from data is a difficult optimization problem, as it is non-convex and non-differentiable. Therefore, common approaches learn DTs using a greedy growth algorithm that minimizes the impurity locally at each internal node. Unfortunately, this greedy procedure can lead to inaccurate trees. In this paper, we present a novel approach for learning hard, axis-aligned DTs with gradient descent. The proposed method uses backpropagation with a straight-through operator on a dense DT representation to jointly optimize all tree parameters. Our approach outperforms existing methods on binary classification benchmarks and achieves competitive results for multi-class tasks. The method is available at: https://github.com/s-marton/GradTree
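The core idea, a straight-through operator over hard splits, can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that); it only shows the general straight-through trick for a single axis-aligned split node, assuming a sigmoid relaxation of the step function. All function names here are hypothetical.

```python
import math

def soft_split(x, threshold, temperature=1.0):
    # Smooth (sigmoid) relaxation of the routing decision at a split node:
    # probability of sending sample x to the right child.
    return 1.0 / (1.0 + math.exp(-(x - threshold) / temperature))

def hard_split(x, threshold):
    # Hard axis-aligned split: route right (1.0) iff the feature value
    # exceeds the threshold. Non-differentiable in `threshold`.
    return 1.0 if x > threshold else 0.0

def straight_through_split(x, threshold):
    # Straight-through trick: the forward pass uses the hard, axis-aligned
    # split, while the backward pass pretends the soft sigmoid was used,
    # so the gradient w.r.t. the threshold is nonzero and the threshold
    # can be trained with gradient descent.
    value = hard_split(x, threshold)
    s = soft_split(x, threshold)
    # d/d(threshold) of sigmoid(x - threshold) = -s * (1 - s)
    grad_threshold = -s * (1.0 - s)
    return value, grad_threshold
```

In an autodiff framework the same effect is typically achieved by forwarding the hard value while detaching the gradient path, e.g. `hard + (soft - soft.detach())` in PyTorch-style pseudocode, so the whole tree can be optimized jointly by backpropagation.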