Decades after their inception, random forests continue to provide state-of-the-art accuracy in a variety of learning problems, outperforming in this respect alternative machine learning algorithms such as decision trees or even neural networks. However, being an ensemble method, the one aspect where random forests tend to severely underperform decision trees is interpretability. In the present work, we propose a post-hoc approach that aims to have the best of both worlds: the accuracy of random forests and the interpretability of decision trees. To this end, we present two forest-pruning methods to find an optimal sub-forest within a given random forest, and then, when applicable, combine the selected trees into one. Our first method relies on constrained exhaustive search, while our second method is based on an adaptation of the LASSO methodology. Extensive experiments on synthetic and real-world datasets show that, in the majority of scenarios, at least one of the two proposed methods is more accurate than the original random forest while using only a small fraction of the trees, which aids the interpretability of the results. Compared to current state-of-the-art forest-pruning methods, namely sequential forward selection and (a variation of) sequential backward selection, our methods tend to outperform both in terms of accuracy, number of trees employed, or both.
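The LASSO-based pruning idea can be illustrated with a minimal sketch: treat each tree's predictions on a held-out set as a feature column and fit a LASSO regression of the targets on those columns, so that trees receiving zero weight are dropped from the ensemble. This is an illustrative simplification using scikit-learn, not the paper's exact adaptation; the dataset, `alpha` value, and non-negativity constraint are assumptions chosen for the example.

```python
# Illustrative LASSO-based forest pruning (a simplified sketch, not the
# paper's exact method): select a sparse sub-forest by regressing the
# target on per-tree predictions with an L1 penalty.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Each column of P holds one tree's predictions on the validation set.
P = np.column_stack([tree.predict(X_val) for tree in forest.estimators_])

# L1 penalty drives most tree weights to zero; positive=True keeps the
# surviving weights interpretable as (unnormalized) voting weights.
lasso = Lasso(alpha=1.0, positive=True).fit(P, y_val)

selected = np.flatnonzero(lasso.coef_)
print(f"kept {selected.size} of {len(forest.estimators_)} trees")
```

The trees indexed by `selected`, weighted by the corresponding LASSO coefficients, form the pruned sub-forest; a larger `alpha` yields fewer trees at some cost in accuracy.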