Bagging and boosting are two popular ensemble methods in machine learning (ML) that produce many individual decision trees. Owing to their inherent ensemble character, these methods typically outperform single decision trees and other ML models in predictive performance. However, each decision tree generates numerous decision paths, which increases the overall complexity of the model and hinders its use in domains that require trustworthy and explainable decisions, such as finance, social care, and health care. Thus, the interpretability of bagging and boosting algorithms, such as random forest and adaptive boosting, decreases as the number of decisions grows. In this paper, we propose VisRuler, a visual analytics tool that assists users in extracting decisions from such ML models via a thorough visual inspection workflow: selecting a set of robust and diverse models (originating from different ensemble learning algorithms), choosing important features according to their global contribution, and deciding which decisions are essential for a global explanation (or, locally, for specific cases). The outcome is a final decision based on the class agreement of several models and the manually explored decisions exported by users. We evaluated the applicability and effectiveness of VisRuler via a use case, a usage scenario, and a user study. The evaluation revealed that most users managed to use our system to explore decision rules visually, performing the proposed tasks and answering the given questions satisfactorily.
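To make concrete what "extracting decisions" from a tree ensemble entails, the sketch below traverses every tree in a trained random forest and collects its root-to-leaf decision paths as human-readable rules. This is a minimal illustration using scikit-learn on the Iris data; the helper `extract_rules` is a hypothetical name for this example and is not part of VisRuler or scikit-learn.

```python
# Minimal sketch: enumerating decision paths from a bagging ensemble.
# Assumes scikit-learn; extract_rules() is an illustrative helper, not VisRuler's API.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import _tree


def extract_rules(estimator, feature_names):
    """Walk one fitted decision tree and return its root-to-leaf paths.

    Each path is a (list-of-conditions, predicted-class) pair.
    """
    t = estimator.tree_
    rules = []

    def recurse(node, path):
        if t.feature[node] != _tree.TREE_UNDEFINED:  # internal split node
            name = feature_names[t.feature[node]]
            thr = t.threshold[node]
            recurse(t.children_left[node], path + [f"{name} <= {thr:.2f}"])
            recurse(t.children_right[node], path + [f"{name} > {thr:.2f}"])
        else:  # leaf: majority class of the training samples reaching it
            rules.append((path, int(t.value[node].argmax())))

    recurse(0, [])
    return rules


data = load_iris()
forest = RandomForestClassifier(n_estimators=10, random_state=0)
forest.fit(data.data, data.target)

# Pool the decision paths of all trees; these are the raw "decisions"
# a user would then filter by feature importance and class agreement.
all_rules = [r for est in forest.estimators_
             for r in extract_rules(est, data.feature_names)]
print(f"{len(all_rules)} decision paths across {len(forest.estimators_)} trees")
```

Even this small forest yields a large pool of overlapping rules, which illustrates the complexity problem the abstract describes and motivates visual tools for selecting the essential ones.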