Although AI systems are increasingly pervasive, many remain vulnerable to hidden bias and missing information, especially in widely deployed forecasting systems. In this work, we explore the robustness and explainability of AI-based forecasting systems. We provide an in-depth analysis of the causal structure underlying the effect prediction task and construct a causal graph over the treatment, adjustment variables, confounders, and outcome. Correspondingly, we design a causal interventional prediction system (CIPS) based on a variational autoencoder and the fully conditional specification of multiple imputation. Extensive experiments demonstrate the superiority of our system over state-of-the-art methods and show remarkable versatility and extensibility in practice.
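To make the missing-information ingredient concrete, the following is a minimal sketch of fully conditional specification (FCS) multiple imputation, one component named above. It uses scikit-learn's `IterativeImputer` (a chained-equations, FCS-style imputer) as a stand-in; the variable names, data, and library choice are illustrative assumptions, not the paper's implementation.

```python
# Sketch of FCS-style multiple imputation (illustrative, not the CIPS code):
# generate data with a known dependency, mask 20% of values at random,
# then draw m completed datasets to be pooled by any downstream model.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] = X[:, 0] + 0.5 * X[:, 1]  # third column depends on the first two
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan  # ~20% missing at random

# Multiple imputation: vary the imputer's random state and sample from the
# posterior so each completed copy reflects imputation uncertainty.
m = 5
completed = [
    IterativeImputer(sample_posterior=True, random_state=s).fit_transform(X_missing)
    for s in range(m)
]
print(len(completed), completed[0].shape)
```

Downstream estimates (e.g. treatment-effect predictions) would be fit on each of the `m` completed copies and pooled, which is what distinguishes multiple imputation from a single deterministic fill.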