When deployed in high-stakes settings, AI systems are expected to produce decisions that are transparent, interpretable, and auditable, a requirement increasingly imposed by regulation. Decision trees such as CART provide clear, verifiable rules, but they are restricted to structured tabular data and cannot operate directly on unstructured inputs such as text. In practice, large language models (LLMs) are widely used on such data, yet prompting strategies such as chain-of-thought or prompt optimization still rely on free-form reasoning, limiting their ability to guarantee trustworthy behavior. We present the Agentic Classification Tree (ACT), which extends decision-tree methodology to unstructured inputs by formulating each split as a natural-language question, refined through impurity-based evaluation and LLM feedback via TextGrad. Experiments on text benchmarks show that ACT matches or surpasses prompting-based baselines while producing transparent and interpretable decision paths.
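The core idea, scoring a candidate natural-language split by the impurity of the partition it induces, can be sketched in a few lines. This is an illustrative sketch only: the `ask` callable stands in for an LLM answering the node's yes/no question, and the keyword stub used in the test is a hypothetical placeholder, not the paper's implementation.

```python
def gini(labels):
    """Gini impurity of a list of binary labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def split_impurity(samples, labels, ask):
    """Weighted impurity after splitting on a yes/no question.

    `ask(text) -> bool` is an assumed interface: in ACT it would be
    an LLM answering the node's natural-language question; lower
    values indicate a better split.
    """
    yes = [y for x, y in zip(samples, labels) if ask(x)]
    no = [y for x, y in zip(samples, labels) if not ask(x)]
    n = len(labels)
    return len(yes) / n * gini(yes) + len(no) / n * gini(no)
```

A split question that perfectly separates the classes drives the weighted impurity to zero, which is the signal an impurity-based refinement loop would optimize.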