A large language model (LLM) can map a feedback causal fuzzy cognitive map (FCM) into text and then reconstruct the FCM from that text. This explainable-AI system approximates an identity map from the FCM to itself and so resembles an autoencoder (AE). Unlike black-box AEs, both the encoder and the decoder explain their decisions. Humans can read and interpret the encoded text, in contrast to the hidden variables and synaptic webs of AEs. The LLM agent approximates the identity map through a sequence of system instructions that never compares the output to the input. The reconstruction is lossy because it removes weak causal edges or rules while it preserves strong causal edges. The encoder preserves the strong causal edges even as it trades away some FCM detail to make the text read more naturally.
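The lossy-but-strong-edge-preserving reconstruction can be illustrated with a minimal FCM sketch. The weight matrix, threshold activation, and pruning cutoff below are all hypothetical choices for illustration, not the paper's actual system: a signed edge matrix W drives binary concept states, weak edges (here |w| < 0.3) are dropped to mimic the lossy text round trip, and the pruned map can still settle into the same attractor because the strong edges dominate the dynamics.

```python
import numpy as np

# Hypothetical 4-concept FCM: W[i, j] is the signed causal influence
# of concept i on concept j (weights are illustrative only).
W = np.array([
    [ 0.0,  0.9, -0.1,  0.0],
    [ 0.0,  0.0,  0.8,  0.2],
    [-0.7,  0.0,  0.0,  0.0],
    [ 0.1,  0.0,  0.0,  0.0],
])

def step(x, W):
    """One FCM update: fire a concept if its net causal input is positive."""
    return (x @ W > 0).astype(float)

def run_to_attractor(x0, W, max_iters=50):
    """Iterate the FCM until a state repeats; return the recurring cycle."""
    seen, x = [], x0
    for _ in range(max_iters):
        key = tuple(x)
        if key in seen:
            return seen[seen.index(key):]  # fixed point or limit cycle
        seen.append(key)
        x = step(x, W)
    return seen

# Lossy "reconstruction": drop weak edges (|w| < 0.3), keep strong ones.
W_lossy = np.where(np.abs(W) >= 0.3, W, 0.0)

x0 = np.array([1.0, 0.0, 0.0, 0.0])
print(run_to_attractor(x0, W))        # attractor of the original FCM
print(run_to_attractor(x0, W_lossy))  # attractor of the pruned FCM
```

In this toy run the original and pruned maps reach the same equilibrium, which is the sense in which pruning weak edges can be lossy about detail yet faithful to the strong causal structure.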