Biological systems, particularly the human brain, achieve remarkable energy efficiency by abstracting information across multiple hierarchical levels. In contrast, modern artificial intelligence and communication systems often incur substantial energy overheads by transmitting low-level data, with limited emphasis on abstraction. Despite its evident importance, a formal and computational theory of information abstraction remains absent. In this work, we introduce the Degree of Information Abstraction (DIA), a general metric that quantifies how well a representation compresses input data while preserving task-relevant semantics. We derive a tractable information-theoretic formulation of DIA and propose a DIA-based information abstraction framework. As a case study, we apply DIA to a large language model (LLM)-guided video transmission task, where abstraction-aware encoding reduces transmission volume by $99.75\%$ while maintaining semantic fidelity. Our results suggest that DIA offers a principled tool for rebalancing energy and information in intelligent systems and opens new directions in neural network design, neuromorphic computing, semantic communication, and joint sensing-communication architectures.