Artificial Intelligence (AI) has achieved transformative success across a wide range of domains, revolutionizing fields such as healthcare, education, and human-computer interaction. However, the mechanisms driving AI's performance often remain opaque, particularly in the context of large language models (LLMs), which have advanced at an unprecedented pace in recent years. Multi-modal large language models (MLLMs) like GPT-4o exemplify this evolution, integrating text, audio, and visual inputs to enable interaction across diverse domains. Despite their remarkable capabilities, these models remain largely "black boxes," offering limited insight into how they process multi-modal information internally. This lack of transparency poses significant challenges, including systematic biases, flawed associations, and unintended behaviors, which require careful investigation. Understanding the decision-making processes of MLLMs is therefore essential for mitigating these challenges and ensuring their reliable deployment in critical applications.

GPT-4o was chosen as the focus of this study for its advanced multi-modal capabilities, which allow simultaneous processing of textual and visual information. These capabilities make it an ideal model for investigating the parallels and distinctions between machine-driven and human-driven visual perception. While GPT-4o performs effectively in tasks involving structured and complete data, its reliance on bottom-up processing, a feature-by-feature analysis of sensory inputs, presents challenges when interpreting complex or ambiguous stimuli. This limitation contrasts with human vision, which is remarkably adept at resolving ambiguity and reconstructing incomplete information through high-level cognitive processes.