Large-scale pre-trained machine learning models have reshaped our understanding of artificial intelligence across numerous domains, including our own field of geography. As with any new technology, trust has taken on an important role in this discussion. In this chapter, we examine the multifaceted concept of trust in foundation models, particularly within a geographic context. As reliance on these models grows and they are entrusted with critical decision-making, trust, while essential, has become a fractured concept. Here we categorize trust into three types: epistemic trust in the training data, operational trust in the model's functionality, and interpersonal trust in the model developers. Each type of trust carries unique implications for geographic applications. Topics such as cultural context, data heterogeneity, and spatial relationships are fundamental to the spatial sciences and play an important role in developing trust. The chapter continues with a discussion of the challenges posed by different forms of bias, the importance of transparency and explainability, and ethical responsibilities in model development. Finally, the distinct perspective of geographic information scientists is emphasized with a call for greater transparency, bias mitigation, and regionally informed policies. Simply put, this chapter aims to provide a conceptual starting point for researchers, practitioners, and policy-makers to better understand trust in (generative) GeoAI.