Large Language Models (LLMs) are poised to play an increasingly important role in our lives, providing assistance across a wide array of tasks. In the geospatial domain, LLMs have demonstrated the ability to answer generic questions, such as identifying a country's capital; nonetheless, their utility is hindered when it comes to answering fine-grained questions about specific places, such as grocery stores or restaurants, which constitute essential aspects of people's everyday lives. This is mainly because the places in our cities have not been systematically fed into LLMs in a way that allows the models to understand and memorize them. This study introduces a novel framework for fine-tuning a pre-trained model on city-specific data, enabling it to provide accurate recommendations while minimizing hallucinations. We share our model, LAMP, and the data used to train it. We conduct experiments to analyze its ability to correctly retrieve spatial objects, and compare it to well-known open- and closed-source language models, such as GPT-4. Finally, we explore its emerging capabilities through a case study on day planning.