Predicting the locations an individual will visit in the future is crucial for addressing many societal challenges, such as disease diffusion and pollution reduction. However, models designed for next-location prediction require a significant amount of individual-level information to be trained effectively. Such data may be scarce or even unavailable in some geographic regions or in particular scenarios (e.g., cold-start in recommender systems). Moreover, designing a next-location predictor that can generalize or geographically transfer knowledge remains an open research challenge. Recent advances in natural language processing have led to the rapid diffusion of Large Language Models (LLMs), which have shown strong generalization and reasoning capabilities. These insights, coupled with recent findings that LLMs are rich in geographical knowledge, led us to hypothesize that these models can act as zero-shot next-location predictors. This paper evaluates several popular LLMs in this role, specifically Llama, GPT-3.5, and Mistral 7B. After designing a suitable prompt, we tested the models on three real-world mobility datasets. The results show that LLMs can achieve accuracies of up to 32.4%, a relative improvement of over 600% compared to sophisticated deep learning models specifically designed for human mobility. Moreover, we show that other LLMs are unable to perform the task properly. To rule out positively biased results, we also propose a framework, inspired by other studies, to test for data contamination. Finally, we explore the possibility of using LLMs as text-based explainers for next-location prediction, showing that they can effectively provide explanations for their decisions. Notably, 7B models provide more generic, yet still reliable, explanations than their larger counterparts. Code: github.com/ssai-trento/LLM-zero-shot-NL