Large Language Models (LLMs) are widely applied to domain-specific tasks thanks to their broad general knowledge and remarkable reasoning capabilities. Recent studies have shown the immense potential of applying LLMs to individual mobility prediction. However, most LLM-based mobility prediction models are trained on a single dataset or rely on a single well-designed prompt, making it difficult for them to adapt to different cities and to users with diverse contexts. To fill these gaps, this paper proposes a unified fine-tuning framework for training a foundational, open-source LLM-based mobility prediction model. We conducted extensive experiments on six real-world mobility datasets to validate the proposed model. The results show that it outperforms state-of-the-art deep learning and LLM-based models in both prediction accuracy and transferability.