Deep neural networks (DNNs) have made significant strides in tackling challenging tasks in wireless systems, especially when an accurate wireless model is not available. However, when available data is limited, traditional DNNs often yield subpar results due to underfitting. At the same time, large language models (LLMs), exemplified by GPT-3, have showcased remarkable capabilities across a broad range of natural language processing tasks. Whether and how LLMs can benefit challenging non-language tasks in wireless systems, however, remains unexplored. In this work, we propose to leverage the in-context learning (ICL) ability (a.k.a. prompting) of LLMs to solve wireless tasks in the low-data regime without any training or fine-tuning, unlike DNNs, which require training. We further demonstrate that the performance of LLMs varies significantly across different prompt templates. To address this issue, we employ the latest LLM calibration methods. Our results show that using LLMs via ICL generally outperforms traditional DNNs on the symbol demodulation task and yields highly confident predictions when coupled with calibration techniques.
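To make the ICL setup concrete, the following Python sketch builds a demodulation prompt from a few noisy (received value, symbol) demonstration pairs. This is a minimal illustration, not the paper's exact prompt template; the channel model is a simple BPSK-over-AWGN assumption, and `query_llm` is a hypothetical stand-in for whatever LLM completion API is used.

```python
import numpy as np

rng = np.random.default_rng(0)

def transmit(bits, snr_db=5.0):
    """Illustrative BPSK channel: bit 0 -> -1.0, bit 1 -> +1.0, plus AWGN."""
    symbols = 2.0 * bits - 1.0
    noise_std = 10 ** (-snr_db / 20)
    return symbols + noise_std * rng.standard_normal(len(bits))

def build_icl_prompt(demo_rx, demo_bits, query_rx):
    """Format in-context demonstrations followed by the query to demodulate."""
    lines = ["Demodulate the received BPSK value to a bit (0 or 1)."]
    for r, b in zip(demo_rx, demo_bits):
        lines.append(f"received: {r:+.3f} -> bit: {b}")
    lines.append(f"received: {query_rx:+.3f} -> bit:")
    return "\n".join(lines)

# A handful of labeled demonstrations (the low-data regime)
demo_bits = rng.integers(0, 2, size=8)
demo_rx = transmit(demo_bits)

# One unlabeled query symbol to demodulate
query_rx = transmit(rng.integers(0, 2, size=1))[0]

prompt = build_icl_prompt(demo_rx, demo_bits, query_rx)
print(prompt)
# The prompt is then sent to a frozen LLM (no training or fine-tuning), e.g.:
# prediction = query_llm(prompt)  # hypothetical completion call
```

Note that no gradient updates occur anywhere: the demonstrations enter only through the prompt text, which is what distinguishes ICL from training a DNN on the same handful of pairs.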
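The abstract does not name the specific calibration methods used. As one hedged example of the general idea, contextual calibration (Zhao et al., 2021, "Calibrate Before Use") estimates the prompt-induced class bias from a content-free input (e.g., "N/A") and rescales the LLM's answer probabilities accordingly; a minimal sketch under that assumption:

```python
import numpy as np

def contextual_calibration(p_query, p_content_free):
    """Rescale class probabilities by the bias measured on a content-free
    input: q_i is proportional to p_i / p_cf_i, then renormalized."""
    q = np.asarray(p_query) / np.asarray(p_content_free)
    return q / q.sum()

# Hypothetical LLM probabilities over candidate answers {"0", "1"}
p_query = np.array([0.30, 0.70])         # for the real query symbol
p_content_free = np.array([0.20, 0.80])  # for a content-free input like "N/A"

print(contextual_calibration(p_query, p_content_free))  # ~[0.632, 0.368]
```

Here the raw prediction favors "1", but after dividing out the template's inherent bias toward "1", the calibrated prediction flips to "0", illustrating how calibration can counteract prompt-template sensitivity.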