Click-through rate (CTR) prediction has become increasingly indispensable for various Internet applications. Traditional CTR models convert multi-field categorical data into ID features via one-hot encoding and extract the collaborative signals among features. This paradigm suffers from semantic information loss. Another line of research explores the potential of pretrained language models (PLMs) for CTR prediction by converting the input data into textual sentences through hard prompt templates. Although semantic signals are preserved, these models generally fail to capture collaborative information (e.g., feature interactions, pure ID features), and they incur unacceptable inference overhead due to their huge model size. In this paper, we aim to model both semantic and collaborative knowledge for accurate CTR estimation while addressing the inference inefficiency issue. To benefit from both worlds and close the gap between them, we propose a novel model-agnostic framework, ClickPrompt, in which a CTR model is incorporated to generate interaction-aware soft prompts for the PLM. We design a prompt-augmented masked language modeling (PA-MLM) pretraining task, where the PLM has to recover the masked tokens based on the language context as well as the soft prompts generated by the CTR model. The collaborative and semantic knowledge from ID and textual features are thus explicitly aligned and allowed to interact via the prompt interface. Afterwards, we can either tune the CTR model together with the PLM for superior performance, or tune the CTR model alone, without the PLM, for inference efficiency. Experiments on four real-world datasets validate the effectiveness of ClickPrompt compared with existing baselines.
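The prompt interface described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: all dimensions and the projection matrix are hypothetical, and the models are stand-in random arrays. It shows the core data flow of ClickPrompt's soft-prompt mechanism: per-field vectors from a CTR model are projected into the PLM's embedding space and prepended to the token embeddings, so the PLM conditions on both collaborative and semantic signals when recovering masked tokens.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 3 feature fields,
# CTR embedding dim 16, PLM hidden dim 32, 5 text tokens.
num_fields, d_ctr, d_plm, seq_len = 3, 16, 32, 5

# Stand-in for the CTR model's output: one interaction-aware
# vector per feature field.
ctr_field_vectors = rng.normal(size=(num_fields, d_ctr))

# A learned projection maps each CTR vector into the PLM embedding
# space, yielding one soft prompt per field.
W_proj = rng.normal(size=(d_ctr, d_plm))
soft_prompts = ctr_field_vectors @ W_proj           # (num_fields, d_plm)

# Stand-in for the PLM's token embeddings of the textualized input
# (with some tokens masked for PA-MLM pretraining).
token_embeds = rng.normal(size=(seq_len, d_plm))

# Prompt interface: prepend the soft prompts to the token embeddings.
# The PLM then attends over this joint sequence to recover the
# masked tokens, aligning collaborative and semantic knowledge.
plm_input = np.concatenate([soft_prompts, token_embeds], axis=0)
print(plm_input.shape)  # (8, 32)
```

Because the soft prompts are ordinary input vectors rather than text, gradients from the PA-MLM loss can flow back through the projection into the CTR model, which is what lets the CTR model later be tuned and deployed alone for efficient inference.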