With the development of deep learning techniques, deep recommendation models have achieved remarkable improvements in recommendation accuracy. However, due to the large number of candidate items in practice and the high cost of preference computation, these methods suffer from low recommendation efficiency. The recently proposed tree-based deep recommendation models alleviate this problem by directly learning the tree structure and node representations under the guidance of recommendation objectives. However, such models have a shortcoming: the max-heap assumption over the hierarchical tree, in which the preference for a parent node should be the maximum of the preferences for its children, is difficult to satisfy with their binary classification objectives. To this end, we propose Tree-based Deep Retrieval (TDR for short) for efficient recommendation. In TDR, all the trees generated during the training process are retained to form a forest. When learning the node representations of each tree, we aim to satisfy the max-heap assumption as much as possible and to mimic beam search behavior over the tree during training. To achieve this, TDR treats the training task as multi-class classification over tree nodes at the same level. However, the number of tree nodes grows exponentially with the level, so we train the preference model under the guidance of the sampled-softmax technique. Experiments on real-world datasets validate the effectiveness of the proposed preference model learning method and tree learning method.
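To make the retrieval procedure concrete, the following is a minimal sketch of level-wise beam search over a complete tree, the inference routine that tree-based models rely on. The tree layout, node names, and scores here are illustrative assumptions, not the paper's implementation; `score` stands in for the learned user-conditional preference model, and the toy scores are chosen to satisfy the max-heap assumption (each parent's preference equals the maximum of its children's).

```python
import heapq

def beam_search(children, score, root, k=2):
    """Retrieve leaf nodes by expanding a beam level by level.

    children: dict mapping a node id to its list of child ids ([] for leaves).
    score: preference function over nodes (here a toy stand-in for a
           learned user-conditional preference model).
    Assumes a complete tree, so all nodes in the beam share a level.
    """
    beam = [root]
    while children.get(beam[0]):  # stop once the beam reaches the leaves
        candidates = [c for n in beam for c in children[n]]
        # Keep the top-k children; if the max-heap assumption holds,
        # no high-preference leaf is pruned away at an upper level.
        beam = heapq.nlargest(k, candidates, key=score)
    return beam

# Toy complete binary tree: 0 is the root, 3-6 are leaves (items).
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
# Scores satisfying the max-heap assumption: parent = max of children.
prefs = {0: 0.9, 1: 0.9, 2: 0.8, 3: 0.9, 4: 0.3, 5: 0.8, 6: 0.2}

top_items = beam_search(tree, prefs.get, root=0, k=2)
```

With these scores, the beam retrieves the two highest-preference leaves (items 3 and 5) while only ever scoring the children of the current beam, which is what makes retrieval logarithmic in the item count rather than linear.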