Recent advances in deep learning and large language models (LLMs) have facilitated the deployment of the mixture-of-experts (MoE) mechanism in the stock investment domain. While these models demonstrate promising trading performance, they are often unimodal, neglecting the wealth of information available in other modalities such as textual data. Moreover, the conventional neural-network router fails to account for contextual and real-world nuances, resulting in suboptimal expert selection. To address these limitations, we propose LLMoE, a novel framework that employs an LLM as the router within the MoE architecture. Specifically, we replace the conventional neural-network router with an LLM, leveraging its extensive world knowledge and reasoning capabilities to select experts based on historical price data and stock news. This yields a more effective and interpretable selection mechanism. Our experiments on multimodal real-world stock datasets demonstrate that LLMoE outperforms state-of-the-art MoE models and other deep neural network approaches. Additionally, the flexible architecture of LLMoE allows easy adaptation to various downstream tasks.
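To make the routing idea concrete, here is a minimal Python sketch of one LLM-as-router MoE step: a prompt serializes recent returns and news headlines, an LLM labels the market outlook, and the matching expert produces a prediction. The two-expert optimistic/pessimistic split, the prompt wording, and the names `build_router_prompt` and `route_and_predict` are illustrative assumptions, not the paper's exact implementation; the `llm` argument stands in for any chat-completion call.

```python
# Illustrative sketch of an LLM router selecting among experts; the expert
# labels, prompt text, and toy experts below are assumptions for exposition.
from typing import Callable, Dict, List


def build_router_prompt(prices: List[float], headlines: List[str]) -> str:
    """Serialize one stock's recent prices and news into a routing prompt."""
    returns = [f"{(b - a) / a:+.2%}" for a, b in zip(prices, prices[1:])]
    return (
        "Recent daily returns: " + ", ".join(returns) + "\n"
        "News headlines:\n- " + "\n- ".join(headlines) + "\n"
        "Based on this evidence, is the outlook for this stock 'optimistic' "
        "or 'pessimistic'? Answer with one word and a brief reason."
    )


def route_and_predict(
    prices: List[float],
    headlines: List[str],
    llm: Callable[[str], str],
    experts: Dict[str, Callable[[List[float]], float]],
) -> float:
    """Ask the LLM router to pick an expert, then run that expert on the prices."""
    answer = llm(build_router_prompt(prices, headlines)).lower()
    label = "optimistic" if "optimistic" in answer else "pessimistic"
    return experts[label](prices)


if __name__ == "__main__":
    # Stub LLM and toy experts so the sketch runs end to end without an API key.
    stub_llm = lambda prompt: "optimistic: earnings beat and positive momentum"
    experts = {
        "optimistic": lambda p: p[-1] * 1.01,   # bullish expert: small expected rise
        "pessimistic": lambda p: p[-1] * 0.99,  # bearish expert: small expected dip
    }
    prices = [101.2, 102.8, 104.1, 103.9, 105.3]
    news = ["Company beats quarterly earnings estimates",
            "Sector ETF hits 52-week high"]
    print(route_and_predict(prices, news, stub_llm, experts))
```

Abstracting the LLM behind a plain callable keeps the sketch runnable offline and makes it easy to swap in a real chat-completion client; the LLM's one-word verdict plus its stated reason is also what gives the routing decision its interpretability.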