We consider small-data, large-scale decision problems in which a firm must make many operational decisions simultaneously (e.g., across a large product portfolio) while observing only a few, potentially noisy, data points per instance. Inspired by the success of large language models (LLMs), we propose a pretrain-then-fine-tune approach built on a purpose-designed Transformer model to address this challenge. The model is first pretrained on large-scale, domain-informed synthetic data that encode managerial knowledge and structural features of the decision environment, and is then fine-tuned on real observations. This pipeline offers two complementary advantages: pretraining injects domain knowledge into the learning process and enables the training of high-capacity models on abundant synthetic data, while fine-tuning adapts the pretrained model to the operational environment and improves alignment with the true data-generating regime. While we leverage the Transformer's state-of-the-art representational capacity, particularly its attention mechanism, to efficiently extract cross-task structure, our approach is not an off-the-shelf application; rather, it relies on problem-specific architectural design and a tailored training procedure matched to the decision setting. Theoretically, we develop the first comprehensive error analysis of Transformer learning in this setting, establishing nonasymptotic guarantees that validate the method's effectiveness. Critically, our analysis reveals how pretraining and fine-tuning jointly determine performance, with the dominant contribution governed by whichever stage is more favorable. In particular, fine-tuning exhibits an economies-of-scale effect: transfer learning becomes increasingly effective as the number of instances grows.
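A minimal sketch of the two-stage pipeline described above, assuming a PyTorch-style setup. Everything here is illustrative rather than the paper's actual design: the synthetic-data generator, model dimensions, training schedule, and names such as DecisionTransformer and synthetic_batch are hypothetical placeholders standing in for the domain-informed generator and tailored architecture the abstract refers to.

```python
# Illustrative sketch only: a generic Transformer pretrained on synthetic data,
# then fine-tuned on (stand-in) real observations. Not the authors' implementation.
import torch
import torch.nn as nn

# Each "instance" is a short sequence of noisy observations (small data per instance),
# mapped to a single operational decision (e.g., an order quantity).
SEQ_LEN, D_MODEL, N_INSTANCES = 8, 32, 1000  # hypothetical sizes

class DecisionTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(1, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, 1)      # one decision per instance

    def forward(self, x):                      # x: (batch, SEQ_LEN, 1)
        h = self.encoder(self.embed(x))        # attention extracts cross-observation structure
        return self.head(h.mean(dim=1))        # pooled representation -> decision

def synthetic_batch(batch_size):
    """Placeholder for the domain-informed generator: a latent demand level is
    drawn from a managerial prior, and the target decision is that latent mean."""
    mu = torch.rand(batch_size, 1) * 10                          # prior on demand level
    x = mu.unsqueeze(-1) + torch.randn(batch_size, SEQ_LEN, 1)   # noisy observations
    return x, mu

model = DecisionTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# --- Stage 1: pretrain on abundant synthetic data ---
for step in range(200):
    x, y = synthetic_batch(64)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# --- Stage 2: fine-tune on the few real observations per instance ---
x_real, y_real = synthetic_batch(N_INSTANCES)           # stand-in for real data
ft_opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # smaller LR for adaptation
for step in range(20):
    ft_opt.zero_grad()
    loss_fn(model(x_real), y_real).backward()
    ft_opt.step()
```

The two stages mirror the abstract's division of labor: the synthetic stage supplies the volume needed to train a high-capacity model, while the short low-learning-rate pass adapts it to the observed regime.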