Iterative improvement of model architectures is fundamental to deep learning: Transformers first enabled scaling, and recent advances in model hybridization have pushed the quality-efficiency frontier. However, optimizing architectures remains challenging and expensive. Current automated or manual approaches fall short, largely due to limited progress in the design of search spaces and due to the simplicity of resulting patterns and heuristics. In this work, we propose a new approach for the synthesis of tailored architectures (STAR). Our approach combines a novel search space based on the theory of linear input-varying systems, supporting a hierarchical numerical encoding into architecture genomes. STAR genomes are automatically refined and recombined with gradient-free, evolutionary algorithms to optimize for multiple model quality and efficiency metrics. Using STAR, we optimize large populations of new architectures, leveraging diverse computational units and interconnection patterns, improving over highly-optimized Transformers and striped hybrid models on the frontier of quality, parameter size, and inference cache for autoregressive language modeling.
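To make the described pipeline concrete, below is a minimal Python sketch of the kind of loop the abstract outlines: architectures encoded as flat integer genomes, scored on multiple quality and efficiency objectives, and refined with gradient-free Pareto selection, recombination, and mutation. Everything here is an illustrative assumption: the `UNITS` vocabulary, the surrogate `evaluate` objectives, and the flat (rather than hierarchical) encoding stand in for STAR's actual LIV-based search space and training/benchmarking pipeline.

```python
import random

random.seed(0)

# Hypothetical gene vocabulary standing in for the LIV-based search space;
# these unit names are illustrative, not STAR's actual computational units.
UNITS = ["attention", "gated_conv", "recurrence", "mlp"]
DEPTH = 8          # layer slots per genome (assumption)
POP_SIZE = 32      # population size (assumption)

def random_genome():
    """A genome here is a flat list of integers, one gene per layer slot.
    STAR's encoding is hierarchical; the flat version keeps the sketch short."""
    return [random.randrange(len(UNITS)) for _ in range(DEPTH)]

def evaluate(genome):
    """Placeholder for the expensive step: decode the genome, train the model,
    and measure quality and efficiency. This toy surrogate returns
    (quality to maximize, parameter count to minimize, cache size to minimize)."""
    quality = -sum((g - 1.5) ** 2 for g in genome)               # toy objective
    params = sum(g + 1 for g in genome)                          # toy size proxy
    cache = sum(1 for g in genome if UNITS[g] == "attention")    # KV-cache proxy
    return (quality, params, cache)

def dominates(a, b):
    """Pareto dominance: a is no worse than b on every objective
    and strictly better on at least one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    strictly = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and strictly

def mutate(genome, rate=0.25):
    """Resample each gene independently with probability `rate`."""
    return [random.randrange(len(UNITS)) if random.random() < rate else g
            for g in genome]

def crossover(p1, p2):
    """One-point recombination of two parent genomes."""
    cut = random.randrange(1, DEPTH)
    return p1[:cut] + p2[cut:]

# Gradient-free evolutionary loop: evaluate the population, keep the Pareto
# front of non-dominated genomes, then refill by recombining and mutating it.
population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(20):
    scored = [(g, evaluate(g)) for g in population]
    front = [g for g, s in scored
             if not any(dominates(t, s) for _, t in scored)]
    population = front + [
        mutate(crossover(*random.sample(front, 2))) if len(front) > 1
        else mutate(front[0])
        for _ in range(POP_SIZE - len(front))
    ]

best = max(population, key=lambda g: evaluate(g)[0])
print("best genome:", [UNITS[g] for g in best])
```

In a real run, the Pareto front over quality, parameter size, and inference cache would replace the toy surrogate's front, and each `evaluate` call would involve training and profiling a decoded architecture.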