We introduce randomized algorithms to Clifford's Geometric Algebra, generalizing randomized linear algebra to hypercomplex vector spaces. This approach has broad implications for machine learning, including training neural networks to global optimality via convex optimization. We consider fine-tuning large language model (LLM) embeddings as a key application, exploring the intersection of geometric algebra and modern AI techniques. In particular, we conduct a comparative analysis of the robustness of transfer learning via embeddings, such as those from OpenAI GPT models and BERT, contrasting traditional methods with our approach based on convex optimization. We test our convex optimization transfer learning method across a variety of case studies, employing different embeddings (GPT-4 and BERT) and different text classification datasets (IMDb, Amazon Polarity, and GLUE) under a range of hyperparameter settings. Our results demonstrate that convex optimization and geometric algebra not only enhance the performance of LLMs but also offer a more stable and reliable method of transfer learning via embeddings.
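To make the transfer-learning setup concrete, the sketch below illustrates the basic idea of fitting a convex classifier on frozen embeddings: because the regularized logistic-regression objective is convex, the solver reaches its global optimum, which is the stability property contrasted above with non-convex fine-tuning. This is a minimal illustration only, not the paper's randomized geometric-algebra formulation; the embedding matrix is a randomly generated stand-in for vectors that would in practice come from a GPT or BERT encoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for precomputed LLM embeddings (e.g., GPT-4 or BERT sentence
# vectors); in practice these would be loaded from an embedding API or a
# local encoder rather than generated randomly.
rng = np.random.default_rng(0)
n_samples, embed_dim = 1000, 768
X = rng.normal(size=(n_samples, embed_dim))
y = (X[:, :10].sum(axis=1) > 0).astype(int)  # synthetic binary labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Logistic regression on frozen embeddings is a convex problem, so the
# fitted weights are a global optimum of the regularized objective --
# no sensitivity to initialization, unlike gradient-based fine-tuning.
clf = LogisticRegression(C=1.0, max_iter=1000)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```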