Symbolic regression remains an NP-hard problem, and extensive research has focused on AI models for the task. Transformer models have shown promise for symbolic regression, but their performance suffers on smaller datasets. We propose applying k-fold cross-validation to a transformer-based symbolic regression model trained on a significantly reduced dataset (15,000 data points, down from 500,000). This technique partitions the training data into multiple subsets (folds), iteratively training on some while validating on the rest. Our aim is to estimate model generalization and to mitigate the overfitting associated with smaller datasets. Results show that this process improves the model's output consistency and generalization, yielding a 53.31% relative improvement in validation loss, potentially enabling more efficient and accessible symbolic regression in resource-constrained environments.
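The k-fold partitioning described above can be sketched as follows. This is a minimal illustration only: the fold count (k = 5), the NumPy-based index splitting, and the function name are assumptions for the sketch, not details from the paper.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Partition sample indices into k folds; yield (train_idx, val_idx) pairs.

    Each fold serves once as the validation set while the remaining
    k-1 folds form the training set, as in standard k-fold CV.
    Note: k = 5 and the seed are illustrative choices, not from the paper.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once before splitting
    folds = np.array_split(idx, k)            # k near-equal disjoint folds
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

# Example: the paper's 15,000-point dataset split into 5 folds,
# giving 12,000 training and 3,000 validation points per iteration.
splits = list(kfold_indices(15000, 5))
```

In each iteration the model would be trained on `train_idx` and scored on `val_idx`; averaging the per-fold validation losses gives the generalization estimate the abstract refers to.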