The rule of thumb that a sample size of $n \geq 30$ suffices for valid inference in regression analysis is frequently invoked but rarely scrutinized. This research note evaluates the lower bound on the number of observations required for regression analysis by exploring how distributional characteristics, such as skewness and kurtosis, influence the convergence of t-values to the t-distribution in linear regression models. Through an extensive simulation study involving over 22 billion regression models, this paper examines a range of symmetric, platykurtic, and skewed distributions, testing sample sizes from 4 to 10,000. The results show that it is sufficient for either the dependent or the independent variable to follow a symmetric distribution for the t-values to converge at much smaller sample sizes than $n=30$, unless the other variable is extremely skewed. This contradicts previous guidance, which holds that the error term must be normally distributed for convergence to occur at low $n$. However, when both variables are highly skewed, much larger sample sizes are required. These findings suggest the $n \geq 30$ rule is overly conservative in some cases and insufficient in others, and the paper offers revised guidelines for determining minimum sample sizes.
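The kind of Monte Carlo check the abstract describes can be sketched as follows. This is an illustrative toy, not the paper's actual simulation code: it fixes $n = 10$, draws a heavily skewed regressor (lognormal), pairs it with either a symmetric non-normal response (uniform) or a skewed response (lognormal) under a true slope of zero, and compares the empirical rejection rate of the slope t-test against the nominal 5% level. The specific distributions, seed, and replication count are assumptions chosen for the sketch.

```python
import numpy as np

def slope_t_stat(x, y):
    """t-statistic for the slope in a simple OLS regression of y on x (with intercept)."""
    n = len(x)
    xc, yc = x - x.mean(), y - y.mean()      # center both variables
    beta = (xc @ yc) / (xc @ xc)             # OLS slope estimate
    resid = yc - beta * xc
    s2 = (resid @ resid) / (n - 2)           # residual variance
    return beta / np.sqrt(s2 / (xc @ xc))    # slope / its standard error

rng = np.random.default_rng(0)
n, reps = 10, 20_000
crit = 2.306  # two-sided 5% critical value of the t-distribution with n - 2 = 8 df

t_sym, t_skew = np.empty(reps), np.empty(reps)
for i in range(reps):
    x = rng.lognormal(size=n)                            # heavily skewed regressor
    t_sym[i] = slope_t_stat(x, rng.uniform(size=n))      # symmetric, non-normal y
    t_skew[i] = slope_t_stat(x, rng.lognormal(size=n))   # skewed y (true slope still 0)

print("rejection rate, symmetric y:", np.mean(np.abs(t_sym) > crit))
print("rejection rate, skewed y:   ", np.mean(np.abs(t_skew) > crit))
```

Under the abstract's claim, the symmetric-$y$ rejection rate should sit near the nominal 5% even at $n = 10$, while the skewed-skewed pairing is where the approximation deteriorates.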