We present a result according to which certain functions of covariance matrices are maximized at scalar multiples of the identity matrix. In a statistical context in which such functions measure loss, this says that the least favourable form of dependence is in fact independence, so that a procedure optimal for i.i.d.\ data can be minimax. In particular, the ordinary least squares (\textsc{ols}) estimate of a correctly specified regression response is minimax among generalized least squares (\textsc{gls}) estimates, when the maximum is taken over certain classes of error covariance structures and the loss function possesses a natural monotonicity property. An implication is that it can be not only safe, but optimal to ignore such departures from the usual assumption of i.i.d.\ errors. We then consider regression models in which the response function is possibly misspecified, and show that \textsc{ols} is minimax if the design is uniform on its support, but that this often fails otherwise. We go on to investigate the interplay between minimax \textsc{gls} procedures and minimax designs, leading us to extend, to robustness against dependencies, an existing observation -- that robustness against model misspecifications is increased by splitting replicates into clusters of observations at nearby locations.