We investigate the theoretical foundations of a recently introduced entropy-based formulation of weighted least squares for the approximation of overdetermined linear systems, motivated by robust data fitting in the presence of sparse gross errors. The weight vector is interpreted as a discrete probability distribution and is determined by maximizing Shannon entropy under normalization and a prescribed mean squared error (MSE) constraint. Unlike classical ordinary least squares, where the error level is an output of the minimization process, here the MSE value plays the role of a control parameter, and entropy selects the least biased weight distribution achieving the prescribed accuracy. The resulting optimization problem is nonconvex due to the nonlinear coupling between the weights and the solution induced by the residual constraint. We analyze the associated optimality system and characterize stationary points through first- and second-order conditions. We prove the existence and local uniqueness of a smooth branch of entropy-maximizing configurations emanating from the ordinary least squares solution and establish its global continuation under suitable nondegeneracy conditions. Furthermore, we investigate the asymptotic regime as the prescribed MSE tends to zero and show that, under appropriate assumptions, the limiting configuration concentrates on a largest subset of the data consistent with the linear model, thereby suppressing the influence of outliers. Two numerical experiments illustrate the theoretical findings and confirm the robustness properties of the method.
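The entropy-maximization problem described above can be sketched formally as follows; the specific notation ($A \in \mathbb{R}^{m \times n}$ with rows $a_i^\top$, right-hand side $b$, residuals $r_i$, and prescribed MSE level $\tau$) is an assumption for illustration, not taken verbatim from the paper:

```latex
\[
\max_{w \in \mathbb{R}^m_{\geq 0}} \; H(w) = -\sum_{i=1}^{m} w_i \log w_i
\quad \text{subject to} \quad
\sum_{i=1}^{m} w_i = 1, \qquad
\sum_{i=1}^{m} w_i \, r_i(x_w)^2 = \tau,
\]
% where x_w is the weighted least squares solution induced by w:
\[
x_w = \arg\min_{x \in \mathbb{R}^n} \sum_{i=1}^{m} w_i \left( a_i^\top x - b_i \right)^2,
\qquad
r_i(x) = a_i^\top x - b_i .
\]
```

The nonconvexity mentioned in the abstract stems from the dependence of $x_w$ on $w$ in the second constraint: the residuals are themselves functions of the weights, so the MSE constraint couples the two sets of variables nonlinearly.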