We study the assessment of semiparametric and other highly parametrised models from the perspective of foundational principles of parametric statistical inference. In doing so, we highlight the possibility of avoiding the usual semiparametric considerations, which typically require estimation of nuisance components through kernel smoothing or basis expansion, with the associated difficulties of tuning-parameter choice that blur the distinction between estimation and model assessment. A key aspect is the inducement of replication under the postulated model. This can be cast in terms of some non-standard inferential separations, in the vein of Fisherian ancillarity/co-ancillarity and sufficiency/co-sufficiency separations, allowing out-of-sample prediction error as a criterion for semiparametric model assessment to be replaced by a type of within-sample prediction error. Framed in this light are new methodological contributions in multiple example settings, including model assessment for the proportional hazards model, for a time-dependent Poisson process with semiparametric intensity function, and for matched-pair and two-group examples. Also subsumed within the framework is a post-reduction inference approach to the construction of confidence sets of sparse regression models. Numerical work confirms recovery of nominal error rates under the postulated model and high sensitivity to departures in the direction of semiparametric alternatives. We conclude by emphasising open challenges and unifying perspectives.