Scientific discovery begins with ideas, yet evaluating early-stage research concepts is a subtle and subjective human judgment. As large language models (LLMs) are increasingly tasked with generating scientific hypotheses, most systems implicitly treat scientists' evaluations as a fixed gold standard, assuming that expert judgments do not change over time. Here we challenge this assumption. In a two-wave study with 7,938 ratings from 63 active researchers across six scientific departments, each participant repeatedly evaluated a constant "control" research idea alongside AI-generated ideas. We find that expert evaluations are not stable: test-retest reliability of overall quality is only moderate (ICC ≈ 0.59–0.74), indicating substantial within-participant variability even for identical ideas. Yet the internal structure of judgment remains stable, including the relative importance placed on originality, feasibility, clarity, and other criteria. We then aligned an LLM-based ideation system to first-wave human ratings and used it to select new ideas. Although alignment improved agreement with Wave-1 evaluations, its apparent gains disappeared once drift in human standards was accounted for. Thus, tuning to a fixed human snapshot produced improvements that were transient rather than persistent. These findings reveal that human evaluation of scientific ideas is not static but a dynamic process with stable priorities and shifting calibration. Treating one-time human ratings as immutable ground truth risks overstating progress in AI-assisted ideation and obscuring the challenge of co-evolving with changing expert standards. Drift-aware evaluation protocols and longitudinal benchmarks may therefore be essential for building AI systems that reliably augment, rather than overfit to, human scientific judgment.