Evaluating whether text-to-image models follow explicit spatial instructions is difficult to automate: object detectors may miss targets or return multiple plausible detections, and simple geometric tests become ambiguous in borderline cases. Spatial evaluation is naturally a selective prediction problem: the checker may abstain when evidence is weak and report a confidence score, so that results can be interpreted as a risk-coverage tradeoff rather than a single number. We introduce SpatialBench-UC, a small, reproducible benchmark for pairwise spatial relations. The benchmark contains 200 prompts (50 object pairs × 4 relations) grouped into 100 counterfactual pairs obtained by swapping object roles. We release a benchmark package with versioned prompts, pinned configs, per-sample checker outputs, and report tables, enabling reproducible and auditable comparisons across models. We also include a lightweight human audit used to calibrate the checker's abstention margin and confidence threshold. We evaluate three baselines: Stable Diffusion 1.5, SD 1.5 + BoxDiff, and SD 1.4 + GLIGEN. The checker reports pass rate and coverage, as well as conditional pass rates on decided samples. The results show that grounding methods substantially improve both pass rate and coverage, while abstention remains a dominant factor, driven mainly by missing detections.
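To make the reported metrics concrete, the following is a minimal sketch of how selective-prediction metrics can be computed from per-sample checker outputs. The `CheckerOutput` class, the counting conventions (abstentions counted against the overall pass rate), and all names here are illustrative assumptions, not the benchmark's actual code.

```python
from dataclasses import dataclass

@dataclass
class CheckerOutput:
    decided: bool  # False when the checker abstains (weak evidence)
    passed: bool   # spatial relation satisfied; meaningful only when decided

def selective_metrics(outputs):
    """Compute (coverage, conditional pass rate, overall pass rate).

    coverage       : fraction of samples on which the checker decided
    conditional    : pass rate among decided samples only
    overall        : pass rate over all samples, counting abstentions as failures
    """
    total = len(outputs)
    decided = [o for o in outputs if o.decided]
    coverage = len(decided) / total if total else 0.0
    conditional = (sum(o.passed for o in decided) / len(decided)) if decided else 0.0
    overall = sum(o.passed for o in decided) / total if total else 0.0
    return coverage, conditional, overall
```

Under this convention, raising the abstention margin trades coverage for conditional pass rate, which is the risk-coverage curve the abstract refers to.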