Evaluating AI systems that interact with humans requires understanding their behavior across diverse user populations, but collecting representative human data is often expensive or infeasible, particularly for novel technologies or hypothetical future scenarios. Recent work in Generative Agent-Based Modeling has shown that large language models can simulate human-like synthetic personas with high fidelity, accurately reproducing the beliefs and behaviors of specific individuals. However, most approaches require detailed data about target populations and often prioritize density matching (replicating what is most probable) rather than support coverage (spanning what is possible), leaving long-tail behaviors underexplored. We introduce Persona Generators, functions that produce diverse synthetic populations tailored to arbitrary contexts. We apply an iterative improvement loop based on AlphaEvolve, using large language models as mutation operators to refine our Persona Generator code over hundreds of iterations. The optimization process produces lightweight Persona Generators that automatically expand brief context descriptions into populations of diverse synthetic personas, maximizing coverage of opinions and preferences along relevant diversity axes. We demonstrate that evolved generators substantially outperform existing baselines across six diversity metrics on held-out contexts, producing populations that span rare trait combinations difficult to achieve in standard LLM outputs.
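The evolutionary loop described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the LLM mutation operator is stubbed out with a random perturbation, `diversity_score` is a toy coverage metric (distinct trait combinations), and `run_generator` stands in for executing an evolved generator program, so the whole loop is runnable end to end.

```python
import random

def diversity_score(personas):
    """Toy support-coverage metric: count distinct trait combinations."""
    return len({tuple(sorted(p.items())) for p in personas})

def run_generator(params, context, n=8):
    """Stand-in for executing evolved generator code: samples personas
    whose trait breadth is controlled by `params` (a toy 'program')."""
    rng = random.Random((hash(context) & 0xFFFF) + params)
    traits = ["age_band", "stance", "risk_tolerance"]
    return [{t: rng.randrange(params) for t in traits} for _ in range(n)]

def llm_mutate(params, rng):
    """Stub for the LLM mutation operator: proposes a modified program."""
    return max(1, params + rng.choice([-1, 1, 2]))

def evolve(context, iterations=200, seed=0):
    """Greedy evolutionary loop: keep mutations that improve coverage."""
    rng = random.Random(seed)
    best = 2
    best_score = diversity_score(run_generator(best, context))
    for _ in range(iterations):
        candidate = llm_mutate(best, rng)
        score = diversity_score(run_generator(candidate, context))
        if score > best_score:  # selection step: retain only improvements
            best, best_score = candidate, score
    return best, best_score

params, score = evolve("users of a hypothetical AI tutor")
print(params, score)
```

In the actual method, the candidate being mutated is generator source code and the score aggregates several diversity metrics over generated personas; the selection-by-coverage structure is the same.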