Current numerical abstract interpretation relies on fixed, hand-crafted, instruction-specific transformers tailored to each domain, causing three key limitations: transformers cannot be reused across domains; precise compositional reasoning over instruction sequences is difficult; and all downstream tasks must use the same fixed transformer regardless of their precision or efficiency needs. To address this, we propose the Evolving Abstract Transformer, which replaces the fixed single-output design with an adaptable search over a parametric space of sound outputs via two algorithms. First, the Universal Parametric Output Space Encoder (UPOSE) constructs a compact parametric space of sound outputs for any polyhedral numerical domain and any operator in the Quadratic-Bounded Guarded Operators (QGO) class, covering both individual instructions and structured sequences. Second, the Adaptive Gradient Guidance (AGG) algorithm leverages the differentiable structure of UPOSE's output space to efficiently search it according to downstream objectives and available runtime, continually evolving the output as more time is provided. We implement these ideas in the AbsEvolve framework and evaluate it across three numerical abstract domains: Zones, Octagons, and Polyhedra. Results show that the evolving transformer works across domains and instructions, enables efficient precision-efficiency tradeoffs by adjusting the number of gradient steps in the search, and reaches the most precise invariants up to 3.2x faster than existing baselines.
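To make the core idea concrete, the following is a minimal, hypothetical sketch rather than AbsEvolve's actual UPOSE/AGG implementation: it builds a toy parametric family of sound outputs for the quadratic operator y = x*x over an interval [l, u] (a simple QGO-style operator), where every value of the tangent parameter t yields a sound lower bound, and then runs gradient descent on a precision objective so that more steps yield a tighter result. The parameterization, loss, and step sizes are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): a parametric space of sound
# outputs plus gradient-guided, anytime refinement of precision.
#
# Sound bounds for y = x*x on x in [l, u] with 0 <= l <= u:
#   upper (chord):   y <= (l + u) * x - l * u
#   lower (tangent): y >= 2 * t * x - t * t   for ANY tangent point t
# Every t gives a sound output; smaller enclosed area means higher precision.

def area_loss(t, l, u):
    """Area between the chord upper bound and the tangent lower bound on [l, u]."""
    w = u - l
    mid = 0.5 * (l + u)
    # Closed form of the integral of (chord(x) - tangent(x)) over [l, u].
    return w * ((l + u - 2 * t) * mid - l * u + t * t)

def d_area_dt(t, l, u):
    """Analytic gradient of the area loss with respect to the tangent point t."""
    return (u - l) * (2 * t - (l + u))

def evolve_output(l, u, steps, lr=0.1):
    """Anytime refinement: more gradient steps yield a tighter sound output."""
    t = l  # any t is sound, so the search can stop after any number of steps
    for _ in range(steps):
        t -= lr * d_area_dt(t, l, u)
        t = min(max(t, l), u)  # project back into the valid parameter range
    return t, area_loss(t, l, u)

if __name__ == "__main__":
    for steps in (0, 1, 5, 50):
        t, loss = evolve_output(0.0, 4.0, steps)
        print(f"steps={steps:3d}  tangent t={t:.3f}  imprecision={loss:.3f}")
```

In this toy setting the imprecision drops as the number of gradient steps grows, mirroring the abstract's claim that the evolving transformer trades efficiency for precision by spending more search time.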