WebAssembly (Wasm) has become a key compilation target for portable and efficient execution across diverse platforms. Benchmarking its performance, however, is a multi-dimensional challenge: results depend not only on the choice of runtime engine, but also on the hardware architecture, application domain, source language, benchmark suite, and runtime configuration. This paper introduces Wasure, a modular and extensible command-line toolkit that simplifies the execution and comparison of WebAssembly benchmarks. To complement performance evaluation, we also conducted a dynamic analysis of the benchmark suites included with Wasure. Our analysis reveals substantial differences in code coverage, control flow, and execution patterns, underscoring the need for benchmark diversity. Wasure aims to support researchers and developers in conducting more systematic, transparent, and insightful evaluations of WebAssembly engines.