We introduce Skeleton-Cache, the first training-free test-time adaptation framework for skeleton-based zero-shot action recognition (SZAR), aimed at improving model generalization to unseen actions during inference. Skeleton-Cache reformulates inference as a lightweight retrieval process over a non-parametric cache that stores structured skeleton representations, combining both global and fine-grained local descriptors. To guide the fusion of descriptor-wise predictions, we leverage the semantic reasoning capabilities of large language models (LLMs) to assign class-specific importance weights. By integrating these structured descriptors with LLM-guided semantic priors, Skeleton-Cache dynamically adapts to unseen actions without any additional training or access to training data. Extensive experiments on NTU RGB+D 60/120 and PKU-MMD II demonstrate that Skeleton-Cache consistently boosts the performance of various SZAR backbones under both zero-shot and generalized zero-shot settings. The code is publicly available at https://github.com/Alchemist0754/Skeleton-Cache.
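The cache-based retrieval described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the descriptor names (`global`, `left_arm`, `right_leg`), the `cache_predict` helper, and the uniform weight matrix standing in for the LLM-derived class-specific weights are all hypothetical.

```python
import numpy as np

def cosine_sim(a, b):
    # a: (d,) query descriptor; b: (C, d) cached class embeddings -> (C,) similarities
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b @ a

def cache_predict(query, cache, weights):
    """Fuse descriptor-wise predictions with class-specific weights.

    query:   dict descriptor_name -> (d,) feature vector of the test skeleton
    cache:   dict descriptor_name -> (C, d) cached embeddings, one row per class
    weights: (C, K) class-specific weight per descriptor (each row sums to 1);
             in the paper these come from an LLM, here they are plain numbers
    returns: (C,) fused class scores
    """
    descriptors = sorted(cache)                      # fixed descriptor order
    sims = np.stack([cosine_sim(query[k], cache[k]) for k in descriptors],
                    axis=1)                          # (C, K) per-descriptor scores
    return (sims * weights).sum(axis=1)              # weighted fusion

rng = np.random.default_rng(0)
C, d = 5, 16                                         # 5 classes, 16-dim features
cache = {k: rng.normal(size=(C, d))
         for k in ("global", "left_arm", "right_leg")}
# Query close to class 2's cached descriptors (plus small noise)
query = {k: cache[k][2] + 0.01 * rng.normal(size=d) for k in cache}
weights = np.full((C, 3), 1.0 / 3.0)                 # uniform stand-in weights
scores = cache_predict(query, cache, weights)
print(int(np.argmax(scores)))                        # → 2
```

Because the cache is non-parametric, adapting to a new label set only means swapping the cached rows and the weight matrix; no gradient update or access to training data is required, which is the training-free property the abstract claims.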