Human Activity Recognition (HAR) is a central problem for context-aware applications, especially in smart homes and assisted living. A few very recent studies have shown that Large Language Models (LLMs) can be used for HAR at home, achieving high performance and addressing key challenges. In this paper, we present new experimental results on the use of LLMs for HAR on two state-of-the-art datasets. More specifically, we show how recognition performance varies with the size of the LLM used. Moreover, we experiment with knowledge distillation techniques, fine-tuning smaller LLMs on HAR reasoning examples generated by larger LLMs. We show that such fine-tuned models can perform almost as well as the largest LLMs, while having 50 times fewer parameters.
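To make the distillation setup described above concrete, the following is a minimal Python sketch, not the authors' implementation: a large teacher LLM turns a window of smart-home sensor events into a reasoning trace plus an activity label, and the resulting (prompt, trace) pairs become supervised fine-tuning data for a much smaller student. The model identifiers, prompt wording, and sensor-window format are all illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative teacher/student pair (assumption, not the paper's choice):
# any large/small causal-LM pair with a large parameter gap fits the setup.
TEACHER_ID = "meta-llama/Llama-2-70b-chat-hf"
STUDENT_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

def build_prompt(sensor_window: str) -> str:
    # Hypothetical prompt wording; the paper's actual prompt is not shown here.
    return (
        "Sensor events observed in a smart home:\n"
        f"{sensor_window}\n"
        "Reason step by step about what the resident is doing, "
        "then state a single activity label."
    )

def generate_teacher_trace(sensor_window: str) -> str:
    """Step 1: the large teacher LLM produces a reasoning trace + label."""
    tok = AutoTokenizer.from_pretrained(TEACHER_ID)
    teacher = AutoModelForCausalLM.from_pretrained(
        TEACHER_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tok(build_prompt(sensor_window), return_tensors="pt").to(teacher.device)
    out = teacher.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens; keep only the generated reasoning and label.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:],
                      skip_special_tokens=True)

# Step 2 (sketch): the collected (prompt, teacher trace) pairs form a
# supervised fine-tuning corpus for the student, trained with an ordinary
# causal-LM objective (e.g. transformers.Trainer or trl.SFTTrainer) so the
# small model learns to reproduce the teacher's HAR reasoning.
if __name__ == "__main__":
    window = "08:02 kitchen_motion ON\n08:03 fridge_door OPEN\n08:05 stove ON"
    print(generate_teacher_trace(window))
```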