Large Language Models (LLMs) are increasingly used to recommend mobile applications through natural language prompts, offering a flexible alternative to keyword-based app store search. Yet the reasoning behind these recommendations remains opaque, raising questions about their consistency, explainability, and alignment with traditional App Store Optimization (ASO) metrics. In this paper, we present an empirical analysis of how widely used general-purpose LLMs generate, justify, and rank mobile app recommendations. Our contributions are: (i) a taxonomy of 16 generalizable ranking criteria elicited from LLM outputs; (ii) a systematic evaluation framework to analyse recommendation consistency and responsiveness to explicit ranking instructions; and (iii) a replication package to support reproducibility and future research on AI-based recommendation systems. Our findings reveal that LLMs rely on a broad yet fragmented set of ranking criteria that is only partially aligned with standard ASO metrics. While top-ranked apps tend to be consistent across runs, variability increases with ranking depth and search specificity. LLMs exhibit varying sensitivity to explicit ranking instructions, ranging from substantial adaptations to near-identical outputs, highlighting their complex reasoning dynamics. Our results aim to support end-users, app developers, and recommender-systems researchers in navigating the emerging landscape of conversational app discovery.
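To make the consistency analysis concrete, the following is a minimal sketch, not the paper's actual evaluation framework, of how top-k agreement across repeated runs of the same prompt could be quantified. The app names, the prompt, and the choice of a pairwise Jaccard overlap metric are all illustrative assumptions.

```python
# Minimal sketch: quantifying top-k recommendation consistency across
# repeated LLM runs of the same prompt. Data and metric are hypothetical,
# not taken from the paper's replication package.
from itertools import combinations


def topk_jaccard(run_a: list[str], run_b: list[str], k: int) -> float:
    """Jaccard overlap between the top-k apps of two recommendation runs."""
    a, b = set(run_a[:k]), set(run_b[:k])
    return len(a & b) / len(a | b)


def mean_consistency(runs: list[list[str]], k: int) -> float:
    """Average pairwise top-k overlap across repeated runs of one prompt."""
    pairs = list(combinations(runs, 2))
    return sum(topk_jaccard(a, b, k) for a, b in pairs) / len(pairs)


# Example: three hypothetical runs of the prompt "best budgeting apps".
runs = [
    ["Mint", "YNAB", "PocketGuard", "Goodbudget", "EveryDollar"],
    ["Mint", "YNAB", "Goodbudget", "Spendee", "Honeydue"],
    ["Mint", "PocketGuard", "YNAB", "Wallet", "Spendee"],
]
for k in (1, 3, 5):
    print(f"top-{k} consistency: {mean_consistency(runs, k):.2f}")
```

On this toy data the score decreases from 1.00 at k=1 to roughly 0.43 at k=5, mirroring the abstract's observation that top-ranked apps are stable while variability grows with ranking depth; rank-aware alternatives such as Kendall's tau could be substituted where ordering within the top-k matters.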