Recommender-systems research has accelerated advances in models and evaluation, yet it has largely neglected automating the research process itself. We argue for a shift from narrow AutoRecSys tools -- focused on algorithm selection and hyper-parameter tuning -- to an Autonomous Recommender-Systems Research Lab (AutoRecLab) that integrates end-to-end automation: problem ideation, literature analysis, experimental design and execution, result interpretation, manuscript drafting, and provenance logging. Drawing on recent progress in automated science (e.g., multi-agent AI Scientist and AI Co-Scientist systems), we outline an agenda for the RecSys community: (1) build open AutoRecLab prototypes that combine LLM-driven ideation and reporting with automated experimentation; (2) establish benchmarks and competitions that evaluate agents on producing reproducible RecSys findings with minimal human input; (3) create review venues for transparently AI-generated submissions; (4) define standards for attribution and reproducibility via detailed research logs and metadata; and (5) foster interdisciplinary dialogue on ethics, governance, privacy, and fairness in autonomous research. Advancing this agenda can increase research throughput, surface non-obvious insights, and position RecSys to contribute to emerging Artificial Research Intelligence. We conclude with a call to organise a community retreat to coordinate next steps and co-author guidance for the responsible integration of automated research systems.