Using LLMs as rerankers requires experimenting with various hyperparameters, such as prompt formats, model choice, and query reformulation strategies. We introduce PyTerrier-GenRank, a PyTerrier plugin that enables seamless reranking experiments with LLMs, supporting popular ranking strategies such as pointwise and listwise prompting. We validate the plugin against HuggingFace- and OpenAI-hosted endpoints.
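The two strategies named above differ in what the LLM judges per call: pointwise prompting scores each query-document pair independently, while listwise prompting asks the model to order a whole candidate list at once. A minimal sketch of the distinction, with toy functions standing in for the actual LLM calls (these are illustrative only, not the PyTerrier-GenRank API):

```python
from typing import Callable, List


def pointwise_rerank(query: str, docs: List[str],
                     score_fn: Callable[[str, str], float]) -> List[str]:
    """Score each (query, doc) pair independently, then sort by score."""
    return sorted(docs, key=lambda d: score_fn(query, d), reverse=True)


def listwise_rerank(query: str, docs: List[str],
                    order_fn: Callable[[str, List[str]], List[int]]) -> List[str]:
    """Ask one call to produce a permutation over the full candidate list."""
    return [docs[i] for i in order_fn(query, docs)]


# Hypothetical stand-ins for an LLM: count query-term overlap.
def toy_score(query: str, doc: str) -> float:
    return sum(1 for term in query.split() if term in doc)


def toy_order(query: str, docs: List[str]) -> List[int]:
    return sorted(range(len(docs)),
                  key=lambda i: toy_score(query, docs[i]), reverse=True)
```

In a pointwise setup the number of LLM calls grows with the candidate-list length, whereas a listwise prompt packs the candidates into a single (longer) prompt; this cost trade-off is one reason such experiments benefit from a plugin that lets the strategy be swapped without rewriting the pipeline.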