The recent surge in Large Language Model (LLM) availability has opened exciting avenues for research. However, interacting with these models efficiently presents a significant hurdle: LLMs often reside on proprietary or self-hosted API endpoints, each requiring custom code for interaction. Conducting comparative studies across different models can therefore be time-consuming and demand significant engineering effort, hindering research efficiency and reproducibility. To address these challenges, we present prompto, an open-source Python library that facilitates asynchronous querying of LLM endpoints, enabling researchers to interact with multiple LLMs concurrently while maximising efficiency and respecting each endpoint's individual rate limit. Our library empowers researchers and developers to interact with LLMs more effectively, allowing faster experimentation, data generation and evaluation. prompto is released with an introductory video (https://youtu.be/lWN9hXBOLyQ) under the MIT License and is available via GitHub (https://github.com/alan-turing-institute/prompto).
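The concurrency pattern the abstract describes, querying several endpoints at once while each endpoint is throttled independently, can be sketched in plain asyncio. This is a minimal illustrative sketch, not the prompto API: the endpoint names, the `mock_query` stand-in, and the per-endpoint concurrency limits are all assumptions for the example.

```python
import asyncio

async def mock_query(endpoint: str, prompt: str) -> str:
    # Stand-in for a real API call; sleeps briefly to simulate network latency.
    await asyncio.sleep(0.01)
    return f"{endpoint}: response to {prompt!r}"

async def query_endpoint(endpoint: str, prompts: list[str],
                         max_concurrent: int) -> list[str]:
    # Each endpoint gets its own semaphore, so a slow or strictly
    # rate-limited endpoint never throttles the others.
    sem = asyncio.Semaphore(max_concurrent)

    async def limited(prompt: str) -> str:
        async with sem:
            return await mock_query(endpoint, prompt)

    return await asyncio.gather(*(limited(p) for p in prompts))

async def main() -> dict[str, list[str]]:
    prompts = [f"prompt {i}" for i in range(5)]
    # Query two hypothetical endpoints concurrently; limits are per endpoint.
    results = await asyncio.gather(
        query_endpoint("model-a", prompts, max_concurrent=2),
        query_endpoint("model-b", prompts, max_concurrent=5),
    )
    return dict(zip(["model-a", "model-b"], results))

out = asyncio.run(main())
```

With blocking per-request code, 5 prompts to 2 models would run as 10 sequential calls; here the total wall-clock time is bounded by the slowest endpoint's queue, which is the efficiency gain the abstract refers to.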