Large Language Models (LLMs) are increasingly used to automate classification tasks in business, such as analyzing customer satisfaction from text. However, the inherent stochasticity of LLMs introduces measurement error when their output is treated as a deterministic label. In practice, this problem is often neglected, with a single round of output taken at face value, or handled with ad-hoc remedies such as majority voting. Such naive approaches fail to quantify uncertainty and can yield biased estimates of population-level metrics. In this paper, we propose a formal statistical solution: a Bayesian latent state model. The model treats the true classification as a latent variable and multiple LLM ratings as noisy measurements of that state. This framework jointly estimates LLM error rates, population-level outcome rates, individual-level outcome probabilities, and, where applicable, the causal impact of interventions on the outcome. The methodology applies to both fully unsupervised and semi-supervised settings, in which ground-truth labels are unavailable or available for only a subset of the classification targets. We provide formal theoretical conditions and proofs for the strict identifiability of the model parameters. Through simulation studies, we demonstrate that the model accurately recovers the true parameters and outperforms competing methods. We offer tailored recommendations for modeling choices based on task difficulty, and we apply the model in a real-world case study analyzing over 14,000 customer support transcripts. We conclude that this methodology provides a general framework for converting probabilistic LLM outputs into reliable insights for scientific and business applications.
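To fix ideas, one minimal binary instantiation of such a latent state model can be written as follows; the notation (prevalence $\pi$, sensitivity $s$, specificity $c$) is an illustrative sketch of ours, not necessarily the paper's own:

\begin{align*}
Z_i &\sim \mathrm{Bernoulli}(\pi) && \text{latent true class of unit } i,\\
R_{ij} \mid Z_i = 1 &\sim \mathrm{Bernoulli}(s) && \text{LLM rating } j \in \{1,\dots,J\} \text{ given a true positive},\\
R_{ij} \mid Z_i = 0 &\sim \mathrm{Bernoulli}(1-c) && \text{LLM rating given a true negative},
\end{align*}

with Beta priors placed on $\pi$, $s$, and $c$. Repeated, conditionally independent ratings per unit are what allow the error rates $(s, c)$ and the prevalence $\pi$ to be estimated jointly; the formal conditions under which such parameters are strictly identifiable are the subject of the paper's theoretical results.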