Mental health in children and adolescents has been steadily deteriorating over the past few years [1]. The recent advent of Large Language Models (LLMs) offers much hope for cost- and time-efficient scaling of monitoring and intervention, yet despite the prevalence of issues such as school bullying and eating disorders, previous studies have not investigated LLM performance in this domain, nor in open information extraction, where the set of answers is not predetermined. We create a new dataset of Reddit posts from adolescents aged 12-19, annotated by expert psychiatrists for the following categories: TRAUMA, PRECARITY, CONDITION, SYMPTOMS, SUICIDALITY and TREATMENT, and compare the expert labels to annotations from two top-performing LLMs (GPT3.5 and GPT4). In addition, we create two synthetic datasets to assess whether LLMs perform better when annotating data as they generate it. We find GPT4 to be on par with human inter-annotator agreement, and performance on synthetic data to be substantially higher; however, the model still occasionally errs on issues of negation and factuality, and the higher performance on synthetic data is driven by the greater complexity of real data rather than by an inherent advantage.
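The comparison of expert labels against LLM annotations described above is typically quantified with a chance-corrected agreement statistic such as Cohen's kappa. A minimal self-contained sketch of that computation follows; the label sequences are toy examples invented for illustration (not the paper's data), though the category names follow the paper's scheme:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where the two annotators match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical per-span labels from an expert and a model.
expert = ["SYMPTOMS", "TRAUMA", "TREATMENT", "SYMPTOMS", "CONDITION"]
model  = ["SYMPTOMS", "TRAUMA", "TREATMENT", "CONDITION", "CONDITION"]
print(round(cohens_kappa(expert, model), 3))  # prints 0.737
```

A kappa near 1 indicates agreement well above chance; values around 0 indicate chance-level agreement, which is why kappa (rather than raw percent agreement) is the usual basis for claims like "on par with human inter-annotator agreement".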