With the rapid advancement of Multi-modal Large Language Models (MLLMs), MLLM-based Image Quality Assessment (IQA) methods have shown promising generalization. However, directly extending these methods to Point Cloud Quality Assessment (PCQA) remains challenging. On the one hand, existing PCQA datasets are limited in scale, which hinders stable and effective instruction tuning of MLLMs. On the other hand, owing to large-scale image-text pretraining, MLLMs tend to rely on texture-dominant reasoning and are insufficiently sensitive to the geometric structural degradations that are critical for PCQA. To address these gaps, we propose GT-PCQA, a novel MLLM-based no-reference PCQA framework built upon two key strategies. First, to enable stable and effective instruction tuning under scarce PCQA supervision, we propose a 2D-3D joint training strategy: it formulates PCQA as a relative quality comparison problem to unify large-scale IQA datasets with limited PCQA datasets, and incorporates a parameter-efficient Low-Rank Adaptation (LoRA) scheme to support instruction tuning. Second, we present a geometry-texture decoupling strategy, which integrates a dual-prompt mechanism with an alternating optimization scheme to mitigate the inherent texture-dominant bias of pre-trained MLLMs while enhancing sensitivity to geometric structural degradations. Extensive experiments demonstrate that GT-PCQA achieves competitive performance and exhibits strong generalization.
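To make the relative quality comparison formulation concrete, the following is a minimal sketch of a pairwise ranking objective. The specific loss and function names are assumptions for illustration; the abstract only states that PCQA is cast as a relative comparison so that IQA and PCQA samples can share one training signal.

```python
import numpy as np

def pairwise_rank_loss(s_a: float, s_b: float, label: int) -> float:
    """Illustrative pairwise comparison loss (RankNet-style assumption).

    label = 1 if sample a has higher subjective quality than sample b, else 0.
    P(a > b) is modeled with a sigmoid over the predicted score difference,
    so 2D (IQA) and 3D (PCQA) pairs can be trained under the same objective.
    """
    p = 1.0 / (1.0 + np.exp(-(s_a - s_b)))  # P(a preferred over b)
    eps = 1e-12                              # numerical guard for log
    return -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

# A correctly ordered pair incurs lower loss than the misordered one.
low = pairwise_rank_loss(2.0, 0.5, 1)
high = pairwise_rank_loss(0.5, 2.0, 1)
assert low < high
```

Because the loss depends only on score differences within a pair, datasets with incompatible absolute score scales (e.g., IQA MOS vs. PCQA MOS) can be mixed without score normalization across datasets.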
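The LoRA scheme mentioned above can be sketched as follows. This is a generic low-rank adaptation of a frozen linear layer, not the paper's implementation; the dimensions, rank, and scaling are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8.0  # assumed sizes; r << min(d_in, d_out)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero init

def lora_forward(x: np.ndarray) -> np.ndarray:
    # y = W x + (alpha / r) * B (A x): only A and B are updated during
    # instruction tuning, keeping the number of trainable parameters small.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer reproduces the frozen one,
# so tuning starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Only `A` and `B` receive gradients, so the trainable parameter count is `r * (d_in + d_out)` per adapted layer rather than `d_in * d_out`, which is what makes instruction tuning feasible under scarce PCQA supervision.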