Text-attributed graphs are widely used across domains, offering rich opportunities for zero-shot learning via graph-text alignment. However, existing methods struggle with tasks requiring fine-grained pattern recognition, particularly on heterophilic graphs. Through empirical and theoretical analysis, we identify an \textbf{over-abstraction problem}: current approaches operate at excessively large hyperbolic radii, compressing multi-scale structural information into uniform high-level abstractions. This abstraction-induced information loss obscures local patterns critical for accurate prediction. By analyzing embeddings in hyperbolic space, we demonstrate that effective graph learning requires \textbf{faithful preservation} of fine-grained structural details, which are better retained by representations positioned closer to the origin. To address this, we propose \textbf{H4G}, a framework that systematically reduces embedding radii using learnable block-diagonal scaling matrices and M\"obius matrix multiplication. This approach restores access to fine-grained patterns while preserving a global receptive field, with minimal computational overhead. Experiments show that H4G achieves state-of-the-art zero-shot performance, with \textbf{12.8\%} improvement on heterophilic graphs and \textbf{8.4\%} on homophilic graphs, confirming that radius reduction enables faithful multi-scale representation for zero-shot graph learning.
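The radius-reduction principle behind H4G can be illustrated with standard gyrovector-space operations on the Poincaré ball. The sketch below is not the paper's implementation (which uses learnable block-diagonal scaling matrices); it only demonstrates the underlying fact that Möbius scalar multiplication with a factor below one contracts a point toward the origin, shrinking its hyperbolic radius proportionally.

```python
import numpy as np

def hyp_radius(x):
    # Hyperbolic distance from the origin in the Poincaré ball
    # (curvature -1): d(0, x) = 2 * artanh(||x||).
    return 2.0 * np.arctanh(np.linalg.norm(x))

def mobius_scalar_mul(t, x):
    # Möbius scalar multiplication: t ⊗ x = tanh(t * artanh(||x||)) * x / ||x||.
    n = np.linalg.norm(x)
    if n == 0.0:
        return x
    return np.tanh(t * np.arctanh(n)) * (x / n)

# A point near the boundary (large radius, highly "abstract" in the
# over-abstraction reading) is pulled toward the origin by t < 1.
x = np.array([0.6, 0.6])          # ||x|| ≈ 0.849
y = mobius_scalar_mul(0.5, x)     # contraction factor t = 0.5

# Möbius scaling acts linearly on hyperbolic radius:
# d(0, t ⊗ x) = t * d(0, x).
print(hyp_radius(x), hyp_radius(y))
```

Because `artanh` and `tanh` cancel, the hyperbolic radius of `y` is exactly `0.5 * hyp_radius(x)`, while the Euclidean norm shrinks sublinearly; this is why radius control is naturally expressed through Möbius operations rather than plain Euclidean rescaling.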