Non-contrast chest CTs offer a rich opportunity for both conventional pulmonary and opportunistic extra-pulmonary screening. While Multi-Task Learning (MTL) can unify these diverse tasks, standard hard-parameter-sharing approaches are often suboptimal for modeling distinct pathologies. We propose HyperCT, a framework that dynamically adapts a Vision Transformer backbone via a hypernetwork. To ensure computational efficiency, we integrate Low-Rank Adaptation (LoRA), allowing the model to regress task-specific low-rank weight updates rather than full parameters. Validated on a large-scale dataset spanning radiological and cardiological tasks, HyperCT outperforms a range of strong baselines, offering a unified, parameter-efficient solution for holistic patient assessment. Our code is available at https://github.com/lfb-1/HyperCT.
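The core mechanism can be illustrated with a minimal sketch: a hypernetwork maps a task embedding to the flattened LoRA factors of a frozen backbone layer, so each task receives a rank-bounded weight update instead of a full parameter matrix. This is a hypothetical NumPy illustration under assumed dimensions, not the actual HyperCT implementation; all names (`W`, `H`, `lora_update`, `adapted_forward`) and sizes are invented for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, t_dim = 16, 4, 8  # hidden size, LoRA rank, task-embedding size (illustrative choices)

# Frozen backbone weight of a single linear layer, shared across all tasks.
W = rng.standard_normal((d, d)) * 0.02

# Hypernetwork: here a single linear map from the task embedding to the
# flattened LoRA factors A (r x d) and B (d x r) -- regressing only
# r*d + d*r values rather than a full d x d update.
H = rng.standard_normal((t_dim, r * d + d * r)) * 0.02

def lora_update(task_emb):
    """Regress task-specific low-rank factors from a task embedding."""
    flat = task_emb @ H
    A = flat[: r * d].reshape(r, d)
    B = flat[r * d :].reshape(d, r)
    return B @ A  # d x d update with rank at most r

def adapted_forward(x, task_emb, alpha=1.0):
    """Apply the frozen layer plus the task-conditioned low-rank update."""
    return x @ (W + alpha * lora_update(task_emb)).T

task_emb = rng.standard_normal(t_dim)
y = adapted_forward(rng.standard_normal((2, d)), task_emb)
print(y.shape)                                            # (2, 16)
print(np.linalg.matrix_rank(lora_update(task_emb)) <= r)  # True
```

Because only the hypernetwork's output parameterizes the update, switching tasks amounts to feeding a different embedding; the backbone weights stay fixed, which is what makes the scheme parameter-efficient.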