Modern graph learning systems often combine links with text, as in citation networks with abstracts or social graphs with user posts. In such systems, text is usually far easier to edit than graph structure, which creates a practical security risk: an attacker can hide a small malicious cue in training text and later use it to trigger incorrect predictions. This paper studies that risk in a realistic setting where the attacker edits only node text and leaves the graph unchanged. We propose \textbf{TAGBD}, a graph-aware backdoor attack that first selects training nodes that are easier to manipulate, then generates stealthy poison text with a shadow graph model, and finally injects the text by replacing the original content or appending a short phrase. Experiments on three benchmark datasets show that TAGBD achieves very high attack success rates, transfers across different graph models, and remains effective under common defenses. These findings demonstrate that inconspicuous poison text alone can serve as a reliable attack channel in text-attributed graphs, highlighting the need for defenses that inspect both node content and graph structure.