Large Language Models (LLMs) are increasingly adopted in the financial domain. Their exceptional capabilities to analyze textual data make them well-suited for inferring the sentiment of finance-related news. Such feedback can be leveraged by algorithmic trading systems (ATS) to guide buy/sell decisions. However, this practice bears the risk that a threat actor may craft "adversarial news" intended to mislead an LLM. In particular, the news headline may include "malicious" content that remains invisible to human readers but is still ingested by the LLM. Although prior work has studied textual adversarial examples, their system-wide impact on LLM-supported ATS has not yet been quantified in terms of monetary risk. To address this threat, we consider an adversary with no direct access to an ATS but able to alter stock-related news headlines on a single day. We evaluate two human-imperceptible manipulations in a financial context: Unicode homoglyph substitutions that misroute models during stock-name recognition, and hidden-text clauses that alter the sentiment of the news headline. We implement a realistic ATS in Backtrader that fuses an LSTM-based price forecast with LLM-derived sentiment (FinBERT, FinGPT, FinLLaMA, and six general-purpose LLMs), and quantify monetary impact using portfolio metrics. Experiments on real-world data show that a single-day attack within a 14-month trading period can reliably mislead LLMs and reduce annual returns by up to 17.7 percentage points. To assess real-world feasibility, we analyze popular scraping libraries and trading platforms and survey 27 FinTech practitioners, confirming our hypotheses. We notified trading platform owners of this security issue.
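To make the two manipulations concrete, the following is a minimal illustrative sketch (not taken from the paper's artifacts): a Unicode homoglyph substitution that replaces Latin capitals with visually identical Cyrillic ones, and a hidden-text insertion using zero-width characters. The homoglyph map and the example headline are hypothetical.

```python
# Cyrillic look-alikes for a few Latin capitals (illustrative subset only).
HOMOGLYPHS = {"A": "\u0410", "E": "\u0415", "O": "\u041e", "P": "\u0420"}

def substitute_homoglyphs(text: str) -> str:
    """Swap Latin letters for visually near-identical Cyrillic ones."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

def insert_hidden_clause(headline: str, clause: str) -> str:
    """Append a clause delimited by zero-width spaces: invisible when
    rendered, but present in the raw string an LLM ingests."""
    zwsp = "\u200b"  # zero-width space
    return headline + zwsp + clause + zwsp

original = "APPLE shares rise after earnings beat"
spoofed = substitute_homoglyphs(original)

# The two strings render identically and have the same character count,
# yet differ at the code-point level, so a tokenizer no longer matches
# the ticker name "APPLE".
print(original == spoofed)             # False
print(len(original) == len(spoofed))   # True

poisoned = insert_hidden_clause(original, "Analysts expect imminent collapse.")
print("collapse" in poisoned)          # True, yet invisible to human readers
```

The sketch captures only the mechanics; the paper's evaluation measures how such altered headlines propagate through sentiment models into portfolio-level losses.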