We revisit the problem of Pauli shadow tomography: given copies of an unknown $n$-qubit quantum state $\rho$, estimate $\text{tr}(P\rho)$ for some set of Pauli operators $P$ to within additive error $\epsilon$. This has been a popular testbed for exploring the advantage of protocols with quantum memory over those without: with enough memory to measure two copies at a time, one can use Bell sampling to estimate $|\text{tr}(P\rho)|$ for all $P$ using $O(n/\epsilon^4)$ copies, but with $k\le n$ qubits of memory, $\Omega(2^{(n-k)/3})$ copies are needed. These results leave open several natural questions. How does this picture change in the physically relevant setting where one only needs to estimate a certain subset of Paulis? What is the optimal dependence on $\epsilon$? What is the optimal tradeoff between quantum memory and sample complexity? We answer all of these questions. For any subset $A$ of Paulis and any family of measurement strategies, we completely characterize the optimal sample complexity, up to $\log |A|$ factors. We show that any protocol making $\text{poly}(n)$-copy measurements must make $\Omega(1/\epsilon^4)$ measurements. For any protocol that makes $\text{poly}(n)$-copy measurements and only has $k < n$ qubits of memory, we show that $\widetilde{\Theta}(\min\{2^n/\epsilon^2, 2^{n-k}/\epsilon^4\})$ copies are necessary and sufficient. The protocols we propose can also estimate the actual values $\text{tr}(P\rho)$, rather than just their absolute values as in prior work. Additionally, as a byproduct of our techniques, we establish tight bounds for the task of purity testing and show that it exhibits an intriguing phase transition not present in the memory-sample tradeoff for Pauli shadow tomography.
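To make the two-copy primitive behind the $O(n/\epsilon^4)$ upper bound concrete, here is a minimal NumPy sketch (an illustration, not code from the paper) of the single-qubit case of the Bell-sampling identity: measuring $\rho\otimes\rho$ in the Bell basis yields an outcome labeled by a Pauli, and a $\pm 1$ function of that label (whether it commutes with $Q$, with a sign flip when $Q = Y$) is an unbiased estimator of $\text{tr}(Q\rho)^2$. The script checks the identity exactly by summing over the outcome distribution rather than sampling.

```python
import numpy as np

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {"I": I2, "X": X, "Y": Y, "Z": Z}

# Bell basis on a qubit pair: |Phi_P> = (P ⊗ I)|Phi_00>,
# where |Phi_00> = (|00> + |11>)/sqrt(2).
phi00 = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell = {name: np.kron(P, I2) @ phi00 for name, P in paulis.items()}

def commute_sign(a, b):
    """+1 if the Paulis commute, -1 if they anticommute."""
    A, B = paulis[a], paulis[b]
    return 1 if np.allclose(A @ B, B @ A) else -1

def random_density_matrix(d, seed=0):
    """Random full-rank d x d density matrix (Wishart construction)."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

rho = random_density_matrix(2)
rho2 = np.kron(rho, rho)  # two copies, measured jointly

# Bell outcome probabilities Pr(P) = <Phi_P| rho ⊗ rho |Phi_P>.
probs = {name: (v.conj() @ rho2 @ v).real for name, v in bell.items()}

for q in ["X", "Y", "Z"]:
    # Estimator: ±1 according to whether Q commutes with the Bell
    # outcome label; the conjugate Ybar = -Y forces a sign flip for Q = Y.
    correction = -1 if q == "Y" else 1
    expectation = correction * sum(
        probs[out] * commute_sign(q, out) for out in probs
    )
    target = np.trace(paulis[q] @ rho).real ** 2  # tr(Q rho)^2
    assert abs(expectation - target) < 1e-9
```

For $n$ qubits one measures each corresponding pair of qubits in the Bell basis and multiplies the per-qubit signs; since the estimator is bounded in $[-1,1]$, averaging $O(\log|A|/\epsilon^4)$ repetitions estimates $\text{tr}(P\rho)^2$, and hence $|\text{tr}(P\rho)|$, to error $\epsilon$ for every $P \in A$.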