Seyed Mohamad Ali Tousi
Dr. G. N. DeSouza

Noisy quantum devices demand error-mitigation techniques that are accurate yet simple and efficient in terms of shot count and processing time. Many established approaches (e.g., extrapolation and quasi-probability cancellation) impose substantial execution or calibration overheads, while existing learning-based methods struggle to scale to large, deep circuits. In this work, we introduce QAGT-MLP: an attention-based graph transformer tailored for small- and large-scale quantum error mitigation (QEM).
QAGT-MLP encodes each quantum circuit as a graph whose nodes represent gate instances and whose edges capture qubit connectivity and causal adjacency. A dual-path attention module extracts features around the measured qubits at two scales: 1) graph-wide global structural context; and 2) fine-grained local lightcone context. These learned representations are concatenated with circuit-level descriptor features and the circuit's noisy expectation values, then passed to a lightweight MLP that predicts the noise-mitigated values.
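This dual-path design can be illustrated with a short, hypothetical PyTorch sketch. Everything below (layer sizes, the class name DualPathQAGT, the mean-pooled readout, and the use of a key-padding mask to restrict the local path to the lightcone) is an assumption for illustration rather than the paper's implementation; the edge structure described above would in practice enter as attention biases and is omitted here for brevity.

# A hypothetical sketch of the dual-path attention design; all names and
# sizes are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class DualPathQAGT(nn.Module):
    def __init__(self, num_gate_types, d_model=64, n_heads=4, num_descriptors=8):
        super().__init__()
        # One node embedding per gate instance in the circuit graph.
        self.embed = nn.Embedding(num_gate_types, d_model)
        self.global_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.local_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Head input: [global ctx | local ctx | descriptors | noisy expectation value].
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_model + num_descriptors + 1, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, gate_types, outside_lightcone, descriptors, noisy_value):
        # gate_types:        (B, N) integer gate-type ids, one per node
        # outside_lightcone: (B, N) bool mask, True for nodes outside the
        #                    measured qubit's lightcone (ignored by local path)
        # descriptors:       (B, num_descriptors) circuit-level features
        # noisy_value:       (B, 1) unmitigated expectation value
        h = self.embed(gate_types)
        g, _ = self.global_attn(h, h, h)                 # graph-wide context
        l, _ = self.local_attn(h, h, h, key_padding_mask=outside_lightcone)
        x = torch.cat([g.mean(dim=1), l.mean(dim=1), descriptors, noisy_value], dim=-1)
        return self.mlp(x)                               # predicted mitigated value

# Example with dummy data: 2 circuits, 10 gate nodes each.
model = DualPathQAGT(num_gate_types=16)
pred = model(torch.randint(0, 16, (2, 10)),
             torch.zeros(2, 10, dtype=torch.bool),
             torch.randn(2, 8),
             torch.randn(2, 1))                          # shape (2, 1)

Masking keys outside the lightcone is just one simple way to realize a local context path; the paper's actual graph transformer may construct it differently.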
On large-scale 100-qubit Trotterized 1D transverse-field Ising model (TFIM) circuits, the proposed QAGT-MLP outperformed state-of-the-art learning baselines in both mean error and error variability under matched shot budgets, demonstrating its applicability to real-world QEM scenarios. By using attention to fuse global circuit structure with local lightcone neighborhoods, QAGT-MLP achieves high mitigation quality without the noise-scaling or resource overheads of classical QEM pipelines, offering a scalable and practical path to QEM for current and future quantum workloads.
@misc{tousi2025qagtmlp,
  title={QAGT-MLP: An Attention-Based Graph Transformer for Small and Large-Scale Quantum Error Mitigation},
  author={Seyed Mohamad Ali Tousi and G. N. DeSouza},
  year={2025},
  eprint={2511.03119},
  archivePrefix={arXiv},
  primaryClass={cs.ET},
  url={https://arxiv.org/abs/2511.03119},
}