The problem of characterizing quantum channels arises in a number of contexts such as quantum process tomography and quantum error correction.
However, direct approaches to parameterizing and optimizing the Choi matrix representation of quantum channels face a curse of dimensionality: the number of parameters scales exponentially in the number of qubits. Recently, Torlai et al. [2020] proposed using locally purified density operators (LPDOs), a tensor network representation of Choi matrices, to overcome this unfavourable scaling in parameters. While the LPDO structure allows it to satisfy the 'complete positivity' (CP) constraint required of physically valid quantum channels, it makes no guarantees about the similarly required 'trace preservation' (TP) constraint. In practice, the TP constraint is violated, and the learned quantum channel may even be trace-increasing, which is non-physical. In this work, we present the problem of optimizing over TP LPDOs, discuss two approaches to characterizing the TP constraints on LPDOs, and outline the next steps for developing an optimization scheme.
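For reference, the CP and TP constraints the abstract refers to are conditions on the channel's Choi matrix: CP requires the Choi matrix to be positive semidefinite, and TP requires that tracing out the output system yields the identity on the input. The sketch below (not taken from the paper) checks both conditions with NumPy for a single-qubit depolarizing channel built from its Kraus operators; the channel choice, the helper name choi_from_kraus, and the index convention are illustrative assumptions, not the paper's LPDO construction.

```python
# Illustrative sketch (assumed example, not the paper's code): verify the
# CP and TP conditions on the Choi matrix of a single-qubit depolarizing channel.
import numpy as np

def choi_from_kraus(kraus_ops, d):
    """Choi matrix J = sum_{ij} |i><j| (x) E(|i><j|) built from Kraus operators."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E_ij = np.zeros((d, d), dtype=complex)
            E_ij[i, j] = 1.0
            out = sum(K @ E_ij @ K.conj().T for K in kraus_ops)
            J += np.kron(E_ij, out)
    return J

# Kraus operators of the depolarizing channel with error probability p.
p = 0.1
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
kraus = [np.sqrt(1 - 3 * p / 4) * I2] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]

J = choi_from_kraus(kraus, d=2)

# Complete positivity (CP): the Choi matrix must be positive semidefinite.
cp_ok = np.all(np.linalg.eigvalsh(J) >= -1e-10)

# Trace preservation (TP): tracing out the output system must give the identity.
J4 = J.reshape(2, 2, 2, 2)                      # indices: (in, out, in', out')
tp_ok = np.allclose(np.trace(J4, axis1=1, axis2=3), I2)

print(cp_ok, tp_ok)  # both True for a valid (CPTP) channel
```

In the LPDO setting described above, the first check holds by construction, while the second is the constraint whose enforcement this work addresses.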
Author Information
Siddarth Srinivasan (Department of Computer Science, University of Washington)
Sandesh Adhikary (University of Washington)
Jacob Miller (Mila, Université de Montréal)
Guillaume Rabusseau (Mila - Université de Montréal)
Byron Boots (University of Washington)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 : Towards a Trace-Preserving Tensor Network Representation of Quantum Channels
More from the Same Authors
- 2021 : ContracTN: A Tensor Network Library Designed for Machine Learning
  Jacob Miller · Guillaume Rabusseau
- 2021 : Probabilistic Graphical Models and Tensor Networks: A Hybrid Framework
  Jacob Miller · Geoffrey Roeder
- 2022 : Learning Semantics-Aware Locomotion Skills from Human Demonstrations
  Yuxiang Yang · Xiangyun Meng · Wenhao Yu · Tingnan Zhang · Jie Tan · Byron Boots
- 2023 Poster: Adversarial Model for Offline Reinforcement Learning
  Mohak Bhardwaj · Tengyang Xie · Byron Boots · Nan Jiang · Ching-An Cheng
- 2023 Poster: Temporal Graph Benchmark for Machine Learning on Temporal Graphs
  Shenyang Huang · Farimah Poursafaei · Jacob Danovitch · Matthias Fey · Weihua Hu · Emanuele Rossi · Jure Leskovec · Michael Bronstein · Guillaume Rabusseau · Reihaneh Rabbany
- 2022 Poster: High-Order Pooling for Graph Neural Networks with Tensor Decomposition
  Chenqing Hua · Guillaume Rabusseau · Jian Tang
- 2021 : Discussion Panel
  Xiao-Yang Liu · Qibin Zhao · Chao Li · Guillaume Rabusseau
- 2021 : ContracTN: A Tensor Network Library Designed for Machine Learning
  Jacob Miller · Guillaume Rabusseau
- 2020 : Poster 7: Paper 13: Quantum Tensor Networks, Stochastic Processes, and Weighted Automata
  Sandesh Adhikary
- 2020 : Q&A: Byron Boots
  Byron Boots
- 2020 : Invited Talk: Byron Boots
  Byron Boots
- 2020 Poster: Intra Order-preserving Functions for Calibration of Multi-Class Neural Networks
  Amir Rahimi · Amirreza Shaban · Ching-An Cheng · Richard I Hartley · Byron Boots
- 2019 : Continuous Online Learning and New Insights to Online Imitation Learning
  Jonathan Lee · Ching-An Cheng · Ken Goldberg · Byron Boots
- 2013 Workshop: Workshop on Spectral Learning
  Byron Boots · Daniel Hsu · Borja Balle
- 2010 Poster: Predictive State Temporal Difference Learning
  Byron Boots · Geoffrey Gordon
- 2007 Oral: A Constraint Generation Approach to Learning Stable Linear Dynamical Systems
  Sajid M Siddiqi · Byron Boots · Geoffrey Gordon
- 2007 Poster: A Constraint Generation Approach to Learning Stable Linear Dynamical Systems
  Sajid M Siddiqi · Byron Boots · Geoffrey Gordon