QT5104 Topics in Quantum Information Theory

Week 6.2

This might be a recap: The goal now is to map from $$\rho_{XB}$$ to $$\rho_{ZB} \equiv \rho_{f(X)B}$$, where $$B$$ is some other (side-information) subsystem, so as to extract randomness from $$X$$ that is close to uniform and independent of $$B$$.

In order to make the process independent of $$\rho_{XB}$$, we use random functions: given a seed $$S$$ with some pmf $$p_S$$ (usually taken to be uniform), we apply the function $$f_S$$ to $$X$$.

Definition of a two-universal family of functions: a family of random functions $$\{f_S\}_S$$ from $$X$$ to $$Z$$, together with a seed distribution $$p_S$$, such that the collision probability is small: $$Pr_S[f_S(x) = f_S(x')] \le{} d_Z^{-1} \quad{}\forall{} x \ne x'.$$
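As a concrete illustration (a minimal sketch, not from the lecture): the family of uniformly random binary matrices is two-universal, since for $$x \ne x'$$ the probability that a uniformly random $$\ell\times{}n$$ binary matrix $$S$$ maps $$x - x'$$ to zero is exactly $$2^{-\ell} = d_Z^{-1}$$.

```python
import numpy as np

rng = np.random.default_rng(0)

n, ell = 8, 3           # n input bits, ell output bits, so d_Z = 2**ell
num_seeds = 20000       # Monte Carlo samples of the seed S

def f(S, x_bits):
    """f_S(x) = S x mod 2, where the seed S is an ell-by-n binary matrix."""
    return tuple(S @ x_bits % 2)

x = rng.integers(0, 2, n)
x_prime = x.copy()
x_prime[0] ^= 1         # any x' != x

collisions = 0
for _ in range(num_seeds):
    S = rng.integers(0, 2, (ell, n))   # seed: uniformly random binary matrix
    collisions += f(S, x) == f(S, x_prime)

print("empirical Pr_S[f_S(x) = f_S(x')]:", collisions / num_seeds)
print("two-universal bound 1/d_Z:       ", 2.0 ** -ell)
```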

Using the Hölder inequality (with exponents $$\tfrac{1}{4}+\tfrac{1}{2}+\tfrac{1}{4}=1$$), we have $$||ABC||_1 \le{} ||A||_4 ||B||_2 ||C||_4$$. Together with $$||I_Z \otimes{} \rho_B^{1/4}||_4^2 = \sqrt{tr(I_Z\otimes{}\rho_B)} = \sqrt{d_Z}$$, $$||M||_2 = \sqrt{tr(M^\dagger{}M)}$$ and cyclicity of the trace, this allows us to bound the trace distance (inserting $$I = \rho_B^{1/4}\rho_B^{-1/4}$$ on both sides):

\begin{align*} 2\Delta{}(\rho_{f_S(X)B}, \Pi_Z\otimes{} \rho_B) &= || (I_Z\otimes{}\rho_B^{1/4}\rho_B^{-1/4}) (\rho_{f_S(X)B} - \Pi_Z \otimes{} \rho_B) (I_Z\otimes{}\rho_B^{-1/4}\rho_B^{1/4})||_1 \\ &\le{} ||I_Z \otimes{} \rho_B^{1/4} ||_4 \, ||\rho_B^{-1/4} (\rho_{f_S(X)B} - \Pi_Z \otimes{} \rho_B) \rho_B^{-1/4} ||_2 \, ||I_Z \otimes{} \rho_B^{1/4} ||_4 \\ &= \sqrt{d_Z\, tr\big(\rho_B^{-1/2} (\rho_{f_S(X)B} - \Pi_Z \otimes{} \rho_B)\, \rho_B^{-1/2} (\rho_{f_S(X)B} - \Pi_Z \otimes{} \rho_B)\big)} \\ &= \sqrt{d_Z\, tr\big(\rho_B^{-1/2} \rho_{f_S(X)B} \rho_B^{-1/2} \rho_{f_S(X)B}\big) - 1} \end{align*}
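The $$-1$$ in the last line comes from expanding the square (a quick check, assuming $$\Pi_Z = I_Z/d_Z$$ is the maximally mixed state and $$\rho_B$$ has full support); writing $$D = \rho_{f_S(X)B} - \Pi_Z\otimes{}\rho_B$$,

\begin{align*} tr\big(\rho_B^{-1/2} D \rho_B^{-1/2} D\big) &= tr\big(\rho_B^{-1/2} \rho_{f_S(X)B} \rho_B^{-1/2} \rho_{f_S(X)B}\big) - 2\, tr\big(\rho_{f_S(X)B}(\Pi_Z\otimes{} I_B)\big) + tr(\Pi_Z^2)\, tr(\rho_B) \\ &= tr\big(\rho_B^{-1/2} \rho_{f_S(X)B} \rho_B^{-1/2} \rho_{f_S(X)B}\big) - \frac{2}{d_Z} + \frac{1}{d_Z}, \end{align*}

and multiplying by $$d_Z$$ gives exactly the $$-1$$.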

Taking the expectation over the seed and expanding the cq states ($$\rho_{XB} = \sum_x p_x |x\rangle{}\langle{}x|\otimes{}\rho_B^x$$ and $$\rho_{f_S(X)B} = \sum_z |z\rangle{}\langle{}z|\otimes{}\sum_x p_x \mathbf{1}\{f_S(x)=z\}\rho_B^x$$), then using two-universality:

\begin{align*} &E_S[tr(\rho_B^{-1/2} \rho_{f_S(X)B} \rho_B^{-1/2} \rho_{f_S(X)B} ) ] \\ &= E_S\Big[\sum_z \sum_{x,x'} p_x p_{x'} \mathbf{1}\{ f_S(x) = z \}\mathbf{1}\{ f_S(x') = z \}\, tr(\rho_B^{-1/2}\rho_B^x \rho_B^{-1/2}\rho_B^{x'})\Big] \\ &= E_S\Big[\sum_{x,x'} p_x p_{x'} \mathbf{1}\{ f_S(x) = f_S(x') \}\, tr(\rho_B^{-1/2}\rho_B^x \rho_B^{-1/2}\rho_B^{x'})\Big] \\ &= \sum_{x} p_x^2\, tr(\rho_B^{-1/2}\rho_B^x \rho_B^{-1/2}\rho_B^{x}) + \sum_{x\ne{}x'} p_x p_{x'}\, E_S[\mathbf{1}\{f_S(x) = f_S(x')\}]\, tr(\rho_B^{-1/2}\rho_B^x \rho_B^{-1/2}\rho_B^{x'}) \\ &\le{} \sum_{x} p_x^2\, tr(\rho_B^{-1/2}\rho_B^x \rho_B^{-1/2}\rho_B^{x}) + \frac{1}{d_Z} \sum_{x\ne{}x'} p_x p_{x'}\, tr(\rho_B^{-1/2}\rho_B^x \rho_B^{-1/2}\rho_B^{x'}) \\ &\le{} tr(\rho_B^{-1/2}\rho_{XB}\rho_B^{-1/2}\rho_{XB}) + \frac{1}{d_Z} \\ &\le{} 2^{-H_\text{min}(X|B)_\rho} + \frac{1}{d_Z} \end{align*}

(In the second-to-last step the diagonal sum is exactly $$tr(\rho_B^{-1/2}\rho_{XB}\rho_B^{-1/2}\rho_{XB})$$ since $$\rho_{XB}$$ is classical on $$X$$, and the $$x\ne{}x'$$ sum is upper-bounded by adding back the nonnegative $$x=x'$$ terms, giving $$\frac{1}{d_Z}\,tr(\rho_B^{-1/2}\rho_B\rho_B^{-1/2}\rho_B) = \frac{1}{d_Z}$$.)
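The last inequality can be unpacked as follows (a sketch, taking the min-entropy relative to $$\rho_B$$, i.e. $$\rho_{XB} \le{} 2^{-H_\text{min}(X|B)_\rho}\, I_X\otimes{}\rho_B$$, and assuming $$\rho_B$$ has full support). Writing $$A = \rho_B^{-1/4}\rho_{XB}\rho_B^{-1/4}$$, we have $$A \le{} 2^{-H_\text{min}(X|B)_\rho}\,(I_X\otimes{}\rho_B^{1/2})$$, so

\begin{align*} tr(\rho_B^{-1/2}\rho_{XB}\rho_B^{-1/2}\rho_{XB}) = tr(A^2) &\le{} 2^{-H_\text{min}(X|B)_\rho}\, tr\big(A\,(I_X\otimes{}\rho_B^{1/2})\big) \\ &= 2^{-H_\text{min}(X|B)_\rho}\, tr(\rho_{XB}) = 2^{-H_\text{min}(X|B)_\rho} \end{align*}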

By Jensen's inequality ($$\sqrt{\cdot}$$ is concave, so $$E_S[\sqrt{\cdot}] \le{} \sqrt{E_S[\cdot]}$$), taking the expectation over the seed of the trace-distance bound above gives

\begin{align*} E_S[2\Delta{}(\rho_{f_S(X)B}, \Pi_Z\otimes{} \rho_B)] &\le{} \sqrt{d_Z\, E_S[tr(\rho_B^{-1/2} \rho_{f_S(X)B} \rho_B^{-1/2} \rho_{f_S(X)B} )] - 1} \\ &\le{} \sqrt{d_Z\, 2^{-H_\text{min}(X|B)_\rho}} \end{align*}

Interpretation: we want $$\frac{1}{2}\sqrt{d_Z 2^{-H_\text{min}(X|B)}} \le{} \epsilon{}$$, i.e. for $$d_Z = 2^\ell$$ the number of extracted bits satisfies $$\ell \le{} H_\text{min}(X|B) - 2\log\frac{1}{\epsilon} + 2$$ (the leftover hash lemma).
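A tiny helper to make the trade-off concrete (the numbers below are hypothetical, just for illustration):

```python
import math

def extractable_bits(h_min, eps):
    """Largest ell with (1/2) * sqrt(2**ell * 2**(-h_min)) <= eps."""
    return math.floor(h_min - 2 * math.log2(1 / eps) + 2)

# Hypothetical numbers: 1000 bits of (smooth) min-entropy, eps = 1e-9.
print(extractable_bits(1000, 1e-9))   # 942
```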

We use the smooth min-entropy $$H_\text{min}^\epsilon{}(A|B)_\rho = \max_{\tilde{\rho} \approx{}_\epsilon{} \rho} H_\text{min} (A|B)_{\tilde{\rho}}$$, where the maximization is over states $$\tilde{\rho}$$ that are $$\epsilon$$-close to $$\rho$$.

Replacing $$\rho$$ by the optimal smoothed state $$\tilde{\rho}$$ and using the triangle inequality to account for the smoothing error (this is where the $$2\epsilon$$ comes from), the bound becomes

\begin{align*} E_S[2\Delta{}(\rho_{f_S(X)B}, \Pi_Z\otimes{} \rho_B)] &\le{} 2\epsilon{} + E_S[2\Delta{}(\tilde{\rho}_{f_S(X)B}, \Pi_Z\otimes{} \tilde{\rho}_B)] \\ &\le{} 2\epsilon{} + \sqrt{d_Z\, 2^{-H^\epsilon{}_\text{min}(X|B)_\rho}} \end{align*}

Week 6.1

Missed!


Week 5.2

Missed!

Week 5.1

A nice reference: Watrous's textbook, The Theory of Quantum Information.

Surprisal

We define the surprisal $$s(p_x)$$ of an outcome $$x$$, given a random variable $$X \in \mathcal{X}$$ with pmf $$p_x = P_X(x)$$, by asking for a few desirable properties (decreasing in $$p_x$$, zero for certain events, additive over independent events), which lead to $$s(p_x) = -\log p_x$$.

Entropy

The quantum (von Neumann) entropy $$H(\rho) = -tr(\rho\log\rho)$$ generalizes the Shannon entropy of classical states, and is unitarily invariant: $$H(\rho) = H(U\rho{}U^\dagger{})$$.
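A minimal numerical sketch (not from the lecture) of this definition and of the unitary invariance:

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                    # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho).real                           # a random 2-qubit state

Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
print(von_neumann_entropy(rho))                     # some value in [0, 2]
print(von_neumann_entropy(Q @ rho @ Q.conj().T))    # same value: H(rho) = H(U rho U^dagger)
```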

There are von Neumann measurements, i.e. (rank-1) projective measurements.

Interesting point: in classical systems, entropy increases with the number of systems, e.g. $$H(XY) \ge{} H(X)$$, whereas in the quantum case a purification makes the larger (joint) system pure, with entropy 0, even though the smaller subsystems can have non-zero entropy.

Given a pure state $$|\rho\rangle{}_{AB}$$ with marginals $$\rho_A,\rho_B$$, we have $$H(A) = H(B) \equiv H(\rho_B)$$ and $$H(AB) = 0$$.

Conditional entropy

We define the conditional entropy $$H(A|B) := H(AB) - H(B)$$.
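In the quantum case $$H(A|B)$$ can be negative; a quick numerical check (a sketch, using the maximally entangled two-qubit state):

```python
import numpy as np

def H(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# |Psi_0> = (|00> + |11>)/sqrt(2): H(AB) = 0, H(B) = 1, so H(A|B) = -1.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(psi, psi)
rho_B = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=0, axis2=2)   # trace out A
print(H(rho_AB) - H(rho_B))   # ≈ -1.0
```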

The data-processing inequality (DPI) is the statement $$H(A'|B')_{(\epsilon\otimes{}\mathcal{F})(\rho)} \ge{} H(A|B)_\rho$$, for a unital quantum channel $$\epsilon_{A\rightarrow{}A'}$$ and an arbitrary quantum channel $$\mathcal{F}_{B\rightarrow{}B'}$$.

\begin{align*} H(A|BC) &\le{} H(A|B) \\ H(ABC) - H(BC) &\le{} H(AB) - H(B) \\ H(AC|B) &\le{} H(A|B) + H(C|B) \end{align*}

If $$\rho_{ABC}$$ is pure, we get the duality $$H(A|B) = -H(A|C)$$. The proof uses the fact that complementary parts of a pure state have equal entropy, so $$H(AB) = H(C)$$ and $$H(B) = H(AC)$$: $$H(A|B) = H(AB) - H(B) = H(C) - H(AC) = -H(A|C)$$.

We also define mutual information $$I(A:B) = H(A) - H(A|B)$$, which is a measure of correlation (applies to both classical and quantum).

Relative entropy

For two states $$\rho$$ and $$\sigma$$, we define the relative entropy $$D(\rho||\sigma) = tr(\rho(\log \rho - \log \sigma))$$ (finite when $$\text{supp}(\rho) \subseteq{} \text{supp}(\sigma)$$, and $$+\infty$$ otherwise).

Thoughts


Week 4.2

CHSH game

CHSH quantum strategy: $$\rho_{A'B'} = |\Psi_0\rangle{}\langle{}\Psi_0|$$, $$M^0 = \{|0\rangle{}\langle{}0|, |1\rangle{}\langle{}1|\}, M^1 = \{|+\rangle{}\langle{}+|,|-\rangle{}\langle{}-|\}$$, $$N^0 = \{|\pi/8\rangle{}\langle{}\pi/8|, |5\pi/8\rangle{}\langle{}5\pi/8|\}, N^1 = \{|-\pi/8\rangle{}\langle{}-\pi/8|,|3\pi/8\rangle{}\langle{}3\pi/8|\}$$, where $$|\theta\rangle{} = \cos\theta{}|0\rangle{} + \sin\theta{}|1\rangle{}$$. This strategy achieves $$p_\text{win}^q = \cos^2(\pi/8) = \frac{1}{2} + \frac{1}{2\sqrt{2}} \approx{} 0.854$$.
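A numerical check of this strategy (a sketch, assuming $$|\Psi_0\rangle{} = (|00\rangle{}+|11\rangle{})/\sqrt{2}$$, uniform inputs, and the CHSH predicate $$a\oplus{}b = x\wedge{}y$$):

```python
import numpy as np

def proj(theta):
    """Projector onto |theta> = cos(theta)|0> + sin(theta)|1>."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)                      # assumed |Psi_0>
rho = np.outer(psi, psi)

M = {0: [proj(0), proj(np.pi / 2)],                   # M^0: computational basis
     1: [proj(np.pi / 4), proj(3 * np.pi / 4)]}       # M^1: +/- basis
N = {0: [proj(np.pi / 8), proj(5 * np.pi / 8)],
     1: [proj(-np.pi / 8), proj(3 * np.pi / 8)]}

p_win = sum(
    0.25 * np.trace(np.kron(M[x][a], N[y][b]) @ rho)
    for x in (0, 1) for y in (0, 1) for a in (0, 1) for b in (0, 1)
    if (a ^ b) == (x & y)                             # CHSH winning condition
)
print(p_win, np.cos(np.pi / 8) ** 2)                  # both ≈ 0.8536
```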

We can quickly show this is the supremum; using Naimark dilation we may restrict to projective measurements: given a POVM $$\{M_x\}_x$$ on $$A$$, there exists a projective POVM (PVM) $$\{P_x\}_x$$ and an isometry $$U:A\rightarrow{}AX$$ such that $$\langle{}M_x,\rho_A\rangle = \langle{}P_x,U\rho_A{}U^\dagger{}\rangle{}$$.

The upper bound can be derived and is given by Tsirelson's bound, using $$P(a,b|x,y) = \langle{}\phi|P_{A'}^{x,a}\otimes{}Q_{B'}^{y,b}|\phi\rangle{}$$:

The winning probability then becomes (writing $$C^x = P^{x,0} - P^{x,1}$$ and $$D^y = Q^{y,0} - Q^{y,1}$$ for the $$\pm{}1$$-valued observables):

\begin{align*} p_\text{win}^q &= \frac{1}{2} + \frac{1}{8}\langle{}\psi|C^0\otimes{}D^0 + C^0\otimes{}D^1 + C^1 \otimes{}D^0 - C^1 \otimes{}D^1|\psi\rangle \\ &\equiv{} \frac{1}{2} + \frac{1}{8}\langle{}\psi|B|\psi\rangle \\ &\le{} \frac{1}{2} + \frac{1}{8}||B||_\infty{} \\ &= \frac{1}{2} + \frac{1}{8}\sqrt{||B||^2_\infty{}} \end{align*}
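To finish, one can square $$B$$ (a standard completion, not verbatim from the lecture), using $$(C^x)^2 = (D^y)^2 = I$$:

\begin{align*} B^2 &= 4\, I\otimes{}I - [C^0, C^1]\otimes{}[D^0, D^1] \\ ||B||_\infty^2 = ||B^2||_\infty &\le{} 4 + ||[C^0,C^1]||_\infty\, ||[D^0,D^1]||_\infty \le{} 4 + 2\cdot{}2 = 8 \end{align*}

so $$p_\text{win}^q \le{} \frac{1}{2} + \frac{\sqrt{8}}{8} = \frac{1}{2} + \frac{1}{2\sqrt{2}} \approx{} 0.854$$, i.e. Tsirelson's bound, matching the strategy above.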

Mermin magic square game

The game has $$X=Y=\{1,2,3\}$$, $$A=B = \{-1,1\}^{3}$$ and $$V(a,b,x,y) = \mathbf{1}\{a_1a_2a_3=1, b_1b_2b_3=-1, a_y = b_x\}$$. This is better visualized as a 3x3 square, where Alice fills in row $$x$$ and Bob fills in column $$y$$ such that the entry at their intersection agrees.

The optimal classical strategy is to fill the square:

x \ y |  1 |  2 |  3
  1   |  1 |  1 |  1
  2   | -1 |  1 | -1
  3   |  1 | -1 |  ?

where the last ? cannot be agreed on between Alice and Bob. This thus gives $$p_\text{win}^{\text{cl.}} = \frac{8}{9}$$.

For the quantum game, we rely on:

x \ y | 1 | 2 | 3
  1   | $$I\otimes{}Z$$  | $$Z\otimes{}I$$  | $$Z\otimes{}Z$$
  2   | $$X\otimes{}I$$  | $$I\otimes{}X$$  | $$X\otimes{}X$$
  3   | $$-X\otimes{}Z$$ | $$-Z\otimes{}X$$ | $$Y\otimes{}Y$$

The observables within each row (and each column) commute, so they have a joint eigenbasis; for the first row this is the computational basis, i.e. $$|\theta_0\rangle{} = |00\rangle{}$$, etc. To win the game, Alice and Bob share two maximally-entangled states, i.e. the joint state is $$|\psi\rangle{} = |\Psi_0\rangle{}\otimes{}|\Psi_0\rangle{}$$ (each party holding half of each pair). Let $$O_{xy}$$ represent the observable in cell $$(x,y)$$; the winning probability for that cell is thus,

\begin{align*} p_\text{win}^q &= \frac{1}{2}(1 + \langle{}O_{xy}\otimes{}O_{xy},|\psi\rangle{}\langle{}\psi|\rangle{}) \\ &= \frac{1}{2}(1 + \langle{}O_{xy}O_{xy}^T\otimes{}I,|\psi\rangle{}\langle{}\psi|\rangle{}) \\ &= \frac{1}{2}(1 + \frac{1}{4}tr(O_{xy}O_{xy}^T)) \\ &= 1 \end{align*}
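A quick numerical check of the table above (a sketch): each row multiplies to $$+I\otimes{}I$$, each column to $$-I\otimes{}I$$, and every cell satisfies $$tr(O_{xy}O_{xy}^T) = 4$$, which is what makes the last line evaluate to 1.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = 1j * X @ Z                                     # Y = iXZ

def T(A, B):
    return np.kron(A, B)                           # two-qubit observable A ⊗ B

square = [[ T(I2, Z),  T(Z, I2), T(Z, Z)],
          [ T(X, I2),  T(I2, X), T(X, X)],
          [-T(X, Z),  -T(Z, X),  T(Y, Y)]]

for i in range(3):
    row = square[i][0] @ square[i][1] @ square[i][2]
    col = square[0][i] @ square[1][i] @ square[2][i]
    print(np.allclose(row, np.eye(4)), np.allclose(col, -np.eye(4)))   # True True

print([np.trace(O @ O.T).real for r in square for O in r])             # all equal to 4
```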

Experimental realization of this has not been performed.

Week 4.1

Quantum formalism recap

Quantum correlations and games

A two-party game is a tuple $$G=(X,Y,A,B,P_{XY},V)$$, where $$X,Y$$ (inputs) and $$A,B$$ (outputs) are discrete, finite alphabets, $$P_{XY}$$ is the pmf over the input pairs, and $$V:A\times{}B\times{}X\times{}Y \rightarrow \{0,1\}$$ is the winning predicate.

A classical strategy $$S$$ for Alice and Bob is given by shared randomness $$Q_{X'Y'}$$ and (deterministic) response functions $$f_A:X\times{}X' \rightarrow A$$ and $$f_B:Y\times{}Y'\rightarrow{} B$$.

\begin{align*} P_{\text{win}}^{\text{cl.}}(G,S) &= Pr[V(f_A(X,X'), f_B(Y,Y'), X, Y) = 1] \\ &= \sum_{x,x',y,y'} P_{XY}(x,y)Q_{X'Y'}(x',y') \mathbf{1}\{V(f_A(x,x'), f_B(y,y'), x, y)=1\} \\ &= \sum_{x,y,a,b} P_{XY}(x,y)S(a,b|x,y) \mathbf{1}\{V(a,b,x,y)=1\} \end{align*}
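As a sanity check of this formula (a sketch, assuming the CHSH game of Week 4.2: uniform inputs and winning predicate $$a\oplus{}b = x\wedge{}y$$), deterministic strategies suffice since shared randomness only gives convex combinations of them:

```python
from itertools import product

best = 0.0
for fA in product((0, 1), repeat=2):          # fA[x]: Alice's answer on input x
    for fB in product((0, 1), repeat=2):      # fB[y]: Bob's answer on input y
        p = sum((fA[x] ^ fB[y]) == (x & y) for x in (0, 1) for y in (0, 1)) / 4
        best = max(best, p)
print(best)   # 0.75, the classical value of CHSH
```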

A quantum strategy $$S_q$$ is given by a state $$\rho_{A',B'}$$ and a collection of POVMs $$\{M^x_a\}_a$$ on A' and $$\{N_b^y\}_b$$ on B' for all x,y.

We note that $$p_\text{win}^q(G) \ge{} p_\text{win}^{\text{cl.}}(G)$$, since any classical strategy can be reproduced by a quantum one (embed the shared randomness in a separable state and measure it).

Current thoughts


Week 3.1

Composite systems

A state $$\rho_{AB}$$ is separable if it can be written as $$\rho_{AB} = \sum_i p_i \rho_A^i \otimes{} \rho_B^i$$, otherwise we say $$\rho_{AB}$$ is entangled.

Channels

We look at the isomorphism from $$L(V,W)$$ to $$V\otimes{}W$$, or in more explicit terms, from $$L(L(V),L(W))$$ to $$L(V)\otimes{}L(W) \sim L(V\otimes{}W)$$. This is the Choi–Jamiołkowski isomorphism.

We can vectorize channels: $$\text{vec}(\epsilon_{V\rightarrow{}W}) = (\epsilon_{V\rightarrow{}W} \otimes{} \mathcal{I}) (|\Omega{}\rangle{}\langle{}\Omega{}|) = \sum_{i,j} \epsilon(|i\rangle{}\langle{}j|) \otimes{} |i\rangle{}\langle{}j|$$, where $$|\Omega\rangle{} = \sum_i |i\rangle{}\otimes{}|i\rangle{}$$ is the (unnormalized) maximally entangled vector. This is called the Choi state of the channel.
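A minimal sketch of building a Choi state from this formula (the qubit depolarizing channel is my assumed example, not necessarily the one from lecture); its eigenvalues being nonnegative is equivalent to complete positivity:

```python
import numpy as np

def depolarize(rho, p=0.5):
    """Qubit depolarizing channel: (1-p) rho + p tr(rho) I/2."""
    return (1 - p) * rho + p * np.trace(rho) * np.eye(2) / 2

d = 2
choi = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E_ij = np.zeros((d, d))
        E_ij[i, j] = 1
        choi += np.kron(depolarize(E_ij), E_ij)    # sum_ij E(|i><j|) ⊗ |i><j|

print(np.allclose(choi, choi.conj().T))            # Hermitian
print(np.trace(choi).real)                         # = d, since the channel is trace preserving
print(np.linalg.eigvalsh(choi).round(3))           # all >= 0  <=>  completely positive
```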

Examples of other channels:

Norms and metrics on states

Canonical norm: $$||X|| = \sqrt{\langle{}X,X\rangle{}} = \sqrt{\sum_j \langle{}Xv_j,Xv_j\rangle{}} = \sqrt{\sum_j s_j^2} \equiv ||X||_2$$. This is called the 2-norm, or Frobenius norm, or Schatten-2 norm.

The p-norm has the same type of invariance properties and is defined $$||X||_p = (\sum_i s_i^p)^{1/p}$$.
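A small sketch computing Schatten p-norms from the singular values (and checking the $$p=2$$ case against the Frobenius norm):

```python
import numpy as np

def schatten_norm(X, p):
    """||X||_p = (sum_i s_i^p)^{1/p}, from the singular values of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s ** p) ** (1 / p))

X = np.array([[1.0, 2.0], [0.0, 1.0]])
print(schatten_norm(X, 1))                         # trace norm
print(schatten_norm(X, 2), np.linalg.norm(X))      # 2-norm matches the Frobenius norm
```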

Recall that a metric has the properties $$\Delta{}(\rho,\sigma) \ge{} 0$$ with equality iff $$\rho=\sigma$$, satisfies the triangle inequality, and is symmetric. Some metrics that we can define:

Some relations, e.g. Uhlmann's theorem: the fidelity is the maximal overlap between purifications. Take $$|\rho\rangle{} = (\sqrt{\rho}\otimes{}I)|\Omega\rangle{}$$ and $$|\sigma_U\rangle{} = (\sqrt{\sigma}U^\dagger{}\otimes{}I)|\Omega\rangle{}$$, where the unitary $$U$$ captures the purification freedom:

\begin{align*} \max_U |\langle{}\rho|\sigma_U\rangle{}| &= \max_U |\langle{}\sqrt{\rho},\sqrt{\sigma}U^\dagger\rangle{}| \\ &= \max_U | tr(\sqrt{\rho}\sqrt{\sigma} U^\dagger{}) |\\ &= \max_U |tr(|\sqrt{\rho}\sqrt{\sigma}|\, U^\dagger{} V)| \\ &\ge{} tr|\sqrt{\rho}\sqrt{\sigma}| \\ &= \sqrt{F(\rho,\sigma)} \end{align*}

using the polar decomposition $$\sqrt{\rho}\sqrt{\sigma} = V|\sqrt{\rho}\sqrt{\sigma}|$$; choosing $$U = V$$ gives the inequality (which is in fact an equality — Uhlmann's theorem).


Week 2.2

Tensor spaces

Given Hilbert spaces $$V, W$$ over $$F$$ with inner products and ONBs $$\{v_i\}_i, \{w_j\}_j$$, we can define the tensor space $$V\otimes{} W = \text{span}\{v_i\otimes{} w_j\}_{i,j}$$.

Composite systems

Notation:

The marginal state is given by $$\rho_A = \sum_i (I_A \otimes{} \langle{}i|_B) \rho_{AB} (I_A \otimes{} |i\rangle{}_B) \equiv{} (I_A\otimes{}\text{tr}_B)(\rho_{AB}) \equiv{} \text{tr}_B(\rho_{AB})$$, given any ONB of $$B$$. $$\text{tr}_B$$ is called the partial trace, and can be written as $$tr_B(X) = \sum_i \langle{}i|X|i\rangle{}$$ (with $$\langle{}i|$$ acting on the $$B$$ factor only).

Given a state $$\rho_A = \sum_i \lambda{}_i |\psi_i\rangle{}\langle{}\psi_i|$$ (spectral decomposition), a purification is $$|\rho\rangle{}_{AA'} = \sum_i \sqrt{\lambda_i}\, |\psi_i\rangle{}\otimes{}|i\rangle{}$$, and indeed tracing out $$A'$$ recovers $$\rho_A$$:

\begin{align*} tr_{A'} (\rho_{AA'}) &\equiv tr_{A'}(|\rho\rangle{}\langle{}\rho|_{AA'}) \\ &= tr_{A'} (\sum_{i,j} \sqrt{\lambda{}_i\lambda_j} |\psi_i\rangle{}\langle{}\psi_j| \otimes | i\rangle{}\langle{}j|) \\ &= \sum_{i,j} \sqrt{\lambda_i\lambda_j} \sum_k |\psi_i\rangle{}\langle{}\psi_j| \cdot{} \langle{}k|i\rangle{}\langle{}j|k\rangle{} \\ &= \sum_k \lambda_k |\psi_k\rangle{}\langle{}\psi_k| \\ &= \rho_A \end{align*}
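A sketch of the partial trace and of the purification computation above, for a qubit example with eigenvalues $$(0.7, 0.3)$$ (my choice of numbers):

```python
import numpy as np

def partial_trace_B(rho_AB, dA, dB):
    """tr_B(rho_AB): reshape to (dA, dB, dA, dB) and trace over the B indices."""
    return np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

lam = np.array([0.7, 0.3])
basis = np.eye(2)
# |rho>_{AA'} = sum_i sqrt(lam_i) |psi_i>_A |i>_{A'}, taking |psi_i> = |i> for simplicity
purification = sum(np.sqrt(lam[i]) * np.kron(basis[i], basis[i]) for i in range(2))
rho_AAp = np.outer(purification, purification)

print(partial_trace_B(rho_AAp, 2, 2))   # diag(0.7, 0.3) = rho_A
```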

Channels

Channels are linear maps that map between states: $$\epsilon_{A\rightarrow{}B} \in L(\text{Herm}(A),\text{Herm}(B))$$. Some properties we would like them to have: trace preservation and complete positivity (together: CPTP maps).

Examples:

Channels have adjoints, e.g. the adjoint of the partial trace is $$(tr_B)^\dagger{}: X_A \mapsto{} X_A\otimes{} I_B$$. This adjoint is actually pretty conceptually important (it is how an observable on $$A$$ is lifted to one on $$AB$$).

Week 2.1

Every $$ X\in{} L(V,W)$$ can be written in the form $$ X = \sum_{i=0}^{r-1} s_i |w_i\rangle{}\langle{}v_i| $$ (the singular value decomposition), where $$ \{|v_i\rangle{}\}_i$$ is orthonormal in $$V$$ and $$\{|w_i\rangle{}\}_i$$ is orthonormal in $$W$$. The $$s_i>0$$ are called singular values.

For Hermitian operators, we have the spectral decomposition (eigenvalue decomposition), which boils down to the even more familiar $$H=\sum_i \lambda_i |v_i\rangle{}\langle{}v_i|$$ for $$H\in{} L(V)$$.

A Hermitian $$H\in{}L(V)$$ is a positive semidefinite (psd) operator if $$\langle{}v,Hv\rangle{}\ge{} 0\ \forall{} v\in{}V$$.

For a function $$f:(0,\infty{}) \rightarrow{} R$$, we define $$f(H) = \sum_{i:\,\lambda_i>0} f(\lambda_i)|v_i\rangle{}\langle{}v_i|$$ for any psd $$H$$.
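A short sketch of applying a function through the spectral decomposition (checking $$\sqrt{H}\sqrt{H} = H$$):

```python
import numpy as np

def apply_fn(H, f):
    """f(H) = sum over lambda_i > 0 of f(lambda_i) |v_i><v_i|, for psd H."""
    evals, evecs = np.linalg.eigh(H)
    out = np.zeros_like(H, dtype=complex)
    for lam, v in zip(evals, evecs.T):
        if lam > 1e-12:
            out += f(lam) * np.outer(v, v.conj())
    return out

H = np.array([[2.0, 1.0], [1.0, 2.0]])     # psd with eigenvalues 1 and 3
sqrt_H = apply_fn(H, np.sqrt)
print(np.allclose(sqrt_H @ sqrt_H, H))     # True
```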

We can define a partial order on Hermitian operators: $$ A\ge{}B$$ iff $$A-B\ge{}0$$, e.g. $$\mathbf{1}\ge{} M \ge{} 0$$ iff the eigenvalues of $$M$$ lie in $$[0,1]$$.

Quantum formalism

Quantum states (or density operators) are psd operators $$\rho,\sigma,\omega,\pi$$ with unit trace.

A positive operator-valued measure (POVM) is a set $$\{M_x\}_{x\in{}X}$$ such that $$M_x\ge{}0$$ and $$\sum_x M_x = \bf{1}$$.

Using the formalism (this involves multiple perspectives!), the probability of outcome $$x$$ on a pure state $$\Psi = |\psi\rangle{}\langle{}\psi|$$ is:

\begin{align*} p_x &= \langle{} \Psi{}, M_x \rangle{} \\ &= \text{tr} (\Psi{}M_x) = \text{tr} (|\psi{}\rangle{}\langle{}\psi| M_x)\\ &= \langle{}\psi|M_x|\psi{}\rangle{} \\ &= \langle{} \psi{}, M_x\psi{} \rangle{} \end{align*}


Week 1.2 Linear algebra recap

Generally a recap of linear algebra basics.

Vector spaces

We denote a field $$F$$ (either the real field $$R$$ or the complex field $$C$$). A vector space over $$F$$ is a set $$V$$ that is closed under addition $$V\times V \rightarrow V$$ and scalar multiplication $$F \times V \rightarrow V$$, and contains the zero element $$0$$.

A Hilbert space is a (finite dimensional) vector space with inner product $$\langle{}\cdot{},\cdot{}\rangle{}: V \times V \rightarrow F$$.

One example is the set of pure states $$|\psi{}\rangle{} \in H$$, with the computational basis defined as $$\{|i\rangle{}\}_{i=0}^{d-1}$$. The inner product is defined $$\langle{}\theta{}|\phi{}\rangle{} \equiv{} \langle{}|\theta{}\rangle{},|\phi{}\rangle{}\rangle{} = \sum_i \overline{\theta}_i \cdot{} \phi_i $$. This is analogous to the dot product between a row vector (the conjugate transpose) and a column vector.

Linear operators

The space of linear operators $$L(V,W)$$ consists of all linear maps from $$V$$ to $$W$$.

We can use a unitary $$U$$ to derive further properties of the (Hilbert–Schmidt) inner product $$\langle{}X,Y\rangle{} = \sum_i \langle{}Xw_i,Yw_i\rangle{}$$, using that $$\{Uw_i\}_i$$ is again an ONB:

\begin{align*} \langle{}X,Y\rangle{} &= \sum_i \langle{}Xw_i,Yw_i\rangle{} \\ &= \sum_i \langle{}XUw_i,YUw_i\rangle{} \\ &= \langle{}XU,YU\rangle{} \\ &= \langle{}U^\dagger{}Y^\dagger{},U^\dagger{}X^\dagger{}\rangle{} \\ &= \langle{}Y^\dagger{},UU^\dagger{}X^\dagger{}\rangle{} \\ &= \langle{}Y^\dagger{},X^\dagger{}\rangle{} \end{align*}

A small note: vectors can also be seen as a map from the field to the vector space, e.g. $$ V\sim{} L(C,V)$$. We can generalize even further into channels, which map from some Hilbert space / set of linear operators to itself or another, e.g. $$ \langle{}\epsilon^\dagger{}(X),Y\rangle = \langle{} X,\epsilon{}(Y)\rangle{}$$, which can be seen as analogous to the Heisenberg vs Schrödinger picture.

Week 1.1 Administrative

A list of names of students in the class. Scribing for lecture notes and a final exam.

1) $$\begin{pmatrix} a & c-id \\ c+id & b \end{pmatrix}$$