Haar's Half Measure

What I talk about when I talk about physics.

18 Aug 2023

Kindergarten quantum error correcting codes (P2)

4-qubit codes

The encoded logical states are

\[\begin{align} |0_L\rangle \propto |0000\rangle + |1111\rangle\newline |1_L\rangle \propto |1010\rangle + |0101\rangle \end{align}\]
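As a quick sanity check, we can build these codewords numerically (a minimal numpy sketch; the `ket` bit-string helper is my own convention, first qubit leftmost):

```python
import numpy as np

def ket(bits):
    """Computational basis state |b1 b2 b3 b4> as a 16-dim vector."""
    v = np.zeros(16)
    v[int(bits, 2)] = 1.0
    return v

# Normalized logical codewords of the 4-qubit code.
zero_L = (ket("0000") + ket("1111")) / np.sqrt(2)
one_L  = (ket("1010") + ket("0101")) / np.sqrt(2)

print(np.dot(zero_L, one_L))    # 0.0 -> the codewords are orthogonal
print(np.dot(zero_L, zero_L))   # 1.0 -> properly normalized
```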

We have four physical qubits here, so the logical qubit lives inside a two-dimensional subspace of the combined 16-dimensional Hilbert space. The first thing to notice is that, unlike classically inspired codes such as the three-qubit repetition code, where each encoded logical state is purely a tensor product of computational states of the physical qubits, each encoded logical state here is a superposition of computational states of the physical qubits. This is a general requirement for detecting phase errors, something that is intrinsically quantum mechanical.

You need entanglement for protection against both bit and phase errors.

The second thing one should notice is that the minimum Hamming distance between the codewords is two. Meaning, we can detect any single bit-flip event (a distance-$d$ code detects up to $d-1$ flips), but not correct it.
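The distance claim is easy to verify by brute force over the basis strings of the two codewords (plain Python, no assumptions beyond the codewords above):

```python
# Brute-force minimum Hamming distance between the basis strings
# of |0_L> (0000, 1111) and |1_L> (1010, 0101).
zero_strings = ["0000", "1111"]
one_strings  = ["1010", "0101"]

d = min(sum(c1 != c2 for c1, c2 in zip(s1, s2))
        for s1 in zero_strings for s2 in one_strings)
print(d)  # 2 -> any single bit flip is detected, none can be corrected
```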

Let us now play around with this code. Assume an error $E_1=XIII$ happened. Did our qubit leave the code space? Yes. Can we detect the error? Yes. The resulting states are

\[\begin{align} |0_L\rangle \to |1000\rangle + |0111\rangle \propto |0_X\rangle\newline |1_L\rangle \to |0010\rangle + |1101\rangle \propto |1_X\rangle \end{align}\]

which are clearly orthogonal to the states before. There certainly exists an observable $O$ that returns $+1$ on one of these spaces and $-1$ on the other. Furthermore, let us label the new basis states as $|0_X\rangle$ and $|1_X\rangle$. Let us now consider the case in which the third bit is flipped, $E_2=IIXI$.

\[\begin{align} |0_L\rangle \to |0010\rangle + |1101\rangle \propto |1_X\rangle\newline |1_L\rangle \to |1000\rangle + |0111\rangle \propto |0_X\rangle \end{align}\]

We see that the states ended up in the same error space, although they are logically flipped. Again, we should be able to detect this $E_2$ error, since the states are orthogonal to the code space. But the real question is,

Can we distinguish $E_1$ and $E_2$?

The short answer is no. This is the concept of degeneracy, where two distinct error processes lead to the same error space.
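This degeneracy is easy to see numerically. Below is a small numpy sketch (the `ket`/`op` helpers are my own; `op` builds a tensor product with the first operator acting on qubit 1):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])

def ket(bits):                 # |b1 b2 b3 b4> as a 16-dim vector, b1 leftmost
    v = np.zeros(16); v[int(bits, 2)] = 1.0; return v

def op(*qubit_ops):            # tensor product, first operator on qubit 1
    out = np.array([[1.0]])
    for o in qubit_ops:
        out = np.kron(out, o)
    return out

zero_L = (ket("0000") + ket("1111")) / np.sqrt(2)
one_L  = (ket("1010") + ket("0101")) / np.sqrt(2)

E1 = op(X, I, I, I)            # flip qubit 1
E2 = op(I, I, X, I)            # flip qubit 3

# Both errors are detectable: they map the code space to an orthogonal space.
print(zero_L @ (E1 @ zero_L))          # 0.0
# But they are degenerate: E1|0_L> and E2|1_L> are the same state.
print((E1 @ zero_L) @ (E2 @ one_L))    # ~1 -> identical error states
```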

Let us now move on to phase-flip events. Again, suppose we have $E_3=IZII$ acting on the code space.

\[\begin{align} |0_L\rangle \to +1|0000\rangle -1|1111\rangle \propto |0_Z\rangle\newline |1_L\rangle \to +1|1010\rangle -1|0101\rangle \propto |1_Z\rangle \end{align}\]

Did we leave the code space? Yes. Remember, the code space is spanned by $\{|0_L\rangle, |1_L\rangle\}$, so a legal state is any linear superposition of these basis states, like

\[\begin{align} |\varphi\rangle &= a|0_L\rangle + b |1_L\rangle\newline &=a (|0000\rangle + |1111\rangle) + b (|1010\rangle + |0101\rangle) \end{align}\]

For this state $|\varphi\rangle$ to be "legal", the same coefficient $a$ has to multiply both $|0000\rangle$ and $|1111\rangle$. But the error state carries a relative minus sign, so it must be orthogonal to the code space. Another way to check orthogonality is to simply observe that $\langle 0_Z|0_L\rangle=+1-1=0$. The bottom line is that we can detect this error. Now, let us look at the case $E_4=ZIII$.

\[\begin{align} |0_L\rangle \to +1|0000\rangle -1|1111\rangle \propto +|0_Z\rangle\newline |1_L\rangle \to -1|1010\rangle + 1|0101\rangle \propto -|1_Z\rangle \end{align}\]

This error $E_4$ did a phase flip in the error space! Can we detect it? Yes, since we left the code space. But we cannot correct $E_3$ and $E_4$ simultaneously: degeneracy again prevents us from correcting errors that exhibit the same syndrome.
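The same numerical sketch works for the phase errors (helpers as before, my own convention with qubit 1 leftmost):

```python
import numpy as np

I = np.eye(2)
Z = np.diag([1., -1.])

def ket(bits):                 # |b1 b2 b3 b4> as a 16-dim vector, b1 leftmost
    v = np.zeros(16); v[int(bits, 2)] = 1.0; return v

def op(*qubit_ops):            # tensor product, first operator on qubit 1
    out = np.array([[1.0]])
    for o in qubit_ops:
        out = np.kron(out, o)
    return out

zero_L = (ket("0000") + ket("1111")) / np.sqrt(2)
one_L  = (ket("1010") + ket("0101")) / np.sqrt(2)

E3 = op(I, Z, I, I)            # phase flip on qubit 2
E4 = op(Z, I, I, I)            # phase flip on qubit 1

print(zero_L @ (E3 @ zero_L))          # 0.0 -> we left the code space
print((E3 @ zero_L) @ (E4 @ zero_L))   # ~+1 -> both errors give |0_Z>
print((E3 @ one_L) @ (E4 @ one_L))     # ~-1 -> E4 adds a logical phase flip
```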

General quantum error correction conditions

Let us now find a formalism to unify what we’ve learned so far. We have two bit-flip errors,

\[\begin{align} E_1 = XIII,\ E_2=IIXI \end{align}\]

and we can detect them, since they both map the codewords onto error spaces orthogonal to the code space. Here's a nice remark:

When an error maps the codewords to error spaces orthogonal to the code space, a malicious environment cannot distinguish the codewords by measuring that error (its expectation value is the same for every codeword).

This is expressed in the “detectability condition”,

\[\begin{align} P E_j P = c_j P, \end{align}\]

where $P$ is the projector onto the code space, $P=|0_L\rangle\langle 0_L| + |1_L\rangle\langle 1_L|$. The equation also implies that there are no matrix elements between two distinct codewords, i.e.

\[\begin{align} \langle W_m|E_j|W_n\rangle = 0 \quad (m \neq n). \end{align}\]
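We can check the detectability condition numerically for the four single-qubit errors discussed above (same sketch conventions as before, helpers my own; for this code every $c_j$ happens to be $0$):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def ket(bits):                 # |b1 b2 b3 b4> as a 16-dim vector, b1 leftmost
    v = np.zeros(16); v[int(bits, 2)] = 1.0; return v

def op(*qubit_ops):            # tensor product, first operator on qubit 1
    out = np.array([[1.0]])
    for o in qubit_ops:
        out = np.kron(out, o)
    return out

zero_L = (ket("0000") + ket("1111")) / np.sqrt(2)
one_L  = (ket("1010") + ket("0101")) / np.sqrt(2)
P = np.outer(zero_L, zero_L) + np.outer(one_L, one_L)   # code-space projector

errors = [op(X, I, I, I), op(I, I, X, I),   # E1, E2
          op(I, Z, I, I), op(Z, I, I, I)]   # E3, E4

# P E_j P = c_j P; here every c_j = 0, so P E_j P vanishes identically.
for E in errors:
    print(np.allclose(P @ E @ P, 0.0))      # True for each error
```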

Furthermore, we know that if two errors map the codewords into different orthogonal subspaces, then trivially they can be distinguished (detected) and corrected. However, if two (or more) errors lead to the same error space (still orthogonal to the code space), then the two errors had better undo each other on the code space. This is succinctly expressed in the “correctability condition”, or the Knill-Laflamme condition,

\[\begin{align} P E_j^\dagger E_k P = c_{jk} P \end{align}\]

This is simply because we have no way to correct an error other than looking at its syndrome. In some cases, the Knill-Laflamme condition is written as

\[ \begin{align} \langle W_\sigma | E_j^\dagger E_k | W_{\sigma'}\rangle = c_{jk} \delta_{\sigma\sigma'} \end{align} \]

where $|W_{\sigma}\rangle$ denotes the logical codewords.
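A brute-force check of the Knill-Laflamme condition over the four errors above ties everything together (a sketch with my own helpers; indices $1\ldots4$ label $E_1\ldots E_4$). The violating pairs are exactly the degenerate ones:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def ket(bits):                 # |b1 b2 b3 b4> as a 16-dim vector, b1 leftmost
    v = np.zeros(16); v[int(bits, 2)] = 1.0; return v

def op(*qubit_ops):            # tensor product, first operator on qubit 1
    out = np.array([[1.0]])
    for o in qubit_ops:
        out = np.kron(out, o)
    return out

W = [(ket("0000") + ket("1111")) / np.sqrt(2),    # |0_L>
     (ket("1010") + ket("0101")) / np.sqrt(2)]    # |1_L>

errors = [op(X, I, I, I), op(I, I, X, I),         # E1, E2
          op(I, Z, I, I), op(Z, I, I, I)]         # E3, E4

violations = set()
for j, Ej in enumerate(errors, start=1):
    for k, Ek in enumerate(errors, start=1):
        # M_mn = <W_m| E_j^dag E_k |W_n> must equal c_jk * delta_mn.
        M = np.array([[Wm @ Ej.T @ Ek @ Wn for Wn in W] for Wm in W])
        if not np.allclose(M, M[0, 0] * np.eye(2)):
            violations.add((j, k))

print(sorted(violations))   # [(1, 2), (2, 1), (3, 4), (4, 3)]
```

Only the degenerate pairs $(E_1, E_2)$ and $(E_3, E_4)$ break the condition, because $E_1^\dagger E_2 = XIXI$ and $E_3^\dagger E_4 = ZZII$ act as logical operators inside the code space: the code detects all four errors but cannot correct this set.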

Bosonic error correction codes

General description

As always, the goal of QEC is to suppress the dominant error. Unlike qubit systems, where the errors are the Pauli operators,

\[ \begin{align} \mathcal{E} = \{I, X, Y, Z\}, \end{align} \]

a single oscillator is plagued by excitation loss and dephasing errors,

\[ \begin{align} \mathcal{E} = \{\hat{a}, \hat{a}^\dagger \hat{a}\}. \end{align} \]

It's worth pointing out that this error model doesn't change the excitation number dramatically in the Fock basis. Correspondingly, one can imagine the Wigner function jiggling around (or smearing out) a bit due to $\hat{a}$ or $\hat{a}^\dagger\hat{a}$, but it's not going to make a full revolution around phase space. This implies a certain intrinsic robustness of bosonic systems, which is important in the context of bias-preserving error correction.
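A minimal numpy sketch of this point, using a truncated Fock space (the cutoff $N=8$ is an arbitrary assumption): loss $\hat{a}$ nudges $|n\rangle$ down to $|n-1\rangle$, while dephasing $\hat{a}^\dagger\hat{a}$ leaves Fock states invariant up to a factor.

```python
import numpy as np

N = 8                                       # Fock cutoff (arbitrary assumption)
a = np.diag(np.sqrt(np.arange(1., N)), 1)   # truncated annihilation operator
n_op = a.T @ a                              # number operator a^dag a

fock3 = np.zeros(N); fock3[3] = 1.0         # |n=3>

# Loss moves the excitation number down by one: a|3> = sqrt(3)|2>.
print(np.allclose(a @ fock3, np.sqrt(3) * np.eye(N)[2]))   # True
# Dephasing leaves the Fock state put: a^dag a |3> = 3|3>.
print(np.allclose(n_op @ fock3, 3.0 * fock3))              # True
```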

Next time, we'll talk about "Why vim users are the worst :("