# Question 2

Hey! Could you also please help me with an exercise (Notes 5, eq. 21): show that a separable bipartite state can always be expressed as a convex combination of pure product states, $\rho_{AB} = \sum_i p_i |\psi_i\rangle\langle\psi_i|\otimes |\phi_i\rangle\langle\phi_i|$. I get confused when I expand the density matrices and try to confine the sum to only one index variable.

Hi! Good question. The definition of a separable state is that we can write it in the form

$$\rho_{AB} = \sum_{i=1}^n p_i\, \omega_i^A \otimes \sigma_i^B$$

for some probability distribution $\{p_i\}_{i=1}^n$ and density matrices $\omega_i^A$ and $\sigma_i^B$. As you mentioned, the trick is in the number of index variables. Each state $\omega_i^A$ has a decomposition into at most $d_A := \dim \mathcal{H}_A$ pure states:

$$\omega_i^A = \sum_{j=1}^{d_A} q^{(i)}_j | \psi^{(i)}_j \rangle\langle \psi^{(i)}_j |. \tag{1}$$

We know we can restrict to $d_A$ pure states because the eigendecomposition gives a decomposition with that many pure states, and we don't need to restrict the sum to fewer than $d_A$ states because we can always set the associated probability $q^{(i)}_j = 0$. So for each $i$, we have a probability distribution $\{q^{(i)}_j\}_{j=1}^{d_A}$ and a set of pure states $\{| \psi^{(i)}_j \rangle\}_{j=1}^{d_A}$ which satisfy (1). Likewise, we can decompose each $\sigma_i^B$ as

$$\sigma_i^B = \sum_{k=1}^{d_B} r^{(i)}_k | \phi^{(i)}_k \rangle\langle \phi^{(i)}_k |$$

for some probability distribution $\{r^{(i)}_k\}_{k=1}^{d_B}$ and pure states $\{| \phi^{(i)}_k \rangle\}_{k=1}^{d_B}$. Putting this together, we have

$$\rho_{AB} = \sum_{i=1}^n \sum_{j=1}^{d_A} \sum_{k=1}^{d_B} p_i q^{(i)}_j r^{(i)}_k | \psi^{(i)}_j \rangle\langle \psi^{(i)}_j |\otimes | \phi^{(i)}_k \rangle\langle \phi^{(i)}_k |. \tag{2}$$

This is probably where you got to in your expansions. The trick is to realize that the bound variables $i,j,k$ don't have inherent meaning; the sum is simply

$$\rho_{AB} = p_1 q^{(1)}_1 r^{(1)}_1 | \psi^{(1)}_1 \rangle\langle \psi^{(1)}_1 |\otimes | \phi^{(1)}_1 \rangle\langle \phi^{(1)}_1 | + p_1 q^{(1)}_2 r^{(1)}_1 | \psi^{(1)}_2 \rangle\langle \psi^{(1)}_2 |\otimes | \phi^{(1)}_1 \rangle\langle \phi^{(1)}_1 | + \ldots$$

where we have a term for each value of $i,j,k$.
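If a numerical check helps, here is a short NumPy sketch (all dimensions and states below are arbitrary illustrative choices, not part of the exercise) that builds a random separable state and verifies that the triple sum (2), with the eigendecompositions playing the role of (1), reproduces it:

```python
import numpy as np

rng = np.random.default_rng(0)
d_A, d_B, n = 2, 3, 4

def random_density_matrix(d):
    """Random density matrix: normalized M M† for a random complex M."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

p = rng.dirichlet(np.ones(n))                       # probability distribution {p_i}
omegas = [random_density_matrix(d_A) for _ in range(n)]
sigmas = [random_density_matrix(d_B) for _ in range(n)]

# Separable state: rho_AB = sum_i p_i * (omega_i ⊗ sigma_i)
rho_AB = sum(p[i] * np.kron(omegas[i], sigmas[i]) for i in range(n))

# Rebuild rho_AB from the triple sum (2), using the eigendecomposition
# of each omega_i and sigma_i as the pure-state decomposition (1).
rebuilt = np.zeros_like(rho_AB)
for i in range(n):
    q, psi = np.linalg.eigh(omegas[i])   # columns of psi are the |psi_j^(i)>
    r, phi = np.linalg.eigh(sigmas[i])   # columns of phi are the |phi_k^(i)>
    for j in range(d_A):
        for k in range(d_B):
            proj_A = np.outer(psi[:, j], psi[:, j].conj())
            proj_B = np.outer(phi[:, k], phi[:, k].conj())
            rebuilt += p[i] * q[j] * r[k] * np.kron(proj_A, proj_B)

print(np.allclose(rho_AB, rebuilt))  # True
```

Here `np.linalg.eigh` gives exactly the eigendecomposition used in (1): real eigenvalues (the probabilities $q^{(i)}_j$) and orthonormal eigenvectors (the pure states).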
So we can biject the set $\{1,2,\dotsc,n\} \times \{1,2,\dotsc,d_A\} \times \{1,2,\dotsc,d_B\}$ with the set $\{1,2,\dotsc, n d_A d_B\}$ by, for example (see footnote 1),

$$\alpha(i,j,k) = (i-1)\, d_A d_B + (j-1)\, d_B + k,$$

which runs from $\alpha(1,1,1) = 1$ to $\alpha(n,d_A,d_B) = n d_A d_B$, and is in fact a bijection, as we could check. So we can define a probability distribution $\{t_\beta\}_{\beta=1}^{n d_A d_B}$ by the inverse mapping: if $\alpha(i,j,k) = \beta$, then $t_\beta = p_i q^{(i)}_j r^{(i)}_k$. Likewise, we can define pure states $| \chi_\beta \rangle_A = | \psi^{(i)}_j \rangle_A$, where $i$ and $j$ are such that $\beta = \alpha(i,j,k)$ for some $k$, and pure states $| \tilde \chi_\beta \rangle_B = | \phi^{(i)}_k \rangle_B$, where $i$ and $k$ are such that $\beta = \alpha(i,j,k)$ for some $j$. Putting it all together,

$$\rho_{AB} = \sum_{\beta=1}^{n d_A d_B} t_\beta | \chi_\beta \rangle\langle \chi_\beta |\otimes | \tilde \chi_\beta \rangle\langle \tilde \chi_\beta |. \tag{3}$$

Why are these equal? Because each term in the sum on the right-hand side of (3) appears exactly once as a term in the sums on the right-hand side of (2).
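To make the bookkeeping concrete, here is a small Python sketch of $\alpha$ and its inverse (the sizes $n, d_A, d_B$ are arbitrary choices for illustration):

```python
n, d_A, d_B = 4, 2, 3

def alpha(i, j, k):
    """The bijection {1..n} x {1..d_A} x {1..d_B} -> {1..n*d_A*d_B} (1-based)."""
    return (i - 1) * d_A * d_B + (j - 1) * d_B + k

# Check it really is a bijection: every beta in 1..n*d_A*d_B is hit exactly once.
images = [alpha(i, j, k)
          for i in range(1, n + 1)
          for j in range(1, d_A + 1)
          for k in range(1, d_B + 1)]
assert sorted(images) == list(range(1, n * d_A * d_B + 1))

def alpha_inv(beta):
    """Inverse map: recover (i, j, k) from beta."""
    beta0 = beta - 1                      # work 0-based, then shift back
    i0, rem = divmod(beta0, d_A * d_B)
    j0, k0 = divmod(rem, d_B)
    return i0 + 1, j0 + 1, k0 + 1

assert all(alpha(*alpha_inv(b)) == b for b in range(1, n * d_A * d_B + 1))
```

The inverse is exactly what lets us read off $t_\beta$, $|\chi_\beta\rangle$, and $|\tilde\chi_\beta\rangle$ from a single index $\beta$.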

This trick often isn't written out, so I hope it helps to have it here. In scientific programming languages, the bijection $\alpha$ is often called `sub2ind`, since it converts "subscripts" like $i,j,k$ to a single linear index $\beta$. For example, Julia, an open-source scientific programming language, documents such a function, if you're interested.
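If Python is more familiar, NumPy's analogues are `np.ravel_multi_index` and `np.unravel_index` (0-based rather than the 1-based $\alpha$ above, but the same row-major formula):

```python
import numpy as np

n, d_A, d_B = 4, 2, 3  # arbitrary illustrative sizes

# NumPy's analogue of sub2ind: convert 0-based subscripts (i, j, k)
# into a single 0-based index into a flattened (n, d_A, d_B) array.
beta = np.ravel_multi_index((1, 0, 2), dims=(n, d_A, d_B))
print(beta)  # 8, i.e. 1*d_A*d_B + 0*d_B + 2

# And back again with unravel_index:
i, j, k = np.unravel_index(beta, (n, d_A, d_B))
print(int(i), int(j), int(k))  # 1 0 2
```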

1. Thanks to this Stack Overflow answer.