# Fast small random density matrices

## September 23, 2018

Sometimes, I find it useful to generate small random density matrices to test some statement I hope is true. When it’s not true, often this can quickly surface a counterexample. I’ve been using the scientific programming language Julia which recently hit version 1.0! That means there won’t be any “breaking changes” to the language (until 2.0). Pre-1.0, every new version (0.5, 0.6, etc) would change the language and you’d have to update your code. Since Julia is really fun to work in, and is now much more stable, I thought I’d look a little more seriously at how to make my code faster.

One basic primitive is the generation of random density matrices of dimension d. I generate these by first generating the eigenvalues, namely a real-valued vector of length d with all non-negative entries that add up to 1. Then I put this on the diagonal of a $$d\times d$$ matrix, and conjugate by a random unitary.

I construct the probability vector using the algorithm described by Smith and Tromble (2004). This particular implementation was adapted from some MATLAB code written by Koenraad Audenaert:

```julia
function randsimplexpt(d::Int)::Vector{Float64}
    if d == 1
        z = [1.0]
    else
        z = zeros(Float64, d)
        w = sort(rand(Float64, d - 1))
        z[1] = w[1]
        for k = 2:d-1
            z[k] = w[k] - w[k-1]
        end
        z[d] = 1 - w[d-1]
    end
    return z
end
```
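The heart of this construction is the spacings trick: the gaps between sorted uniform samples (together with the endpoints 0 and 1) form a uniformly random probability vector. A minimal self-contained sanity check of that property (the names `d`, `w`, `z` are just local illustrations, not part of the functions above):

```julia
# Spacings of sorted uniforms: nonnegative gaps that sum to 1.
d = 5
w = sort(rand(d - 1))
z = [w[1]; diff(w); 1 - w[d-1]]

@assert all(z .>= 0)                       # a valid probability vector...
@assert isapprox(sum(z), 1.0; atol=1e-12)  # ...whose entries sum to 1
```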

and the unitary by an algorithm described by Māris Ozols:

```julia
using LinearAlgebra

function randunitary(d)
    rg1 = randn(d, d)
    rg2 = randn(d, d)
    RG = rg1 + im * rg2
    Q, R = qr(RG)
    r = diag(R)
    L = diagm(0 => r ./ abs.(r))
    return Q * L
end
```
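A quick way to convince yourself that the QR-with-phase-correction construction really produces a unitary is to check that $$U^\dagger U \approx I$$. A standalone sketch, inlining the same steps as `randunitary`:

```julia
using LinearAlgebra

# Same construction as randunitary, inlined for a self-contained check.
d = 4
A = randn(d, d) + im * randn(d, d)
Q, R = qr(A)
U = Q * diagm(0 => diag(R) ./ abs.(diag(R)))

@assert opnorm(U' * U - I) < 1e-10  # unitary to numerical precision
```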

Both algorithms have the advantage of being simple to implement, and both sample uniformly at random (the probability vector uniformly from the simplex, and the unitary from the Haar measure), which seems like a good property to have. Then it’s simple to generate random density matrices:

```julia
function randdm(d)
    eigs = diagm(0 => randsimplexpt(d))
    U = randunitary(d)
    return U * eigs * (U')
end
```
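And a sanity check that the output really is a density matrix: unit trace, Hermitian, and positive semidefinite. This sketch inlines all three steps rather than calling the functions above, so it runs standalone:

```julia
using LinearAlgebra

d = 3
# Eigenvalues: gaps of sorted uniforms (as in randsimplexpt).
w = sort(rand(d - 1))
p = [w[1]; diff(w); 1 - w[d-1]]
# Haar-random unitary via phase-corrected QR (as in randunitary).
A = randn(d, d) + im * randn(d, d)
Q, R = qr(A)
U = Q * diagm(0 => diag(R) ./ abs.(diag(R)))
ρ = U * diagm(0 => p) * U'

@assert isapprox(tr(ρ), 1.0; atol=1e-10)         # unit trace
@assert isapprox(ρ, ρ'; atol=1e-10)              # Hermitian
@assert minimum(eigvals(Hermitian(ρ))) > -1e-10  # positive semidefinite
```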

Not too bad! Using the @btime macro from BenchmarkTools.jl, on my laptop I get the timings:

```julia
julia> @btime randdm(2);
  3.940 μs (30 allocations: 2.67 KiB)

julia> @btime randdm(3);
  5.964 μs (30 allocations: 3.72 KiB)

julia> @btime randdm(4);
  7.172 μs (30 allocations: 5.81 KiB)
```

But then I learned about the StaticArrays.jl package, which defines static array types for arrays of fixed size. In fact, the package encodes the size of the array in the type! That means that just as Julia can treat a Float64 (a 64-bit floating point number) differently from an Int64 (a 64-bit integer), using StaticArray types, Julia can treat a 2x2 static array differently from a 3x3 static array. This matters because for any given function, Julia compiles a different version of it for each type that you input.

Why does this help? Let’s take the example of inverting matrices. There’s a simple formula to invert a 2x2 matrix (wiki). If Julia knows the function will only act on 2x2 matrices, it can just use that formula for computing inverses, instead of a general one. Without this information, when compiling the function Julia has to assume the matrix could be any size, and can’t just use the fast 2x2 version (this could be oversimplified; I’m not an expert on this). If you’re interested, I recommend this talk by Jeff Bezanson, or this great stackexchange answer by Stefan Karpinski (both co-creators of Julia) for better (albeit longer) explanations of how Julia’s internals work, and how the types play a role in compilation.
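As a concrete illustration of that closed-form shortcut, here is the 2x2 inverse formula written out by hand (the matrix `A` is just a made-up example):

```julia
# inv([a b; c d]) = [d -b; -c a] / (a*d - b*c), when the determinant is nonzero.
A = [3.0 1.0; 2.0 4.0]
a, b, c, d = A[1, 1], A[1, 2], A[2, 1], A[2, 2]
Ainv = [d -b; -c a] / (a * d - b * c)

@assert isapprox(Ainv * A, [1.0 0.0; 0.0 1.0]; atol=1e-12)
```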

Since I wouldn’t change the size of a density matrix mid-computation (tensoring, etc., would make a new one instead), density matrices are a perfect candidate for these statically-sized types. Instead of using the Julia type Array to encode the density matrix, I can use the SMatrix and SVector types of StaticArrays.jl (the “S” stands for “static”). Let me write the three functions again in this way.

```julia
using StaticArrays
using LinearAlgebra

function randsimplexpt(::Val{d}) where {d}
    if d == 1
        z = [1.0]
    else
        z = zeros(Float64, d)
        w = sort(rand(Float64, d - 1))
        z[1] = w[1]
        for k = 2:d-1
            z[k] = w[k] - w[k-1]
        end
        z[d] = 1 - w[d-1]
    end
    return SVector{d, Float64}(z)
end

function randunitary(::Val{d}) where {d}
    rg1 = SMatrix{d,d,Float64,d*d}(randn(Float64, d, d))
    rg2 = SMatrix{d,d,Float64,d*d}(randn(Float64, d, d))
    RG = rg1 + im * rg2
    Q, R = qr(RG)
    r = diag(R)
    L = diagm(Val(0) => r ./ abs.(r))
    return Q * L
end

function randdm(::Val{d}) where {d}
    eigs = diagm(Val(0) => randsimplexpt(Val(d)))
    U = randunitary(Val(d))
    return U * eigs * (U')
end
```

What’s different? Well, not much. The main thing you’ll notice is in the signature of the function (the line starting with the word “function”): instead of just d, I wrote ::Val{d}. Why is that? Val{d} is a different type for every number d (these are so-called “value types”, described in the Julia manual; the canonical element of type Val{d} is just the object Val{d}()), and ::Val{d} means: use this function for any input of type Val{d}. So instead of putting the dimension, d, as the argument of the function, the argument is any object of type Val{d}. So we’re encoding the dimension in the type of the argument instead of in the argument itself.
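A minimal illustration of value types, using a toy function (`dimension`) that I’ve made up for this example:

```julia
# The value of d lives in the type, so Julia compiles one method per d.
dimension(::Val{d}) where {d} = d

@assert dimension(Val(3)) == 3
@assert Val(2) isa Val{2}                  # Val(2) is shorthand for Val{2}()
@assert typeof(Val(2)) != typeof(Val(3))   # different numbers, different types
```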

At first, this seemed kind of like a weird trick to me. Val{1} is a different type than Val{2}, and so forth, so it’s almost like saying instead of seeing 2 and 3 as both numbers, we’ll interpret them each as fundamentally different things. But I think it actually makes a lot of sense. There’s a really interesting talk about types, Subtyping made friendly by Francesco Zappa Nardelli. It’s mostly about formalizing the Julia type system, but one piece of intuition he gives is to think of types as analogous to mathematical sets. For example, the type Int for integers corresponds to the “set of integers”. So here, instead of thinking about 2 and 3 as both being in the set of integers, we’re thinking about 2 as being in the set $$\{2\}$$, and 3 as being in the set $$\{3\}$$. Since they’re in different sets, Julia compiles a different version of functions to deal with each of them, just like it might compile versions of $$x\mapsto x^2$$ to deal with complex $$x$$ versus real $$x$$. In our case, we really are thinking of the set of 2x2 matrices as being different from the set of 3x3 matrices, and want Julia to do things differently according to the dimension (e.g. use a different algorithm for computing inverses). It’s just that the “2” in randdm(2) really represents “2x2 density matrices”, as in, “sample randomly from the set of 2x2 density matrices”.

So for us, what this means is that by structuring our function to get the dimension through the type instead of the argument itself, Julia will compile a different version of the function for every dimension d, and it will know the value of that d when compiling the function. In particular, it doesn’t have to assume a general choice of d. So, does this really help?

```julia
julia> @btime randdm(Val{2}());
  429.955 ns (5 allocations: 512 bytes)

julia> @btime randdm(Val{3}());
  651.683 ns (5 allocations: 624 bytes)

julia> @btime randdm(Val{4}());
  907.952 ns (5 allocations: 752 bytes)
```

That’s 8-9 times faster, with far less memory usage too! Of course, that’s thanks to the work of the developers of StaticArrays, who made sure the dimensional information we give Julia at compile time actually goes to good use.

A few last notes:

• while the Val{2}()’s here do look ugly, I just put something like d = Val{4}() at the start of my file, and then just use d as usual.
• this is only effective for low dimensions (at least at this stage); at $$d=20$$, my laptop already has trouble evaluating randdm once.
• some very preliminary tests suggest that what’s actually much more helpful for performance is lazy evaluation of tensor products (assuming you need to compute some tensor products), which can be done easily with LazyArrays.jl. I haven’t done much with this yet though.