*Update (19 January 2019):* I looked at this code again and realized I had made a few basic mistakes, such as constructing a statically sized normally-distributed random matrix via `SMatrix{d,d,Float64,d*d}(randn(Float64, d, d))`, which constructs a random matrix (allocating memory dynamically) and then converts it to a static `SMatrix`, instead of `randn(SMatrix{d,d,Float64})`, which directly constructs an `SMatrix` without dynamic memory allocation. I also hadn’t written the function `randsimplexpt` in a very good way. I’ve updated the code in the post with these improvements; the static versions of the methods are now allocation-free and, in some cases, twice as fast.

Sometimes, I find it useful to generate small random density matrices to test some statement I hope is true. When it’s not true, often this can quickly surface a counterexample. I’ve been using the scientific programming language Julia which recently hit version 1.0! That means there won’t be any “breaking changes” to the language (until 2.0). Pre-1.0, every new version (0.5, 0.6, etc) would change the language and you’d have to update your code. Since Julia is really fun to work in, and is now much more stable, I thought I’d look a little more seriously at how to make my code faster.

One basic primitive is the generation of random density matrices of dimension `d`. I generate these by first generating the eigenvalues, namely a real-valued vector of length `d` with all non-negative entries that add up to 1. Then I put this on the diagonal of a \(d\times d\) matrix, and conjugate by a random unitary.

I construct the probability vector by the algorithm described by Smith and Tromble (2004):

```
"""Takes a vector of length `d-1` of numbers between `0.0` and `1.0` and converts it to a simplex pt in R^d"""
function simplexpt(unif)
    d = length(unif) + 1
    T = eltype(unif)
    w = zeros(T, d + 1)
    w[2:d] .= sort(unif)
    w[d+1] = one(T)
    diff(w)
end
randsimplexpt(d) = simplexpt(rand(Float64, d - 1))
```
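As a quick sanity check, the same construction can be written in one line with base Julia’s `vcat` and `diff`; whatever uniforms come out of the RNG, the result is a valid probability vector:

```julia
# The Smith–Tromble construction in one line: sort d-1 uniform samples,
# pad with 0 and 1, and take successive differences.
d = 4
p = diff(vcat(0.0, sort(rand(d - 1)), 1.0))

@assert length(p) == d
@assert all(p .>= 0)                        # non-negative entries
@assert isapprox(sum(p), 1.0; atol = 1e-12) # entries sum to 1
```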

and the unitary by an algorithm described by Māris Ozols (2008):

```
using LinearAlgebra
function randunitary(d)
    rg1 = randn(Float64, d, d)
    rg2 = randn(Float64, d, d)
    RG = rg1 + im*rg2
    Q, R = qr(RG)
    r = diag(R)
    L = diagm(0 => r ./ abs.(r))
    return Q*L
end
```
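To check that this recipe really produces a unitary, here is a self-contained sketch of the same steps (QR-factor a complex Gaussian matrix, then fix the phases of `R`’s diagonal) together with a unitarity test:

```julia
using LinearAlgebra

# Ozols's recipe, inlined: QR-factor a complex Gaussian (Ginibre) matrix,
# then multiply Q by the phases of R's diagonal.
d = 3
RG = randn(d, d) + im * randn(d, d)
Q, R = qr(RG)
U = Q * diagm(0 => diag(R) ./ abs.(diag(R)))

# U should satisfy U * U' ≈ I, i.e. be unitary.
@assert isapprox(U * U', Matrix{ComplexF64}(I, d, d); atol = 1e-10)
```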

Both algorithms have the advantages of being simple to implement and of sampling uniformly at random, which seems like a good property to have. Then it’s simple to generate random density matrices:

```
function randdm(d)
    eigs = diagm(0 => randsimplexpt(d))
    U = randunitary(d)
    return U * eigs * (U')
end
```
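Putting the two pieces together, a self-contained sanity check (inlining both steps rather than calling the functions above) confirms the defining properties of a density matrix, namely that it is Hermitian, has unit trace, and is positive semidefinite:

```julia
using LinearAlgebra

d = 3
# Random simplex point for the eigenvalues (Smith–Tromble).
eigs = diff(vcat(0.0, sort(rand(d - 1)), 1.0))
# Haar-random unitary (Ozols).
RG = randn(d, d) + im * randn(d, d)
Q, R = qr(RG)
U = Q * diagm(0 => diag(R) ./ abs.(diag(R)))
# Conjugate the diagonal eigenvalue matrix by the unitary.
ρ = U * diagm(0 => eigs) * U'

@assert isapprox(ρ, ρ'; atol = 1e-10)                  # Hermitian
@assert isapprox(tr(ρ), 1.0; atol = 1e-10)             # trace 1
@assert all(eigvals(Hermitian(ρ)) .>= -1e-10)          # positive semidefinite
```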

Not too bad! Using the `@btime` macro from `BenchmarkTools.jl`, on my laptop I get the timings:

```
julia> @btime randdm(2);
4.273 μs (34 allocations: 2.88 KiB)
julia> @btime randdm(3);
6.172 μs (34 allocations: 3.92 KiB)
julia> @btime randdm(4);
7.597 μs (34 allocations: 6.03 KiB)
```

But then I learned about the StaticArrays.jl package, which defines static array types for arrays of fixed size. In fact, the package encodes the size of the array in the type! That means that just as Julia can treat a Float64 (a 64-bit floating point number) differently from an Int64 (a 64-bit integer), using StaticArray types, Julia can treat a 2x2 static array differently from a 3x3 static array. This matters because for any given function, Julia compiles a different version of it for each type that you input.

Why does this help? Let’s take the example of inverting matrices. There’s a simple formula to invert a 2x2 matrix (wiki). If Julia knows the function will only act on 2x2 matrices, it can just use that formula for computing inverses, instead of a general one. Without this information, when compiling the function Julia has to assume the matrix could be any size, and can’t just use the fast 2x2 version. (This could be oversimplified; I’m not an expert on this.) If you’re interested, I recommend this talk by Jeff Bezanson, or this great stackexchange answer by Stefan Karpinski (both co-creators of Julia), for better (albeit longer) explanations of how Julia’s internals work and how the types play a role in compilation.
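To make the 2x2 example concrete, here is the closed-form inverse written out by hand (`inv2x2` is just an illustrative name, not anything Julia or StaticArrays actually does internally), checked against the general `inv`:

```julia
using LinearAlgebra

# Closed-form 2x2 inverse: inv([a b; c d]) = [d -b; -c a] / (ad - bc).
# This avoids the general-purpose LU-factorization path entirely.
function inv2x2(A)
    a, b, c, d = A[1, 1], A[1, 2], A[2, 1], A[2, 2]
    return [d -b; -c a] ./ (a * d - b * c)
end

A = [1.0 2.0; 3.0 4.0]
@assert isapprox(inv2x2(A), inv(A); atol = 1e-12)
```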

Since I wouldn’t change the size of a density matrix mid-computation (tensoring, etc., would make a new one instead), density matrices are a perfect candidate for these statically-sized types. Instead of using the Julia type `Array` to encode the density matrix, I can use the `SMatrix` and `SVector` types of `StaticArrays.jl` (the “S”s stand for static). Let me write the three functions again in this way.

```
using LinearAlgebra, StaticArrays, TupleTools
# Use the package TupleTools to add a sort method for StaticArrays
import Base.sort
sort(p::SVector{d,T}; rev::Bool = false) where {d,T} = SVector{d,T}(TupleTools.sort(Tuple(p); rev=rev))

"""Takes a vector of length `d-1` of numbers between `0.0` and `1.0` and converts it to a simplex pt in R^d"""
function simplexpt(unif::SVector{dm1,T}) where {dm1, T}
    d = dm1 + 1
    # Since SVectors are immutable, we can't change their entries in-place.
    # Instead, we use StaticArrays's `setindex` method, which gives us a copy
    # of the vector with the value at the index changed. This is very fast!
    w = zeros(SVector{d+1, T})
    s = sort(unif)
    for i = 2:d
        w = setindex(w, s[i-1], i)
    end
    w = setindex(w, one(T), d+1)
    diff(w)
end
randsimplexpt(::Val{d}) where {d} = simplexpt(rand(SVector{d-1, Float64}))

function randunitary(::Val{d}) where {d}
    rg1 = randn(SMatrix{d,d,Float64})
    rg2 = randn(SMatrix{d,d,Float64})
    RG = rg1 + im*rg2
    Q, R = qr(RG)
    r = diag(R)
    L = diagm(Val(0) => r ./ abs.(r))
    return Q*L
end

function randdm(::Val{d}) where {d}
    eigs = diagm(Val(0) => randsimplexpt(Val(d)))
    U = randunitary(Val(d))
    return U * eigs * (U')
end
```
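A small aside on the `setindex` idiom used in `simplexpt` above: since `SVector`s are immutable, `setindex` returns a modified copy rather than mutating in place, and the length lives in the type. A minimal sketch:

```julia
using StaticArrays

# SVectors are immutable; `setindex` returns a copy with one entry changed,
# leaving the original untouched. The length (3) is part of the type.
v = zeros(SVector{3, Float64})
v2 = setindex(v, 1.0, 2)

@assert v2 == SVector(0.0, 1.0, 0.0)
@assert v == SVector(0.0, 0.0, 0.0)       # original unchanged
@assert typeof(v) === SVector{3, Float64} # size encoded in the type
```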

What’s different? Well, not too much. The main thing you’ll notice is in the signature of each function (the line starting with the word “function”): instead of just `d`, I wrote `::Val{d}`. Why is that? `Val{d}` is a different type for every number `d` (these are so-called “value types”, and are described more here; the canonical element of type `Val{d}` is just the object `Val{d}()`), and `::Val{d}` means: use this function for any input of type `Val{d}`. So instead of putting the dimension, `d`, as the *argument* of the function, the argument is any object of *type* `Val{d}`. So we’re encoding the dimension in the type of the argument instead of the argument itself.

At first, this seemed kind of like a weird trick to me. `Val{1}` is a different type than `Val{2}`, and so forth, so it’s almost like saying that instead of seeing 2 and 3 as both numbers, we’ll interpret them each as fundamentally different things. But I think it actually makes a lot of sense. There’s a really interesting talk about types, *Subtyping made friendly* by Francesco Zappa Nardelli. It’s mostly about formalizing the Julia type system, but one piece of intuition he gives is to think of types as analogous to mathematical sets. For example, the type `Int` for integers corresponds to the “set of integers”. So here, instead of thinking about 2 and 3 as both being in the set of integers, we’re thinking about 2 as being in the set \(\{2\}\), and 3 as being in the set \(\{3\}\). Since they’re in different sets, Julia compiles a different version of functions to deal with each of them, just like it might compile versions of \(x\mapsto x^2\) to deal with complex \(x\) versus real \(x\). In our case, we really are thinking of the set of 2x2 matrices as being different from the set of 3x3 matrices, and want Julia to do things differently according to the dimension (e.g. use a different algorithm for computing inverses). It’s just that the “2” in `randdm(2)` really represents “2x2 density matrices”, as in, “sample randomly from the set of 2x2 density matrices”.

So for us, what this means is that by structuring our function to get the dimension through the type instead of the argument itself, Julia will compile a different version of the function for every dimension `d`, and it will know the value of that `d` when compiling the function. In particular, it doesn’t have to assume a general choice of `d`. So, does this really help?

```
julia> @btime randdm(Val(2));
185.121 ns (0 allocations: 0 bytes)
julia> @btime randdm(Val(3));
438.633 ns (0 allocations: 0 bytes)
julia> @btime randdm(Val(4));
676.543 ns (0 allocations: 0 bytes)
```

That’s 11-23 times faster, with no allocations! (My understanding is that “allocations” here refers to heap allocations, which is how Julia stores memory dynamically when it doesn’t know ahead of time how much memory it will need; these are slower than stack allocations, which are static and fast.) Of course, that’s thanks to the work of the developers of `StaticArrays`, who made sure the dimensional information we give Julia at compile time actually goes to good use.
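The heap-versus-stack difference can be seen directly with the `@allocated` macro: constructing an ordinary `Vector` allocates on the heap, while constructing a same-sized `SVector` should not (measured inside functions, after a warm-up call, so compilation overhead isn’t counted):

```julia
using StaticArrays

# Wrap each measurement in a function so global-scope effects don't
# pollute the allocation count.
alloc_dynamic() = @allocated randn(3)                    # ordinary Vector
alloc_static()  = @allocated randn(SVector{3, Float64})  # static vector

alloc_dynamic(); alloc_static()   # warm up: compile before measuring

@assert alloc_dynamic() > 0   # Vector lives on the heap
@assert alloc_static() == 0   # SVector lives on the stack
```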

A few last notes:

- this is only effective for low dimensions (at least at this stage); at \(d=20\), my laptop already has trouble compiling `randdm`.
- some very preliminary tests show that what’s actually more helpful for performance in some cases is lazy evaluation of tensor products (assuming you need to compute some tensor products), which can be done easily with LazyArrays.jl. I haven’t done much with this yet, though.