I gave an informal talk today on the arrow of time in repeated interaction systems and I thought I’d write about it here.
Repeated interaction systems (RIS)
For a recent review of RIS, see arxiv/1305.2472. We also introduce them in my coauthors’ and my work, Landauer’s Principle in Repeated Interaction Systems. However, I’ll include what we need below.
For our purposes, a repeated interaction system consists of a finite-dimensional system $\mathcal{S}$ interacting with a sequence (or “chain”) of finite-dimensional systems $\mathcal{E}_1, \mathcal{E}_2, \dotsc$ called “probes”.
Associated to $\mathcal{S}$ is a Hilbert space $\mathcal{H}_\mathcal{S}$, a self-adjoint operator $h_\mathcal{S}$ called the Hamiltonian, and an initial state $\rho_i$ with $\rho_i \geq 0$, $\operatorname{tr} \rho_i = 1$.
Similarly, each probe $\mathcal{E}_k$ has a Hilbert space $\mathcal{H}_{\mathcal{E}_k}$, a self-adjoint Hamiltonian $h_{\mathcal{E}_k}$, and an initial state $\xi_k$ (with $\xi_k \geq 0$, $\operatorname{tr} \xi_k = 1$). We will assume the probes’ Hilbert spaces are isomorphic, $\mathcal{H}_{\mathcal{E}_k} \cong \mathcal{H}_\mathcal{E}$, and that the initial state $\xi_k$ is a thermal state (also called a Gibbs state) at inverse temperature $\beta_k$. That is,
$$\xi_k = \frac{e^{-\beta_k h_{\mathcal{E}_k}}}{\operatorname{tr} e^{-\beta_k h_{\mathcal{E}_k}}}.$$
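Numerically, a Gibbs state is easy to compute. Here is a minimal sketch in Python with NumPy (the function name and the toy two-level probe are my own choices, not notation from the RIS literature):

```python
import numpy as np

def gibbs_state(h, beta):
    """Thermal (Gibbs) state exp(-beta*h) / tr[exp(-beta*h)] for a
    self-adjoint Hamiltonian h, computed via its eigendecomposition."""
    evals, evecs = np.linalg.eigh(h)
    w = np.exp(-beta * (evals - evals.min()))  # shift spectrum for numerical stability
    w /= w.sum()
    # reassemble sum_j w_j |v_j><v_j|
    return (evecs * w) @ evecs.conj().T

# toy probe: a two-level system with energy gap E
E, beta = 1.0, 2.0
xi = gibbs_state(np.diag([0.0, E]), beta)
# populations satisfy the Boltzmann ratio xi[1,1]/xi[0,0] = exp(-beta*E)
```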
The system interacts with each probe, one at a time, for some duration $\tau_k$. This can be described inductively as follows. If the system is in the state $\rho_{k-1}$ after interacting with the $(k-1)$th probe, then $\mathcal{S}$ couples with $\mathcal{E}_k$ to form the joint (uncorrelated) state $\rho_{k-1} \otimes \xi_k$. The interaction of the system and $k$th probe is described by some interaction Hamiltonian $v_k$, so that after time $\tau_k$, the joint state of $\mathcal{S}$ and $\mathcal{E}_k$ is given by $U_k (\rho_{k-1} \otimes \xi_k) U_k^*$, where
$$U_k = \exp\!\big(-i \tau_k (h_\mathcal{S} + h_{\mathcal{E}_k} + v_k)\big).$$
The state on $\mathcal{S}$ alone, after interacting with the $k$th probe, is then given by the partial trace
$$\rho_k = \operatorname{tr}_{\mathcal{E}_k}\!\big[ U_k (\rho_{k-1} \otimes \xi_k) U_k^* \big].$$
This is the initial state of $\mathcal{S}$ for the interaction with the next probe.
Note that we may define the reduced dynamics on $\mathcal{S}$ for step $k$. This is the map
$$\mathcal{L}_k(\rho) = \operatorname{tr}_{\mathcal{E}_k}\!\big[ U_k (\rho \otimes \xi_k) U_k^* \big].$$
In this language, $\rho_k = \mathcal{L}_k(\rho_{k-1})$, and thus $\rho_k = \mathcal{L}_k \circ \dotsb \circ \mathcal{L}_1(\rho_i)$.
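The map $\mathcal{L}_k$ is straightforward to implement numerically. Here is a sketch (function name and partial-trace convention are mine):

```python
import numpy as np

def ris_step(rho, xi, U):
    """Apply one step of the reduced dynamics:
    rho -> tr_E[ U (rho ⊗ xi) U* ]."""
    dS, dE = rho.shape[0], xi.shape[0]
    joint = U @ np.kron(rho, xi) @ U.conj().T
    # partial trace over the probe: with np.kron ordering, the joint index
    # factors as (system, probe), so reshape and trace out axes 1 and 3
    return np.trace(joint.reshape(dS, dE, dS, dE), axis1=1, axis2=3)

# sanity check: with no interaction (U = identity), rho is unchanged
rho = np.array([[0.75, 0.1], [0.1, 0.25]])
xi = np.diag([0.5, 0.5])
rho1 = ris_step(rho, xi, np.eye(4))
```

With a full swap in place of $U_k$, the system ends the step in the probe's state $\xi_k$, which is a convenient second sanity check.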
Introduction to arrow of time
Now that we understand the setup of an RIS, we can consider what it means to test the arrow of time in this context. We will imagine two black boxes, $F$ and $B$.
Into such a box, we input an RIS; that is, we give it all the temperatures, Hamiltonians, initial states, etc., of an RIS with some large number of probes; let’s call the number of probes $T$. Afterwards, we may push a button and receive a printout.
To produce its printout, $F$ does the following when we press the button:
- Make a fresh RIS identical to the one we gave it
- Measure the initial state of the system (labelling this outcome by $m$) and the energy of each probe (labelling the energy level of the $k$th probe by $a_k$)
- Time evolve the system and probes by having them interact as described above
- Measure the final state of the system (yielding outcome $m'$) and the energies of the probes again (where $b_k$ is the energy level of the $k$th probe), and
- Print out the measurement outcomes $(m, \vec{a}, m', \vec{b})$, where $\vec{a} = (a_1, \dotsc, a_T)$ and $\vec{b} = (b_1, \dotsc, b_T)$.
To produce its printout, $B$ does the following:
- Make a fresh RIS, putting $\mathcal{S}$ in the state $\rho_T$ (the final system state of the forward evolution) and, for $k = 1, \dotsc, T$, putting $\mathcal{E}_k$ in the state $\xi_k$
- Measure $\mathcal{S}$ (labelling the outcome by $m'$) and the energy of each probe (outcomes $b_k$)
- Time evolve backwards, i.e. run the interactions with each $\tau_k$ replaced by $-\tau_k$, the probes interacting in the reverse order
- Measure the state of $\mathcal{S}$ (calling the outcome $m$) and the energy of each probe (labelling the outcomes by $a_k$)
- Print out $(m, \vec{a}, m', \vec{b})$.
Now, we are given a box, and we input an RIS. Our task is to determine whether we were given $F$ or $B$.
This is the task of determining whether we can experimentally detect whether time has evolved in the forward direction, corresponding to $F$, or in the backward direction, corresponding to $B$, for this particular RIS.
A more careful description of the forward and backward processes
To proceed, we’ll need to calculate the probability of getting a particular set of measurement outcomes for each process. Let us analyse the forward process.
The system starts in the state $\rho_i$, and the chain of probes in the initial state $\xi_1 \otimes \xi_2 \otimes \dotsb \otimes \xi_T$.
We write $\rho_i = \sum_m \lambda_m P_m$ for the spectral decomposition of $\rho_i$, where the $\lambda_m$ are the eigenvalues of $\rho_i$ and $P_m$ is the projection onto the eigenspace associated to $\lambda_m$, and $h_{\mathcal{E}_k} = \sum_a E_a^k \, \Pi_a^k$ for the spectral decomposition of the $k$th probe’s Hamiltonian.
We measure $\mathcal{S}$ in the eigenbasis of $\rho_i$ and each probe in its Hamiltonian’s eigenbasis, obtaining outcomes $m$ and $\vec{a} = (a_1, \dotsc, a_T)$ with probability
$$p(m, \vec{a}) = \lambda_m \prod_{k=1}^T \operatorname{tr}\big( \Pi_{a_k}^k \xi_k \big),$$
where we have used that the initial state $\rho_i \otimes \xi_1 \otimes \dotsb \otimes \xi_T$ is a product state. By performing these measurements, we have projected e.g. the $k$th probe into its $a_k$th energy level. We have also projected the system into the state $P_m \rho_i P_m / \lambda_m$. If $\lambda_m$ is a non-degenerate eigenvalue, this means that $\mathcal{S}$ is now in a pure state.
Next, $\mathcal{S}$ interacts with each probe, one at a time, starting at $k = 1$ until $k = T$, via the time evolution
$$U = U_T U_{T-1} \dotsm U_1,$$
where each $U_k$ acts non-trivially only on $\mathcal{H}_\mathcal{S} \otimes \mathcal{H}_{\mathcal{E}_k}$.
Now, we measure $\mathcal{S}$ in the eigenbasis of $\rho_T$ and all of the probes in their energy eigenbases. We will write the spectral decomposition $\rho_T = \sum_{m'} \mu_{m'} Q_{m'}$. The probability of obtaining outcome $m'$ from measuring $\mathcal{S}$ and outcomes $\vec{b} = (b_1, \dotsc, b_T)$ from measuring the probes, jointly with the earlier outcomes $(m, \vec{a})$, is
$$\operatorname{tr}\Big[ \big( Q_{m'} \otimes \Pi_{b_1}^1 \otimes \dotsb \otimes \Pi_{b_T}^T \big)\, U\, \big( P_m \rho_i P_m \otimes \Pi_{a_1}^1 \xi_1 \Pi_{a_1}^1 \otimes \dotsb \otimes \Pi_{a_T}^T \xi_T \Pi_{a_T}^T \big)\, U^* \Big].$$
Note we only need to write the measurement operators for the second measurement once: by cyclicity of the trace and the fact that the $Q_{m'}$ and $\Pi_{b_k}^k$ are projections, $\operatorname{tr}[\Pi X \Pi] = \operatorname{tr}[\Pi^2 X] = \operatorname{tr}[\Pi X]$.
We call this quantity $P(m, \vec{a}, m', \vec{b})$; this is the probability that when we press the button on $F$ we obtain the outcomes $(m, \vec{a}, m', \vec{b})$. It is in fact a discrete probability distribution.
Similarly, the probability of getting the measurement outcomes $(m, \vec{a}, m', \vec{b})$ when we press the button on $B$ is given by
$$\operatorname{tr}\Big[ \big( P_m \otimes \Pi_{a_1}^1 \otimes \dotsb \otimes \Pi_{a_T}^T \big)\, U^* \big( Q_{m'} \rho_T Q_{m'} \otimes \Pi_{b_1}^1 \xi_1 \Pi_{b_1}^1 \otimes \dotsb \otimes \Pi_{b_T}^T \xi_T \Pi_{b_T}^T \big)\, U \Big],$$
which we will call $\hat{P}(m, \vec{a}, m', \vec{b})$.
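For a small example, these two distributions can be computed by brute force. Below is a sketch for a single step ($T = 1$) with a qubit system and qubit probe; all names are my own, and I take both measured observables to be non-degenerate, so every projection is rank one:

```python
import numpy as np

def two_time_probs(rho0, xi, U, B_first, B_second, B_probe):
    """Joint outcome distribution for: measure S and the probe, evolve
    the pair by U, measure both again. B_first / B_second: columns are
    the (assumed non-degenerate) eigenvectors for the first / second
    system measurement; B_probe: the probe's energy eigenvectors."""
    dS, dE = rho0.shape[0], xi.shape[0]
    P = np.zeros((dS, dE, dS, dE))
    for m in range(dS):
        for a in range(dE):
            v = np.kron(B_first[:, m], B_probe[:, a])
            proj = np.outer(v, v.conj())
            post = proj @ np.kron(rho0, xi) @ proj   # projected (unnormalized) initial state
            out = U @ post @ U.conj().T
            for mp in range(dS):
                for b in range(dE):
                    w = np.kron(B_second[:, mp], B_probe[:, b])
                    P[m, a, mp, b] = np.vdot(w, out @ w).real
    return P

# toy model: qubit system and probe, some fixed interaction unitary
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U = np.linalg.eigh(H + H.conj().T)[1]   # eigenvector matrix: a unitary
rho_i, xi = np.diag([0.8, 0.2]), np.diag([0.6, 0.4])
B_i, B_E = np.eye(2), np.eye(2)         # rho_i and the probe Hamiltonian are diagonal

# final system state rho_T = tr_E[U (rho_i ⊗ xi) U*] and its eigenbasis
joint = U @ np.kron(rho_i, xi) @ U.conj().T
rho_T = np.trace(joint.reshape(2, 2, 2, 2), axis1=1, axis2=3)
B_T = np.linalg.eigh(rho_T)[1]

P_fwd = two_time_probs(rho_i, xi, U, B_i, B_T, B_E)
# backward box: start from rho_T, evolve by U*, and report the outcomes
# in the same order as F, hence the axis reordering
P_bwd = two_time_probs(rho_T, xi, U.conj().T, B_T, B_i, B_E).transpose(2, 3, 0, 1)
```

Both arrays sum to one, as they should for probability distributions over the outcomes $(m, a, m', b)$.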
Hypothesis testing on the arrow of time
Let us assume we press the button of our box $N$ times, collecting a printout each time. Based on the observed frequencies of each set of measurement outcomes, we want to know: are these outcomes likely being drawn from the distribution $P$, or from $\hat{P}$?
This is a classic question in hypothesis testing, and has been well-studied; e.g. Cover & Thomas Ch. 11 is a clear reference.
We can consider our $N$ uses of the box as drawing one sample from each of $N$ independent and identically distributed random variables $X_1, \dotsc, X_N$.
We wish to choose a set $A_N$ of sequences of outcomes such that if $(X_1, \dotsc, X_N) \in A_N$, we output “$F$”, and we output “$B$” otherwise.
There are two errors we can make: we can say that our box is $B$ when really it is $F$, or vice-versa. Let us define the error probabilities
$$\alpha_N = P^{\otimes N}(A_N^c), \qquad \beta_N = \hat{P}^{\otimes N}(A_N),$$
which depend on the set $A_N$ we choose.
We will minimize a particular type of error, which leads to a particularly nice formula for how the error rate decays as we increase $N$.
Let us define
$$\beta_N^\epsilon = \min \big\{ \hat{P}^{\otimes N}(A_N) : A_N \text{ such that } P^{\otimes N}(A_N^c) \leq \epsilon \big\}.$$
That is, $\beta_N^\epsilon$ is the smallest probability of “guessing time evolved forwards when it really evolved backwards”, subject to the constraint that the probability that we “guess that time evolved backwards when it really evolved forwards” is kept small (at most $\epsilon$).
That is, we are sure as we can be that if we say “forwards”, it was indeed forwards, while keeping the other error tolerably small.
For $\epsilon \in (0, 1)$, the Chernoff–Stein lemma tells us that
$$\lim_{N \to \infty} \frac{1}{N} \log \beta_N^\epsilon = -D(P \,\|\, \hat{P}), \qquad \text{where } D(P \,\|\, \hat{P}) = \sum_x P(x) \log \frac{P(x)}{\hat{P}(x)}$$
is the relative entropy between the two probability distributions (the sum runs over all outcomes $x = (m, \vec{a}, m', \vec{b})$). That is, the error $\beta_N^\epsilon$ asymptotically decays exponentially, with a rate given by the relative entropy $D(P \,\|\, \hat{P})$.
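One can see the Chernoff–Stein rate emerge numerically. For a small alphabet and small $N$, the optimal deterministic test can be found by brute force, greedily accepting sequences in decreasing order of likelihood ratio; this is a sketch with names and toy distributions of my own choosing:

```python
import numpy as np
from itertools import product

def relative_entropy(p, q):
    """D(p || q) = sum_x p(x) log(p(x)/q(x)), in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def stein_beta(p, q, n, eps):
    """Smallest type-II error q^n(A) over deterministic acceptance sets A
    with type-I error p^n(A^c) <= eps.  Greedy Neyman-Pearson: accept
    sequences with the largest likelihood ratio p^n/q^n first, until the
    accepted p-mass reaches 1 - eps."""
    seqs = list(product(range(len(p)), repeat=n))
    pn = np.array([np.prod([p[x] for x in s]) for s in seqs])
    qn = np.array([np.prod([q[x] for x in s]) for s in seqs])
    order = np.argsort(qn / pn)               # largest p/q ratio first
    mass_p = np.cumsum(pn[order])
    i = np.searchsorted(mass_p, 1 - eps)      # accept sequences 0..i
    return float(np.cumsum(qn[order])[i])

p, q = [0.7, 0.3], [0.3, 0.7]
D = relative_entropy(p, q)
rates = [-np.log(stein_beta(p, q, n, 0.1)) / n for n in (4, 8, 12)]
# the rates should approach D = 0.4*log(7/3) ≈ 0.339 as n grows
```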
Connection to Landauer’s Principle
Let me refer to my post on Landauer’s principle. At each step of the RIS, we are in the setup of Landauer’s principle described there: a system interacts unitarily with a thermal reservoir, in this case $\mathcal{E}_k$. Thus, at each step of the RIS run without measurement, we have an entropy production $\sigma_k$. It turns out that
$$D(P \,\|\, \hat{P}) = \sum_{k=1}^T \sigma_k.$$
This is very nice! In fact, $\sum_{k=1}^T \sigma_k$ was the object of study in the recent work of myself and my coauthors, Landauer’s Principle in Repeated Interaction Systems (which I’m happy to say has now been published).
I won’t prove the equality here; however, if all goes well, it’ll be on the arXiv soon™ [Edit: it’s up, arXiv:1705.08281].
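To make $\sigma_k$ concrete, here is a sketch of computing the entropy production of one step from the joint unitary (names and conventions are mine, following the Landauer setup: $\sigma = \beta \Delta Q - \Delta S$, where $\Delta S = S(\rho_{k-1}) - S(\rho_k)$ is the decrease in the system’s von Neumann entropy and $\Delta Q$ the energy gained by the probe):

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S(rho) = -tr(rho log rho), in nats."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def step_entropy_production(rho, xi, U, h_probe, beta):
    """sigma = beta*dQ - dS: heat dumped into the probe (weighted by beta)
    minus the decrease dS = S(rho) - S(rho') of the system's entropy,
    where rho' and xi' are the reduced states after the joint unitary U."""
    d1, d2 = rho.shape[0], xi.shape[0]
    joint = (U @ np.kron(rho, xi) @ U.conj().T).reshape(d1, d2, d1, d2)
    rho_after = np.trace(joint, axis1=1, axis2=3)   # trace out the probe
    xi_after = np.trace(joint, axis1=0, axis2=2)    # trace out the system
    dQ = np.trace(h_probe @ (xi_after - xi)).real
    return beta * dQ - (vn_entropy(rho) - vn_entropy(rho_after))

# toy step: maximally mixed qubit system, thermal qubit probe
beta, h_probe = 1.5, np.diag([0.0, 1.0])
w = np.exp(-beta * np.diag(h_probe)); xi = np.diag(w / w.sum())
rho = np.eye(2) / 2
```

With no interaction the entropy production vanishes, and for a full swap with a thermal probe it reduces to the relative entropy $D(\rho \,\|\, \xi_k) \geq 0$, consistent with Landauer’s bound.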