Question 12



When we talk about capacities in a completely classical context, Shannon’s Source Coding Theorem says that any rate \(R>H(U)\) is reliable, whereas in the quantum case the HSW Theorem states that a rate \(R\) is reliable if \(R<\chi^*(\Lambda)\). But in both contexts \(R\) means roughly the same thing. For the classical case: the number of bits of codeword per use of the source; and for the quantum case: the number of bits of classical message transmitted per use of the channel. So why in the classical case do we appear to want to minimize \(R\), while in the quantum case we want to maximize it?

Ah, good question. It’s actually not a classical vs. quantum difference you’re encountering here; it’s a difference between source coding and channel coding.

So in Shannon’s Source Coding Theorem (classical) and in Schumacher’s Theorem (quantum), we are compressing data and want to minimize the rate (compress as much as possible, down toward the entropy of the source). In Shannon’s Noisy Channel Coding Theorem (classical) and in the HSW Theorem (quantum), we are sending information over a channel and want to maximize the rate (send as much information per use of the channel as possible, up toward the capacity).
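A purely classical numerical sketch may make the two directions concrete. The biased coin source and the binary symmetric channel below are illustrative choices, not examples from the discussion above: for the source, achievable compression rates sit just *above* \(H(U)\), so smaller is better; for the channel, achievable communication rates sit just *below* the capacity \(C\), so larger is better.

```python
import math

def h2(p):
    """Binary entropy in bits; h2(0) = h2(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Source coding: a biased coin with P(heads) = 0.1 has entropy H(U) = h2(0.1).
# Any compression rate R > H(U) bits per source symbol is reliable,
# so we push R down toward H(U).
H_source = h2(0.1)

# Channel coding: a binary symmetric channel with flip probability 0.1
# has capacity C = 1 - h2(0.1). Any rate R < C bits per channel use is
# reliable, so we push R up toward C.
C_channel = 1 - h2(0.1)

print(f"H(U) = {H_source:.4f} bits/symbol  (minimize R toward this)")
print(f"C    = {C_channel:.4f} bits/use    (maximize R toward this)")
```

In both cases the theorem names a single threshold; the only difference is which side of the threshold the reliable rates lie on.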