When we talk about capacities in a completely classical context, Shannon’s Source Coding Theorem says that any rate \(R > H(U)\) is reliable, whereas in the quantum case the HSW Theorem states that a rate \(R\) is reliable if \(R < \chi^*(\Lambda)\). But in both contexts \(R\) means roughly the same thing. For the classical case: the number of bits of codeword per use of the source. And for the quantum case: ‘the number of bits of classical message transmitted per use of the channel’. So why in the classical case do we appear to want to minimize \(R\), while in the quantum case we want to maximize it?
Ah, good question. It’s actually not a classical vs. quantum difference you’re encountering here; it’s a difference between source coding and channel coding.
- In source coding (i.e. data compression), we don’t control the messages being sent out from the source; we are just trying to compress them. Here, the rate is the number of bits of storage space per message that we need. We want to minimize the rate since that means we’re compressing the most.
- In channel coding (i.e. data transmission), we have messages we wish to send over the channel in a way that’s robust to the noisiness of the channel. We are thinking of the channel as a resource we are using, and we want to minimize the number of times we have to use it. Here, the rate is the number of bits of information we can transmit per use of the channel. We want to maximize the rate in order to transmit as much information as possible per use of the channel.
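To make the asymmetry concrete, here is a small sketch (plain Python; the numbers are toy examples of my choosing) computing both benchmark quantities: the entropy \(H(U)\) of a biased binary source, which achievable compression rates approach from *above*, and the capacity \(C = 1 - H_2(p)\) of a binary symmetric channel, which achievable transmission rates approach from *below*.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Source coding: a biased bit source with P(0) = 0.9, P(1) = 0.1.
# Reliable compression needs R > H(U), so we push R DOWN toward H(U).
H = entropy([0.9, 0.1])          # ~0.469 bits per source symbol

# Channel coding: a binary symmetric channel flipping each bit w.p. p.
# Reliable transmission needs R < C = 1 - H_2(p), so we push R UP toward C.
p = 0.11
C = 1 - entropy([p, 1 - p])      # ~0.5 bits per channel use
```

Same symbol \(R\), same units (bits per use), but the target it chases sits on opposite sides in the two problems.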
So in Shannon’s Source Coding Theorem (classical) and in Schumacher’s Theorem (quantum), we are compressing data and want to minimize the rate (compress the most). In Shannon’s Noisy Coding Theorem (classical) and in the HSW Theorem (quantum), we are sending information over a channel, and want to maximize the rate (send the most info per use of the channel).
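As a quantum-side illustration (a sketch using numpy; the particular ensemble is a hypothetical example of mine), the quantity being maximized in the HSW setting is the Holevo \(\chi\) of an ensemble. For an ensemble of pure states, \(\chi\) reduces to the von Neumann entropy of the average state:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]   # drop (numerically) zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

# A hypothetical ensemble: two equiprobable pure qubit states |0> and |+>.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ketp, ketp)

# Each pure state has S(rho_i) = 0, so chi = S(average state).
chi = von_neumann_entropy(rho)   # ~0.601 bits
```

Because the two states are non-orthogonal, \(\chi \approx 0.60\) bits, below the 1 bit an orthogonal pair would give: maximizing the transmission rate means optimizing this quantity over input ensembles, which is exactly the maximization in \(\chi^*(\Lambda)\).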