# Question 12

When we talk about capacities in a completely classical context, Shannon's Source Coding Theorem states that any rate $$R > H(U)$$ is reliable, whereas in the quantum case the HSW Theorem states that a rate $$R$$ is reliable if $$R < \chi^*(\Lambda)$$. Yet in both contexts $$R$$ means roughly the same thing. In the classical case it is the number of bits of codeword per use of the source, and in the quantum case it is the number of bits of classical message transmitted per use of the channel. So why do we appear to want to minimize $$R$$ in the classical case but maximize it in the quantum case?
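To make the classical side of the question concrete, here is a minimal sketch (the biased-source probabilities are a made-up example) computing $$H(U)$$, the threshold that achievable compression rates must exceed; since any $$R > H(U)$$ works, smaller $$R$$ means better compression, which is why one minimizes it:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(U) in bits per symbol of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical example: a biased binary source with P(0) = 0.9, P(1) = 0.1.
H = shannon_entropy([0.9, 0.1])
print(f"H(U) = {H:.4f} bits/symbol")

# Source coding: any rate R > H(U) (codeword bits per source symbol) is
# reliable, so the goal is to push R down as close to H(U) as possible.
# In the HSW setting the inequality flips (R < chi*), so there the goal
# is to push the transmission rate R up toward the threshold instead.
```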