Irishdoug
- Homework Statement
- Given a random variable ##X## with ##d## possible outcomes and distribution ##p(x)##, prove that the Shannon entropy is maximised for the uniform distribution, where all outcomes are equally likely: ##p(x) = 1/d##.
- Relevant Equations
- ## H(X) = - \sum_{x} p(x)\log_{2}p(x) ##
##\log_{2}## is used as the course is a Quantum Information one.
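As a quick sanity check of the claim (my own numbers, not part of the problem): for ##d = 2## the uniform distribution gives ##H = -2 \cdot \frac{1}{2}\log_{2}\frac{1}{2} = 1## bit, whereas a skewed distribution such as ##(0.9, 0.1)## gives ##H \approx 0.47## bits, so uniform does come out larger.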
I have used the Lagrange multiplier method, setting up the Lagrangian with the normalisation constraint ##\sum_{x} p(x) = 1##.
So I have:
##L(p,\lambda) = - \sum_{x} p(x)\log_{2}p(x) - \lambda\left(\sum_{x} p(x) - 1\right)##
I am now supposed to take the partial derivatives with respect to ##p(x)## and ##\lambda##. The derivative with respect to ##\lambda## just recovers the constraint ##\sum_{x} p(x) - 1 = 0##, so the substance is in the derivatives with respect to each ##p(x)##.
So ##\frac{\partial L}{\partial p(x)} = -\left(\log_{2}p(x) + \frac{1}{\ln 2} + \lambda\right) = 0##
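To see what this stationarity condition forces, here is a minimal symbolic sketch in Python with sympy (my own check, not from the course; the choice ##d = 3## and all variable names are arbitrary):

```python
import sympy as sp

d = 3                                    # small example dimension (arbitrary)
p = sp.symbols(f'p0:{d}', positive=True)  # p0, p1, p2
lam = sp.Symbol('lam', real=True)

# Lagrangian: L = -sum_x p(x) log2 p(x) - lam * (sum_x p(x) - 1)
L = -sum(pi * sp.log(pi, 2) for pi in p) - lam * (sum(p) - 1)

# Stationarity in p_i: the equation has the same form for every i,
# so solving it once for p0 gives the common value as a function of lam.
p_of_lam = sp.solve(sp.diff(L, p[0]), p[0])[0]

# The lam-equation is just the constraint: d * p(lam) = 1.
lam_star = sp.solve(sp.Eq(d * p_of_lam, 1), lam)[0]

print(sp.simplify(p_of_lam.subs(lam, lam_star)))  # prints 1/3, i.e. 1/d
```

Because the stationarity equation has the same form for every ##p(x)##, solving it once is enough; the constraint then fixes the common value.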
I am unsure what to do with the summation signs, and I am also unsure how to proceed from here. Can I please have some help?
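In the meantime, here is a numerical sanity check of the statement itself (again my own sketch, assuming nothing beyond numpy; the Dirichlet draw is just a convenient way to sample random points on the probability simplex):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # arbitrary number of outcomes

def shannon_entropy(p):
    """H(X) = -sum_x p(x) log2 p(x), with the convention 0 * log 0 = 0."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

uniform = np.full(d, 1.0 / d)
print(shannon_entropy(uniform))          # log2(d) = 2.0 bits for d = 4

# Random distributions never beat the uniform one.
for _ in range(5):
    p = rng.dirichlet(np.ones(d))        # random probability vector summing to 1
    assert shannon_entropy(p) <= shannon_entropy(uniform) + 1e-12
```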