Limits involving logs, negative inf and probabilities.

In summary, the equation is the base-N entropy of a probability distribution over N states. The thread works out its limiting values (a maximum of 1 when all states are equally likely, approaching 0 when one state's probability approaches 1) and then discusses information as the difference between the entropy before and the entropy after knowledge is acquired about a system, including the case where some finite probability remains attached to more than one state.
  • #1
nobahar
Hello!
I am playing around with an equation (i.e. it's not a textbook question), and I arrived at the following problem:

The equation is:
[tex]A = -\sum_{i=1}^{N} P_{i}\log_{N}(P_{i})[/tex]
Here [itex]0 \leq P_{i} \leq 1[/itex] is the probability of finding an object in a particular state out of N possible states.
I was looking at the limiting values for A. If the probabilities are all equal, A = 1. If the probability of one state approaches 1, and the probabilities of the other states therefore approach 0, then I get:

[tex]\lim_{P_{X} \rightarrow 1} P_{X}\log_{N}(P_{X})[/tex]
For the state which has a probability approaching one.

[tex]\lim_{P_{Y} \rightarrow 0^{+}} P_{Y}\log_{N}(P_{Y})[/tex]
For the other states, Y, Z, etc, whose probabilities are approaching 0.

I realize probabilities do not actually change; what I mean by "approach" is that I want to look at the uppermost and lowermost values for A. I cannot plug in P = 0, because 1) log(0) is undefined and 2) if the probability were actually 0 then there wouldn't be N possible states. However, I think the extreme values of A are obtained when all the probabilities are equal (each 1/N), in which case A equals 1 (worked out below), and when the probability of one particular state is much greater than that of all the other states.
Having said this: for the state with probability approaching 1, the log tends to 0 and P tends to 1, so that term in the sum tends to 0. I get stuck with what are essentially all the other terms in the sum, because as P [itex]\rightarrow 0^{+}[/itex], log(P) tends to -[itex]\infty[/itex] while P tends to 0. I do not know what to do here.
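(For reference, the equal-probability value quoted above, A = 1, follows directly by substituting [itex]P_{i} = 1/N[/itex] into the equation:)

[tex]A = -\sum_{i=1}^{N} \frac{1}{N}\log_{N}\!\left(\frac{1}{N}\right) = -N\cdot\frac{1}{N}\cdot(-1) = 1[/tex]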
I apologise if the notation is unconventional, I hope it's correct.

Thanks in advance,
Nobahar.
 
  • #2
I was expecting it to yield zero, and I think this is the case. I converted it to a quotient, log(P)/(1/P), converted to natural logarithms, and applied L'Hopital's rule; the function to evaluate becomes -P/ln(N), which yields zero. If anyone wants to check whether this is correct, it would be much appreciated.
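Written out, that calculation is (assuming N > 1, so that ln(N) is positive and finite):

[tex]\lim_{P \rightarrow 0^{+}} P\log_{N}(P) = \lim_{P \rightarrow 0^{+}} \frac{\ln(P)}{\ln(N)\,(1/P)} = \lim_{P \rightarrow 0^{+}} \frac{1/P}{\ln(N)\,(-1/P^{2})} = \lim_{P \rightarrow 0^{+}} \frac{-P}{\ln(N)} = 0[/tex]

where the second equality is L'Hopital's rule applied to the quotient.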
Many thanks.
 
  • #3
This is a standard problem in entropy computations. You get the minimum entropy when you know the system is in a certain state, i.e. one state has probability 1 and the rest have probability 0. You get the maximum when you have no idea, i.e. every state has the same probability. As an engineer, I'd say that if there are states whose probability is 0, then there are no such states (although a mathematician would disagree: "almost never"). This is all you have managed to calculate and deduce, well done. As for your calculation: indeed, if you get 0 for a quantity that is clearly non-negative, you have found a minimum.

A more natural way to arrive at, and to prove, your solution for the extremes is to use Lagrange multipliers and find the stationary points of:
[tex]\sum_{n=1}^{N} P_n \log_{N} P_n + \lambda\sum_{n=1}^{N} P_n[/tex]
(take partial derivatives with respect to the P_n, set each of them to zero, and use the fact that the sum of the probabilities is 1).
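Sketched out, the stationarity condition from that expression is

[tex]\frac{\partial}{\partial P_{k}}\left(\sum_{n=1}^{N} P_n \log_{N} P_n + \lambda\sum_{n=1}^{N} P_n\right) = \log_{N} P_{k} + \frac{1}{\ln N} + \lambda = 0,[/tex]

so [itex]\log_{N} P_{k}[/itex] is the same constant for every k, i.e. all the [itex]P_{k}[/itex] are equal; combined with [itex]\sum_{k} P_{k} = 1[/itex] this gives [itex]P_{k} = 1/N[/itex], the equal-probability (maximum-entropy) case.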
 
  • #4
Hi Paallikko, thanks for the response.

Yes, the equation is indeed from the material on entropy that I was reading!

The subject of information is then raised, and it is described as the difference between the entropy before and the entropy after some knowledge is acquired about the system. This is because the probabilities of certain states change: some are increased, some are reduced or eliminated. The text doesn't say some are increased, but this is possible, right? Is it not possible that in some 'set-up' there are states with a fixed probability of occurrence, i.e. that it doesn't have to be the case that one state becomes definite, and instead there is some intrinsic probability as to the state of the system?

I ask this because it makes me wonder what the consequences are for information. If the information about a system is defined as the difference in entropy, and there is some intrinsic probability, such that knowing everything that can be known about the system still leaves a finite probability attached to more than one state, then surely you have ALL the information about the system. Yet that would seem to require the difference in entropies to be max entropy minus 0 entropy. If there is a finite probability attached to more than one state, then there is a kind of 'residual entropy' that can't be eliminated. Does this mean that all the information about the system can never be known? It seems to me that it can all be known; it's just that there is a probability attached to certain states.
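A minimal numerical sketch of that 'residual entropy' idea in Python (the distributions below are made-up numbers, not taken from the text): the information gained is computed as entropy-before minus entropy-after, and because two states keep a finite probability after the update, the gain falls short of the maximum value of 1.

[code]
import math

def entropy(probs):
    """A = -sum_i P_i * log_N(P_i), with N = len(probs).
    Terms with P_i == 0 contribute 0 (the limit worked out above)."""
    n = len(probs)
    return -sum(p * math.log(p, n) for p in probs if p > 0)

# Before: no knowledge, all N = 4 states equally likely -> A = 1 (maximum)
before = [0.25, 0.25, 0.25, 0.25]

# After: everything knowable is known, but two states still carry an
# intrinsic probability -> a residual entropy remains
after = [0.7, 0.3, 0.0, 0.0]

information = entropy(before) - entropy(after)
print(entropy(before), entropy(after), information)
[/code]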

I hope this makes sense, I can link to some sources or expand if it isn't clear.
 

FAQ: Limits involving logs, negative inf and probabilities.

What is a limit involving logs?

A limit involving logarithms describes the behavior of an expression containing a logarithm as its input approaches a certain value. Logarithms are the inverses of exponential functions and are used, for example, to solve exponential equations. When evaluating such a limit, we are interested in how the expression behaves as the input gets closer and closer to a particular value.

How do you calculate limits involving logs?

The limit of a function involving logarithms can be calculated using various methods, such as L'Hopital's rule, substitution, or algebraic manipulation. The specific method depends on the form of the function and the properties of logarithms. Keep in mind that the limit may fail to exist, or may be infinite, if the function has a vertical asymptote or is undefined at the point of interest.
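As a rough numerical illustration (not a proof), the thread's limit [itex]P\ln(P) \rightarrow 0[/itex] as [itex]P \rightarrow 0^{+}[/itex] can simply be tabulated: the logarithm diverges to [itex]-\infty[/itex], but the product still shrinks to zero.

[code]
import math

# Approach P -> 0+ and watch the product P*ln(P):
# ln(P) -> -infinity, but P shrinks fast enough that the product -> 0.
for k in range(1, 8):
    p = 10.0 ** (-k)
    print(f"P = {p:.0e}   ln(P) = {math.log(p):8.2f}   P*ln(P) = {p * math.log(p):.7f}")
[/code]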

What is negative infinity in limits involving logs?

Negative infinity is used in calculus to describe a quantity that decreases without bound. In the context of limits involving logarithms, a limit of negative infinity occurs, for example, when the argument of a logarithm approaches 0 from the positive side: the logarithm of values ever closer to 0 becomes arbitrarily large and negative. The logarithm of 0 or of a negative number is undefined (over the reals), so such limits can only be one-sided.

How do probabilities come into play with limits involving logs?

In some cases, limits involving logarithms arise when working with probabilities. For example, the logarithm of a probability density or likelihood function is frequently used because it turns products of probabilities into sums, which is easier to handle both analytically and numerically. Limits involving logs also appear in discrete settings, for instance in entropy expressions like the one in this thread, where terms of the form P*log(P) must be evaluated as P approaches 0.
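For instance (a small sketch with made-up numbers, not tied to any particular distribution), multiplying many small probabilities underflows in floating point, while the equivalent sum of logarithms stays finite:

[code]
import math

# Hypothetical independent event probabilities: the direct product
# underflows to 0.0, but the sum of logs remains a finite number.
probs = [0.01] * 200

direct_product = math.prod(probs)          # 1e-400 underflows to 0.0
log_sum = sum(math.log(p) for p in probs)  # 200 * ln(0.01), about -921

print(direct_product, log_sum)
[/code]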

What are some real-life applications of limits involving logs?

Limits involving logarithms have many real-life applications, especially in science and engineering. For instance, they are used in physics to work with the half-life of radioactive substances, in chemistry to express the concentration of a solution (e.g. pH), and in economics and biology to model exponential growth and decay. Limits involving logarithms are also useful in data analysis and signal processing to identify trends and patterns in data.
