Maximum Entropy in Gaussian Setting

AI Thread Summary
The discussion focuses on maximizing differential entropy within a set of inequalities related to random variables in a Gaussian setting. It establishes that the Normal distribution maximizes differential entropy and outlines specific inequalities involving mutual information. The user seeks to determine if a jointly Gaussian distribution can be assumed to maximize the entropy of a specific expression given prior assumptions about the distributions of other variables. Clarifications are provided regarding the notation used, specifically that the vertical bars denote conditional probabilities. Ultimately, the user aims to demonstrate that a jointly Gaussian distribution of the involved random variables maximizes the inequalities presented.
ferreat
Hello,
I have a doubt about the distribution of random variables that maximize the differential entropy in a set of inequalities. It is well known that the Normal distribution maximizes the differential entropy. I have the following set of inequalities:

T1 < I(V;Y1|U)
T2 < I(U;Y2)
T3 < I(X1,X2;Y3|V)
T4 < I(X1,X2;Y3)

where Y1 = X1 + N1, Y2 = a*X1 + N2, Y3 = b*X1 + X2 + N3, and N1, N2, N3 are Gaussian ~ N(0,1). The lowercase a and b are positive real numbers with a < b. U, V, X1 and X2 are random variables. I want to maximize the mutual-information terms on the right-hand sides of these inequalities. I know the following:
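For reference, the maximum-entropy fact I am relying on is the standard one: among all distributions with a given variance, the Gaussian has the largest differential entropy,
$$ h(X) \le \frac{1}{2}\log\left(2\pi e\,\sigma^2\right), \qquad \sigma^2 = \operatorname{Var}(X), $$
with equality if and only if X is Gaussian.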

(i) From T4, h(Y3) is maximized when Y3 is Gaussian, which holds when X1 and X2 are (jointly) Gaussian.
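A sketch of the bound behind (i), assuming N3 is independent of (X1, X2):
$$ h(Y_3) \le \frac{1}{2}\log\left(2\pi e \operatorname{Var}(Y_3)\right), \qquad \operatorname{Var}(Y_3) = b^2\operatorname{Var}(X_1) + \operatorname{Var}(X_2) + 2b\operatorname{Cov}(X_1,X_2) + 1, $$
and the upper bound is attained exactly when Y3 is Gaussian, which is the case in particular when (X1, X2) are jointly Gaussian.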

(ii) From T2, since I(U;Y2) = h(Y2) - h(Y2|U), we maximize it by making h(Y2) = h(a*X1+N2) maximum, and by the Entropy Power Inequality (EPI) we bound -h(a*X1+N2|U), which leads to taking X1|U Gaussian.
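A sketch of the EPI step, assuming N2 is independent of (X1, U):
$$ 2^{2h(aX_1+N_2\mid U)} \;\ge\; 2^{2h(aX_1\mid U)} + 2^{2h(N_2)} = 2^{2h(aX_1\mid U)} + 2\pi e, $$
since N2 ~ N(0,1). This lower-bounds h(Y2|U) (equivalently, upper-bounds -h(Y2|U)), with equality when X1 given U = u is Gaussian with a conditional variance that does not depend on u.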

(iii) From T1, we maximize it by making h(Y1|U) = h(X1+N1|U) maximum, which is compatible with part (ii): the bound on -h(a*X1+N2|U) used there can still be attained with Y1 (conditionally) Gaussian, satisfying the maximum-entropy theorem.
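To make (ii) and (iii) concrete: if X1 given U = u is Gaussian with the same conditional variance σ² for every u (a simplifying assumption on my part), then
$$ h(Y_1\mid U) = \frac{1}{2}\log\left(2\pi e\,(\sigma^2 + 1)\right), \qquad h(Y_2\mid U) = \frac{1}{2}\log\left(2\pi e\,(a^2\sigma^2 + 1)\right), $$
so the same choice of conditional distribution that maximizes h(Y1|U) for a given σ² also attains the EPI bound used in (ii).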

The Question:

From T3, can I assume that a jointly Gaussian distribution maximizes h(Y3|V) = h(b*X1+X2+N3|V), given assumptions (i), (ii) and (iii)?

My aim is to show that a jointly Gaussian distribution of U, V, X1 and X2 maximizes the right-hand sides of this set of inequalities. I hope someone can help me out with this.
 
I don't understand your notation. Is "T4" a number, or is it only a designator for an expression? Do the vertical bars "|" denote absolute values, or conditional probabilities?
 
Thanks Stephen for your reply. Basically the set of inequalities is what is known in Information Theory as a rate region:
T1 < I(V;Y1|U)
T2 < I(U;Y2)
T3 < I(X1,X2;Y3|V)
T1+T2+T3 < I(X1,X2;Y3).
T1, T2 and T3 are the rates obtained when transmitting messages 1, 2 and 3. The I's are mutual informations, and the vertical bars "|" indicate conditioning. For instance, I(V;Y1|U) = h(Y1|U) - h(Y1|U,V), where h(·) is the differential entropy.
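Here is a minimal numerical sketch of that decomposition for a jointly Gaussian toy example (the chain U -> V -> X1 -> Y1 below and the helper functions are my own construction, not part of the problem above); it evaluates I(V;Y1|U) from the joint covariance matrix in two equivalent ways:

import numpy as np

# Toy jointly Gaussian example: base variables (U, W1, W2, N1) are i.i.d. N(0,1), and
#   V = U + W1,  X1 = V + W2,  Y1 = X1 + N1.
# Each row of A expresses one of (U, V, X1, Y1) as a linear combination of the base variables.
A = np.array([
    [1.0, 0.0, 0.0, 0.0],   # U
    [1.0, 1.0, 0.0, 0.0],   # V
    [1.0, 1.0, 1.0, 0.0],   # X1
    [1.0, 1.0, 1.0, 1.0],   # Y1 = X1 + N1
])
Sigma = A @ A.T             # joint covariance of (U, V, X1, Y1)
U_, V_, X1_, Y1_ = 0, 1, 2, 3

def h(idx):
    """Differential entropy (in nats) of the Gaussian subvector with covariance Sigma[idx, idx]."""
    sub = Sigma[np.ix_(idx, idx)]
    return 0.5 * np.log((2 * np.pi * np.e) ** len(idx) * np.linalg.det(sub))

# I(V;Y1|U) = h(Y1|U) - h(Y1|U,V), using h(A|B) = h(A,B) - h(B)
h_Y1_given_U  = h([Y1_, U_]) - h([U_])
h_Y1_given_UV = h([Y1_, U_, V_]) - h([U_, V_])
I_via_entropies = h_Y1_given_U - h_Y1_given_UV

# Cross-check: for Gaussians the same quantity is half the log-ratio of
# conditional variances, obtained from Schur complements of Sigma.
def cond_var(target, given):
    S_tg = Sigma[np.ix_([target], given)]
    S_gg = Sigma[np.ix_(given, given)]
    return Sigma[target, target] - (S_tg @ np.linalg.solve(S_gg, S_tg.T)).item()

I_via_variances = 0.5 * np.log(cond_var(Y1_, [U_]) / cond_var(Y1_, [U_, V_]))

print(I_via_entropies, I_via_variances)   # the two values agree (= 0.5*log(3/2) here)

The cross-check only confirms that the h(Y1|U) - h(Y1|U,V) decomposition and the Gaussian closed form give the same number; it says nothing by itself about which input distribution maximizes the region.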
My question is basically this: after having assumed that h(X1+N1|U) maximum implies (X1+N1)|U Gaussian in (iii), can I assume that h(b*X1+X2+N3|V) maximum implies (b*X1+X2+N3)|V Gaussian? I know that if I hadn't assumed (i), (ii) and (iii), the answer to this last question would be affirmative, but having assumed (i), (ii) and (iii), is it still true?
 