MHB Q(x) is a strictly convex function, show that G is positive definite

AI Thread Summary
The discussion centers on proving that the symmetric matrix G is positive definite if the quadratic function q(x) is strictly convex. The participants explore the relationship between strict convexity and the properties of the gradient and Hessian, noting that strict convexity implies any stationary point is the unique global minimizer. The key insight is that strict convexity of q(x) gives the inequality q(0) < (1/2)(q(x) + q(-x)) for any nonzero vector x, which reduces to x^T G x > 0. This shows that G must be positive definite, connecting the convexity of the quadratic function to the definiteness of the matrix G.
numbersense
Consider the quadratic function $\displaystyle q(\textbf{x}) = \frac{1}{2} \textbf{x}^T G \textbf{x} + \textbf{d}^T \textbf{x} + c$ on $\mathbb{R}^n$, where $\textbf{d}$ is a constant $n \times 1$ vector, $G$ is a constant $n \times n$ symmetric matrix and $c$ is a scalar.

The gradient is $\nabla q(\textbf{x}) = G \textbf{x} + \textbf{d}$ and the Hessian is $\nabla^2 q(\textbf{x}) = G$.

If $q(\textbf{x})$ is a strictly convex function then show that $G$ is positive definite.

I am not sure whether I should start with the convex function definition or start by considering the gradient or the Hessian.

I tried expanding the inequality in the convex function definition but didn't get anywhere.

There is a proposition that says $f$ is strictly convex on $\mathbb{R}^n$ $\implies$ any stationary point is the unique global minimizer. (I can't even prove that a stationary point exists.) Another theorem says that positive definiteness is a sufficient condition for a stationary point to be the unique global minimizer, and positive semi-definiteness is a necessary condition for a local minimizer. I can't see how to use these statements to prove what the question is asking.
 
numbersense said:
Consider the quadratic function $\displaystyle q(\textbf{x}) = \frac{1}{2} \textbf{x}^T G \textbf{x} + \textbf{d}^T \textbf{x} + c$ on $\mathbb{R}^n$, where $\textbf{d}$ is a constant $n \times 1$ vector, $G$ is a constant $n \times n$ symmetric matrix and $c$ is a scalar.

The gradient is $\nabla q(\textbf{x}) = G \textbf{x} + \textbf{d}$ and the Hessian is $\nabla^2 q(\textbf{x}) = G$.

If $q(\textbf{x})$ is a strictly convex function then show that $G$ is positive definite.

I am not sure whether I should start with the convex function definition or start by considering the gradient or the Hessian.

I tried expanding the inequality in the convex function definition but didn't get anywhere.

There is a proposition that says $f$ is strictly convex on $\mathbb{R}^n$ $\implies$ any stationary point is the unique global minimizer. (I can't even prove that a stationary point exists.) Another theorem says that positive definiteness is a sufficient condition for a stationary point to be the unique global minimizer, and positive semi-definiteness is a necessary condition for a local minimizer. I can't see how to use these statements to prove what the question is asking.
I don't think you need to use the gradient or the Hessian to show that $G$ is positive definite. The fact that $q$ is strictly convex tells you that $q(\textbf{0}) <\frac12\bigl(q(\textbf{x}) + q(-\textbf{x})\bigr)$, for any nonzero vector $\textbf{x}.$ It follows very easily that $0 < \textbf{x}^T G \textbf{x}$.
 
Opalg said:
I don't think you need to use the gradient or the Hessian to show that $G$ is positive definite. The fact that $q$ is strictly convex tells you that $q(\textbf{0}) <\frac12\bigl(q(\textbf{x}) + q(-\textbf{x})\bigr)$, for any nonzero vector $\textbf{x}.$ It follows very easily that $0 < \textbf{x}^T G \textbf{x}$.

Thanks. It follows like this.
\begin{align*}
q\left(\frac{1}{2} \textbf{x} + \left(1 - \frac{1}{2}\right) (-\textbf{x})\right) = q(\textbf{0}) = c &< \frac{1}{2} q(\textbf{x}) + \left(1 - \frac{1}{2}\right) q(-\textbf{x}) = \frac{1}{2}\textbf{x}^T G \textbf{x} + c\\
\textbf{x}^T G\textbf{x} &> 0
\end{align*}
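For the record, the middle equality is just the definition of $q$: since $(-\textbf{x})^T G (-\textbf{x}) = \textbf{x}^T G \textbf{x}$, the $\textbf{d}^T\textbf{x}$ terms cancel,
\begin{align*}
\frac{1}{2} q(\textbf{x}) + \frac{1}{2} q(-\textbf{x}) &= \frac{1}{2}\left(\frac{1}{2}\textbf{x}^T G \textbf{x} + \textbf{d}^T\textbf{x} + c\right) + \frac{1}{2}\left(\frac{1}{2}\textbf{x}^T G \textbf{x} - \textbf{d}^T\textbf{x} + c\right) = \frac{1}{2}\textbf{x}^T G \textbf{x} + c.
\end{align*}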

This is quite clever. I didn't think of obtaining an inequality by using specific values in the convex function definition.
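If a numerical check helps, here is a minimal sketch (assuming NumPy; the particular $G$, $\textbf{d}$, $c$ and test points below are arbitrary illustrations, not part of the problem) verifying that the midpoint gap $\frac{1}{2}\bigl(q(\textbf{x}) + q(-\textbf{x})\bigr) - q(\textbf{0})$ equals $\frac{1}{2}\textbf{x}^T G \textbf{x}$:

Python:
import numpy as np

# Arbitrary illustrative data (not from the thread): a symmetric, positive definite G,
# a vector d and a scalar c, so that q is strictly convex.
G = np.array([[2.0, 0.5],
              [0.5, 1.0]])
d = np.array([1.0, -3.0])
c = 4.0

def q(x):
    # q(x) = (1/2) x^T G x + d^T x + c, as defined in the thread
    return 0.5 * x @ G @ x + d @ x + c

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    gap = 0.5 * (q(x) + q(-x)) - q(np.zeros(2))
    # gap should equal (1/2) x^T G x, and be strictly positive for nonzero x
    print(np.isclose(gap, 0.5 * x @ G @ x), gap > 0)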
 