Intermediate Math Challenge - May 2018

In summary, the conversation focuses on an intermediate math challenge, with various problems being posed and solved by different users. The rules of the challenge are also mentioned, including the need for full derivations or proofs in solutions. The problems range from solving integrals to proving properties of matrices and primes.
  • #36
QuantumQuest said:
Well done. There is also another very nice way to solve it, but I won't say anything more right now, as there may be people who want to tackle it another way ;)
As we have a solution already, a hint:
A few lines with complex numbers
 
  • Like
Likes QuantumQuest
  • #37
Biker said:
That is literally the exact same solution I gave... No problem though. Cheers!

Yep

##
\frac{(2n-1)^2 (2n-3)^2 \dots 1^2}{(2n)!} = \frac{(2n-1)^2 (2n-3)^2 \dots 1^2}{(2n) (2n-1) (2n-2) (2n-3) \dots} = \frac{(2n-1) (2n-3) \dots}{2n (2n-2) \dots} = \frac{(2n-1) (2n-3) \dots}{2^n n!}
##
##
= \frac{2n (2n-1) (2n-2) (2n-3) \dots}{2^n 2n (2n-2) \dots n!} = \frac{(2n)!}{2^{2n} (n!)^2}
##

So I think honours go to Biker.
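
A quick numerical sanity check of the identity above (an editorial addition, not part of the original posts; it assumes sympy is installed):

```python
import sympy as sp

# Check ((2n-1)!!)^2 / (2n)!  ==  (2n)! / (2^(2n) (n!)^2) for small n.
for n in range(1, 8):
    lhs = sp.factorial2(2*n - 1)**2 / sp.factorial(2*n)
    rhs = sp.factorial(2*n) / (2**(2*n) * sp.factorial(n)**2)
    assert sp.simplify(lhs - rhs) == 0
print("identity verified for n = 1..7")
```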
 
  • #38
lpetrich said:
I'll take a stab at solving problem 5.
The first thing I do is to find the commutators of the ##D_n##'s. For convenience, I will relabel them by shifting the index, taking the new ##D_n## to be the old ##D_{n+1}##, i.e. ##D_n = x^{n+1} \frac{d}{dx}##.

Combining the operators,
$$ D_m(D_n(f(x))) = D_m( x^{n+1} f'(x) ) = x^{m+n+2} f''(x) + (n+1) x^{m+n+1} f'(x) $$
Thus, the commutator is ##[D_m,D_n] = (n-m)D_{m+n}##.
This I recognize as the Virasoro algebra.
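
A symbolic check of this commutation relation with the shifted operators ##D_n = x^{n+1}\frac{d}{dx}## (an editorial addition, not part of lpetrich's post; it assumes sympy):

```python
import sympy as sp

x, m, n = sp.symbols('x m n')
f = sp.Function('f')

def D(k, g):
    # D_k acts on a function g as x^(k+1) * dg/dx
    return x**(k + 1) * sp.diff(g, x)

commutator = D(m, D(n, f(x))) - D(n, D(m, f(x)))   # [D_m, D_n] applied to f
expected = (n - m) * D(m + n, f(x))
print(sp.simplify(commutator - expected))           # prints 0
```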

I now consider what finite subalgebras are possible. From the commutators I deduce some constraints.
  • The zero operator, ##D_0##, does not affect the presence or absence of any other operators.
  • If two operators are positive, ##D_m## and ##D_n## with ##m, n > 0##, then their commutator contains an operator ##D_{m+n}## with a higher index. Repeating the commutation with this new operator gives another one with an even higher index. Thus, if more than one positive operator is present, there are infinitely many positive operators.
  • The same argument shows that if two operators are negative, ##D_m## and ##D_n## with ##m, n < 0##, then they generate infinitely many negative operators.
  • So there is at most one positive operator and at most one negative operator.
  • Their commutator must be a multiple of the zero operator, or else they would generate more than one positive or negative operator.
Thus, the possible finite subalgebras of this algebra are
  • One element: ##D_n## for any ##n##.
  • Two elements: ##D_0## and ##D_n## for any nonzero ##n##.
  • Three elements: ##D_n##, ##D_{-n}##, and ##D_0## for any nonzero ##n##.
The three-element one's commutators:
## [D_0, D_n] = n D_n ##
## [D_0, D_{-n}] = - n D_{-n} ##
## [D_{-n}, D_n] = 2n D_0 ##
These can be combined into
## [D_0, D_n+D_{-n}] = n (D_n - D_{-n}) ##
## [D_n-D_{-n},D_0] = - n (D_n + D_{-n}) ##
## [D_n+D_{-n}, D_n-D_{-n}] = 4n D_0 ##
The pattern of signs points to the algebra SO(2,1).

So the possible finite subalgebras are the one-element ones, the nontrivial two-element ones, and the three-element SO(2,1).
This is correct, well done.
However, the three-dimensional case is commonly denoted as the simple Lie algebra of type ##A_1##, i.e. ##\mathfrak{sl}_2 \cong \mathfrak{su}_2##. The two-dimensional case is its Borel subalgebra, the maximal solvable subalgebra; it is the only non-Abelian Lie algebra of dimension two. The one-dimensional case is obviously Abelian. These are the only differential structures on the real line, if I remember correctly. By the way, it's the Witt algebra; Virasoro algebras are central extensions of the Witt algebra.
 
Last edited:
  • #39
Biker said:
That is literally the exact same solution I gave... No problem though. Cheers!
Correct me if I'm wrong, but as far as I could see, you gave the result without showing how to arrive at it.
 
  • #40
fresh_42 said:
Correct me if I'm wrong, but as far as I could see, you gave the result without showing how to arrive at it.
I gave the method in a spoiler, but I don't really care, as long as my answer was correct. The pleasure of solving it is enough.
 
  • Like
Likes member 587159
  • #41
Biker said:
That is literally the exact same solution I gave... No problem though. Cheers!

Sorry, but where exactly is this solution? In the spoiler in your post #19 you gave just one headline of what you did. Also, you had asked me about an answer which was not the final answer. Why didn't you post your full solution as lpetrich did?
 
  • #42
Biker said:
I gave the method in a spoiler, but I don't really care, as long as my answer was correct. The pleasure of solving it is enough.
Sorry, but I have now searched twice for problem #6, and all I could find is post #14 with a result, not a way to the result. To be precise, it wasn't even a result, just a question. Anyway, there will be more questions. (And all of you are still invited to PM me good problems. Mine seem to be too easy for you :wink:.)
 
  • Like
Likes Biker
  • #43
QuantumQuest said:
Sorry, but where exactly is this solution? In the spoiler in your post #19 you gave just one headline of what you did. Also, you had asked me about an answer which was not the final answer. Why didn't you post your full solution as lpetrich did?
The problems don't really need to show steps; if you know what to do, then the rest is trivial.
I gave a solution which was the "final" solution for me, and a lot of alternative forms of the solution exist. I also said that the method is just recursive integration, and that the integration simplifies itself (last term) with the boundaries of the integral.

I really liked the problem though, because it was the first time, at least for me, to set up a recursion in an integration, and I was quite happy when I noticed it. So thank you for the question.
 
  • #44
Biker said:
The problems don't really need to show steps; if you know what to do, then the rest is trivial.

If you solve a problem for yourself, i.e. in your own study, you can do it any way you wish. But in the context of a challenge there are rules, and for good reason. So rules also apply to our challenges here. If you haven't already done so, please take a look at the rules.

Biker said:
I really liked the problem though, because it was the first time, at least for me, to set up a recursion in an integration, and I was quite happy when I noticed it. So thank you for the question.

It is always good for all of us to learn, so you're welcome. I think it would be even more constructive to try to come up with a different way. A different solution (and when we say different, we mean it) also deserves credit, no matter whether the problem has already been solved in one way. So, if you want, try a different approach, but remember that here only full solutions (i.e. including all steps) are credited.
 
  • Like
Likes Biker and StoneTemplePython
  • #45
fresh_42 said:
This is correct, well done.
However, the three-dimensional case is commonly noted as the simple Lie algebra of type ##A_1##, i.e. ##\mathfrak{sl}_2 \cong \mathfrak{su}_2##. The two-dimensional case is its Borel subalgebra, the maximal solvable subalgebra. It is the only non Abelian of dimension two. The one-dimensional is obviously Abelian. These are the only differential structures on the real line, if I remember it correctly.
The 3D Lie algebra ##A_1## is something of a degenerate case. For real parameters,
##\mathfrak{so}(3) \cong \mathfrak{su}(2) \cong \mathfrak{sp}(2)## (or ##\mathfrak{usp}(2)##)
##\mathfrak{so}(2,1) \cong \mathfrak{su}(1,1) \cong \mathfrak{sp}(2,\mathbb{R}) \cong \mathfrak{sl}(2,\mathbb{R})##
They are related by analytic continuation.

If you've ever worked with quantum-mechanical angular momentum, you've worked with this algebra.
 
  • #46
What do you mean by degenerate? It is a term I would avoid by all means in the context of semisimple Lie algebras, as they can be defined precisely by the non-degeneracy of their Killing forms. This adjective is highly confusing when used as you did.

The three-dimensional ##\mathfrak{sl}(2)## is, so to say, the prototype of a simple Lie algebra, with one dimension for everything that's needed. As there is only one, I've never really cared about the various realizations, and ##\mathfrak{sl}(2)## is easiest to handle, although ##\mathfrak{su}_\mathbb{R}(2,\mathbb{C})## is the physics version of it.
 
Last edited:
  • #47
fresh_42 said:
What do you mean by degenerate?
"Degenerate" in the sense of "degeneracy" in quantum mechanics -- several algebra-family members looking alike.
 
  • #48
fresh_42 said:
That's correct, although you could have made life much easier for an old man. I don't have all the trig formulas in mind anymore, and demonstrating the integration by parts would have been a nice service for the younger among us. However, the argument with the odd function was nice, so I won't complain about the incorrect rounding of the result, although an exact solution would have been better:
$$
\int_{-1}^0 \,x \cdot \sqrt{x^2+x+1} \,dx = -\frac{1}{4} - \frac{3}{16} \log 3 \approx -0.45598980\ldots \approx -0.456
$$
I got this result using the substitution ##u=x +1/2## and the entries in a table of integrals for $$\int_\frac{-1}{2}^\frac{1}{2}u\sqrt{u^2 + \frac{3}{4}}du -\frac{1}{2}\int_\frac{-1}{2}^\frac{1}{2}\sqrt{u^2 + \frac{3}{4}}du$$
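
For readers who want to double-check the exact value against a numerical quadrature, here is a small comparison (an editorial addition, assuming scipy is available):

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the exact value -1/4 - (3/16) log 3.
numeric, _ = quad(lambda x: x * np.sqrt(x**2 + x + 1), -1, 0)
exact = -0.25 - (3/16) * np.log(3)
print(numeric, exact)   # both approximately -0.4559898
```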
 
  • #49
Hello QuantumQuest. You said there is a nice way to do problem 6. Did you mean this:

First write

##
I = \int_0^\pi \sin^{2n} \theta d \theta = \frac{1}{2} \int_{-\pi}^\pi \sin^{2n} \theta d \theta
##

where we have used that ##\sin^{2n} \theta## is an even function in ##\theta##. This can then be converted into a contour integral around the unit circle by the change of variables

##
z = e^{i \theta} \qquad dz = i e^{i \theta} d \theta
##

and writing

##
\sin \theta = \frac{1}{2i} \Big( z - \frac{1}{z} \Big)
##

so that

\begin{align}
I & = \frac{1}{2} \oint \Big[ \frac{1}{2i} \Big( z - \frac{1}{z} \Big) \Big]^{2n} \frac{dz}{iz}
\nonumber \\
& = \frac{1}{2i} \frac{(-1)^n}{2^{2n}} \oint \Big( z - \frac{1}{z} \Big)^{2n} \frac{dz}{z}
\nonumber
\end{align}

This contour integral is easily evaluated by picking out the coefficient of ##\frac{1}{z}## using the binomial expansion:

\begin{align}
I & =
\frac{1}{2i} \frac{(-1)^n}{2^{2n}} \oint \Big( \cdots +
\begin{pmatrix}
2n \\ n
\end{pmatrix}
z^n \Big( - \frac{1}{z} \Big)^n + \cdots \Big) \frac{dz}{z}
\nonumber \\
& = \frac{1}{2i} \frac{(-1)^n}{2^{2n}} \oint \Big( \cdots + \frac{(-1)^n (2n)!}{(n!)^2} + \cdots \Big) \frac{dz}{z}
\nonumber \\
& = \frac{1}{2i} \frac{(-1)^n}{2^{2n}} \times 2 \pi i \frac{(-1)^n (2n)!}{(n!)^2}
\nonumber \\
& = \frac{(2n)!}{2^{2n} (n!)^2} \pi
\nonumber
\end{align}
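
A quick numerical check of the final formula ##\int_0^\pi \sin^{2n}\theta\, d\theta = \frac{(2n)!}{2^{2n}(n!)^2}\pi## (an editorial addition, assuming numpy/scipy):

```python
import numpy as np
from scipy.integrate import quad
from math import factorial, pi

# Compare the integral with the closed form pi * (2n)! / (2^(2n) (n!)^2).
for n in range(1, 6):
    numeric, _ = quad(lambda t: np.sin(t)**(2*n), 0, pi)
    closed = pi * factorial(2*n) / (2**(2*n) * factorial(n)**2)
    print(n, round(numeric, 10), round(closed, 10))
```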
 
Last edited:
  • Like
Likes lpetrich, nuuskur, QuantumQuest and 1 other person
  • #51
Complex analysis a.k.a black magic :oldgrumpy:
 
  • #52
Only 4 left to solve and we have 2.5 weeks left in May. Awesome job everyone!
 
  • #53
First day the problems are open to all. Still open:
1.) Determinants.
7.) Series.
9.) Jacobi Identity.
10.) Integral.
 
Last edited:
  • Like
Likes Greg Bernhardt
  • #54
Is this the solution to problem 10? Take the contour integral ##I = \oint_{|z|=1} \frac{e^{kz}}{z}\, dz## around the unit circle.

By the residue theorem:

##
I = 2 \pi i \lim_{z \rightarrow 0} e^{kz} = 2 \pi i .
##

Then by the change of variables

##
z = e^{i \theta} \qquad dz = i e^{i \theta} d \theta
##

The integral becomes

\begin{align}
I & = \int_{- \pi}^\pi e^{k e^{i \theta}} \frac{i e^{i \theta} d \theta}{e^{i \theta}}
\nonumber \\
& = i \int_{- \pi}^\pi e^{k \cos \theta + i k \sin \theta} d \theta
\nonumber \\
& = i \int_{- \pi}^\pi e^{k \cos \theta} e^{i k \sin \theta} d \theta
\nonumber \\
& = i \int_{- \pi}^\pi e^{k \cos \theta} [ \cos (k \sin \theta) + i \sin (k \sin \theta) ] d \theta
\nonumber
\end{align}

Using that ##\sin (k \sin \theta)## is an odd function in ##\theta##, we have

##
I = i \int_{- \pi}^\pi e^{k \cos \theta} \cos (k \sin \theta) d \theta .
##

Comparing this to the first answer we got for ##I##, we have

##
\int_{- \pi}^\pi e^{k \cos \theta} \cos (k \sin \theta) d \theta = 2 \pi .
##
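
A numerical spot-check that the value really is ##2\pi## independently of ##k## (an editorial addition, assuming numpy/scipy):

```python
import numpy as np
from scipy.integrate import quad
from math import pi

# The value should be 2*pi for every real k.
for k in (0.5, 1.0, 2.0, 3.7):
    val, _ = quad(lambda t: np.exp(k*np.cos(t)) * np.cos(k*np.sin(t)), -pi, pi)
    print(k, val, 2*pi)
```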
 
  • Like
Likes Greg Bernhardt and QuantumQuest
  • #56
I think I have solved problem 7.

I used the generating function technique. Define

##
f (x) = \sum_{n=1}^\infty \frac{(n!)^2}{n (2n)!} x^n = \sum_{n=1}^\infty a_n x^n
##

The sum we are after will then be equal to ##f (1)##.

Note that

##
a_{n+1} = \frac{(n!)^2}{n (2n)!} \times \frac{n (n+1)^2}{(n+1) (2n+2) (2n+1)} = a_n \times \frac{n}{2 (2n +1)}
##

or ##2 (2n+1) a_{n+1} = n a_n##. In the following we use ##a_1 = 1/2##. First

\begin{align}
x \frac{d}{dx} f (x) & = \sum_{n=1}^\infty n a_n x^n
\nonumber \\
& = \sum_{n=1}^\infty 2 (2n+1) a_{n+1} x^n
\nonumber
\end{align}

Then

\begin{align}
x^2 \frac{d}{dx} f (x) & = \sum_{n=1}^\infty 2 (2n+1) a_{n+1} x^{n+1}
\nonumber \\
& = 2 \sum_{m=2}^\infty (2m -1) a_m x^m
\nonumber \\
& = 2 \sum_{m=1}^\infty (2m -1) a_m x^m - 2 (2 \times 1 - 1) a_{1} x
\nonumber \\
& = 4 \sum_{m=1}^\infty m a_m x^m - 2 f (x) - x
\nonumber \\
& = 4 x \frac{d}{dx} f (x) - 2 f (x) - x
\nonumber
\end{align}

This gives the differential equation:

##
(x^2 - 4x) \frac{d}{dx} f (x) + 2f (x) + x = 0
##

or

##
\frac{d}{dx} f (x) + \frac{2}{x^2 - 4x} f (x) = - \frac{1}{x - 4} .
##

(with boundary condition ##f (0) = 0##). Now this is the familiar form

##
\frac{d}{dx} f (x) + p (x) f (x) = q (x)
##

that can be solved using the integrating factor method which uses

\begin{align}
\frac{d}{dx} \Big[ e^{\nu (x)} f (x) \Big] & = e^{\nu (x)} \Big[ \frac{d}{dx} f (x) + p (x) f (x) \Big]
\nonumber \\
& = e^{\nu (x)} q (x)
\nonumber
\end{align}

where ##\nu (x) = \int p (x) dx##. We will solve this with

##
f (x) = e^{- \nu (x)} \int e^{\nu (x)} q(x) dx .
##

In our case

##
p (x) = - \frac{1}{2} \Big( \frac{1}{x} - \frac{1}{x-4} \Big) , \qquad q (x) = - \frac{1}{x-4}
##

so that

\begin{align}
\nu (x) & = - \frac{1}{2} \int \Big( \frac{1}{x} - \frac{1}{x - 4} \Big) dx
\nonumber \\
& = - \frac{1}{2} ( \ln x - \ln (x-4))
\nonumber
\end{align}

meaning

##
e^{+ \nu (x)} = \frac{(x-4)^{1/2}}{x^{1/2}} \qquad \mathrm{and} \qquad e^{- \nu (x)} = \frac{x^{1/2}}{(x-4)^{1/2}}
##

and

\begin{align}
f (x) & = - \frac{x^{1/2}}{(x-4)^{1/2}} \int_0^x \frac{(x-4)^{1/2}}{x^{1/2}} \frac{1}{x-4} dx
\nonumber \\
& = i \frac{x^{1/2}}{(4-x)^{1/2}} \int_0^x \frac{dx}{(x^2 - 4x)^{1/2}}
\nonumber
\end{align}

(choosing the lower limit to be ##0## will ensure that ##f (0) = 0##). We now take ##x = 1##, then

\begin{align}
\sum_{n=1}^\infty \frac{(n!)^2}{n (2n)!} & = f (1)
\nonumber \\
& = i \frac{1}{\sqrt{3}} \int_0^1 \frac{dx}{(x^2 - 4 x)^{1/2}}
\nonumber
\end{align}

We evaluate the above integral. Write ##x^2 - 4 x = [x - 2]^2 - 4##. Then with the substitution ##u = x - 2## we obtain

\begin{align}
\int_0^1 \frac{dx}{(x^2 - 4 x)^{1/2}} & = \int_{-2}^{-1} \frac{du}{(u^2 - 2^2)^{1/2}}
\nonumber \\
& = \Big[ \cosh^{-1} (u/2) \Big]_{-2}^{-1}
\nonumber \\
& = \cosh^{-1} (-1/2) - \cosh^{-1} (-1) .
\nonumber
\end{align}

Using the formula ##\cosh^{-1} x = \ln [x + \sqrt{x+1} \sqrt{x-1}]##, we have

\begin{align}
\cosh^{-1} (-1/2) - \cosh^{-1} (-1) & = \ln [-1/2 + i \sqrt{3}/2] - \ln (-1)
\nonumber\\
& = i \frac{2}{3} \pi - i \pi
\nonumber\\
& = - i \frac{1}{3} \pi
\nonumber
\end{align}

Accordingly

\begin{align}
\sum_{n=1}^\infty \frac{(n!)^2}{n (2n)!} & = i \frac{1}{\sqrt{3}} \times -i \frac{\pi}{3}
\nonumber \\
& = \frac{\pi}{3 \sqrt{3}} .
\nonumber
\end{align}
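
A quick numerical check of the sum (an editorial addition; the partial sums converge rapidly because the terms decay roughly like ##4^{-n}##):

```python
from math import factorial, pi, sqrt

s = sum(factorial(n)**2 / (n * factorial(2*n)) for n in range(1, 60))
print(s, pi / (3 * sqrt(3)))   # both approximately 0.6045997881
```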
 
  • Like
Likes Greg Bernhardt
  • #57
julian said:
I think I have solved problem 7.
Wow! It looks good at first glance, i.e. the result is correct and I couldn't find flaws. There is a simpler solution using a known series expansion, but this one is really good.
 
  • #58
Thanks, fresh_42. One thing that needs to be checked is whether we are allowed to differentiate the series term by term. This is easy to establish.

There is the theorem:

The derivative of the series sum ##f (x) = \sum_{n=1}^\infty u_n (x)## equals the sum of the individual term derivatives,

##
\frac{d}{dx} f (x) = \sum_{n=1}^\infty \frac{d}{dx} u_n (x) ,
##

provided the following conditions hold:

##
u_n (x) \quad \mathrm{and} \quad \frac{d u_n (x)}{dx} \quad \mathrm{are \; continuous \; in} \; [a,b]
##

##
\sum_{n=1}^\infty \frac{d u_n (x)}{dx} \quad \mathrm{is \; uniformly \; convergent \; in} \; [a,b] .
##

In our case it is obvious that ##u_n (x) = a_n x^n## and ##\frac{d u_n (x)}{dx} = n a_n x^{n-1}## are continuous in ##[0,1]##. So we just have to check that ##\sum_{n=1}^\infty \frac{d u_n (x)}{dx}## is uniformly convergent.

We can do this using the Weierstrass M test. This states that if we can construct a series of numbers ##\sum_1^\infty M_n##, in which ##M_n \geq |v_n (x)|## for all ##x \in [a,b]##, and ##\sum_1^\infty M_n## is convergent, then the series ##\sum_1^\infty v_n (x)## will be uniformly convergent in ##[a,b]##.

In our case an obvious candidate for the ##M_n##'s is ##n a_n = \frac{(n!)^2}{(2n)!}##. The ratio test shows that the series ##\sum_1^\infty M_n## converges:

##
\lim_{n \rightarrow \infty} \frac{(n+1) a_{n+1}}{n a_n} = \lim_{n \rightarrow \infty} \frac{(n+1)^2}{(2n+2) (2n+1)} = \frac{1}{4} < 1 .
##

We then just need to note that

##
M_n = n a_n \geq |n a_n x^{n-1}| = \Big| \frac{d u_n (x)}{dx} \Big| \quad \mathrm{for \; all} \;\; x \in [0,1] .
##

Also, it might have been "nicer" to write

##
f (x) = \frac{x^{1/2}}{(4-x)^{1/2}} \int_0^x \frac{dx}{(4 x - x^2)^{1/2}}
##

and

\begin{align}
f (1) & = \frac{1}{\sqrt{3}} \int_0^1 \frac{dx}{(4x - x^2)^{1/2}}
\nonumber \\
& = \frac{1}{\sqrt{3}} \int_0^1 \frac{dx}{[2^2 - (x - 2)^2]^{1/2}}
\nonumber \\
& = \frac{1}{\sqrt{3}} \int_{-2}^{-1}\frac{du}{[2^2 - u^2]^{1/2}}
\nonumber \\
& = \frac{1}{\sqrt{3}} \Big[ \sin^{-1} (u/2) \Big]_{-2}^{-1}
\nonumber \\
& = \frac{1}{\sqrt{3}} \Big( \sin^{-1} (1) - \sin^{-1} (1/2) \Big)
\nonumber \\
& = \frac{1}{\sqrt{3}} \Big( \frac{\pi}{2} - \frac{\pi}{6} \Big)
\nonumber \\
& = \frac{\pi}{3 \sqrt{3}}
\nonumber
\end{align}

but it is equivalent.
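
A numerical check of this alternative form (an editorial addition, assuming scipy):

```python
import numpy as np
from scipy.integrate import quad
from math import pi, sqrt

val, _ = quad(lambda x: 1.0 / np.sqrt(4*x - x**2), 0, 1)
print(val / sqrt(3), pi / (3 * sqrt(3)))   # both approximately 0.604599788
```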
 
Last edited:
  • Like
Likes fresh_42
  • #59
I think I have solved problem 1:

I split the proof into the parts:

Part (a) A few facts about real skew symmetric matrices.
Part (b): Proof for ##n## even.
(i) Looking at case ##n = 2##.
(ii) Proving a key inequality. This will prove case ##n=2##
(iii) Proving case for general even ##n## (then easy).
Part (c) Case of odd ##n##.
(i) Proving case for ##n = 3## (easy because of part (b)).
(ii) Proving case for general odd ##n## (easy because of part (b)).

Part (a):

Few facts about real skew matrices:

They are normal: ##A A^\dagger = A A^T = -A A = A^T A = A^\dagger A## and so the spectral theorem holds. There is a unitary matrix ##U## such that ##U^\dagger A U = D## where ##D## is a diagonal matrix. The entries of the diagonal of ##D## are the eigenvalues of ##A##.
The eigenvalues are pure imaginary.
As the coefficients of the characteristic polynomial ##\det (A - \lambda I)## are real, the eigenvalues come in conjugate pairs. If the dimension ##n## of the matrix ##A## is odd, then 0 must be one of the eigenvalues.

Part (b) (i):

We first take the simplest case of even ##n##: ##n = 2##. We can write

##
U^\dagger A U = D =
\begin{pmatrix}
\lambda & 0 \\
0 & - \lambda
\end{pmatrix} .
##

First consider

\begin{align}
& \prod_{i=1}^k \det \big( A + x_i I \big) =
\nonumber \\
& = \det \big[ \big( A + x_1 I \big) \big( A + x_2 I \big) \dots \big( A + x_k I \big) \big]
\nonumber \\
& = \det \big[ U^{-1} \big( A + x_1 I \big) U U^{-1} \big( A + x_2 I \big) U U^{-1} \dots U U^{-1} \big( A + x_k
I \big) U \big]
\nonumber \\
& = \det \Big[
\begin{pmatrix}
\lambda + x_1 & 0 \\
0 & - \lambda + x_1
\end{pmatrix}
\begin{pmatrix}
\lambda + x_2 & 0 \\
0 & - \lambda + x_2
\end{pmatrix}
\dots
\begin{pmatrix}
\lambda + x_k & 0 \\
0 & - \lambda + x_k
\end{pmatrix}
\Big]
\nonumber \\
& = \det
\nonumber \\
&
\begin{pmatrix}
(\lambda + x_1) (\lambda + x_2) \dots (\lambda + x_k) & 0 \\
0 & (- \lambda + x_1) (- \lambda + x_2) \dots (- \lambda + x_k)
\end{pmatrix}
\nonumber \\
& = (- \lambda^2 + x_1^2) (- \lambda^2 + x_2^2) \dots (- \lambda^2 + x_k^2)
\nonumber \\
& = (\theta^2 + x_1^2) (\theta^2 + x_2^2) \dots (\theta^2 + x_k^2) \qquad \qquad \qquad \qquad \qquad \qquad (1)
\nonumber
\end{align}

where we have introduced ##\lambda = i \theta## where ##\theta## is real.

Now consider

\begin{align}
& \det \Big( A + (\prod_{i=1}^k x_i)^{1/k} I \Big)^k =
\nonumber \\
& =
\det \Big[
\begin{pmatrix}
\lambda + (x_1 x_2 \dots x_k)^{1/k} & 0 \\
0 & - \lambda + (x_1 x_2 \dots x_k)^{1/k}
\end{pmatrix}^k
\Big]
\nonumber \\
& = \det \Big[
\begin{pmatrix}
\lambda + (x_1 x_2 \dots x_k)^{1/k} & 0 \\
0 & - \lambda + (x_1 x_2 \dots x_k)^{1/k}
\end{pmatrix}
\Big]^k
\nonumber \\
& = \big[ - \lambda^2 + (x_1^2 x_2^2 \dots x_k^2)^{1/k} \big]^k
\nonumber \\
&= \big[ \theta^2 + (x_1^2 x_2^2 \dots x_k^2)^{1/k} \big]^k
\qquad \qquad (2)
\nonumber
\end{align}

Comparing eq (1) and eq (2), we see that proving the main result for ##n = 2## then amounts to proving the inequality:

##
(\theta^2 + x_1^2) \dots (\theta^2 + x_k^2) \geq [\theta^2 + (x_1^2 \dots x_k^2)^{1/k}]^k
\qquad \qquad \qquad \qquad (3)
##

We prove this, for general value of ##\theta##, in the next subsection. See next spoiler!

Part (b) (ii):

We wish to prove (3):

##
(\theta^2 + x_1^2) \dots (\theta^2 + x_k^2) \geq [\theta^2 + (x_1^2 \dots x_k^2)^{1/k}]^k
##

The proof is an essentially inductive argument. Our base case ##k = 2## is easy:

\begin{align}
(\theta^2 + x_1^2) (\theta^2 + x_2^2) & = \theta^4 + \theta^2 x_1^2 + \theta^2 x_2^2 + x_1^2 x_2^2
\nonumber \\
& \geq \theta^4 + 2 \theta^2 (x_1 x_2) + x_1^2 x_2^2
\nonumber \\
& = [\theta^2 + (x_1^2 x_2^2)^{1/2}]^2
\nonumber
\end{align}

where we used ##a^2 + b^2 \geq 2 ab##.

Next we prove that whenever the result holds for ##k##, it holds for ##2k## as well. That is, we'll first prove the result for powers of ##2##: ##k = 2, 4, 8, 16, \dots##. Assume we know the result holds for some ##k##. Now consider ##2k## positive numbers ##x_1^2, \dots , x_k^2## and ##y_1^2 , \dots, y_k^2##. We use the induction hypothesis and the base case to find

\begin{align}
& (\theta^2 + x_1^2) \dots (\theta^2 + x_k^2) (\theta^2 + y_1^2) \dots (\theta^2 + y_k^2)
\nonumber \\
& \geq [\theta^2 + (x_1^2 \dots x_k^2)^{1/k}]^k [\theta^2 + (y_1^2 \dots y_k^2)^{1/k}]^k
\nonumber \\
& = [\theta^4 + \theta^2 (x_1^2 \dots x_k^2)^{1/k} + \theta^2 (y_1^2 \dots y_k^2)^{1/k} + (x_1^2 \dots x_k^2
y_1^2 \dots y_k^2)^{1/k}]^k
\nonumber \\
& \geq [\theta^4 + 2 \theta^2 (x_1^2 \dots x_k^2 y_1^2 \dots y_k^2)^{1/2k} + (x_1^2 \dots x_k^2
y_1^2 \dots y_k^2)^{1/k}]^k
\nonumber \\
& = [\theta^2 + (x_1^2 \dots x_k^2 y_1^2 \dots y_k^2)^{1/2k}]^{2k}
\nonumber
\end{align}

where we used ##a^2 + b^2 \geq 2 ab##. This is the required result. We now know the theorem to be true for infinitely many ##k##.

Next we prove that whenever the result is true for ##k##, it's also true for ##k - 1##. This will prove the result for all the in-between integers. Let ##k \geq 4## and assume the result holds for ##k##. Consider the ##k-1## positive numbers ##x_1^2 , x_2^2 , \dots , x_{k-1}^2##. Define ##x_k^2## to be ##(x_1^2 x_2^2 \dots x_{k-1}^2)^{1/(k-1)}##. We then have

\begin{align}
& (\theta^2 + x_1^2) (\theta^2 + x_2^2) \dots (\theta^2 + x_{k-1}^2) (\theta^2 + x_k^2)
\nonumber \\
& \geq [\theta^2 + (x_1^2 \dots x_{k-1}^2 x_k^2)^{1/k} ]^k
\nonumber \\
& = [\theta^2 + \{ (x_1^2 \dots x_{k-1}^2)^{1/(k-1)} \}^{(k-1)/k} \; (x_k^2)^{1/k} ]^k
\nonumber \\
& = [\theta^2 + \big( x_k^2 \big)^{(k-1)/k} \; (x_k^2)^{1/k} ]^k
\nonumber \\
& = [\theta^2 + x_k^2]^k
\nonumber
\end{align}

Rearranging we have

\begin{align}
& (\theta^2 + x_1^2) (\theta^2 + x_2^2) \dots (\theta^2 + x_{k-1}^2) \geq [\theta^2 + x_k^2]^{k-1}
\nonumber \\
& \qquad \equiv [\theta^2 + (x_1^2 x_2^2 \dots x_{k-1}^2)^{1 / (k-1)} ]^{k-1}
\nonumber
\end{align}

which is the required result.

Which means we have established (3) and proven the main result for ##n = 2##, i.e.,

##
\prod_{i=1}^k \det \big( A + x_i I \big) \geq \det \Big( A + (\prod_{i=1}^k x_i)^{1/k} I \Big)^k \qquad n = 2 .
##

We will prove the main result for arbitrary even ##n## in the next subsection. See next spoiler!
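
A quick random-instance check of the ##n = 2## inequality proved above (an editorial addition, not part of julian's proof; it assumes numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.1, 2.0)
A = np.array([[0.0, theta], [-theta, 0.0]])      # real skew-symmetric, n = 2
x = rng.uniform(0.1, 3.0, size=5)                # k = 5 positive numbers
I = np.eye(2)

lhs = np.prod([np.linalg.det(A + xi * I) for xi in x])
gm = np.prod(x) ** (1 / len(x))                  # geometric mean of the x_i
rhs = np.linalg.det(A + gm * I) ** len(x)
print(lhs >= rhs, lhs, rhs)
```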

Part (b) (iii):

We now prove the main result for general even ##n##. We can write

##
U^\dagger A U = D =
\begin{pmatrix}
\lambda_1 & 0 & 0 & 0 & \dots & \dots & 0 & 0 \\
0 & - \lambda_1 & 0 & 0& \dots & \dots & 0 & 0 \\
0 & 0 & \lambda_2 & 0& \dots & \dots & 0 & 0 \\
0 & 0 & 0 & - \lambda_2& \dots & \dots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
\vdots & \vdots & \vdots & \vdots & \dots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \dots & \dots & \lambda_{n/2} & 0 \\
0 & 0 & 0 & 0 & \dots & \dots & 0 & - \lambda_{n/2} \\
\end{pmatrix} .
##

So that

\begin{align}
& \prod_{i=1}^k \det (A + x_i I)
\nonumber \\
& = \prod_{i=1}^k \det
\nonumber \\
&
\begin{pmatrix}
\lambda_1 + x_i & 0 & \dots & \dots & 0 & 0 \\
0 & - \lambda_1 + x_i & \dots & \dots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
\vdots & \vdots & \dots & \ddots & \vdots & \vdots \\
0 & 0 & \dots & \dots & \lambda_{n/2} + x_i & 0 \\
0 & 0 & \dots & \dots & 0 & - \lambda_{n/2} + x_i \\
\end{pmatrix}
\nonumber \\
& =
\Big( \prod_{i=1}^k
\det
\begin{pmatrix}
\lambda_1 + x_i & 0 \\
0 & - \lambda_1 + x_i
\end{pmatrix} \Big)
\dots
\Big( \prod_{i=1}^k \det
\begin{pmatrix}
\lambda_{n/2} + x_i & 0 \\
0 & - \lambda_{n/2} + x_i
\end{pmatrix} \Big)
\nonumber \\
& = \big( \prod_{i=1}^k (\theta_1^2 + x_i^2) \big) \dots \big( \prod_{i=1}^k(\theta_{n/2}^2 + x_i^2) \big)
\nonumber
\end{align}

where we have introduced ##\lambda_l = i \theta_l## with ##\theta_l## real.

Next consider

##
\qquad \det \Big( A + (\prod_{i=1}^k x_i)^{1/k} I \Big)^k
##
##
\quad = \det
\begin{pmatrix}
\lambda_1 + (x_1 \dots x_k)^{\frac{1}{k}} & 0 & \dots & \dots \\
0 & - \lambda_1 + (x_1 \dots x_k)^{\frac{1}{k}} & \dots & \dots \\
\vdots & \vdots & \ddots & \vdots \\
\vdots & \vdots & \dots & \ddots
\end{pmatrix}^k
##
##
\quad = \prod_{l=1}^{n/2} \det
\Big[
\begin{pmatrix}
\lambda_l + (x_1 x_2 \dots x_k)^{1/k} & 0 \\
0 & - \lambda_l + (x_1 x_2 \dots x_k)^{1/k}
\end{pmatrix}^k
\Big]
##
##
\quad = \big[ \theta_1^2 + ( x_1^2 x_2^2 \dots x_k^2 )^{1/k} \big]^k
\dots
\big[ \theta_{n/2}^2 + ( x_1^2 x_2^2 \dots x_k^2)^{1/k} \big]^k
##

We easily have from (3) that

\begin{align}
& \big( \prod_{i=1}^k (\theta_1^2 + x_i^2) \big)
\dots
\big( \prod_{i=1}^k (\theta_{n/2}^2 + x_i^2) \big)
\geq
\nonumber \\
& \qquad \qquad \qquad \qquad \qquad
\big[ \theta_1^2 + (x_1^2 x_2^2 \dots x_k^2)^{\frac{1}{k}} \big]^k
\dots
\big[ \theta_{n/2}^2 + (x_1^2 x_2^2 \dots x_k^2)^{\frac{1}{k}} \big]^k
\nonumber
\end{align}

which proves the main result for even ##n##:

##
\prod_{i=1}^k \det \big( A + x_i I \big) \geq \det \Big( A + (\prod_{i=1}^k x_i)^{1/k} I \Big)^k \qquad n \; \mathrm{even} .
##

In the next section we prove the main result for odd ##n##. See next spoiler!

Part (c) (i):

We now consider the case ##n = 3##. We write

##
U^\dagger A U = D =
\begin{pmatrix}
0 & 0 & 0 \\
0 & \lambda & 0 \\
0 & 0 & - \lambda
\end{pmatrix} .
##

First consider

##
\prod_{i=1}^k \det \big( A + x_i I \big) =
##
##
= \det \big[ U^{-1} \big( A + x_1 I \big) U U^{-1} \dots U U^{-1} \big( A + x_k I \big) U \big]
##
##
\quad = \det \Big[
\begin{pmatrix}
x_1 & 0 & 0 \\
0 & \lambda + x_1 & 0 \\
0 & 0 & - \lambda + x_1
\end{pmatrix}
\dots
\begin{pmatrix}
x_k & 0 & 0 \\
0 & \lambda + x_k & 0 \\
0 & 0 & - \lambda + x_k
\end{pmatrix}
\Big]
##
##
= \det
##
##
\begin{pmatrix}
x_1 x_2 \dots x_k & 0 & 0 \\
0 & (\lambda + x_1) \dots (\lambda + x_k) & 0 \\
0 & 0 & (- \lambda + x_1) \dots (- \lambda + x_k)
\end{pmatrix}
##
##
= (x_1 x_2 \dots x_k) \times
##
##
\quad \det
\begin{pmatrix}
(\lambda + x_1) \dots (\lambda + x_k) & 0 \\
0 & (- \lambda + x_1) \dots (- \lambda + x_k)
\end{pmatrix} .
##
##
= (x_1 x_2 \dots x_k) (\theta^2 + x_1^2) \dots (\theta^2 + x_k^2)
##

Now consider

\begin{align}
& \det \Big( A + (\prod_{i=1}^k x_i)^{1/k} I \Big)^k =
\nonumber \\
& = \det
\Big[
\begin{pmatrix}
(x_1 x_2 \dots x_k)^{\frac{1}{k}} & 0 & 0 \\
0 & \lambda + (x_1 x_2 \dots x_k)^{\frac{1}{k}} & 0 \\
0 & 0 & - \lambda + (x_1 x_2 \dots x_k)^{\frac{1}{k}}
\end{pmatrix}
\Big]^k
\nonumber \\
& = (x_1 x_2 \dots x_k) \det
\Big[
\begin{pmatrix}
\lambda + (x_1 x_2 \dots x_k)^{1/k} & 0 \\
0 & - \lambda + (x_1 x_2 \dots x_k)^{1/k}
\end{pmatrix}
\Big]^k
\nonumber \\
& = (x_1 x_2 \dots x_k ) \big[ \theta^2 + (x_1^2 x_2^2 \dots x_k^2)^{1/k} \big]^k
\nonumber
\end{align}

As the OP's question assumes that ##\{ x_1, \dots , x_k \}## are positive numbers, and using the result (3), we have:

##
(x_1 x_2 \dots x_k) (\theta^2 + x_1^2) \dots (\theta^2 + x_k^2) \geq (x_1 x_2 \dots x_k ) \big[ \theta^2 + (x_1^2
x_2^2 \dots x_k^2)^{1/k} \big]^k
##

which establishes the main result for ##n = 3##:

##
\prod_{i=1}^k \det \big( A + x_i I \big) \geq \det \Big( A + (\prod_{i=1}^k x_i)^{1/k} I \Big)^k \qquad n = 3 .
##

Part (c) (ii):

We now turn to the general case of any odd ##n##. We can write

##
U^\dagger A U = D =
\begin{pmatrix}
0 & 0 & 0 & 0 & 0 & \dots & \dots & 0 & 0 \\
0 & \lambda_1 & 0 & 0 & 0 & \dots & \dots & 0 & 0 \\
0 & 0 & - \lambda_1 & 0 & 0& \dots & \dots & 0 & 0 \\
0 & 0 & 0 & \lambda_2 & 0& \dots & \dots & 0 & 0 \\
0 & 0 & 0 & 0 & - \lambda_2& \dots & \dots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \dots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & 0 & \dots & \dots & \lambda_{(n-1)/2} & 0 \\
0 & 0 & 0 & 0 & 0 & \dots & \dots & 0 & - \lambda_{(n-1)/2} \\
\end{pmatrix} .
##

First consider

\begin{align}
& \prod_{i=1}^k \det \big( A + x_i I \big) =
\nonumber \\
& = \prod_{i=1}^k \det
\nonumber \\
&
\begin{pmatrix}
x_i & 0 & 0 & \dots & 0 & 0 \\
0 & \lambda_1 + x_i & 0 & \dots & 0 & 0 \\
0 & 0 & - \lambda_1 +x_i & \dots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \dots & \lambda_{(n-1)/2}+ x_i & 0 \\
0 & 0 & 0 & \dots & 0 & - \lambda_{(n-1)/2} + x_i \\
\end{pmatrix}
\nonumber \\
& = (x_1 x_2 \dots x_k ) \big( \prod_{i=1}^k (\theta_1^2 + x_i^2) \big)
\dots
\big( \prod_{i=1}^k (\theta_{(n-1)/2}^2 + x_i^2) \big)
\nonumber
\end{align}

Now consider

\begin{align}
& \det \Big( A + (\prod_{i=1}^k x_i)^{1/k} I \Big)^k =
\nonumber \\
& = \det
\begin{pmatrix}
(x_1 \dots x_k)^{\frac{1}{k}} & 0 & 0 & \dots \\
0 & \lambda_1 + (x_1 \dots x_k)^{\frac{1}{k}} & 0 & \dots \\
0 & 0 & - \lambda_1 + (x_1 \dots x_k)^{\frac{1}{k}} & \dots \\
\vdots & \vdots & \vdots & \ddots \\
\end{pmatrix}^k
\nonumber \\
& =
(x_1 x_2 \dots x_k ) \big[ \theta_1^2 + (x_1^2 x_2^2 \dots x_k^2)^{1/k} \big]^k
\dots
\big[ \theta_{(n-1)/2}^2 + (x_1^2 x_2^2 \dots x_k^2)^{1/k} \big]^k
\nonumber
\end{align}

We easily have from (3) that

\begin{align}
& (x_1 x_2 \dots x_k ) \big( \prod_{i=1}^k (\theta_1^2 + x_i^2) \big)
\dots
\big( \prod_{i=1}^k (\theta_{(n-1)/2}^2 + x_i^2) \big)
\geq
\nonumber \\
&
(x_1 x_2 \dots x_k ) \big[ \theta_1^2 + (x_1^2 x_2^2 \dots x_k^2)^{1/k} \big]^k
\dots
\big[ \theta_{(n-1)/2}^2 + (x_1^2 x_2^2 \dots x_k^2)^{1/k} \big]^k
\nonumber
\end{align}

which establishes the main result for general odd values of ##n##:

##
\prod_{i=1}^k \det \big( A + x_i I \big) \geq \det \Big( A + (\prod_{i=1}^k x_i)^{1/k} I \Big)^k \qquad \mathrm{for \; odd \;} n .
##

This completes the proof of problem 1.
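
A random-instance check of the full statement for both odd and even ##n## (an editorial addition, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (3, 4, 5, 6):                           # odd and even dimensions
    M = rng.standard_normal((n, n))
    A = M - M.T                                  # real skew-symmetric n x n matrix
    x = rng.uniform(0.1, 3.0, size=4)            # k = 4 positive numbers
    I = np.eye(n)
    lhs = np.prod([np.linalg.det(A + xi * I) for xi in x])
    rhs = np.linalg.det(A + np.prod(x) ** (1 / len(x)) * I) ** len(x)
    print(n, bool(lhs >= rhs - 1e-9))
```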
 
Last edited:
  • Like
Likes StoneTemplePython and Greg Bernhardt
  • #60
julian said:
I think I have solved problem 1:

I split the proof into the parts:

Part (a) A few facts about real skew symmetric matrices.
Part (b): Proof for ##n## even.
(i) Looking at case ##n = 2##.
(ii) Proving a key inequality. This will prove case ##n=2##
(iii) Proving case for general even ##n## (then easy).
Part (c) Case of odd ##n##.
(i) Proving case for ##n = 3## (easy because of part (b)).
(ii) Proving case for general odd ##n## (easy because of part (b)).

Thanks! I was worried I'd have to type up a solution if no one solved it by month end!

The solution looks about right. Proving the ##n=2## case is definitely the key to unlocking the problem, which you tackled in your second spoiler. I'm a bit short on time right now but will take a closer look later on.
 
  • #61
julian said:
I think I have solved problem 1:

I split the proof into the parts:

Part (a) A few facts about real skew symmetric matrices.
Part (b): Proof for ##n## even.
(i) Looking at case ##n = 2##.
(ii) Proving a key inequality. This will prove case ##n=2##
(iii) Proving case for general even ##n## (then easy).
Part (c) Case of odd ##n##.
(i) Proving case for ##n = 3## (easy because of part (b)).
(ii) Proving case for general odd ##n## (easy because of part (b)).

I went through it fairly granularly and did not see any flaws.

A couple of thoughts:

1.) If you are so inclined, in your first spoiler you may make use of the rule
QuantumQuest said:
2) It is fine to use nontrivial results without proof as long as you cite them and as long as it is "common knowledge to all mathematicians". Whether the latter is satisfied will be decided on a case-by-case basis.
I would be happy to accept, via spectral theory, the basic result that a skew-symmetric matrix's eigenvalues have zero real part. On the other hand, your workthrough may be more instructive for third parties who don't know the underlying spectral theory, so both approaches have merit.

2.) The key insight for this problem, in my view, is figuring out the ##n = 2## case; everything can be built off of this. The other insight is relating it to ##\text{GM} \leq \text{AM}## in some way. I think you basically re-created Cauchy's forward-backward induction proof for ##\text{GM} \leq \text{AM}## in Part (b), albeit for superadditivity rather than for vanilla ##\text{GM} \leq \text{AM}##. Since we are at month end, I will share another, much simpler idea: the fact that 'regular' ##\text{GM} \leq \text{AM}## implies this result.

My take is that in Part (b) (ii), when you are seeking to prove:

##(\theta^2 + x_1^2) \dots (\theta^2 + x_k^2) \geq [\theta^2 + (x_1^2 \dots x_k^2)^{1/k}]^k##

or equivalently

##\Big((\theta^2 + x_1^2) \dots (\theta^2 + x_k^2)\Big)^{1/k} \geq \theta^2 + (x_1^2 \dots x_k^2)^{1/k}##
multiply each side by

##\big(\theta^2\big)^{-1}##
(which is positive and doesn't change the inequality) and define
##z_i := \frac{x_i^2}{\theta^2} \gt 0##

The relationship is thus:

##\Big(\prod_{i=1}^k (1 + z_i)\Big)^{1/k}= \Big((1 + z_1) \dots (1 + z_k)\Big)^{1/k} \geq 1 + (z_1 \dots z_k)^{1/k} = \Big(\prod_{i=1}^k 1\Big)^{1/k} + \Big(\prod_{i=1}^k z_i\Big)^{1/k}##

which is true by the super additivity of the Geometric Mean (which incidentally was a past challenge problem, but since it is not this challenge problem I think it is fine to assume it is common knowledge to mathematicians).
- - - -
For the case of eigenvalues equal to zero (i.e. ##\theta = 0##), the inequality holds with equality, which we can chain onto the above.
- - - -
I have a soft spot for proving this via ##2^r## for ##r \in \{1, 2, 3, \dots\}## and then filling in the gaps. Really well done. Forward-backward induction is a very nice technique, but a lot of bookkeeping!
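
A quick numerical illustration of the superadditivity of the geometric mean used above (an editorial addition, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(5):
    k = int(rng.integers(2, 8))
    z = rng.uniform(0.01, 10.0, size=k)
    lhs = np.prod(1 + z) ** (1 / k)       # GM of the (1 + z_i)
    rhs = 1 + np.prod(z) ** (1 / k)       # GM of the 1's plus GM of the z_i
    print(bool(lhs >= rhs), round(float(lhs), 6), round(float(rhs), 6))
```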
 
  • #62
Here is the solution to the last open problem #9.

For a given real Lie algebra ##\mathfrak{g}##, we define
$$
\mathfrak{A(g)} = \{\,\alpha \, : \,\mathfrak{g}\longrightarrow \mathfrak{g}\,\,: \,\,[\alpha(X),Y]=-[X,\alpha(Y)]\text{ for all }X,Y\in \mathfrak{g}\,\}\quad (1)
$$
The Lie algebra multiplication is defined by
  • ##(2)## anti-commutativity: ##[X,X]=0##
  • ##(3)## Jacobi-identity: ##[X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0##
a) ##\mathfrak{A(g)}\subseteq \mathfrak{gl(g)}## is a Lie subalgebra of the Lie algebra of all linear transformations of ##\mathfrak{g}##, with the commutator as Lie product ##[\alpha, \beta]= \alpha \beta -\beta \alpha \quad (4)##, because
\begin{align*}
[[\alpha,\beta]X,Y]&\stackrel{(4)}{=}[\alpha \beta X,Y] - [\beta\alpha X,Y]\\
&\stackrel{(1)}{=}[X,\beta \alpha Y]-[X,\alpha \beta Y]\\
&\stackrel{(4)}{=}[X,[\beta,\alpha]Y]\\
&\stackrel{(2)}{=}-[X,[\alpha,\beta]Y]
\end{align*}
b) The smallest non-Abelian Lie algebra ##\mathfrak{g}## with trivial center is ##\mathfrak{g}=\langle X,Y\,: \,[X,Y]=Y\rangle\,.## It's easy to verify ##\mathfrak{A(g)} \cong \mathfrak{sl}(2,\mathbb{R})\,##, the Lie algebra of real ##2 \times 2## matrices with trace zero.

##\mathfrak{g}=\mathfrak{B}(\mathfrak{sl}(2,\mathbb{R}))## is the maximal solvable subalgebra of ##\mathfrak{sl}(2,\mathbb{R})##, a so-called Borel subalgebra.
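
For the toy algebra in b), the claim ##\mathfrak{A(g)} \cong \mathfrak{sl}(2,\mathbb{R})## can be checked directly by solving condition (1) for a general linear map (an editorial addition, a small sympy sketch, not part of the original solution):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
# A linear map alpha on g = <X, Y : [X, Y] = Y>, as a matrix in the basis (X, Y):
# alpha(X) = a*X + c*Y,  alpha(Y) = b*X + d*Y.
alpha = sp.Matrix([[a, b], [c, d]])

def bracket(u, v):
    # [X, Y] = Y and [X, X] = [Y, Y] = 0, so [u, v] = (u_X v_Y - u_Y v_X) * Y
    return sp.Matrix([0, u[0]*v[1] - u[1]*v[0]])

X, Y = sp.Matrix([1, 0]), sp.Matrix([0, 1])
equations = []
for u in (X, Y):
    for v in (X, Y):
        equations += list(bracket(alpha*u, v) + bracket(u, alpha*v))   # condition (1)
# The only constraint is a + d = 0, i.e. alpha has trace zero.
print(sp.solve(equations, [a, b, c, d], dict=True))
```

The only constraint on the matrix of ##\alpha## is ##a + d = 0##, so ##\mathfrak{A(g)}## consists of the trace-zero ##2\times 2## matrices and is closed under the commutator, as claimed.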

c) To show that ##\mathfrak{g} \rtimes \mathfrak{A(g)}## is a semidirect product given by $$[X,\alpha]:=[\operatorname{ad}X,\alpha]=\operatorname{ad}X\,\alpha - \alpha\,\operatorname{ad}X\quad (5)$$ we have to show that this multiplication makes ##\mathfrak{A}(g)## an ideal in ##\mathfrak{g} \rtimes \mathfrak{A(g)}## and a ##\mathfrak{g}-##module.
\begin{align*}
[[X,\alpha]Y,Z]&\stackrel{(5)}{=}[[X,\alpha Y],Z] - [\alpha[X,Y],Z]\\
&\stackrel{(3),(1)}{=}-[[\alpha Y,Z],X]-[[Z,X],\alpha Y]+[[X,Y],\alpha Z]\\
&\stackrel{(3),(1)}{=}[[Y,\alpha Z],X]+[\alpha[Z,X],Y]\\&-[[Y,\alpha Z],X]-[[\alpha Z,X],Y]\\
&\stackrel{(2)}{=}[Y,\alpha[X,Z]]-[Y,[X,\alpha Z]]\\
&\stackrel{(5)}{=}-[Y,[X,\alpha]Z]
\end{align*}
and ##\mathfrak{A(g)}## is an ideal in ##\mathfrak{g} \rtimes \mathfrak{A(g)}##. It is also a ##\mathfrak{g}-##module, because ##\operatorname{ad}## is a Lie algebra homomorphism ##(6)## and therefore
\begin{align*}
[[X,Y],\alpha]&\stackrel{(5)}{=}[\operatorname{ad}[X,Y],\alpha]\\
&\stackrel{(6)}{=}[[\operatorname{ad}X,\operatorname{ad}Y],\alpha]\\
&\stackrel{(3)}{=}-[[\operatorname{ad}Y,\alpha],\operatorname{ad}X]-[[\alpha,\operatorname{ad}X],\operatorname{ad}Y]\\
&\stackrel{(2)}{=}[\operatorname{ad}X,[\operatorname{ad}Y,\alpha]]-[\operatorname{ad}Y,[\operatorname{ad}X,\alpha]]\\
&\stackrel{(5)}{=} [X,[Y,\alpha]]-[Y,[X,\alpha]]
\end{align*}
d) For the last equation with ##\alpha \in \mathfrak{A(g)}## and ##X,Y,Z \in \mathfrak{g} ##
$$[\alpha(X),[Y,Z]]+[\alpha(Y),[Z,X]]+[\alpha(Z),[X,Y]] =0\quad (7)$$
we have
\begin{align*}
[\alpha(X),[Y,Z]]&\stackrel{(3)}{=}-[Y,[Z,\alpha(X)]]-[Z,[\alpha(X),Y]]\\
&\stackrel{(1)}{=} [Y,[\alpha(Z),X]]+[Z,[X,\alpha(Y)]]\\
&\stackrel{(3)}{=} -[\alpha(Z),[X,Y]]-[X,[Y,\alpha(Z)]]\\
&-[X,[\alpha(Y),Z]]-[\alpha(Y),[Z,X]]\\
&\stackrel{(1)}{=}-[\alpha(Y),[Z,X]]-[\alpha(Z),[X,Y]]
\end{align*}
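
As a concrete illustration of d), identity (7) can be verified symbolically in the toy algebra from b) (an editorial addition, not part of the original solution):

```python
import sympy as sp

# Same toy algebra g = <X, Y : [X, Y] = Y>; vectors are coordinate pairs in the basis (X, Y).
def bracket(u, v):
    return sp.Matrix([0, u[0]*v[1] - u[1]*v[0]])

a, b, c = sp.symbols('a b c')
alpha = sp.Matrix([[a, b], [c, -a]])            # a generic trace-zero matrix, i.e. element of A(g)

U = sp.Matrix(sp.symbols('u1 u2'))
V = sp.Matrix(sp.symbols('v1 v2'))
W = sp.Matrix(sp.symbols('w1 w2'))

identity7 = (bracket(alpha*U, bracket(V, W))
             + bracket(alpha*V, bracket(W, U))
             + bracket(alpha*W, bracket(U, V)))
print(sp.simplify(identity7))                    # Matrix([[0], [0]])
```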
 
