What Is the Significance of the Matrix Identity Involving \( S^{-1}_{ij} \)?

Summary
The discussion centers on a matrix identity involving the inverse of the ##N \times N## matrix ##S_{ij} = 2^{-(2N - i - j + 1)} \frac{(2N - i - j)!}{(N-i)!(N-j)!}##. Numerically, the sum of the elements of the inverse, ##\sum_{i,j=1}^N S^{-1}_{ij}##, equals ##2N##, matching what the original poster expected from their research. The conversation explores the relationship between the matrix's determinant and its cofactors, suggesting that these properties may explain why the sum equals ##2N##. The poster also describes how the identity arose from maximizing a particular integral ratio over a set of weights, with numerical results consistent up to ##N = 15##. Deriving a closed-form expression for the cofactor of an arbitrary ##N \times N## matrix is acknowledged as the main obstacle.
madness
Hi all,

I've come across an interesting matrix identity in my work. I'll define the ##N \times N## matrix as
$$S_{ij} = 2^{-(2N - i - j + 1)} \frac{(2N - i - j)!}{(N-i)!(N-j)!}.$$
I find numerically that ##\sum_{i,j=1}^N S^{-1}_{ij} = 2N## (the sum runs over all elements of the matrix inverse). In fact, I expected to get ##2N## based on the problem I'm studying, but I don't know what this complicated matrix expression is doing or why it equals ##2N##. Does any of this look familiar to anyone here?
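For anyone who wants to reproduce the numbers without Matlab, here is a minimal NumPy sketch of the same check (the helper name `build_S` is mine, not from the thread):

```python
from math import factorial

import numpy as np

def build_S(N):
    """Build S_ij = 2^-(2N-i-j+1) * (2N-i-j)! / ((N-i)!(N-j)!), 1-based i, j."""
    S = np.empty((N, N))
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            S[i - 1, j - 1] = (2.0 ** -(2 * N - i - j + 1)
                               * factorial(2 * N - i - j)
                               / (factorial(N - i) * factorial(N - j)))
    return S

# The sum of all entries of S^-1 should come out to 2N.
for N in range(1, 11):
    print(N, np.linalg.inv(build_S(N)).sum())
```

The matrix becomes badly conditioned as ##N## grows, which presumably explains the instability the poster saw past ##N = 15##.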

Thanks for your help!

P.S. If this is in the wrong subforum, please move it.
 
Interesting. How did you come across this? Using some numerical computing software like Matlab?

@fresh_42 or @Mark44 might be interested in how you discovered this.
 
I haven't run through the math, but keep in mind that the inverse matrix element can be expressed as:
$$(S^{-1})_{ij} = \frac{1}{\det{S}}C_{ji}$$
where ##C_{ji}## is an element of the transposed cofactor matrix. Also remember that the determinant can be expressed as a cofactor expansion down any fixed column ##j##:
$$\det{S} = \sum_{i=1}^{N} S_{ij} C_{ij}$$
The cofactor expansion works equally well along any row, so, expanding along row ##j## (with ##k## as the summation index, to avoid clashing with ##i##),
$$(S^{-1})_{ij} = \frac{C_{ji}}{ \sum_{k=1}^{N} S_{jk} C_{jk}}$$
I dunno, maybe that helps. It couldn't hurt, either, to see if you can pull out a general formula for the cofactor.
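As a numerical sanity check of the adjugate relation above, one can compute the cofactors by brute force from minors and compare against the inverse (`build_S` and `cofactor` are illustrative helper names; this is just a sketch, not the closed-form derivation being suggested):

```python
from math import factorial

import numpy as np

def build_S(N):
    """The thread's matrix S_ij = 2^-(2N-i-j+1) (2N-i-j)! / ((N-i)!(N-j)!)."""
    S = np.empty((N, N))
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            S[i - 1, j - 1] = (2.0 ** -(2 * N - i - j + 1)
                               * factorial(2 * N - i - j)
                               / (factorial(N - i) * factorial(N - j)))
    return S

def cofactor(M, i, j):
    """(i, j) cofactor: signed determinant of the minor with row i, col j removed."""
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

N = 4
S = build_S(N)
det = np.linalg.det(S)
# Adjugate = transpose of the cofactor matrix; S^-1 = adjugate / det(S).
adjugate = np.array([[cofactor(S, j, i) for j in range(N)] for i in range(N)])
print(np.allclose(np.linalg.inv(S), adjugate / det))
```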
 
Thanks for the help.

@jedishrfu I discovered this trying to maximise the following:

$$\frac{\left[ \int_0^\infty f(t)\, dt \right]^2}{\int_0^\infty f^2(t)\, dt}, \qquad \text{where } f(t) = \sum_{i=1}^N w_i \frac{(ct)^{N-i}}{(N-i)!} e^{\lambda t}$$

and ##w_i## are the weights with respect to which I want to maximise. I can show that the maximum is ##\frac{1}{-\lambda} \sum_{ij} \left(S^{-1}\right)_{ij}##, and using Matlab this turns out to be ##\frac{2N}{-\lambda}## for ##N = 1, \dots, 15## (I stopped there as it became numerically unstable).
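One way to check that maximum numerically: assuming ##\lambda < 0## so the integrals converge, ##\int_0^\infty t^n e^{\lambda t}\,dt = n!/(-\lambda)^{n+1}## gives closed forms for both integrals, reducing the ratio to ##(w^T a)^2/(w^T B w)##, whose maximum over ##w## is the standard result ##a^T B^{-1} a##. The function name `rayleigh_max` and the reduction to that quotient are my own reconstruction, not taken verbatim from the thread:

```python
from math import factorial

import numpy as np

def rayleigh_max(N, c, lam):
    """Max over w of (∫f)^2 / ∫f^2 for f(t) = Σ_i w_i (ct)^{N-i}/(N-i)! e^{λt}, λ < 0.

    a_i  = ∫_0^∞ (ct)^{N-i}/(N-i)! e^{λt} dt  = c^{N-i} / (-λ)^{N-i+1}
    B_ij = ∫_0^∞ [t-monomial products] e^{2λt} dt, and the maximum of
    (wᵀa)²/(wᵀBw) over w is aᵀ B⁻¹ a.
    """
    a = np.array([c ** (N - i) / (-lam) ** (N - i + 1) for i in range(1, N + 1)])
    B = np.array([[c ** (2 * N - i - j) * factorial(2 * N - i - j)
                   / (factorial(N - i) * factorial(N - j)
                      * (-2 * lam) ** (2 * N - i - j + 1))
                   for j in range(1, N + 1)] for i in range(1, N + 1)])
    return a @ np.linalg.solve(B, a)

# Expect 2N/(-λ), independent of c; here λ = -2, so the expected value is N.
for N in (1, 2, 5, 8):
    print(N, rayleigh_max(N, c=3.0, lam=-2.0))
```

Note the ##c## dependence cancels, consistent with the claimed ##\frac{1}{-\lambda}\sum_{ij}(S^{-1})_{ij}## form.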

@TeethWhitener I can see that your approach must give the right answer, but finding a closed-form expression for the cofactor seems difficult for an arbitrary ##N \times N## matrix.
 