Finding the Kernel of a Matrix Map

In summary: the thread discusses ways to find all solutions of the map ##f(A)=A^n-Id##, i.e., all ##k \times k## matrices ##A## with ##A^n=Id##, and whether every such solution is, up to similarity, a rotation.
  • #1
WWGD
Hi All,
I am trying to see if there is a "nice" (relatively straightforward) way of finding the solution set/kernel of the map ##f(A)=A^n - Id##, where ##A## is a ##k \times k## matrix and ##n## is a positive integer. Question: what is the kernel of this map? Cranking out the matrix coefficients entry by entry gets messy and very slow as ##n## grows. I can think of a solution using the Fundamental Theorem of Linear Algebra relating the row and column spaces and their orthogonal complements. Is there some other "reasonable" way of finding all solutions? (Clearly ##Id## and ##-Id## are solutions for ##n## even, and ##Id## is a solution for ##n## odd.)
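For a sense of why the entrywise approach bogs down, here is a small symbolic sketch (assuming SymPy) of the very smallest case, ##k=2##, ##n=2##; even here the entries give a coupled polynomial system in four unknowns:

```python
from sympy import symbols, Matrix, eye, solve

a, b, c, d = symbols('a b c d')
A = Matrix([[a, b],
            [c, d]])

# Entrywise equations of A**2 = Id for k = 2, n = 2: already a coupled
# polynomial system in four unknowns, and it only gets worse as n grows.
equations = list(A**2 - eye(2))
solutions = solve(equations, [a, b, c, d], dict=True)
print(solutions)
```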
 
  • #2
Some more details would be of interest. Since you mention orthogonality, I'm assuming your scalars are in ##\mathbb R## or ##\mathbb C##.

What's the motivation here? Are you trying to calculate a (finite) von Neumann series?

To confirm: is ##Id## the identity matrix ##I##? If so, the kernel is really a question of whether ##\mathbf A## has eigenvalues that are ##n##th roots of unity, and what the geometric multiplicities of the associated eigenvectors are...

Another thought: if you want to go the direct multiplication route, there is the approach of first computing ##\mathbf A^2##, then squaring that to get ##\mathbf A^4##, then ##\mathbf A^8##, then ##\mathbf A^{16}##, and so on until you get 'close' to ##\mathbf A^n##, then finishing off the residual using the binary expansion of ##n##. As I recall this cuts the number of multiplications down to ##O\big(\log n\big)##.
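A minimal sketch of that squaring approach (assuming NumPy; the helper name `mat_pow` is illustrative, not a library function):

```python
import numpy as np

def mat_pow(A, n):
    """Compute A**n with O(log n) matrix multiplications (repeated squaring)."""
    result = np.eye(A.shape[0])
    base = A.astype(float)
    while n > 0:
        if n & 1:                # low bit of n set: fold this power of A in
            result = result @ base
        base = base @ base       # A, A^2, A^4, A^8, ...
        n >>= 1
    return result

# Example: a rotation by 2*pi/5 satisfies A^5 = I.
theta = 2 * np.pi / 5
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(mat_pow(A, 5), np.eye(2)))  # True
```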
 
  • #3
StoneTemplePython said:
Since you mention orthogonality, I'm assuming your scalars are in ##\mathbb R## or ##\mathbb C##. [...] the kernel is really a question of whether ##\mathbf A## has eigenvalues that are ##n##th roots of unity...
Yes, thanks, good points. We would need a notion of orthogonality to appeal to the Fundamental Theorem, but I think it applies only to real matrices. Still, rotation by ##2\pi/n## is a solution, despite having no real eigenvalues.
 
  • #4
In the class of complex matrices all the solutions of the equation ##A^n=I## are given by ##A=SDS^{-1}##, where ##S## is an invertible matrix, and ##D## is a diagonal matrix whose diagonal entries are ##n##th roots of unity, ##d_{j}^n=1##.

In the class of real matrices all the solutions are given by ##A=SRS^{-1}##, where ##S## is a real invertible matrix, and ##R## is a "rotation" matrix, i.e. ##R## is a block diagonal matrix whose diagonal blocks can be either rotations through the angle ##2\pi j/n##, ##j=0, 1, 2, \ldots, n-1## (these are ##2\times 2## blocks) or just real roots of unity (##1\times 1## blocks). The real roots of unity are ##\pm 1## if ##n## is even, and ##1## if ##n## is odd.

A rotation through the angle ##\alpha## is given by the matrix $$\left( \begin{array}{cc} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{array}\right).$$
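A quick numerical sanity check of this classification (a sketch assuming NumPy; the particular ##S## below is just an arbitrary invertible matrix):

```python
import numpy as np

n = 6
theta = 2 * np.pi / n   # an allowed angle 2*pi*j/n with j = 1

# Block diagonal R: one 2x2 rotation block and one 1x1 block (+1).
R = np.zeros((3, 3))
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
R[2, 2] = 1.0

S = np.array([[2., 1., 0.],
              [0., 1., 0.],
              [1., 0., 1.]])       # an arbitrary real invertible matrix
A = S @ R @ np.linalg.inv(S)

print(np.allclose(np.linalg.matrix_power(A, n), np.eye(3)))  # True: A^6 = I
```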
 
  • #5
Hawkeye18 said:
In the class of real matrices all the solutions are given by ##A=SRS^{-1}##, where ##S## is a real invertible matrix, and ##R## is a "rotation" matrix, i.e. a block diagonal matrix whose blocks are rotations through the angles ##2\pi j/n## or real roots of unity. [...]
Thanks, Hawkeye, are all permutation matrices then also rotation matrices? The antidiagonal matrix with ##a_{ii}=0## and ##a_{ij}=-1## for ##i \neq j## satisfies ##A^2=Id##, but it cannot be a rotation matrix, since that would force ##\sin \alpha = -\sin \alpha = -1##. Did I make a mistake? EDIT: Or is this matrix conjugate to a rotation? EDIT2: I don't think it can be conjugate to a rotation matrix; conjugation preserves the determinant, which here is ##-1##, while a rotation has determinant ##1##.
 
  • #6
WWGD said:
The antidiagonal matrix with ##a_{ii}=0## and ##a_{ij}=-1## for ##i \neq j## satisfies ##A^2=Id##, but it cannot be a rotation matrix [...] Or is this matrix conjugate to a rotation?

I believe you are talking here about a ##2\times 2## matrix. Then your antidiagonal matrix is not a rotation matrix, but it is a matrix of the form ##SDS^{-1}##, where $$D=\left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right).$$
 
  • #7
Hawkeye18 said:
I believe you are talking here about a ##2\times 2## matrix. Then your antidiagonal matrix is not a rotation matrix, but it is a matrix of the form ##SDS^{-1}##, where $$D=\left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right).$$
Right, but I thought all solutions to ##A^n = Id## consisted of rotations (up to conjugation), and the matrix I mentioned (antidiagonal entries ##-1##, zero elsewhere) is a solution (specifically, to ##A^2 = Id##) but not a rotation? But I agree that if ##A## is a rotation then its eigenvalues must be ##n##th roots of unity.
 
  • #8
Let me elaborate. In the class of real matrices, all the solutions are given by ##A=SRS^{-1}##, where ##R## is a "rotation" (note the quotes; it is not always a genuine real rotation) and ##S## is invertible. "Rotation" here means that ##R## is a block diagonal matrix, where each block is either a ##2\times 2## rotation matrix (through an allowed angle) or a ##1\times 1## block containing a real ##n##th root of unity. I probably did not make it clear, but you do not have to have both types of blocks: your matrix can have only ##2\times 2## blocks, only ##1\times 1## blocks, or any combination of the two (whose sizes add up to the dimension).

The matrix $$R=D=\left(\begin{array}{cc} 1 & 0\\ 0& -1\end{array}\right)$$ is of that type, and it is not a rotation (that is why I used the quotes). But your matrix can be represented as ##SDS^{-1}##, so your example does not contradict my statement.
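To make the representation concrete, here is a small sketch (assuming NumPy) exhibiting one explicit ##S## for the antidiagonal example; its columns are eigenvectors of that matrix:

```python
import numpy as np

# WWGD's 2x2 antidiagonal matrix and Hawkeye18's D = diag(1, -1).
A = np.array([[ 0., -1.],
              [-1.,  0.]])
D = np.diag([1., -1.])

# Columns of S are eigenvectors of A: A(1,-1)^T = +(1,-1)^T, A(1,1)^T = -(1,1)^T.
S = np.array([[ 1., 1.],
              [-1., 1.]])

print(np.allclose(S @ D @ np.linalg.inv(S), A))  # True: A = S D S^{-1}
print(np.allclose(A @ A, np.eye(2)))             # True: A^2 = Id
```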
 
  • #9
I think you two may be talking past each other.

WWGD said:
Right, but I thought all solutions to ##A^n =Id ## consisted of rotations

This is wrong. For example, a matrix with ##A^2 = I## is an involution. Such a matrix is diagonalizable, but it does not need to be normal (i.e. it need not be unitarily diagonalizable).

Permutation matrices are a special kind of orthogonal matrix (which is a special kind of unitary matrix). Rotation matrices are also orthogonal matrices. All orthogonal matrices are normal.

Another way to think about it is: Pick your favorite orthogonal matrix ##P##, where ##P^n = I##. Now do a non-unitary similarity transform, i.e. consider ##S## where ##S^{-1} \neq S^*##.

Now take a look at ##\big(S^{-1}PS\big)##. Its spectrum is the same as that of ##P## because the matrices are similar, but ##P## is normal while the new matrix is not.

Yet we still have ##P^n = I##, so

##\big(S^{-1}PS\big)^n = \big(S^{-1}P^n S\big) = \big(S^{-1}IS\big) = \big(S^{-1}S\big) = I.##
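A numerical sketch of this construction (assuming NumPy; the particular ##P## and ##S## are illustrative choices):

```python
import numpy as np

# Orthogonal P with P^4 = I: rotation by 90 degrees.
P = np.array([[0., -1.],
              [1.,  0.]])

# Non-unitary similarity: S is invertible but S^{-1} != S^T.
S = np.array([[1., 2.],
              [0., 1.]])
B = np.linalg.inv(S) @ P @ S

print(np.allclose(np.linalg.matrix_power(B, 4), np.eye(2)))  # True: B^4 = I
print(np.allclose(B @ B.T, B.T @ B))                         # False: B is not normal
```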
 
  • #10
StoneTemplePython said:
I think you two may be talking past each other. [...] This is wrong. For example, a matrix with ##A^2 = I## is an involution. Such a matrix is diagonalizable, but it does not need to be normal.
Thanks, I am aware that not all solutions are rotations, by a straightforward counting argument, but I thought someone here had stated that they are rotations up to similarity.
 
  • #11
WWGD said:
Thanks, I am aware that not all solutions are rotations, by a straightforward counting argument, but I thought someone here had stated that they are rotations up to similarity.

I used "rotation" in quotes for lack of a better term; it is not always a genuine real rotation. Your example is, up to similarity, what I call a "rotation"; see post #8 above for the details.
 
  • #12
Hawkeye18 said:
I used "rotation" in quotes for lack of a better term; it is not always a genuine real rotation. Your example is, up to similarity, what I call a "rotation"; see post #8 above for the details.
Got it Hawk, thanks.
 

FAQ: Finding the Kernel of a Matrix Map

What is a matrix map?

A matrix map is a function that takes in a matrix as its input and produces another matrix as its output. It can also be thought of as a transformation of a matrix.

What is the kernel of a matrix map?

The kernel of a matrix map, also known as the null space, is the set of all vectors that are mapped to the zero vector by the matrix map. In other words, it is the set of all solutions to the equation Ax=0, where A is the matrix and x is a vector.

Why is finding the kernel of a matrix map important?

Finding the kernel of a matrix map can help us understand the behavior of the map and its effects on the input matrix. It can also be used to solve systems of linear equations and find special solutions.

How do you find the kernel of a matrix map?

To find the kernel of a matrix map, we can use Gaussian elimination to reduce the matrix to its reduced row echelon form. The free variables correspond to the non-pivot columns of the matrix. The solutions of Ax=0 are then found by assigning arbitrary values to the free variables and solving for the pivot variables.
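A short sketch of this recipe (assuming SymPy, whose `nullspace()` follows this row-reduction route; the example matrix is arbitrary):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [5, 7, 9]])   # row 3 = row 1 + row 2, so the kernel is nontrivial

rref, pivots = A.rref()   # reduced row echelon form; column 3 has no pivot
print(pivots)             # (0, 1): the third variable is free
print(A.nullspace())      # [Matrix([[1], [-2], [1]])] -- a basis for the kernel
```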

Can the kernel of a matrix map be empty?

Strictly speaking, no: the kernel of a matrix map always contains the zero vector, so it is never empty. What can happen is that the kernel is trivial, containing only the zero vector. This occurs exactly when the map is one-to-one, meaning that each input has a unique output; in that case the only solution to the equation Ax=0 is the trivial solution, where all the variables equal 0.
