Unravelling the Structure of a Symmetric Matrix

In summary, this thread examines a particular 5×5 symmetric matrix whose largest-magnitude eigenvalues have eigenvectors with alternating signs, while the smallest-magnitude eigenvalue has a smoother eigenvector. The discussion traces the oscillation to the alternating ±1 entries in the last row and column, and tests that hypothesis by modifying those entries and observing how the eigenvectors change.
  • #1
thatboi
Hey guys,
I was wondering if anyone had any thoughts on the following symmetric matrix:
$$\begin{pmatrix}
0.6 & 0.2 & -0.2 & -0.6 & -1\\
0.2 & -0.2 & -0.2 & 0.2 & 1\\
-0.2 & -0.2 & 0.2 & 0.2 & -1\\
-0.6 & 0.2 & 0.2 & -0.6 & 1\\
-1 & 1 & -1 & 1 & -1
\end{pmatrix}
$$
Notably, when one solves for the eigenvalues and eigenvectors of this matrix, one finds that for the largest-magnitude eigenvalues the eigenvectors oscillate (their elements alternate between positive and negative), whereas for the smallest-magnitude eigenvalue the eigenvector has a "nicer", non-oscillatory behavior. This most likely has to do with the alternating ±1 entries in the last row and column, but I can't quite figure it out.
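For anyone who wants to reproduce this, here is a minimal NumPy sketch (variable names are mine; `numpy.linalg.eigh` is the standard routine for symmetric matrices and returns real eigenvalues with orthonormal eigenvectors as columns):

```python
import numpy as np

# The 5x5 symmetric matrix from the post.
A = np.array([
    [ 0.6,  0.2, -0.2, -0.6, -1.0],
    [ 0.2, -0.2, -0.2,  0.2,  1.0],
    [-0.2, -0.2,  0.2,  0.2, -1.0],
    [-0.6,  0.2,  0.2, -0.6,  1.0],
    [-1.0,  1.0, -1.0,  1.0, -1.0],
])

# eigh exploits symmetry: real eigenvalues, orthonormal eigenvectors.
vals, vecs = np.linalg.eigh(A)

# Walk the eigenpairs in order of descending |eigenvalue| and print the
# sign pattern of each eigenvector, so the oscillation stands out.
for i in np.argsort(-np.abs(vals)):
    pattern = "".join("+" if x >= 0 else "-" for x in vecs[:, i])
    print(f"lambda = {vals[i]:+.4f}   sign pattern: {pattern}")
```

Printing only the sign pattern, rather than the full eigenvector, makes the alternation (or its absence) visible at a glance.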
 
  • #2
thatboi said:
Hey guys,
I was wondering if anyone had any thoughts on the following symmetric matrix:
$$\begin{pmatrix}
0.6 & 0.2 & -0.2 & -0.6 & -1\\
0.2 & -0.2 & -0.2 & 0.2 & 1\\
-0.2 & -0.2 & 0.2 & 0.2 & -1\\
-0.6 & 0.2 & 0.2 & -0.6 & 1\\
-1 & 1 & -1 & 1 & -1
\end{pmatrix}
$$
Notably, when one solves for the eigenvalues and eigenvectors of this matrix, one finds that for the largest-magnitude eigenvalues the eigenvectors oscillate (their elements alternate between positive and negative), whereas for the smallest-magnitude eigenvalue the eigenvector has a "nicer", non-oscillatory behavior. This most likely has to do with the alternating ±1 entries in the last row and column, but I can't quite figure it out.
I think this hypothesis can be easily tested.
 
  • #3
Hill said:
I think this hypothesis can be easily tested.
Right, so the last row contributes to the eigenvalue in the sense that it gives the last entry of the resulting column vector when the matrix is multiplied by the eigenvector. So if the eigenvector also has entries that alternate in sign, then the dot product between the eigenvector and the last row produces a "coherent" sum, and thus a larger number than it would for an eigenvector with a different configuration of signs. Is this the right way to think about it?
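One way to check this reasoning numerically: writing r for the last row and v for a unit eigenvector with eigenvalue λ, the last component of Av = λv reads r·v = λv₅, and |r·v| is bounded by the sum of |rᵢ||vᵢ|, with near-equality exactly when the signs of v line up with those of r. A sketch of that check (again NumPy, with my own variable names):

```python
import numpy as np

A = np.array([
    [ 0.6,  0.2, -0.2, -0.6, -1.0],
    [ 0.2, -0.2, -0.2,  0.2,  1.0],
    [-0.2, -0.2,  0.2,  0.2, -1.0],
    [-0.6,  0.2,  0.2, -0.6,  1.0],
    [-1.0,  1.0, -1.0,  1.0, -1.0],
])
vals, vecs = np.linalg.eigh(A)
r = A[-1]  # last row: the alternating +/-1 entries

for i in np.argsort(-np.abs(vals)):
    v = vecs[:, i]
    # Last component of the eigenvalue equation: r . v = lambda * v[4].
    coherent = abs(np.dot(r, v))
    # Upper bound on |r . v| for these magnitudes; the ratio measures
    # how coherently the signs of v line up with the signs of r.
    bound = np.sum(np.abs(r) * np.abs(v))
    print(f"lambda = {vals[i]:+.4f}   |r.v| / bound = {coherent / bound:.3f}")
```

A ratio near 1 for the largest-magnitude eigenvalues would support the "coherent sum" picture described above.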
 
  • #4
thatboi said:
Right, so the last row contributes to the eigenvalue in the sense that it gives the last entry of the resulting column vector when the matrix is multiplied by the eigenvector. So if the eigenvector also has entries that alternate in sign, then the dot product between the eigenvector and the last row produces a "coherent" sum, and thus a larger number than it would for an eigenvector with a different configuration of signs. Is this the right way to think about it?
I meant to test it experimentally, by modifying the "suspected" elements and observing how the eigenvectors are affected.
 
  • #5
Hill said:
I meant to test it experimentally, by modifying the "suspected" elements and observing how the eigenvectors are affected.
Sure, I already did some modifications, and the results seemed to match what I said above: for example, if I put a negative sign on only the second element of the last column and the second element of the last row, then the eigenvector corresponding to the largest-magnitude eigenvalue has a negative sign only on its second element as well. I was just wondering whether there is any more intuition/structure in the matrix beyond what I said above.
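For reference, a sketch of that experiment (the sign flip below is the one described in the post, applied symmetrically; note that eigenvector signs are only defined up to an overall factor, so a globally flipped pattern is equivalent):

```python
import numpy as np

A = np.array([
    [ 0.6,  0.2, -0.2, -0.6, -1.0],
    [ 0.2, -0.2, -0.2,  0.2,  1.0],
    [-0.2, -0.2,  0.2,  0.2, -1.0],
    [-0.6,  0.2,  0.2, -0.6,  1.0],
    [-1.0,  1.0, -1.0,  1.0, -1.0],
])

# Flip the sign of the second entry of the last row and last column,
# keeping the matrix symmetric.
B = A.copy()
B[1, 4] *= -1
B[4, 1] *= -1

for name, M in (("original", A), ("modified", B)):
    vals, vecs = np.linalg.eigh(M)
    i = np.argmax(np.abs(vals))  # largest-magnitude eigenvalue
    pattern = "".join("+" if x >= 0 else "-" for x in vecs[:, i])
    print(f"{name}: lambda = {vals[i]:+.4f}   sign pattern: {pattern}")
```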
 
  • #6
If you described how you generated that matrix, it might reveal the underlying reason why those eigenvectors look the way they do; there often is one.
 