Showing that the Entries of a Matrix Arise As Inner Products

In summary, the thread asks how to show that a positive semi-definite matrix with ones along its diagonal has entries arising as the pairwise inner products of a collection of unit vectors. The original poster works the 2x2 case explicitly, attacks the 3x3 case by rotating the vectors with a unitary matrix to reduce the number of variables, states the inner product convention being used, and asks whether there is a more elegant approach or any hints.
  • #1
Bashyboy

Homework Statement


Let ##B \in M_n (\mathbb{C})## be such that ##B \ge 0## (i.e., it is a positive semi-definite matrix) and ##b_{ii} = 1## (ones along the diagonal). Show that there exists a collection of ##n## unit vectors ##\{e_1,...,e_n \} \subset \mathbb{C}^n## such that ##b_{ij} = \langle e_i, e_j \rangle##.

Homework Equations

The Attempt at a Solution


Note: this took a great deal of time to type, and I hope someone will be so kind as to reply! :)

Now, obviously any collection of ##n## unit vectors will satisfy the condition that ##\langle e_i , e_i \rangle = 1 = b_{ii}##, for all ##i \in \{1,...,n\}##. Hence, the choice of unit vectors will only depend upon the off-diagonal terms.

I tried proving the theorem in low dimensions, hoping that I might glimpse some natural choice of unit vectors that will extend to the ##n \times n## case. For instance, in the 2x2 case, we have that

##\begin{bmatrix} 1 & b_{12} \\ \overline{b_{12}} & 1 \\ \end{bmatrix} \ge 0 \implies |b_{12}| \le 1##, since the determinant ##1 - |b_{12}|^2## of a positive semi-definite matrix must be nonnegative.

Letting ##b_{12} = |b_{12}| e^{i \theta_{12}}##, a choice of unit vectors is ##e_1 = \begin{bmatrix} e^{i \theta_{12}} \\ 0 \end{bmatrix}## and ##e_2 = \begin{bmatrix} |b_{12}| \\ \sqrt{1 - |b_{12}|^2} \end{bmatrix}##, both of which are obviously unit vectors. Computing the inner products, we arrive at ##\langle e_1 , e_2 \rangle = |b_{12}| e^{i \theta_{12}} = b_{12}## and ##\langle e_2, e_1 \rangle = \overline{\langle e_1 , e_2 \rangle} = \overline{b_{12}}##, which finishes the proof in the 2x2 case.
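
As a quick numerical sanity check of this 2x2 construction (an added sketch using numpy, not part of the original post; the helper `inner` implements the convention ##\langle x, y \rangle := y^* x## stated later in the thread):

```python
# Quick numerical check of the 2x2 construction above (an added sketch).
import numpy as np

b12 = 0.6 * np.exp(0.8j)                     # any complex b12 with |b12| <= 1
theta12 = np.angle(b12)

e1 = np.array([np.exp(1j * theta12), 0.0])
e2 = np.array([abs(b12), np.sqrt(1.0 - abs(b12) ** 2)])

def inner(x, y):
    return np.vdot(y, x)                     # <x, y> = y* x (vdot conjugates y)

assert np.isclose(inner(e1, e1), 1.0)        # e1 is a unit vector
assert np.isclose(inner(e2, e2), 1.0)        # e2 is a unit vector
assert np.isclose(inner(e1, e2), b12)        # recovers the off-diagonal entry
print("2x2 construction verified")
```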

However, the 3x3 case becomes particularly unilluminating, with no seemingly natural choice that would help in the nxn case. So, in the 3x3 case, we need to find ##e_1##, ##e_2##, and ##e_3## such that ##\langle e_1, e_2 \rangle = b_{12}##, ##\langle e_1, e_3 \rangle = b_{13}##, etc. Because unitary matrices preserve the inner product, I know I can choose a unitary ##U## which will rotate all the unit vectors; in particular, I can rotate ##e_1## so that it becomes ##\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}##, which simplifies things by reducing the number of variables. Now, let

##e_2 = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \\ \end{bmatrix}##

##e_3 = \begin{bmatrix} z_4 \\ z_5 \\ z_6 \end{bmatrix}##,

demanding that ##||e_2|| = 1## and ##||e_3|| = 1##. So, we want to find ##z_1,...,z_6## such that

##\langle e_1 , e_2 \rangle = b_{12} \implies \overline{z_1} = b_{12}##, i.e., ##z_1 = \overline{b_{12}}##

##\langle e_1, e_3 \rangle = b_{13} \implies \overline{z_4} = b_{13}##, i.e., ##z_4 = \overline{b_{13}}##

##\langle e_2 , e_3 \rangle = b_{23} \implies \overline{z_4} z_1 + \overline{z_5} z_2 + \overline{z_6} z_3 = b_{23}##, or upon substitution, ##b_{13} \overline{b_{12}} + \overline{z_5} z_2 + \overline{z_6} z_3 = b_{23}##.

As one can easily perceive, this is a hideous mess, and it will only grow more hideous in larger dimensions. My question is: is there some clever trick or elegant theorem I could be employing? If so, would you mind providing some hints? The only clever trick I could come up with was the rotation with ##U##.
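
One way to tame this particular system — sketched here as a possible continuation under the convention ##\langle x, y \rangle := y^* x##, not something from the thread itself — is to note that the remaining unitary freedom (unitaries fixing ##e_1##) can also be used to make ##z_3 = 0## and ##z_2## real and nonnegative, so that ##z_2 = \sqrt{1 - |b_{12}|^2}##. If ##z_2 \neq 0##, the third equation then determines ##\overline{z_5} = \left( b_{23} - b_{13} \overline{b_{12}} \right) / z_2##, and normalizing ##e_3## forces ##z_6 = \sqrt{1 - |z_4|^2 - |z_5|^2}##, where positive semi-definiteness of ##B## is exactly what makes the radicand nonnegative (a Schur-complement argument). This triangular pattern is essentially a Cholesky factorization of ##B##, which is one standard route to the general ##n \times n## statement.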
 
  • #2
Consider the inner product ##\mathbf{x}B\mathbf{y}^T##.
 
  • #3
Where ##x## and ##y## are just any vectors in ##\mathbb{C}^n##? Why are you taking the transpose of ##y##? The inner product I am using is ##\langle x,y \rangle := y^* x##.
 
  • #4
If anyone else has any suggestions, I would appreciate them being shared.
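
For the record, here is a minimal numerical sketch of one standard approach — an added illustration, not a post from the thread. Since ##B \ge 0##, it has a Hermitian square root ##S = B^{1/2}##, and under the convention ##\langle x, y \rangle := y^* x## the rows of ##S## can serve as the unit vectors: ##\langle e_i, e_j \rangle = \sum_k \overline{S_{jk}} S_{ik} = \sum_k S_{ik} S_{kj} = (S^2)_{ij} = b_{ij}##, and in particular ##\langle e_i, e_i \rangle = b_{ii} = 1##.

```python
# Sketch (added illustration): realize b_ij = <e_i, e_j> via the Hermitian
# square root of B, using the thread's convention <x, y> := y* x.
import numpy as np

def gram_vectors(B):
    """Given Hermitian PSD B, return S = B^{1/2}; the rows serve as e_i."""
    w, V = np.linalg.eigh(B)                              # B = V diag(w) V*
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

# Example: a random PSD matrix, normalized to have ones along the diagonal.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = A @ A.conj().T                                        # Hermitian, PSD
d = np.sqrt(np.real(np.diag(B)))
B = B / np.outer(d, d)                                    # now b_ii = 1, still PSD

S = gram_vectors(B)
inner = lambda x, y: np.vdot(y, x)                        # <x, y> = y* x

for i in range(4):
    for j in range(4):
        assert np.isclose(inner(S[i], S[j]), B[i, j])
print("all entries recovered as inner products of unit vectors")
```

The square root here is computed from an eigendecomposition; any factorization ##B = S^* S## (e.g., Cholesky) would do equally well, with the rows or columns adjusted to match the inner product convention.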
 

FAQ: Showing that the Entries of a Matrix Arise As Inner Products

1. What is the purpose of showing that the entries of a matrix arise as inner products?

The purpose is to show that such a matrix is a Gram matrix: its entries are exactly the pairwise inner products of some collection of vectors. Recognizing entries as inner products makes geometric tools such as norms, angles, and the Cauchy-Schwarz inequality available for studying the matrix, which leads to a deeper understanding of linear algebra and its applications.

2. How is this concept related to vector spaces?

Inner products are defined on vector spaces, and a matrix of pairwise inner products (a Gram matrix) encodes the geometry of a finite collection of vectors in such a space. This makes the concept especially useful for proving properties of vector spaces and of the linear transformations between them.

3. Can this be applied to any type of matrix?

No, not to every matrix. If ##b_{ij} = \langle e_i, e_j \rangle## for some collection of vectors, then ##B## is necessarily square, Hermitian, and positive semi-definite — in other words, a Gram matrix. The problem in this thread adds the normalization ##b_{ii} = 1##, which is what forces the vectors to be unit vectors.

4. What are some practical applications of this concept?

One practical application is the correlation matrix of statistics and signal processing: a correlation matrix is positive semi-definite with ones along its diagonal, so its entries arise exactly as inner products of unit vectors. Kernel (Gram) matrices in machine learning and data analysis are another common instance.

5. Is it possible to prove the converse of this concept?

Yes, and the converse is the easy direction: given any collection of unit vectors, the matrix of their pairwise inner products (their Gram matrix) is automatically positive semi-definite with ones along the diagonal. This correspondence between Gram matrices and collections of vectors is used in fields such as quantum mechanics and signal processing.
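
As a concrete illustration of this converse (an added sketch using numpy; the dimensions and random seed are arbitrary):

```python
# Converse direction (added sketch): the Gram matrix of any collection of
# unit vectors is positive semi-definite with ones along the diagonal.
import numpy as np

rng = np.random.default_rng(1)
E = rng.normal(size=(3, 5)) + 1j * rng.normal(size=(3, 5))
E /= np.linalg.norm(E, axis=1, keepdims=True)    # each row e_i is a unit vector

G = E @ E.conj().T                               # G[i, j] = <e_i, e_j> = e_j* e_i

assert np.allclose(np.diag(G), 1.0)              # ones along the diagonal
assert np.linalg.eigvalsh(G).min() >= -1e-12     # positive semi-definite
print("Gram matrix is PSD with unit diagonal")
```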
