Bilinear Function and Gramian Matrix

In summary, the conversation discusses finding the left and right kernels of a bilinear function represented by a Gramian matrix. The definitional approach is confirmed to be correct, and a shorter version of the proof is provided.
  • #1
Sudharaka
Hi everyone, :)

Here's a question where I am not sure whether my approach is correct. My understanding of bilinear functions and the Gramian matrix is limited, so this might be totally wrong. Hope you can provide some insight. :)

Question:

A bilinear function \(f:\Re^3\times \Re^3 \rightarrow \Re \) is given in the standard basis \(\{e_1,\,e_2,\,e_3\}\) by the Gram matrix,

\[G_f=\begin{pmatrix}4&3&2\\1&3&5\\3&6&9 \end{pmatrix}\]

Find the left and right kernels of \(f\).

My Solution:

I would find the left kernel of \(f\) using,

\[\left\{v\in\Re^3:\, \begin{pmatrix}v_1& v_2& v_3\end{pmatrix} \begin{pmatrix} 4&3&2\\1&3&5\\3&6&9 \end{pmatrix} \begin{pmatrix} u_1\\u_2\\u_3 \end{pmatrix}=0\mbox{ for all } \begin{pmatrix} u_1 \\u_2 \\u_3 \end{pmatrix} \in \Re^3 \right\}\]

and the right kernel of \(f\) using,

\[\left\{u\in\Re^3:\, \begin{pmatrix}v_1& v_2& v_3\end{pmatrix} \begin{pmatrix} 4&3&2\\1&3&5\\3&6&9 \end{pmatrix} \begin{pmatrix} u_1\\u_2\\u_3 \end{pmatrix}=0\mbox{ for all } \begin{pmatrix} v_1 &v_2 &v_3 \end{pmatrix} \in \Re^3 \right\}\]

Is this approach correct? Is this how the left and right kernels are given when the bilinear function is represented by the so-called Gramian matrix? :)
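As a numerical sanity check on this setup (a sketch added here, not part of the original post; `null_space` is a small helper built from NumPy's SVD, not a standard routine):

```python
import numpy as np

# The Gram matrix from the question.
G = np.array([[4.0, 3.0, 2.0],
              [1.0, 3.0, 5.0],
              [3.0, 6.0, 9.0]])

def null_space(A, tol=1e-10):
    """Orthonormal basis (as columns) of {x : A x = 0}, via the SVD."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

right_kernel = null_space(G)    # {u : G u = 0}
left_kernel = null_space(G.T)   # {v : v^T G = 0}, i.e. the null space of G^T

print(right_kernel.ravel())     # proportional to (1, -2, 1)
print(left_kernel.ravel())      # proportional to (1, 5, -3)
```

For this particular matrix the right kernel comes out proportional to (1, -2, 1) and the left kernel to (1, 5, -3); both are one-dimensional, consistent with the matrix being singular of rank 2.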
 
  • #2
Sudharaka said:
Is this approach correct? Is this how the left and right kernels are given when the bilinear function is represented by the so-called Gramian matrix? :)

Sure.
Looks good.

You can simplify it a bit though.
You can find the left kernel of \(f\) using:
\[\left\{v\in \mathbb R^3:\, v^T \begin{pmatrix} 4&3&2\\1&3&5\\3&6&9 \end{pmatrix}=0 \right\}\]
That amounts to the same thing.
Do you see why?

Note that these are the eigenvectors of the transpose for the eigenvalue 0.
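The eigenvector remark can be checked directly. A hedged numerical sketch (not from the thread), using NumPy's `eig` on the transpose:

```python
import numpy as np

G = np.array([[4.0, 3.0, 2.0],
              [1.0, 3.0, 5.0],
              [3.0, 6.0, 9.0]])

# Left-kernel vectors are eigenvectors of G^T for the eigenvalue 0.
vals, vecs = np.linalg.eig(G.T)
idx = int(np.argmin(np.abs(vals)))   # index of the eigenvalue closest to 0
v = np.real(vecs[:, idx])            # the corresponding eigenvector

print(np.round(v @ G, 10))           # v^T G = 0 up to rounding
```

The eigenvalue closest to 0 is (numerically) 0 because the matrix is singular, and its eigenvector of the transpose satisfies the left-kernel condition.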
 
  • #3
I like Serena said:
That amounts to the same thing.
Do you see why?

Thanks very much for the confirmation. Of course I see it. Since,

\[\begin{pmatrix}v_1& v_2& v_3\end{pmatrix} \begin{pmatrix} 4&3&2\\1&3&5\\3&6&9 \end{pmatrix} \begin{pmatrix} u_1\\u_2\\u_3 \end{pmatrix}=0\]

holds for every vector \(\begin{pmatrix} u_1\\u_2\\u_3 \end{pmatrix}\), we can choose any two vectors \(\begin{pmatrix} u^1_1\\u^1_2\\u^1_3 \end{pmatrix}\) and \(\begin{pmatrix} u^2_1\\u^2_2\\u^2_3 \end{pmatrix}\) such that,

\[\begin{pmatrix} u^1_1\\u^1_2\\u^1_3 \end{pmatrix}\neq\begin{pmatrix} u^2_1\\u^2_2\\u^2_3 \end{pmatrix}\]

Then,

\[ \begin{pmatrix}v_1& v_2& v_3\end{pmatrix} \begin{pmatrix} 4&3&2\\1&3&5\\3&6&9 \end{pmatrix} \begin{pmatrix} u^1_1\\u^1_2\\u^1_3 \end{pmatrix} = \begin{pmatrix}v_1& v_2& v_3\end{pmatrix} \begin{pmatrix} 4&3&2\\1&3&5\\3&6&9 \end{pmatrix} \begin{pmatrix} u^2_1\\u^2_2\\u^2_3\end{pmatrix}=0\]

Hence,

\[\begin{pmatrix}v_1& v_2& v_3\end{pmatrix} \begin{pmatrix} 4&3&2\\1&3&5\\3&6&9 \end{pmatrix}\left( \begin{pmatrix} u^1_1\\u^1_2\\u^1_3 \end{pmatrix} -\begin{pmatrix}u^2_1\\u^2_2\\u^2_3\end{pmatrix} \right)=0\]

Since

\[ \begin{pmatrix} u^1_1\\u^1_2\\u^1_3 \end{pmatrix} -\begin{pmatrix}u^2_1\\u^2_2\\u^2_3\end{pmatrix} \neq 0\]

can be any nonzero vector, it follows that

\[\begin{pmatrix}v_1& v_2& v_3\end{pmatrix} \begin{pmatrix} 4&3&2\\1&3&5\\3&6&9 \end{pmatrix}=0\]

Is there a shorter version for this? :)
 
  • #4
Looks good! :)

Is there a shorter version for this?

Hmm, a shorter version?

How about:

Let A be the matrix.

Suppose we have a $v$ in the left kernel such that $v^T A \ne 0$.
Then there is a vector $u$ such that $v^T A u \ne 0$, which contradicts $v$ being in the left kernel.

If on the other hand we pick any $v$ such that $v^T A = 0$, then for any $u$ we have $v^T A u = 0$.

Therefore the left kernel is the set of all $v$ with $v^T A = 0$. $\qquad \blacksquare$
 
  • #5
I like Serena said:
Therefore the left kernel is the set of all $v$ with $v^T A = 0$. $\qquad \blacksquare$

Thanks very much Serena. :) Indeed, this is much simpler.
 

FAQ: Bilinear Function and Gramian Matrix

What is a bilinear function?

A bilinear function is a function of two vector arguments that is linear in each argument separately: holding either input fixed yields a linear function of the other. Concretely, f(au + bw, v) = a f(u, v) + b f(w, v), and similarly in the second argument. This property is called bilinearity.

What is a Gramian matrix?

A Gramian matrix, also known as a Gram matrix, is a square matrix that contains the inner products of a set of vectors. This means that each entry in the matrix is the dot product of two vectors from the set. Gramian matrices are commonly used in linear algebra and signal processing to solve systems of equations and analyze signals.

What is the purpose of a Gramian matrix?

The purpose of a Gramian matrix is to provide a way to analyze the relationships between vectors in a set. By calculating the inner products and organizing them in a matrix, we can determine if the vectors are linearly independent, orthogonal, or if they span a particular space. Gramian matrices are also useful in solving systems of equations and finding the best approximation of a signal.

How is a Gramian matrix calculated?

To calculate a Gramian matrix, start from a set of n vectors. Take the inner product of every pair of vectors in the set and organize the results in a matrix: the (i, j) entry is the inner product of the i-th and j-th vectors. The resulting matrix is n × n and symmetric, and its diagonal entries are the squared magnitudes of the vectors.
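As an illustration of this recipe (the three vectors are made-up example data, not anything from this thread):

```python
import numpy as np

# Three made-up example vectors, one per row.
V = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 0.0]])

# Gram matrix: entry (i, j) is the dot product of row i with row j.
G = V @ V.T

print(G)
```

The result is symmetric, and its diagonal holds the squared magnitudes of the rows (5, 2, 5 here).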

What are the applications of bilinear functions and Gramian matrices?

Bilinear functions and Gramian matrices have various applications in mathematics, physics, and engineering. Some common applications include solving systems of linear equations, analyzing signals and systems, and finding the best approximation of a signal. They are also used in optimization problems, dimensionality reduction, and image processing.
