Maximum inner product between two orthogonal vectors (in standard dot product)

In summary, the thread discusses the maximum inner product between two vectors that are orthogonal and have norm one under the standard dot product, where the inner product being maximized is a weighted one defined by a diagonal matrix with M entries. The case M = 2 is solved explicitly, possible solutions for larger M are discussed, and the additional restriction that the lambdas sum to M is noted. The thread closes with the question of whether this is a homework problem; it is not.
  • #1
FoldHellmuth
Hello buddies,

Here is my question. It seems simple, but at the same time the answer is not obvious to me.

Given that you have two vectors [itex]\mathbf{u},\mathbf{v}[/itex].

  • They are orthogonal [itex]\mathbf{u}^T\mathbf{v}=0[/itex] by standard dot product definition.
  • They have norm one [itex]||\mathbf{u}||=||\mathbf{v}||=1[/itex] by standard dot product definition.
  • Define the weighted inner product as [itex]\mathbf{u}^T\left(\begin{matrix}\lambda_1\\&\ddots\\&&\lambda_M\end{matrix}\right)\mathbf{v}[/itex], where [itex]M[/itex] is the number of components. Then the norm according to this inner product is also one for both vectors: [itex]\mathbf{u}^T\left(\begin{matrix}\lambda_1\\&\ddots\\&&\lambda_M\end{matrix}\right)\mathbf{u}=\mathbf{v}^T\left(\begin{matrix}\lambda_1\\&\ddots\\&&\lambda_M\end{matrix}\right)\mathbf{v}=1[/itex]. Notice the dot product is the particular case where the matrix is the identity.
  • Edit: I forgot to add this constraint: [itex]\lambda_1+\cdots+\lambda_M=M[/itex]. It does not make the problem more complicated; it just narrows down the possible lambdas.

What, then, is the maximum weighted inner product (in absolute value) between two vectors satisfying the previous conditions? I.e.
[itex]\operatorname{max}\limits_{\mathbf{u},\mathbf{v}}
\left|
\mathbf{u}^T\left(\begin{matrix}\lambda_1\\&\ddots\\&&\lambda_M\end{matrix}\right)\mathbf{v}
\right|[/itex]

Cheers
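
Edit: for concreteness, here is a minimal numpy sketch of the quantities involved. The weights and the pair [itex]\mathbf{u},\mathbf{v}[/itex] below are just one arbitrary example satisfying all of the conditions, not a claimed optimum.

[code]
import numpy as np

# Arbitrary example weights for M = 4, chosen so that they sum to M.
lam = np.array([0.5, 0.5, 1.5, 1.5])
W = np.diag(lam)

# An example pair that is orthonormal in the standard dot product and is
# supported on two coordinates whose weights sum to 2.
u = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)
v = np.array([-1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)

print(u @ v)                 # standard inner product: 0
print(u @ u, v @ v)          # standard squared norms: 1.0 1.0
print(u @ W @ u, v @ W @ v)  # weighted squared norms: 1.0 1.0
print(abs(u @ W @ v))        # weighted inner product: 0.5
[/code]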
 
Last edited:
  • #2
Do you require that BOTH norms, the standard one and the weighted one, are 1?
 
  • #3
Hawkeye18 said:
Do you require that BOTH norms, the standard one and the weighted one, are 1?

Yes.
 
  • #4
Take M=2

write [itex] u=\left(\begin{matrix}\cos(\alpha)\\ \sin(\alpha)\end{matrix}\right);\quad v=\left(\begin{matrix}-\sin(\alpha)\\ \cos(\alpha)\end{matrix}\right)[/itex]

The weighted norm (writing [itex]\Lambda[/itex] for the diagonal matrix of the [itex]\lambda_i[/itex]) is then
[itex] u^T\Lambda u = \lambda_1 \cos^2(\alpha) + \lambda_2 \sin^2(\alpha) =1[/itex]
[itex] v^T\Lambda v = \lambda_1 \sin^2(\alpha) + \lambda_2 \cos^2(\alpha)=1[/itex]

Adding these gives [itex] \lambda_1+\lambda_2 = 2[/itex]

Subtracting them gives [itex](\lambda_1 - \lambda_2)(\cos^2(\alpha)-\sin^2(\alpha))=0[/itex]

Either you use the standard norm, [itex]\lambda_1=\lambda_2=1[/itex], or [itex]\alpha=\pi/2[/itex] (or the 3 other quadrants) and there are no further restrictions on [itex]\lambda_{1,2}[/itex]

Then [itex] u^T \Lambda v = \frac{1}{2}(\lambda_2 - \lambda_1)[/itex]
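
A quick numerical sanity check of this (the particular values of [itex]\lambda_{1,2}[/itex] are arbitrary, subject only to [itex]\lambda_1+\lambda_2=2[/itex]):

[code]
import numpy as np

# alpha = pi/4: both weighted norms equal 1 whenever lambda_1 + lambda_2 = 2,
# and the weighted inner product equals (lambda_2 - lambda_1) / 2.
l1, l2 = 0.3, 1.7
Lam = np.diag([l1, l2])
a = np.pi / 4
u = np.array([np.cos(a), np.sin(a)])
v = np.array([-np.sin(a), np.cos(a)])

print(u @ v, u @ Lam @ u, v @ Lam @ v)  # ~0, 1.0, 1.0
print(u @ Lam @ v, (l2 - l1) / 2)       # both 0.7
[/code]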

For M>2, find the two [itex]\lambda[/itex] with the largest difference but sum 2, though I am not entirely sure that there cannot be another, larger solution.

BTW, is this a homework problem?
 
  • #5
M Quack said:
The weighted norm (writing [itex]\Lambda[/itex] for the diagonal matrix of the [itex]\lambda_i[/itex]) is then
[itex] u^T\Lambda u = \lambda_1 \cos^2(\alpha) + \lambda_2 \sin^2(\alpha) =1[/itex]
[itex] v^T\Lambda v = \lambda_1 \sin^2(\alpha) + \lambda_2 \cos^2(\alpha)=1[/itex]

Adding these gives [itex] \lambda_1+\lambda_2 = 2[/itex]

Subtracting them gives [itex](\lambda_1 - \lambda_2)(\cos^2(\alpha)-\sin^2(\alpha))=0[/itex]

Either you use the standard norm, [itex]\lambda_1=\lambda_2=1[/itex], or [itex]\alpha=\pi/2[/itex] (or the 3 other quadrants) and there are no further restrictions on [itex]\lambda_{1,2}[/itex]

I think you meant [itex]\alpha=\pi/4[/itex] (or, more generally, [itex]\alpha=\pi/4+k\pi/2[/itex]).
For [itex]M=2[/itex], the solution only allows these values for the lambdas. I am interested in a generic [itex]M[/itex], which is less obvious.

I actually forgot to mention [itex]\lambda_1+\cdots+\lambda_M=M[/itex], i.e. the average lambda is one.

M Quack said:
For M>2, find the two [itex]\lambda[/itex] with the largest difference but sum 2, though I am not entirely sure that there cannot be another, larger solution.

But you derived these conditions by imposing orthogonality and unit norm on vectors with two components. This probably does not carry over to larger [itex]M[/itex].
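
As a rough numerical probe (not a proof), one could compare a generic constrained optimizer against the value given by the two-coordinate construction. A sketch along these lines, assuming scipy is available; the lambdas are an arbitrary example with sum [itex]M[/itex] and containing pairs that sum to 2:

[code]
import numpy as np
from scipy.optimize import minimize

# Arbitrary example weights with sum M = 4; (0.2, 1.8) and (0.9, 1.1) sum to 2.
lam = np.array([0.2, 0.9, 1.1, 1.8])
M = len(lam)
W = np.diag(lam)

def objective(x):
    u, v = x[:M], x[M:]
    return -(u @ W @ v)  # maximize u^T W v (a sign flip can be absorbed into u)

constraints = [
    {"type": "eq", "fun": lambda x: x[:M] @ x[M:]},            # u . v = 0
    {"type": "eq", "fun": lambda x: x[:M] @ x[:M] - 1.0},      # ||u||^2 = 1
    {"type": "eq", "fun": lambda x: x[M:] @ x[M:] - 1.0},      # ||v||^2 = 1
    {"type": "eq", "fun": lambda x: x[:M] @ W @ x[:M] - 1.0},  # weighted norm of u
    {"type": "eq", "fun": lambda x: x[M:] @ W @ x[M:] - 1.0},  # weighted norm of v
]

best = 0.0
rng = np.random.default_rng(0)
for _ in range(50):  # SLSQP is local, so restart from several random points
    res = minimize(objective, rng.standard_normal(2 * M),
                   method="SLSQP", constraints=constraints)
    if res.success:
        best = max(best, -res.fun)

# Two-coordinate construction: the largest |difference| among pairs summing
# to 2 is |1.8 - 0.2|, giving (1.8 - 0.2)/2 = 0.8. Compare with the search.
print(best, (1.8 - 0.2) / 2)
[/code]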

M Quack said:
BTW, is this a homework problem?

Not at all. I am a theoretical radar engineer.
 
Last edited:

Related to Maximum inner product between two orthogonal vectors (in standard dot product)

What is the definition of maximum inner product between two orthogonal vectors?

Under the standard dot product, two orthogonal vectors always have an inner product of exactly 0. The question discussed in this thread is therefore about a weighted inner product: how large it can be, in absolute value, when the two vectors are required to be orthogonal and of unit norm with respect to the standard dot product.

How is the maximum inner product calculated?

The standard inner product of two vectors equals the product of their magnitudes multiplied by the cosine of the angle between them; equivalently, it is the sum of the products of their corresponding components. For vectors of fixed magnitude it is maximized when the angle between them is zero, i.e. when the vectors are aligned.
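
For example, for two unit vectors separated by an angle of 60 degrees, [itex]\mathbf{u}\cdot\mathbf{v} = \|\mathbf{u}\|\,\|\mathbf{v}\|\cos 60^\circ = 1 \cdot 1 \cdot \tfrac{1}{2} = \tfrac{1}{2}[/itex].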

Why is the maximum inner product important?

The maximum inner product is important because it allows us to measure the level of similarity or alignment between two vectors. It is often used in fields such as signal processing, machine learning, and data analysis to find the most similar or aligned data points.

What is the relationship between orthogonal vectors and the maximum inner product?

Orthogonal vectors, by definition, have an inner product of 0. This means that they are perpendicular to each other and have no alignment or similarity. The maximum inner product between two vectors occurs when they are perfectly aligned, which means they are not orthogonal.

How can the maximum inner product be used in real-world applications?

The maximum inner product can be used in various applications such as image and audio processing, data compression, and pattern recognition. For example, in image processing, the maximum inner product can be used to determine the level of similarity between two images, which can help with tasks such as image matching and object recognition.
