Dot product as matrix products?

In summary, the thread discusses the difference between two conventions for complex inner products. In the physicists' convention, the inner product is the Hermitian conjugate of the first vector multiplied by the second; in the mathematicians' convention, it is the Hermitian conjugate of the second vector multiplied by the first. The thread also notes that the choice between the two is largely a matter of tradition.
  • #1
tgt
Why not define dot(u,v) = transpose(u)v rather than transpose(v)u?
 
  • #2
What's the difference?
 
  • #3
At least a difference emerges when the transpose is replaced with the Hermitian conjugate, i.e. when complex vectors are used. Physicists use the convention

[tex]
(u|v) = u^{\dagger} v
[/tex]

and IMO it is a lot better than the mathematicians' convention

[tex]
(u|v) = v^{\dagger} u
[/tex]

When something is done in a dumb way, the reason is usually "for historical reasons". I guess that's the answer to the OP's question this time too.
 
  • #4
Hurkyl said:
What's the difference?
One way generates a scalar and the other way generates an NxN matrix. Which is which depends on whether the vector is a 1xN row vector or an Nx1 column vector.
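The scalar-versus-matrix distinction above can be sketched in plain Python, with nested lists standing in for matrices (the helper names here are illustrative, not from the thread):

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    """Swap rows and columns."""
    return [list(col) for col in zip(*A)]

# u and v as 3x1 column vectors
u = [[1], [2], [3]]
v = [[4], [5], [6]]

inner = matmul(transpose(u), v)   # 1x1: the dot product
outer = matmul(u, transpose(v))   # 3x3: the outer product

print(inner)                      # [[32]]
print(len(outer), len(outer[0]))  # 3 3
```

So for column vectors, transposing the *first* factor yields a scalar (a 1x1 matrix), while transposing the *second* yields an NxN matrix.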
 
  • #5
D H said:
Hurkyl said:
What's the difference?
One way generates a scalar and the other way generates an NxN matrix. Which is which depends on whether the vector is a 1xN row vector or an Nx1 column vector.

This response is not logical!

If we assume [itex]u[/itex] and [itex]v[/itex] to be Nx1 vectors, then both [itex]u^Tv[/itex] and [itex]v^Tu[/itex] give a single component 1x1 matrix.

tgt did not ask about why to use Nx1 or 1xN matrices, so it is better not to start switching between them now.

The truth is that there are two different conventions for complex inner products, and they are

[tex]
(u|v) = \sum_{k=1}^N u^*_k v_k
[/tex]

and

[tex]
(u|v) = \sum_{k=1}^N u_k v^*_k
[/tex]

so I thought it would be natural to guess that the original question was related to this issue.
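The two conventions above differ only by which argument gets conjugated, so their results are complex conjugates of each other. A minimal sketch (function names are mine, not from the thread):

```python
def physicist_inner(u, v):
    """(u|v) = sum_k conj(u_k) * v_k  -- the physicists' convention."""
    return sum(uk.conjugate() * vk for uk, vk in zip(u, v))

def mathematician_inner(u, v):
    """(u|v) = sum_k u_k * conj(v_k)  -- the mathematicians' convention."""
    return sum(uk * vk.conjugate() for uk, vk in zip(u, v))

u = [1 + 2j, 3 - 1j]
v = [2 - 1j, 0 + 1j]

a = physicist_inner(u, v)
b = mathematician_inner(u, v)
print(a, b)                  # (-1-2j) (-1+2j)
print(a == b.conjugate())    # True
```

For real vectors both functions reduce to the ordinary dot product, which is why the distinction only matters in the complex case.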
 
  • #6
jostpuur said:
At least difference emerges when transpose is replaced with Hermitian conjugate, when complex vectors are used.
I see now.
 
  • #7
D H said:
One way generates a scalar and the other way generates an NxN matrix. Which is which depends on whether the vector is a 1xN row vector or a Nx1 column vector.
No, you have misread. If u and v are column vectors (the most common convention), then [itex]u^Tv[/itex] is a scalar and [itex]uv^T[/itex] is a matrix.

But the question was about the difference between [itex]u^Tv[/itex] and [itex]v^Tu[/itex], both of which are scalars. And the answer is that if the vectors are over the real numbers, there is no difference; if the vectors are over the complex numbers and the transpose is replaced by the Hermitian conjugate, one is the complex conjugate of the other. In the latter case, which one we use as the inner product is a matter of convention.
 
  • #8
Yes, I misread the OP as [itex]u^Tv[/itex] versus [itex]uv^T[/itex], as opposed to [itex]u^Tv[/itex] versus [itex]v^Tu[/itex].
 

FAQ: Dot product as matrix products?

What is the dot product?

The dot product, also known as the scalar product, is a mathematical operation that takes two vectors and produces a scalar value. It is calculated by multiplying the corresponding components of the vectors and adding them together. The result is a single number equal to the product of the vectors' lengths and the cosine of the angle between them; when one of the vectors is a unit vector, it equals the signed length of the projection of the other vector onto it.

How is the dot product related to matrix products?

The dot product can be seen as a special case of matrix multiplication, where one of the vectors is treated as a 1xN matrix and the other as an Nx1 matrix. The dot product is then the single entry of the resulting 1x1 matrix.
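This equivalence can be checked directly in plain Python (a small sketch; the variable names are illustrative):

```python
u = [1, 2, 3]
v = [4, 5, 6]

# Componentwise definition of the dot product.
dot = sum(a * b for a, b in zip(u, v))

# Treat u as a 1xN row matrix and v as an Nx1 column matrix;
# their product is a 1x1 matrix whose single entry is the dot product.
row = [u]                 # 1xN
col = [[x] for x in v]    # Nx1
product = [[sum(row[0][k] * col[k][0] for k in range(len(v)))]]

print(dot, product)       # 32 [[32]]
```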

What is the significance of the dot product in linear algebra?

The dot product is an important tool in linear algebra as it allows us to calculate the angle between two vectors, determine whether they are orthogonal (perpendicular), and find the projection of one vector onto another. It is also used in calculations involving vector lengths, distances, and projections in higher dimensions.
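The angle and orthogonality uses mentioned above can be sketched as follows, using the identity cos θ = (u · v) / (|u| |v|) (helper names are assumptions, not from the text):

```python
import math

def dot(u, v):
    """Componentwise dot product of two real vectors."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Euclidean length |u| = sqrt(u . u)."""
    return math.sqrt(dot(u, u))

def angle(u, v):
    """Angle in radians from cos(theta) = (u . v) / (|u| |v|)."""
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

u = [1, 0]
v = [0, 2]
print(dot(u, v))   # 0 -> the vectors are orthogonal
print(angle(u, v)) # pi/2
```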

Can the dot product be applied to matrices of any size?

The dot product itself is defined for two vectors of the same length. Matrix multiplication generalizes it: the product of two matrices is defined whenever the number of columns in the first matrix equals the number of rows in the second, and each entry of the product is the dot product of a row of the first matrix with a column of the second. The resulting matrix has the same number of rows as the first matrix and the same number of columns as the second.
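A quick shape check makes the compatibility rule concrete (a self-contained sketch; `matmul` is an illustrative helper, not a library call):

```python
def matmul(A, B):
    """Multiply matrices (lists of rows); each entry of the result is the
    dot product of a row of A with a column of B."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2x3
B = [[1, 0],
     [0, 1],
     [1, 1]]             # 3x2

C = matmul(A, B)         # 2x2, as the rule predicts
print(C)                 # [[4, 5], [10, 11]]
```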

How is the dot product used in real-world applications?

The dot product has various applications in fields such as physics, engineering, and computer science. It is used in calculating work and energy in physics, finding the similarity between documents in natural language processing, and determining the cosine similarity in recommendation systems. It is also used in computer graphics to calculate lighting and shading effects.
