Euclidean space: dot product and orthonormal basis

In summary: the formula ##(\boldsymbol{a},\boldsymbol{b}) = ||\boldsymbol{a}|| \cdot ||\boldsymbol{b}|| \cdot \cos\theta## is usually taken as the *definition* of the angle between two vectors, which is why it is not something you can avoid.
  • #1
rkaminski
Dear All,

Here is one of the doubts I encountered after studying many linear algebra books and texts. Euclidean space is defined by introducing the so-called "standard" dot (or inner) product in the form:

[tex] (\boldsymbol{a},\boldsymbol{b}) = \sum \limits_{i} a_i b_i [/tex]

With that one can define the metric and the vector norm, the latter one as:

[tex] || \boldsymbol{a} || = \sqrt{(\boldsymbol{a},\boldsymbol{a})} [/tex]

etc. However, we know that the first formula is valid only when we choose an orthonormal basis, that is, a basis consisting of vectors which are mutually orthogonal and of unit length:

[tex] (\boldsymbol{e}_i,\boldsymbol{e}_j) = \delta_{ij} [/tex]

[tex] || \boldsymbol{e}_i || = 1 [/tex]

Thus, to define an orthonormal basis one needs to define the dot product and the norm first. On the other hand, the dot product formula works only in the case of an orthonormal basis. If we take any other basis this formula will not be valid (in Euclidean space?). Is that correct? The question is then in what order all the terms should be defined to be consistent. Or perhaps there are many Euclidean spaces, each with its own metric, and the above choice is arbitrary, made so that it resembles physical space the most? The question is then why it is like this? I would like to avoid the formula:

[tex] (\boldsymbol{a},\boldsymbol{b}) = || \boldsymbol{a} || \cdot || \boldsymbol{b} || \cdot \cos \theta [/tex]

since for this we need the norm and the angle... A closed loop, and one of my doubts about how to deal with the topic...

Many thanks for explanations!

Radek
 
  • #2
For any norm satisfying the parallelogram law
$$\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2$$
we can define an inner product which is compatible with the norm (meaning that ##\langle x,x\rangle = \|x\|^2##) as follows. For a vector space over the real scalar field, define
$$\langle x,y \rangle = \frac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2\right)$$
For a vector space over the complex scalar field, define
$$\langle x,y \rangle = \frac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2 + i\|x + iy\|^2 - i\|x - iy\|^2\right)$$
If I recall correctly, this is an exercise in Axler's Linear Algebra Done Right. Also, I just did a search and found this PDF:

Norm-Induced Inner Products
http://www.pcs.cnu.edu/~jgomez/files/norm.pdf
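
As a concrete illustration (my own sketch, not part of the thread), here is a short numerical check with numpy that the real polarization identity above recovers the standard dot product from the Euclidean norm; the helper names are arbitrary:

[code]
# Check that the real polarization identity recovers the standard dot product
# from the Euclidean norm (a numerical spot-check, not a proof).
import numpy as np

def norm(v):
    return np.sqrt(np.sum(v * v))  # Euclidean norm ||v||

def inner_from_norm(x, y):
    # <x, y> = (||x + y||^2 - ||x - y||^2) / 4
    return 0.25 * (norm(x + y) ** 2 - norm(x - y) ** 2)

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
print(inner_from_norm(x, y), np.dot(x, y))  # the two values agree
[/code]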
 
  • #3
Your answer explains that we can obtain an inner product from the norm, and the attached PDF file gives more details. To me, it seems like just swapping the problem around. The question is rather why we define the inner product (or the norm) as I wrote, in the case of Euclidean spaces. What is the reasoning behind all the definitions, and in which order should they be introduced to obtain a consistent theory?
 
  • #4
rkaminski said:
[tex] (\boldsymbol{a},\boldsymbol{b}) = \sum \limits_{i} a_i b_i [/tex]
[...]

However, we know that the first formula is valid only when we choose an orthonormal basis.

If you're talking about the space ##\mathbb R^n##, that definition makes sense even if you don't mention a basis. If it's some other finite-dimensional vector space, you will probably define the inner product in a different way.

rkaminski said:
[tex] (\boldsymbol{a},\boldsymbol{b}) = || \boldsymbol{a} || \cdot || \boldsymbol{b} || \cdot \cos \theta [/tex]

since for this we need the norm and the angle...
Right, this formula is often taken as the definition of the angle between two vectors.
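
For example (a small sketch of my own, not from the thread), taking that formula as the definition means computing ##\theta## as the arccosine of the normalized inner product:

[code]
# Angle between two nonzero vectors, *defined* via the inner product and the norms.
import numpy as np

def angle(a, b):
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))  # clip guards against rounding error

print(np.degrees(angle(np.array([1.0, 0.0]), np.array([1.0, 1.0]))))  # ~45.0
[/code]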
 
  • #5
A Euclidean space, or "real inner product space," is defined as a real vector space equipped with an additional operation, the inner product (dot product), which assigns to a pair of vectors ##\mathbf x##, ##\mathbf y## a real number ##(\mathbf x, \mathbf y)## (sometimes denoted ##\mathbf x\cdot \mathbf y##). This inner product has to be symmetric, ##(\mathbf x, \mathbf y)= (\mathbf y , \mathbf x)##, linear in each argument, and non-negative, ##(\mathbf x, \mathbf x)\ge 0##. The inner product is also supposed to be non-degenerate, meaning that ##(\mathbf x, \mathbf x)=0## only if ##\mathbf x =\mathbf 0## (the equality ##(\mathbf 0, \mathbf 0)=0## follows trivially from linearity).

A Euclidean space is usually also assumed to be finite-dimensional.

Once we are given an inner product satisfying all the above properties, we can develop the whole theory, i.e. define the norm, construct orthonormal bases, etc. The expression presented by the OP gives an example of an inner product; one can easily check that all the properties are satisfied. This inner product is called the standard inner product in ##\mathbb R^n##. We first define the standard inner product, and then check that the standard basis in ##\mathbb R^n## is an orthonormal basis (with respect to the standard inner product).

So, to make a long story short: we first define an inner product, and only after that do we define orthogonality, so there is no vicious circle here.
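
To make the "easily check" step concrete, here is a small numerical spot-check (my own sketch with numpy, not a proof) of the inner product axioms for the standard inner product in ##\mathbb R^n##:

[code]
# Spot-check symmetry, linearity in the first argument, and positivity
# for the standard inner product (x, y) = sum_i x_i y_i.
import numpy as np

def ip(x, y):
    return float(np.sum(x * y))

rng = np.random.default_rng(1)
x, y, z = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)
a, b = 2.0, -3.0

assert np.isclose(ip(x, y), ip(y, x))                                 # symmetry
assert np.isclose(ip(a * x + b * y, z), a * ip(x, z) + b * ip(y, z))  # linearity
assert ip(x, x) > 0 and ip(np.zeros(4), np.zeros(4)) == 0             # positivity, zero only at 0
[/code]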
 
  • #6
rkaminski said:
Dear All,

Here is one of the doubts I encountered after studying many linear algebra books and texts. Euclidean space is defined by introducing the so-called "standard" dot (or inner) product in the form:

[tex] (\boldsymbol{a},\boldsymbol{b}) = \sum \limits_{i} a_i b_i [/tex]

With that one can define the metric and the vector norm, the latter one as:

[tex] || \boldsymbol{a} || = \sqrt{(\boldsymbol{a},\boldsymbol{a})} [/tex]

etc. However, we know that the first formula is valid only when we choose an orthonormal basis, that is, a basis consisting of vectors which are mutually orthogonal and of unit length:

[tex] (\boldsymbol{e}_i,\boldsymbol{e}_j) = \delta_{ij} [/tex]

[tex] || \boldsymbol{e}_i || = 1 [/tex]

Thus, to define an orthonormal basis one needs to define the dot product and the norm first. On the other hand, the dot product formula works only in the case of an orthonormal basis. If we take any other basis this formula will not be valid (in Euclidean space?). Is that correct? The question is then in what order all the terms should be defined to be consistent. Or perhaps there are many Euclidean spaces, each with its own metric, and the above choice is arbitrary, made so that it resembles physical space the most? The question is then why it is like this?
What you are missing is that, given any basis for ##\mathbb R^n##, defining the dot product as [itex]\langle u, v\rangle = \sum u_i v_i[/itex], with [itex]u_i[/itex], [itex]v_i[/itex] the components of ##u## and ##v## in that basis, makes that basis orthonormal!

I would like to avoid the formula:

[tex] (\boldsymbol{a},\boldsymbol{b}) = || \boldsymbol{a} || \cdot || \boldsymbol{b} || \cdot \cos \theta [/tex]

since for this we need the norm and the angle... A closed loop, and one of my doubts about how to deal with the topic...

Many thanks for explanations!

Radek
 
  • #7
Dear All,

I think I understand the idea and the flow of the reasoning, especially what Hawkeye18 wrote. We first define the space with the selected inner product; on that basis we define the norm, the orthonormal basis, etc. Now my question would be: why exactly, in the "real" Euclidean space (our normal physical space), do we take the inner product to be equal to
[tex] (\boldsymbol{a},\boldsymbol{b}) = \sum \limits_{i} a_i b_i [/tex]
in an orthonormal basis? Of course this works, but what is the reasoning behind it? Many thanks.

Radek
 
  • #8
rkaminski said:
Dear All,

I think I understand the idea and the flow of the reasoning, especially what Hawkeye18 wrote. We first define the space with the selected inner product; on that basis we define the norm, the orthonormal basis, etc. Now my question would be: why exactly, in the "real" Euclidean space (our normal physical space), do we take the inner product to be equal to
[tex] (\boldsymbol{a},\boldsymbol{b}) = \sum \limits_{i} a_i b_i [/tex]
in an orthonormal basis? Of course this works, but what is the reasoning behind it? Many thanks.
Given an orthonormal basis ##(e_n)_{n=1}^{N}## for ##\mathbb{R}^N##, we want
$$\langle e_j, e_k \rangle = \begin{cases}
1 & \text{if}\ \ j = k \\
0 & \text{if}\ \ j \neq k
\end{cases}$$
and we want the inner product to be linear on both sides: if ##v,w,x \in \mathbb{R}^N## and ##a,b \in \mathbb{R}##, then
$$\langle av + bw, x\rangle = a\langle v,x\rangle + b\langle w,x\rangle$$
and
$$\langle x, av + bw\rangle = a\langle x,v\rangle + b\langle x,w\rangle$$
These conditions force the form you indicated. To see this, let ##v,w## be arbitrary vectors in ##\mathbb{R}^N##. Then, since ##(e_n)## is a basis, there are unique coefficients ##a_n## and ##b_n## such that
$$v = a_1 e_1 + \cdots + a_N e_N\ \ \text{and}\ \ w = b_1 e_1 + \cdots + b_N e_N$$
Therefore,
$$\langle v,w\rangle = \langle a_1 e_1 + \cdots + a_N e_N, b_1 e_1 + \cdots + b_N e_N \rangle$$
By linearity, the right hand side is equal to
$$\sum_{j=1}^{N} \sum_{k=1}^{N} a_j b_k \langle e_j, e_k\rangle$$
Now, using the fact that ##\langle e_j, e_k\rangle## is ##1## when ##j=k## and ##0## otherwise, this reduces to
$$\sum_{j=1}^{N} a_j b_j = a_1 b_1 + \cdots + a_N b_N$$
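
The derivation can also be checked numerically. Here is a short sketch (my own, not from the thread) that builds an orthonormal basis from a QR factorization and confirms that ##\langle v,w\rangle## equals the dot product of the coordinate vectors:

[code]
# For any orthonormal basis, the inner product of two vectors equals the
# dot product of their coordinates in that basis.
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # columns of Q form an orthonormal basis

v, w = rng.normal(size=3), rng.normal(size=3)
a = Q.T @ v  # coordinates a_j = <v, e_j>, where e_j is the j-th column of Q
b = Q.T @ w  # coordinates b_j = <w, e_j>

print(np.dot(v, w), np.dot(a, b))  # the two numbers agree up to rounding
[/code]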
 
  • #9
Suppose you have an abstract finite-dimensional vector space, and you want to introduce an inner product there. A simple way is to take a basis and declare it to be orthonormal. Then if ##\mathbf b_1, \mathbf b_2, \ldots, \mathbf b_n## is this basis, and ##\mathbf x = \sum x_k \mathbf b_k## and ##\mathbf y = \sum y_k \mathbf b_k##, then ##(\mathbf x , \mathbf y) = \sum x_k y_k##, as was shown in the previous post. In fact, any possible inner product can be obtained this way.

So, in ##\mathbb R^n## we take the simplest basis (the standard one) and declare it to be orthonormal.
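
To make this construction concrete, here is a short sketch (my own, with numpy): expand vectors in an arbitrary chosen basis and take the standard dot product of the coordinates; the chosen basis is then orthonormal by construction.

[code]
# Define an inner product on R^3 by declaring the columns of an invertible
# matrix B to be an orthonormal basis.
import numpy as np

rng = np.random.default_rng(3)
B = rng.normal(size=(3, 3))  # columns form a basis (invertible with probability 1)

def ip_B(x, y):
    # coordinates of x and y in the basis B, then the standard dot product
    xc, yc = np.linalg.solve(B, x), np.linalg.solve(B, y)
    return float(np.dot(xc, yc))

# In this inner product the chosen basis vectors really are orthonormal:
G = np.array([[ip_B(B[:, j], B[:, k]) for k in range(3)] for j in range(3)])
print(np.round(G, 10))  # identity matrix
[/code]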

Here I was thinking as a mathematician, but let me now think as a physicist. The standard coordinate system for a physicist is an orthogonal one, and the scale should be the same in all directions, which is exactly what an orthonormal basis means.

I think for a physicist the natural definition of the dot product in 2D or 3D is just ##\|\mathbf a\| \| \mathbf b\| \cos \theta##; it has a natural physical interpretation via projection onto a line. Note that a "classical" physicist does not ask what length is; he just knows how to find it, and the same goes for the angle. From this definition it can be shown, in 2D and 3D, that the dot product satisfies all the properties of an inner product, so by picking an orthonormal basis (system of coordinates) we get the standard formula for the dot product.

Note that everything here can be made absolutely rigorous in 2D and 3D, not only for a physicist but also for a mathematician, using the axioms of Euclidean geometry in the plane and in space (the ones you studied in high school).

But mathematicians often prefer to do it the other way around: first introduce ##\mathbb R^n## with the standard inner product, and then give each object in classical geometry its representation in ##\mathbb R^2## or ##\mathbb R^3##. This way yields much simpler and more elegant proofs, but the disadvantage is that it is less intuitive. On the other hand, it shows that Euclid's geometry is consistent, i.e. that there is an object satisfying all of Euclid's axioms.
 

FAQ: Euclidean space: dot product and orthonormal basis

What is Euclidean space?

Euclidean space is a mathematical concept that describes a flat space, most familiarly the plane or three-dimensional space, where the rules of Euclidean geometry apply. It is often referred to as "ordinary" space.

What is the dot product in Euclidean space?

The dot product, also known as the scalar product, is a mathematical operation that takes two vectors in Euclidean space and produces a scalar. It is calculated by multiplying the corresponding components of the two vectors and then summing the results.

How is the dot product used in Euclidean space?

The dot product is used to measure the similarity between two vectors. It is also used to calculate the length of a vector, the angle between two vectors, and to project one vector onto another.

What is an orthonormal basis in Euclidean space?

An orthonormal basis is a set of vectors in Euclidean space that are mutually perpendicular (orthogonal) and have length 1 (normalized). Such a basis defines a Cartesian coordinate system and can be used to represent any vector in the space.

How is an orthonormal basis used in Euclidean space?

An orthonormal basis is used to simplify calculations and geometric concepts in Euclidean space. It allows for easy representation and manipulation of vectors, and can be used to find the length, angle, and projection of a vector. It is also used in linear algebra and geometry to solve problems involving Euclidean space.
