Dot product in non-orthogonal basis system

In summary, the conversation discusses the difficulty of defining a dot product in a non-orthogonal basis of a vector space. The original poster is trying to understand how to construct the dot product without prior knowledge of lengths and angles, which are normally what the dot product is used to define. The replies point out that a notion of orthonormality must be chosen before a good dot product can be defined, and address the confusion about how coordinates enter the process.
  • #1
Lajka
Hi again,

I don't want it to seem like I'm spamming topics here, but I was hoping I could get help with this dilemma, too.

So, let's say that, in an affine 2-dimensional space, we have two non-orthogonal, linearly independent vectors, and we also pick some point for an origin O. This clearly forms a basis and a coordinate system for that space, thus making it a [tex]R^{2}[/tex] linear vector space.

Now, every vector [tex]v[/tex] can then be written as [tex]v = \sum^{2}_{k=1} v_{k}e_{k}[/tex], where the [tex]v_{k}[/tex] are the coordinates with respect to this non-orthogonal basis [tex]\{ e_{k} \}[/tex].

My question is, how to construct the dot product here?
Let me explain my confusion.
It's clear that the coordinates for [tex]e_{1}[/tex] will be [tex][1\,0]^{T}[/tex], and for [tex]e_{2}[/tex] will be [tex][0\,1]^{T}[/tex]. So, if I just multiply the coordinates componentwise and sum them, I get zero! This can't be right, because the dot product must not depend on the choice of basis. The dot product of these two vectors must be non-zero.
Then I found this paper, http://fatman.cchem.berkeley.edu/xray/VectorSpaces.pdf, and I liked it. Especially this
[Image: excerpt from the paper, writing the dot product of two vectors in terms of their coordinates and a matrix M]

and
[Image: excerpt from the paper, defining the elements of the metric tensor M from the lengths of the basis vectors and the angles between them]

However, here's the problem. This matrix M, which the paper calls the metric tensor, uses the lengths of, and angles between, our non-orthogonal basis vectors to calculate its elements. But length (norm) and angle are concepts I am yet to define later, using the dot product! How can the author use them? I don't know what lengths and angles are yet!

Look at this
[Image: excerpt from the paper, a 3-D example with basis vectors a, b, c, whose metric tensor entries are the products ab, ba, a², etc.]

How do I know what ab or ba is? Or a², for that matter, the norm? I'm TRYING to define the inner product here, and they're asking what ab is! I don't know yet! All I know is that, in this case, [tex]a = (1\;0\;0)^{T}[/tex] and [tex]b = (0\;1\;0)^{T}[/tex], because they're basis vectors, and these are their coordinates w.r.t. themselves! And that doesn't mean anything to me.
Somebody has to tell me what ab is, or bc, or any other combination, right? How can I tell that by myself? I don't understand; maybe I should take a ruler in my hands and measure them on the paper where I supposedly draw them?
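To make the bookkeeping concrete: if somebody does hand you those numbers, the rest of the paper's recipe is mechanical. A minimal numeric sketch, assuming, purely for illustration, unit-length basis vectors with a 60° angle between them:

[code]
import numpy as np

# Assumed inputs -- exactly the data that must be supplied from outside:
theta = np.deg2rad(60)
M = np.array([[1.0,           np.cos(theta)],   # M[i][j] = e_i . e_j
              [np.cos(theta), 1.0          ]])

def dot(u, v):
    """Dot product of vectors given as coordinates w.r.t. the oblique basis."""
    return u @ M @ v

e1 = np.array([1.0, 0.0])   # coordinates of e1 w.r.t. the basis itself
e2 = np.array([0.0, 1.0])   # coordinates of e2

print(dot(e1, e2))   # 0.5, not 0: the naive component sum would wrongly give 0
[/code]

The coordinates [tex][1\,0]^{T}[/tex] and [tex][0\,1]^{T}[/tex] are unchanged; all the geometric information sits in M, and M's entries are exactly the givens being asked about.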

For orthogonal basis vectors, they usually just tell you something like "oh, yeah, [tex]e_{k} \cdot e_{j} = \delta_{kj}[/tex]", but how do THEY know this? How do we even pick orthogonal vectors for our basis in the very beginning? How do we know they're orthogonal? We can't use coordinates, for these are the basis vectors (we defined coordinates with respect to them, so that's useless!), and we don't have lengths and angles; all of that comes after we define the dot product.

If we want to find out whether my basis vectors are orthogonal, we have to take the dot product. If we want to define the dot product, we have to find the metric tensor M. If we want to find M, we need the angles between our basis vectors. If we want to find the angles, we need to know the dot product!
I'm in a loop here, please help me escape.

Long story short: I'm trying to define a dot product in my vector space, using two basis vectors I picked arbitrarily in affine space, along with some origin O. But in order to do that, I'm asked to say, in the middle of the process, what the dot product between my basis vectors is. I don't know; that's the point!

So, where am I wrong here?
 
  • #2
Hi Lajka! :smile:

I think you've already identified the problem quite well. You cannot define the dot product in a space without an orthonormal basis. Of course, you can always define a suitable inner product that satisfies all the axioms of an inner product: your definition (a,b).(c,d) = ac + bd does satisfy them. But of course, this will not give you the usual notion of orthonormality.

The moral is that you first need a natural orthonormal basis, and only then can you define an inner product that agrees with this orthonormality.
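Spelled out as a construction: pick the basis [tex]\{f_{i}\}[/tex] that you want to be orthonormal, expand [tex]u = \sum_{i} u_{i}f_{i}[/tex] and [tex]v = \sum_{i} v_{i}f_{i}[/tex], and define

[tex]\langle u, v \rangle = \sum_{i} u_{i}v_{i}[/tex]

Then [tex]\langle f_{i}, f_{j} \rangle = \delta_{ij}[/tex] holds automatically: which basis counts as orthonormal is an input to the definition, not an output.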
 
  • #3
Hi micromass, thanks for replying :)
micromass said:
You cannot define the dot product in a space without an orthonormal basis.
Hm, I see. So, if I consider a vector space like the one below
[Image: two oblique basis vectors drawn in a plane, with no orthonormal basis in sight]

does that mean I can't have a dot product defined here, then? And if that's the case, how will I determine the coordinates then? I thought I would have to use the dual basis and the dot product, but if I can't have the dot product, I guess I don't know.

As you can see, I'm kinda really confused about this now; maybe a good night's sleep will help me understand it better in the morning.
 
  • #4
Lajka said:
Hi micromass, thanks for replying :)

Hm, I see. So, if I consider a vector space like the one below
[Image: two oblique basis vectors drawn in a plane, with no orthonormal basis in sight]

does that mean I can't have a dot product defined here, then?

You can define a dot product here. But the vectors that are orthonormal under that dot product will not be the vectors you want to be orthonormal. To really define a good dot product, you'll need to know what you want your orthonormal vectors to be, and then take those vectors as your basis.

And if that's the case, how will I determine the coordinates then?

You don't need a dot product to define coordinates. A dot product is only good for defining lengths and angles.
For coordinates, take any vector v. Since you have a basis {v1, v2}, you can write v as av1 + bv2. Then (a,b) are your coordinates. You don't need the dot product for this.
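A minimal numeric sketch of this, assuming, purely for illustration, that the basis vectors happen to be recorded as arrows in some ambient frame; note that the expansion itself never calls a dot product:

[code]
import numpy as np

# Hypothetical oblique basis vectors, recorded in some ambient frame:
v1 = np.array([1.0, 2.0])
v2 = np.array([-1.0, 2.0])
B = np.column_stack([v1, v2])     # basis vectors as columns

v = np.array([3.0, 2.0])          # the vector we want coordinates for

a, b = np.linalg.solve(B, v)      # solve a*v1 + b*v2 = v: pure linear algebra
print(a, b)                       # 2.0 -1.0, i.e. v = 2*v1 - 1*v2
[/code]

Lengths and angles never enter: solving the linear system uses only the linear structure of the space.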
 
  • #5
But isn't the dot product independent of the chosen basis? I thought that being orthogonal means only one thing, namely being perpendicular to one another, regardless of which basis vectors I choose? Aka "the dot product is independent of the coordinate system"?
Oh boy, I'm kinda more confused now.

Let me try with another pic, hopefully I'll explain better now what confuses me.
[Image: a vector v together with an orthonormal basis e1, e2 (in green) and a non-orthogonal pair e1', e2']

[tex] \{ e_{i} \}[/tex] is an orthonormal basis in this space (the green vectors in the picture).
Now, [tex]v = v_{1}e_{1} + v_{2}e_{2}[/tex], and all is great. The dot product is also defined as [tex]u \cdot v = \sum^{2}_{i=1} u_{i}v_{i}[/tex]. (If you could tell me how we got this, that would be great, but I'm just going to use it anyway.)

Coordinates for the basis vectors are [tex]e_{1} = [1\,0]^{T}[/tex] and [tex]e_{2} = [0\,1]^{T}[/tex]. This makes sense, because coordinates are defined with respect to the basis vectors: they tell you how "much" of each basis vector is needed to construct your desired vector. If I'm not mistaken, basis vectors will always have these coordinates in their own vector space, no matter how they look (orthogonal or non-orthogonal).

Ok, now let's look at the vectors [tex]e'_{1}[/tex] and [tex]e'_{2}[/tex]. They are not orthogonal, and their scalar product [tex]e'_{1} \cdot e'_{2}[/tex] will have some value, say A.

Let's now completely forget our current orthonormal basis [tex] \{ e_{i} \}[/tex], as if it was never there.
[Image: the same vector v, now drawn with only the oblique basis e1', e2']

Now [tex] \{ e'_{i} \}[/tex] is my basis, and [tex]v = v'_{1}e'_{1} + v'_{2}e'_{2}[/tex]. It's the same vector as before, same initial and end points, just with new coordinates now, with respect to its new basis.
I also expect now that [tex]e'_{1} = [1\,0]^{T}[/tex], and [tex]e'_{2} = [0\,1]^{T}[/tex], since these are "new coordinates".

HOWEVER, believing that the dot product is invariant, I still expect that
[tex]e'_{1} \cdot e'_{2} = A[/tex]
even with these new coordinates. That is only possible if the formula for the dot product is different now.

But if I understood you correctly, you say that it will indeed be
[tex]e'_{1} \cdot e'_{2} = 0[/tex]
and these vectors will be considered the new orthogonal vectors in this space.

If that's the case, what happened to dot product invariance as a concept? Or maybe I actually don't understand what it really means...
 
  • #6
Ah, yes, I see what you mean. Well, after the change of basis, the dot product will still give [itex]e_1^\prime\cdot e_2^\prime=A[/itex]. But this means that the calculation of the dot product has changed somewhat.

It was easiest when you could just work with the orthonormal vectors e1 and e2. But after a change of basis, the dot product has changed its formula and can no longer be calculated as (a,b).(c,d) = ac + bd.

For example, when changing your basis to (1,2) and (-1,2), denote the matrix of this basis change by S (rather than A, which we already used above for the value of [itex]e_1^\prime\cdot e_2^\prime[/itex]):

[tex]S=\left(\begin{array}{cc} 1 & -1\\ 2 & 2\end{array}\right)[/tex]

Then the dot product becomes:

[tex](a,b).(c,d)=(a\ b)\,S^{T}S\,(c\ d)^{T}[/tex]
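A quick numeric check of this formula; the coordinate values below are arbitrary, chosen just for illustration:

[code]
import numpy as np

S = np.array([[1.0, -1.0],
              [2.0,  2.0]])      # columns: the new basis vectors in the old frame

u_new = np.array([3.0, 1.0])     # arbitrary coordinates w.r.t. the new basis
v_new = np.array([-2.0, 4.0])

# Old-frame computation: convert back, then use the plain component formula.
old_way = (S @ u_new) @ (S @ v_new)

# New-frame computation: the modified formula with the Gram matrix S^T S.
new_way = u_new @ (S.T @ S) @ v_new

print(old_way, new_way)          # both 20.0: the value is basis-independent
assert np.isclose(old_way, new_way)

# In particular, e1'.e2' = (S^T S)[0,1] = 3.0, which is exactly (1,2).(-1,2).
print((S.T @ S)[0, 1])
[/code]

So the value [itex]e_1^\prime\cdot e_2^\prime = A[/itex] is preserved; what changes with the basis is only the recipe used to compute it.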

But note: to define the dot product in the first case, you had to use e1 and e2 again. You will always need some notion of orthonormality before you can actually define the dot product. If you don't know which vectors are orthonormal to each other, then you can't define a dot product.
 
  • #7
Okay, just for the sake of clarity and my sanity, is there any way to construct a vector space with the usual inner product, as we all know it, but without this picture?

[Image: an orthonormal basis e1, e2 drawn alongside the oblique vectors e1', e2']


Or are this picture and these vectors downright necessary for it?

I was just trying to prove to myself that I could construct the inner product space from scratch (with the usual inner product) using only oblique coordinates. You have used the matrix S, whose columns are the vectors (1,2) and (-1,2), but those vectors are coordinate vectors in a previous, orthonormal basis. I was trying to avoid any connection to an orthonormal system; that's why I said "let's forget completely our orthonormal basis" in my last post. Assume you don't have the information that [tex]e'_{1}[/tex] and [tex]e'_{2}[/tex] were (1,2) and (-1,2) in a former basis ([tex]\{e_{i}\}[/tex]).

For all you know, the former basis never existed. All you have is this one, with oblique basis vectors [tex]e'_{1}[/tex] and [tex]e'_{2}[/tex] and their coordinates.

I'm tired and sleepy, so you'll forgive me if I appear to catch on slowly, but I'm thinking that your responses
You can define a dot product here. But the vectors that are orthonormal under that dot product will not be the vectors you want to be orthonormal. To really define a good dot product, you'll need to know what you want your orthonormal vectors to be, and then take those vectors as your basis.
But note: to define the dot product in the first case, you had to use e1 and e2 again. You will always need some notion of orthonormality before you can actually define the dot product. If you don't know which vectors are orthonormal to each other, then you can't define a dot product.
mean exactly that: the above picture is absolutely necessary in order to have a 'normal' inner product defined. Am I right?

I hope I'm making at least a little sense.
 
  • #8
It is an interesting question: which comes first, orthogonality or the dot product? For vectors that represent line segments, I can believe that orthogonality comes first in the practical sense, and perhaps even in the theoretical sense. For the more general idea of an "inner product", I think the inner product comes first, because for things like spaces of functions there seems to be no simple concept of orthogonality except by defining it in terms of the inner product.

Another interesting question:
A more basic idea than orthogonality of a basis is the uniqueness of representation in a basis. Suppose we have a basis [itex] \{b_i\} [/itex] and we know that each vector [itex] v [/itex] in the space has a unique representation as a linear combination of these basis vectors. Suppose we have an inner product on the space and we know the value of the inner product [itex] \langle b_i, b_j \rangle [/itex] for each pair of vectors in the basis. Can we prove the existence of orthogonal vectors and the existence of an orthogonal basis?
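One way to see that the answer is yes: the Gram-Schmidt process only ever consumes inner products of vectors it has already built, so the values [itex] \langle b_i, b_j \rangle [/itex] are all it needs. A minimal sketch, working purely with coefficient vectors relative to [itex] \{b_i\} [/itex] and an assumed, made-up Gram matrix:

[code]
import numpy as np

# Assumed (made-up) Gram matrix G[i][j] = <b_i, b_j>:
# unit-length basis vectors with <b_1, b_2> = 0.5.
G = np.array([[1.0, 0.5],
              [0.5, 1.0]])

def ip(x, y):
    """Inner product of vectors given by their coefficients w.r.t. {b_i}."""
    return x @ G @ y

# Run Gram-Schmidt on the basis vectors themselves (coefficient vectors e_i).
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ortho = []
for v in basis:
    w = v - sum(ip(v, u) * u for u in ortho)   # subtract projections
    ortho.append(w / np.sqrt(ip(w, w)))        # normalize

# Check: the resulting vectors are orthonormal under <.,.>.
for i, u in enumerate(ortho):
    for j, w in enumerate(ortho):
        assert abs(ip(u, w) - (1.0 if i == j else 0.0)) < 1e-12
[/code]

Nothing here requires knowing what the [itex]b_i[/itex] "look like"; the Gram matrix of pairwise inner products carries all the needed information.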
 
  • #11
Vector Spaces are Abstract

I'm a bit late to the party, but I believe I have something to add. The hole in the logic of which comes first, inner product or orthogonality (chicken or egg), has sunk many a student's boat. Here's the patch for the hole:

Linear spaces are basically abstract sets with additional structure specified. Structure means there are relationships among the members of the set, as opposed to a set made up of a more or less arbitrary collection of unrelated or random members. In a linear space, every member is related mathematically/arithmetically to the others as a weighted sum of some combination of them. As members of a basic algebraic structure, the elements themselves have no internal structure, and the linear space is abstract. Our understanding of the properties of the space and its members stems solely from its upholding and satisfying this axiom. There are more heavily equipped linear spaces, such as those for which a dimension is specified. All in all, these internal relationships tell us that a linear space has a notion of proportionality among at least some members of the space (all the members, in a 1-D space), and every member is expressible in terms of other members via weighted sums.

The inner product is a whole different animal. A linear space with an inner product specified does not gain additional internal structure. What the inner product does is set up a relationship between the members of the space (and the space as a whole) and the outside world: it shows that two member vectors can combine and map to an element of the underlying field. This opens a huge door, allowing the outside world to connect with the linear space by means of interpretation. But again, the internal relationships among the members remain unchanged.

Now, the outside world that we connect to the linear space in a first introduction is the euclidean plane (or higher-dimensional versions of it) or some other similar original geography. We use geometry to define notions of parallel and perpendicular (orthogonal), then we create a set of axes, and finally we project them into an R2 or R3 linear space. The axes specified on the original geography need not be orthogonal, nor have unit increments; they may have any length units the geometer desires. The coordinates on these axes are equated with the coefficients of the vectors appearing in any weighted sum in the linear space. These coefficients may be visualized in a Cartesian depiction, basically a graph with grid points where each grid point is an integer or other prominent value of the coefficient. In this depiction we think of the vectors as ordered pairs and triplets, but really they are just listings of their coefficients relative to a basis.

The idea of the inner product is used to algebraically encode the notions of parallel and perpendicular as defined in euclidean geometry, not the other way around. These notions do not exist in a linear space, even one equipped with an inner product, unless the linear space is used to encode a geometry. Only then can the members of the linear space be interpreted as having orthogonality.

As for bases and uniqueness, any linearly independent set of N vectors (N being the specified dimension of the space) may serve as a basis. Even if inner products are defined and known for all elements of the space, parallelism and orthogonality still have no meaning. The inner-product calculation between two vectors will look different in each basis, but will always result in the same value if the arithmetic is done correctly.

Each basis may be illustrated in its own "orthonormal" Cartesian depiction, and it will seem like each basis has its own separate space. However, this is a convenience for the mathematician that creates an optical illusion. In reality, they are just different organized, systematic visualizations of the same space of coefficients that "generate" all the vectors in the abstract linear space as weighted sums of the basis. The illustrations are not a geometric interpretation of the vectors, and the vectors are not an encoding of any geometry, unless explicitly specified to be so. If we take all the vectors drawn in the Cartesian depiction for one basis, compile them together with those from the Cartesian depictions of other bases, and then project them onto the original geography (if one has been specified, that is), then we will see how the vectors and bases (and the axes that go along with each basis) all relate to each other geometrically. If you wish to think of it in reverse, the best you can do is say: here is this linear space with an inner product specified; now, using euclidean geometry as a metaphor, let's visualize the linear space by showing geometric relationships that correspond metaphorically to the algebraic ones.
 

Related to Dot product in non-orthogonal basis system

1. What is a dot product in a non-orthogonal basis system?

The dot product in a non-orthogonal basis system is a mathematical operation that measures the similarity or alignment between two vectors in a space that does not have orthogonal (perpendicular) basis vectors. It is also known as the scalar product or inner product.

2. How is the dot product calculated in a non-orthogonal basis system?

In a non-orthogonal basis system, multiplying corresponding components and summing them is not sufficient. The dot product must be computed with the metric tensor M, whose entries are the pairwise dot products of the basis vectors: a•b = ∑i∑j (ai * Mij * bj), where Mij = ei•ej. Only in an orthonormal basis does this reduce to a•b = ∑(ai * bi).

3. What is the difference between dot product in an orthogonal and non-orthogonal basis system?

In an orthogonal basis system, the dot product is calculated by multiplying the corresponding components of each vector and then adding them together, which is a simple calculation. In a non-orthogonal basis system, the calculation must also account for the lengths of the basis vectors and the angles between them (via the metric tensor), which makes it more involved.

4. What are the applications of dot product in a non-orthogonal basis system?

The dot product in a non-orthogonal basis system has various applications in physics, engineering, and computer science. It can be used to calculate work and energy in non-orthogonal coordinate systems, determine the angle between two vectors in three-dimensional space, and perform transformations in computer graphics and image processing.

5. Can the dot product be negative in a non-orthogonal basis system?

Yes, the dot product can be negative in a non-orthogonal basis system. This occurs when the angle between the two vectors is obtuse (greater than 90 degrees). In this case, the dot product will have a negative value, indicating that the vectors are more "opposite" or less aligned with each other.
