Confusion between vector components, basis vectors, and scalars

  • #1
e2m2a
TL;DR Summary
Confused about vector components, basis vectors and scalars.
There is an ambiguity for me about vector components and basis vectors. I think the following is how to interpret it and clear it all up, but I could be wrong. I understand that a vector component is not a vector itself but a scalar. Yet we break a vector into its "components" and then add them vectorially to get the vector back. But if vector components are scalars, we cannot add scalars and get a vector. So is the solution to this confusion as follows? The vector component is indeed only a scalar, but when we multiply the component (the scalar) by the corresponding basis vector, we get a vector, since a scalar times a vector is a vector (and I assume basis vectors are vectors). Hence, what we add vectorially to get the vector is not the components themselves but the products of each component with its basis vector. Is this how it's done?
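To make my question concrete (with made-up numbers of my own): in 2D I would read $\vec{v} = 3\hat{x} + 4\hat{y}$ as
$$\vec{v} = \underbrace{3}_{\text{scalar}}\,\underbrace{\hat{x}}_{\text{vector}} + \underbrace{4}_{\text{scalar}}\,\underbrace{\hat{y}}_{\text{vector}} = \underbrace{3\hat{x}}_{\text{vector}} + \underbrace{4\hat{y}}_{\text{vector}},$$
so it is the two vectors $3\hat{x}$ and $4\hat{y}$ that get added, not the bare numbers 3 and 4. Is that the right picture?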
 
  • #2
Maybe it is clearer if you think of a vector as an invariant object. The components of the vector then form a contravariant tensor, and the basis a covariant one.

You don't add the components to get the vector; you multiply the components by the basis vectors and then sum to get the vector. I think it is a little dangerous to think of the components of a vector separately. It is like supposing a matrix is a scalar because $a_{11}$ is a complex number, say $3i$. Now while it is true that $3i$ is $3i$ in any space/reference frame, it is not true that $a_{11}$ is the same in every reference frame.

So, in the case here, using the familiar Cartesian coordinates, say $\vec{v} = v^x \hat{x} + v^y \hat{y} + v^z \hat{z}$. In the same way, while it is true that $\vec{v}$ is still the same vector in another frame, it is not true that $v^x, v^y, v^z$ are the same in another frame. So while it is true that we use scalars (numbers) to represent the components of a vector, this does not mean a component is a scalar in the invariant sense.
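Here is a small numpy sketch of this point (the vector and the rotation angle are chosen arbitrarily):

```python
import numpy as np

# An arbitrary vector, given by its components in the standard Cartesian basis.
v = np.array([3.0, 4.0])

# A second basis, obtained by rotating the standard basis by 30 degrees.
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
e = np.eye(2)       # columns are the old basis vectors
e_new = R @ e       # columns are the new (rotated) basis vectors

# For an orthonormal basis, the components are the dot products with the basis vectors.
comps_old = e.T @ v      # [3., 4.]
comps_new = e_new.T @ v  # [~4.598, ~1.964], different numbers!

# Components times basis vectors, summed, rebuild the SAME invariant vector:
print(e @ comps_old)      # [3. 4.]
print(e_new @ comps_new)  # [3. 4.], the vector itself did not change
```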
 
  • #3
Your thinking is correct. Basis vectors are usually normalized, so you can think of them as unit vectors. In the simple case of 3D Cartesian space, when you write $\vec{A} = A_x \hat{x} + A_y \hat{y} + A_z \hat{z}$, the unit vectors $\hat{x}$, $\hat{y}$, $\hat{z}$ form an orthonormal basis set. The components are scalars and are obtained from the scalar product, $A_x = \vec{A} \cdot \hat{x}$ and so on.

And yes, to get the vector in this case, you are adding components times basis vectors, which you can think of as the addition of three vectors having the form $A_x \hat{x}$.
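For instance, with made-up numbers: if $\vec{A} = 2\hat{x} + 5\hat{z}$, then
$$A_x = \vec{A} \cdot \hat{x} = 2, \qquad A_y = \vec{A} \cdot \hat{y} = 0, \qquad A_z = \vec{A} \cdot \hat{z} = 5,$$
and indeed $A_x \hat{x} + A_y \hat{y} + A_z \hat{z} = 2\hat{x} + 5\hat{z} = \vec{A}$.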
 
  • #4
e2m2a said:
I understand a vector component is not a vector itself but a scalar.
It is under a certain interpretation of the term "scalar", but you have to be careful.

Strictly speaking, a "scalar" is a number that is invariant, i.e., it doesn't depend on your choice of coordinates (or, equivalently, on your choice of basis vectors). But a vector component, as normally understood, is dependent on your choice of basis vectors, which would mean it would not be a scalar.

However, if you think of a vector component in terms of the sum you describe, where a particular vector is expressed as the sum of its components times their corresponding basis vectors, then we can think of the components as scalars because we have fixed the choice of basis vectors when we construct the sum. Or, to put it another way, we can think of a vector component, as @kuruman says, as the scalar product (inner product, or dot product) of the vector and the specific basis vector corresponding to the component. The dot product of two specific vectors is an invariant: it will be the same regardless of your choice of basis. (But of course the "basis" vector in the dot product will only be a basis vector for one choice of basis.) So in this sense, yes, vector components are scalars.
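A minimal numerical sketch of this invariance, with an arbitrary pair of vectors and an arbitrary orthogonal change of basis (numpy; the numbers are random, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two arbitrary, fixed vectors (their identity never changes below).
a = rng.normal(size=3)
b = rng.normal(size=3)

# A random orthogonal matrix Q plays the role of a change of basis:
# the columns of Q form a new orthonormal basis, and Q.T @ v gives
# the components of v with respect to that new basis.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

a_new = Q.T @ a
b_new = Q.T @ b

print(a, a_new)                             # different component triples, same vector
print(np.dot(a, b), np.dot(a_new, b_new))   # the same number: the dot product is a scalar
```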
 
  • #5
This works as follows: Let $\vec{b}_j$ be a basis ($j \in \{1, \ldots, n\}$, where $n$ is the dimension of the (real) vector space). Then you can define a matrix
$$g_{jk} = \vec{b}_j \cdot \vec{b}_k.$$
By assumption this is a symmetric non-degenerate matrix with only positive eigenvalues, because the scalar product is by definition a positive definite bilinear form.

In this context the basis vectors with lower indices are called co-variant basis vectors. If you have another basis $\vec{b}_k'$, then there is an invertible matrix ${T^j}_k$ such that
$$\vec{b}_k' = {T^j}_k \vec{b}_j,$$
where the Einstein summation convention is used. According to this convention it is understood that you have to sum over any index which occurs twice in a formula, and of these two indices one must always be a lower and the other an upper index (two equal indices at the same vertical position are strictly forbidden; if such a thing occurs in the formulas of this so-called Ricci calculus, you have made a mistake!).

Now vectors are invariant objects, i.e., you can decompose any vector uniquely wrt. any basis with the corresponding vector components,
$$\vec{V} = V'^k \vec{b}_k' = V'^k {T^j}_k \vec{b}_j = V^j \vec{b}_j.$$
I.e., the components transform "contravariantly",
$$V^j = {T^j}_k V'^k,$$
or, defining $({U^j}_k)$ as the inverse of the matrix $({T^j}_k)$,
$${U^j}_k {T^k}_l = \delta^j_l.$$
Then you have
$$V'^j = {U^j}_k V^k.$$
For the scalar products of vectors you find
$$\vec{V} \cdot \vec{W} = V^j W^k \, \vec{b}_j \cdot \vec{b}_k = g_{jk} V^j W^k.$$
The same holds for the components wrt. the other basis,
$$\vec{V} \cdot \vec{W} = g_{jk}' V'^j W'^k,$$
where the transformation law for the metric components reads
$$g_{jk}' = \vec{b}_j' \cdot \vec{b}_k' = {T^a}_j {T^b}_k \, \vec{b}_a \cdot \vec{b}_b = {T^a}_j {T^b}_k \, g_{ab},$$
i.e., you have to apply the rule for covariant transformations to each lower index of $g_{jk}$. The inverse of this formula is of course
$$g_{jk} = {U^a}_j {U^b}_k \, g_{ab}'.$$

Now you can also introduce a contravariant basis $\vec{b}^j$ for each given covariant basis by demanding that
$$\vec{b}^j \cdot \vec{b}_k = \delta^j_k.$$
To find them we need the inverse of the matrix $(g_{jk})$, which we denote as $(g^{jk})$, i.e., we have
$$g^{jk} g_{kl} = \delta^j_l.$$
The matrix $(g_{jk})$ is invertible because the scalar product is non-degenerate; indeed, it is positive definite, i.e., the matrix has only positive eigenvalues and thus its determinant is non-zero. Indeed, defining
$$\vec{b}^j = g^{jk} \vec{b}_k$$
does the job, because
$$\vec{b}^j \cdot \vec{b}_l = g^{jk} \, \vec{b}_k \cdot \vec{b}_l = g^{jk} g_{kl} = \delta^j_l.$$
Then you have
$$\vec{b}^j \cdot \vec{V} = V^k \, \vec{b}^j \cdot \vec{b}_k = V^k \delta^j_k = V^j.$$
So you get the contravariant vector components by multiplying the vector with the co-basis vectors. On the other hand you have
$$\vec{b}_j \cdot \vec{V} = V^k \, \vec{b}_j \cdot \vec{b}_k = g_{jk} V^k.$$
So you get the covariant components of $\vec{V}$ as
$$V_j := g_{jk} V^k = \vec{b}_j \cdot \vec{V}.$$

The $\vec{b}^j$ are contravariant, i.e., they transform analogously to the contravariant vector components:
$$\vec{b}'^j = {U^j}_k \vec{b}^k.$$
To see this, note that from the transformation law of the metric components you get
$$g'^{jk} = {U^j}_a {U^k}_b \, g^{ab},$$
because the inverse matrix of $(g_{jk}')$ is uniquely given by the matrix $({U^j}_a {U^k}_b \, g^{ab})$. So we have
$$\vec{b}'^j = g'^{jk} \vec{b}_k' = {U^j}_a {U^k}_b \, g^{ab} \, {T^c}_k \vec{b}_c = {U^j}_a \delta^c_b \, g^{ab} \vec{b}_c = {U^j}_a g^{ab} \vec{b}_b = {U^j}_a \vec{b}^a,$$
i.e., the $\vec{b}^j$ transform contravariantly, as claimed above.
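As a concrete numerical check of all of this machinery, here is a small numpy sketch with an arbitrarily chosen non-orthonormal basis of $\mathbb{R}^2$ (toy numbers of my own, just for illustration):

```python
import numpy as np

# A deliberately non-orthonormal basis of R^2: the columns of B are b_1, b_2.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Metric g_jk = b_j . b_k and its inverse g^jk.
g = B.T @ B
g_inv = np.linalg.inv(g)

# Contravariant (co-)basis vectors b^j = g^jk b_k (columns of B_dual).
B_dual = B @ g_inv

# Defining property b^j . b_k = delta^j_k:
print(B_dual.T @ B)        # identity matrix

# Decompose an arbitrary vector V:
V = np.array([3.0, 4.0])
V_contra = B_dual.T @ V    # contravariant components V^j = b^j . V
V_co     = B.T @ V         # covariant components    V_j = b_j . V

print(B @ V_contra)        # V^j b_j reconstructs V = [3. 4.]
print(g @ V_contra)        # equals V_co: lowering the index with g_jk
print(V_co)
```

For an orthonormal basis, $g_{jk} = \delta_{jk}$, the co-basis coincides with the basis itself and $V_j = V^j$, which is why the covariant/contravariant distinction is invisible in Cartesian coordinates.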
 
