Understanding Vectors: Magnitude & Direction

In summary, a vector is an arrow with a head and a tail: it has magnitude and direction. It is used to describe directions, forces, accelerations, etc.
  • #1
Trying2Learn
TL;DR Summary
An explanation that lives between simple engineering and abstract math.
Good Morning

(And apologies if this is not the right forum -- it is not a homework problem.)

On the one hand, a vector is an arrow with a head and a tail: it has magnitude and direction. It is used to describe directions, forces, accelerations, etc.

However, there are more mathematical definitions: a member of a space, perhaps equipped with a bilinear product, that has a basis from which all vectors can be described, etc.

Can anyone suggest (this is for my nephew) a source (hopefully online) that provides a description of a vector that rises above the simple "arrow with a head and a tail" (i.e., direction and magnitude), yet motivates the student to want to learn more about what vectors are (e.g., that they have a basis which can be used to describe all elements, etc.)?

I do not need much. I just want to get him to see that more is going on here. He is a senior in high school with perfect grades (so he learns fast).
 
  • Like
Likes berkeman and vanhees71
  • #2
I do this with my students. I let them sit in small groups and discuss which physical quantities have magnitude and direction, and which have only magnitude.

Then, after we have sorted that out, I write a vector on the whiteboard and start to talk about components and unit vectors. What kind of unit vectors can we use in "everyday" life? Up/down, forward/back, left/right. Then we agree that such directions have a huge ambiguity: whose up/down are we going to stick to? Does it matter? Then I draw two coordinate systems on the same whiteboard, and we try to agree that the vector (arrow) is the same but its components are different in the two coordinate systems.
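A concrete version of that last point (my example, not from the post): let the arrow ##\vec{v}## point one unit "east". In a frame whose ##x##-axis points east, its components are ##(1,0)##. In a second frame rotated counterclockwise by an angle ##\theta##, the same arrow has components
$$v_x'=\cos\theta, \qquad v_y'=-\sin\theta,$$
different numbers, yet ##\sqrt{v_x'^2+v_y'^2}=1## in both frames: the arrow has not changed, only its description has.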
 
  • Like
Likes dextercioby, ComplexVar89, jtbell and 4 others
  • #3
Yes, that I get. I like it. I will convey this. Thank you.

But I am even more interested in the middle world between THAT description you gave and the mathematical one: the one that speaks of a basis, of how vectors add, and of the operations between them.
 
  • #4
Can you provide a real example?

By operations, are you referring to the scalar product and the cross product?

Any "University physics" book should have among it first chapters an introduction to vectors.
 
  • Like
Likes vanhees71
  • #7
I think vectors and linear algebra are a very good, not too difficult example of what ever-higher abstractions are good for.

I think indeed a good approach is the geometric one, i.e., starting with arrows in Euclidean point space. You can systematically build up all the notions of a Euclidean vector space and a Euclidean affine space.

Quite naturally you are led to the algebraisation of geometry, and to the relation of geometrical questions, like the intersection of straight lines in a plane or of a straight line with a plane in 3D space, etc., to the theory of solutions of linear (affine) sets of equations.

Finally you can strip vectors of all their "intuitive" geometrical meaning and axiomatize the theory. Then you find examples of realizations of vector spaces nearly everywhere in math, and you don't need to prove everything for each special case; it's proved once and for all for the abstract spaces and thus applies to all the special cases too.

The message is that ever more abstraction makes the issues simpler rather than more complicated. You strip an idea of all the ballast of the special cases, down to its bare bones, and derive very generally valid properties, which then apply to all the special cases when needed.
 
  • Like
Likes DaveE, Ibix and Trying2Learn
  • #8
vanhees71 said:
I think vectors and linear algebra are a very good, not too difficult example of what ever-higher abstractions are good for. [...]
This was what I was looking for. Now I must try to put it in my own words. THANK YOU!
 
  • #9
In my opinion, the most important property (to be discussed as soon as possible) is "how vectors add" (parallelogram rule or tail-to-head)...
...and this property exists before any [specific] "magnitude" is defined.

In physics, we can motivate this with the observation that, given two forces on a point-object, we can compute the net force, which can replace the two forces we were given. Kinematically, given a sequence of displacements from a point A to a point Z, we can write the displacement from A directly to Z as a vector sum. These examples are prototypes for lots of other vector quantities that appear in physics.
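A minimal numerical sketch of the displacement example (my code, assuming NumPy; the intermediate displacements are invented for illustration):

```python
import numpy as np

# Hypothetical displacements A->B, B->C, C->Z (values invented for illustration).
d_AB = np.array([3.0, 1.0])
d_BC = np.array([-1.0, 2.0])
d_CZ = np.array([2.0, 2.0])

# Tail-to-head addition: the net displacement A->Z is the component-wise sum.
d_AZ = d_AB + d_BC + d_CZ
print(d_AZ)                   # [4. 5.]
print(np.linalg.norm(d_AZ))   # magnitude of the single equivalent displacement
```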
 
Last edited:
  • #10
vanhees71 said:
I think vectors and linear algebra are a very good, not too difficult example of what ever-higher abstractions are good for. [...]

You say this:
"Then you find examples of realizations of vector spaces nearly everywhere in math."

I see it in the skew-symmetric matrices (below, one after the other), and in how, in 3D, all skew-symmetric matrices can be built from three standard forms (as if the three fundamental forms were the basis):

$$\begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}$$
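A quick check of this claim (my sketch, assuming NumPy; names mine): any 3×3 skew-symmetric matrix is a linear combination of the three forms above, and the coefficients can be read off its entries.

```python
import numpy as np

# The three "standard forms" above, as a basis for 3x3 skew-symmetric matrices.
E1 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])
E2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
E3 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]])

a, b, c = 2.0, -1.5, 0.5          # arbitrary coefficients
S = a * E1 + b * E2 + c * E3      # a generic skew-symmetric matrix
assert np.allclose(S, -S.T)       # skew-symmetry: S^T = -S
# Reading the coefficients back off the entries shows E1, E2, E3 span the space.
assert np.allclose([S[1, 0], S[0, 2], S[2, 1]], [a, b, c])
```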
Can you provide other examples for a precocious high school student?

For myself, I try to imagine that, because of the rods and cones in our eyes, red, green, and blue are the basis "vectors" for light. Or is that stupidity?
 
  • #11
In 3D, [pseudo]-vectors are often used to describe skew-symmetric matrices (which are arguably more fundamental). Examples of pseudo-vectors are cross-products of ordinary vectors (like torque) and the magnetic field... things associated with the "right-hand-rule".
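A small sketch of that correspondence (my code, assuming NumPy; names mine): packing a vector ##\vec{\omega}## into a skew-symmetric matrix reproduces the cross product as a matrix-vector product.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]_x with [w]_x @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

w = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
assert np.allclose(skew(w) @ v, np.cross(w, v))
```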
 
  • Like
Likes malawi_glenn
  • #12
Trying2Learn said:
This was what I was looking for. Now I must try to put it in my own words. THANK YOU!
A possible motivation for all this is if he's interested in GR/astrophysics/cosmology. You can get a long way in physics with the "vectors are arrows at a point" model in mind, but in my experience GR is where it runs into a brick wall and you need a more abstract model. Understanding it first would probably help...

The above probably applies to QM as well.
 
  • Like
Likes malawi_glenn
  • #13
Trying2Learn said:
Can anyone suggest (this is for my nephew) a source (hopefully online) that provides a description of a vector that rises above the simple "arrow with a head and a tail" (i.e., direction and magnitude), yet motivates the student to want to learn more about what vectors are (e.g., that they have a basis which can be used to describe all elements, etc.)?
Here's a video which might help. It includes some terminology and concepts which your nephew is probably unfamiliar with - but it's only 7 minutes long. You could take a look and decide if it's suitable.
 
  • #14
Trying2Learn said:
But I am even more interested in the middle world between THAT description you gave and the mathematical one: the one that speaks of a basis, of how vectors add, and of the operations between them.
I don't know that the middle ground you suggest exists. But if he studies linear algebra textbooks, he should be able to find one that explains things to him satisfactorily.
 
  • #15
Trying2Learn said:
[...] Can you provide other examples for a precocious high school student?
An example our teacher provided in high school was the set of (real) Fibonacci sequences. They are defined by the recursion
$$a_{n+2}=a_n+a_{n+1}.$$
Each sequence is uniquely determined by giving the "initial values", ##a_1## and ##a_2##. Let's write ##\vec{a}## for the sequence ##(a_n)##.

It's easy to see that the set of Fibonacci sequences forms a 2D vector space when we define addition and multiplication by a (real) number by
$$\vec{c}=\vec{a}+\vec{b} \Leftrightarrow c_n=a_n+b_n$$
and
$$\vec{c}=\lambda \vec{a} \Leftrightarrow c_n=\lambda a_n.$$
That the vector space is 2D is clear from the fact that you can write any Fibonacci sequence as a linear combination of
$$\vec{e}_1: \quad e_{11}=1, \quad e_{12}=0 \quad \text{and} \quad \vec{e}_2: \quad e_{21}=0, \quad e_{22}=1.$$
Indeed, obviously for any ##\vec{a}##
$$\vec{a}=a_1 \vec{e}_1+a_2 \vec{e}_2.$$
These are all pretty obvious and easy-to-prove properties of Fibonacci sequences.
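Spelling out one of those obvious steps: closure under addition holds because, if ##\vec{a}## and ##\vec{b}## both satisfy the recursion, then so does ##\vec{c}=\vec{a}+\vec{b}##:
$$c_{n+2}=a_{n+2}+b_{n+2}=(a_n+a_{n+1})+(b_n+b_{n+1})=c_n+c_{n+1}.$$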

The question now is whether there's a closed-form solution of the recursion relation, i.e., whether you can give a formula ##a_n=f(n)## for some function ##f##. That's indeed the case. We just need to find two linearly independent Fibonacci sequences that we can give in closed form.

The simplest ansatz is to try a geometric sequence, i.e.,
$$a_n=q^n.$$
We only have to find values of ##q## such that this is a Fibonacci sequence. With some luck we may find two possible values of ##q##, giving two Fibonacci sequences which are both geometric sequences but not proportional to each other, and that's indeed the case: to make ##(q^n)## a Fibonacci sequence we must have
$$q^{n+2}=q^n+q^{n+1} \; \Rightarrow \; q^2=1+q \; \Rightarrow \; q^2-q-1=0.$$
The solutions of this quadratic equation obviously are
$$q_{1,2}=\frac{1}{2} \pm \sqrt{\frac{1}{4}+1}=\frac{1}{2} \left(1\pm \sqrt{5}\right).$$
So indeed the sequences ##\vec{Q}_1: \quad (Q_{1n})=(q_1^n)## and ##\vec{Q}_2: \quad (Q_{2n})=(q_2^n)## are both Fibonacci sequences, which are obviously not proportional to each other and thus are linearly independent vectors.

Each Fibonacci sequence can thus be written as a linear combination of these two sequences, and so you indeed have a closed form. It's a nice exercise to figure out the linear combination for a given sequence ##\vec{a}##. For that you need the matrix ##T_{jk}## defined by
$$\vec{Q}_k=T_{jk} \vec{e}_j.$$
Obviously
$$\vec{Q}_1=q_1 \vec{e}_1 + q_1^2 \vec{e}_2, \quad \vec{Q}_2=q_2 \vec{e}_1 + q_2^2 \vec{e}_2,$$
i.e.,
$$(T_{jk})=\begin{pmatrix} q_1 & q_2 \\ q_1^2 & q_2^2 \end{pmatrix}.$$
Now, using the Einstein summation convention, you find ##\vec{a}=a_j \vec{e}_j=a_k' \vec{Q}_k=a_k' T_{jk} \vec{e}_j##, i.e.,
$$\begin{pmatrix} a_1 \\ a_2 \end{pmatrix}=\hat{T} \begin{pmatrix} a_1' \\ a_2' \end{pmatrix}.$$
So you need the inverse,
$$\begin{pmatrix} a_1' \\ a_2' \end{pmatrix} = \hat{T}^{-1} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix}.$$
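A numerical sketch of the whole construction (my code, assuming NumPy), applied to the standard Fibonacci sequence ##a_1=a_2=1##:

```python
import numpy as np

q1, q2 = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2   # roots of q^2 - q - 1 = 0
T = np.array([[q1, q2], [q1**2, q2**2]])              # basis-change matrix from the post

a = np.array([1.0, 1.0])                              # initial values a_1, a_2
c = np.linalg.solve(T, a)                             # a' = T^{-1} a

def closed_form(n):
    """a_n as a linear combination of the two geometric sequences."""
    return c[0] * q1**n + c[1] * q2**n

# Compare against the recursion a_{n+2} = a_n + a_{n+1}.
seq = [1.0, 1.0]
for _ in range(10):
    seq.append(seq[-2] + seq[-1])
assert all(np.isclose(closed_form(n + 1), seq[n]) for n in range(len(seq)))
print([round(closed_form(n)) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Working this case out symbolically gives ##a_{1,2}'=\pm 1/\sqrt{5}##, i.e., Binet's formula ##a_n=(q_1^n-q_2^n)/\sqrt{5}##.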
 

FAQ: Understanding Vectors: Magnitude & Direction

What is a vector in physics and mathematics?

A vector is a mathematical entity that has both magnitude and direction. In physics and mathematics, vectors are used to represent quantities that have these two characteristics, such as displacement, velocity, and force. They are often depicted as arrows, where the length of the arrow represents the magnitude and the direction of the arrow indicates the direction of the vector.

How do you calculate the magnitude of a vector?

The magnitude of a vector can be calculated using the Pythagorean theorem if the vector components are known. For a vector \( \mathbf{v} = (v_x, v_y) \) in two dimensions, the magnitude \( |\mathbf{v}| \) is given by \( |\mathbf{v}| = \sqrt{v_x^2 + v_y^2} \). In three dimensions, for a vector \( \mathbf{v} = (v_x, v_y, v_z) \), the magnitude is \( |\mathbf{v}| = \sqrt{v_x^2 + v_y^2 + v_z^2} \).
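For example (a 3-4-5 right triangle): the vector \( (3, 4) \) has magnitude \( \sqrt{3^2 + 4^2} = \sqrt{25} = 5 \).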

What is the direction of a vector and how is it determined?

The direction of a vector is the angle that the vector makes with a reference axis, typically the positive x-axis. It can be determined using trigonometric functions. For a vector \( \mathbf{v} = (v_x, v_y) \), the direction angle \( \theta \) can be found using \( \theta = \tan^{-1} \left( \frac{v_y}{v_x} \right) \), taking care to select the correct quadrant when \( v_x < 0 \) (the two-argument arctangent handles this automatically). In three dimensions, the direction is often described using angles with respect to the coordinate axes, known as direction cosines.
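For example, the vector \( (1, 1) \) makes an angle \( \theta = \tan^{-1}(1) = 45^\circ \) with the positive x-axis.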

What are unit vectors and how are they used?

Unit vectors are vectors with a magnitude of one and are used to indicate direction. They are often used to express other vectors in terms of their components along standard axes. For example, in two dimensions, the unit vectors \( \mathbf{i} \) and \( \mathbf{j} \) point in the directions of the x-axis and y-axis, respectively. Any vector \( \mathbf{v} \) can be expressed as \( \mathbf{v} = v_x \mathbf{i} + v_y \mathbf{j} \).
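For example, \( (3, 4) = 3\mathbf{i} + 4\mathbf{j} \); dividing by its magnitude, 5, gives the unit vector \( (3/5, 4/5) \) pointing in the same direction.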

How do you add and subtract vectors?

Vectors are added and subtracted component-wise. For two vectors \( \mathbf{u} = (u_x, u_y) \) and \( \mathbf{v} = (v_x, v_y) \), the sum \( \mathbf{u} + \mathbf{v} \) is \( (u_x + v_x, u_y + v_y) \), and the difference \( \mathbf{u} - \mathbf{v} \) is \( (u_x - v_x, u_y - v_y) \). Geometrically, addition corresponds to placing the vectors tail-to-head, or using the parallelogram rule.
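For example, if \( \mathbf{u} = (1, 2) \) and \( \mathbf{v} = (3, 4) \), then \( \mathbf{u} + \mathbf{v} = (4, 6) \) and \( \mathbf{u} - \mathbf{v} = (-2, -2) \).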
