Vector Space Question: Basis Vectors and Relatedness Explained

In summary, the thread discusses how to create a transformation matrix that aligns a set of vectors across two different spaces: the vectors are mapped from one space to the other, and the resulting matrix serves as the transformation.
  • #1
moyo
Could we have two vector spaces, each with its own set of basis vectors, but with the basis vectors related in the following way? A particular set of vectors in the first vector space may exist "all over the place", but when you represent the same information in the second vector space, the discrete vectors in the first space can still be made out in the second space, but line up end to end to form one composite vector in it.
 
  • #2
It is not easy for me to understand exactly what you are asking. In particular:

moyo said:
Could we have two vector spaces, each with its own set of basis vectors, but with the basis vectors related in the following way? A particular set of vectors in the first vector space may exist "all over the place"

What does "all over the place" mean here?

moyo said:
but when you represent the same information in the second vector space,

What do you mean by "represent"?

moyo said:
the discrete vectors in the first space can still be made out in the second space

What do you mean by "can be made out"? Do you mean that the members of the first basis are linear combinations of members of the second basis?

moyo said:
but line up end to end to form one composite vector in it.

Are you talking here specifically about concatenating $n$-vectors to form a new vector?

It could help if you could provide an example of what you are thinking about. You could start with real or complex spaces $V$ and $W$ spanned by bases $\{v_1,\ldots,v_m\}$ and $\{w_1,\ldots,w_n\}$ and try to explain your question using symbols and well-defined terms.
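For instance, if by "represent" you mean expressing the same vector in two different bases, that is just a change of basis, and it can be computed concretely. A minimal numpy sketch (the two bases below are made up purely for illustration):

```python
import numpy as np

# Two bases of R^2, written as the columns of B1 and B2 (made-up examples).
B1 = np.array([[1.0, 1.0],
               [0.0, 1.0]])   # basis {v1, v2}
B2 = np.array([[2.0, 0.0],
               [1.0, 1.0]])   # basis {w1, w2}

# Coordinates of a vector x with respect to B1.
c1 = np.array([3.0, -1.0])

# The actual vector, in standard coordinates.
x = B1 @ c1

# Its coordinates with respect to B2: solve B2 @ c2 = x.
c2 = np.linalg.solve(B2, x)

print(c1, c2)  # two coordinate descriptions of the same vector x
```

Here `c1` and `c2` are two different coordinate lists for one and the same vector, which may or may not be the kind of relationship you have in mind.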
 
  • #3
Krylov said:
It is not easy for me to understand exactly what you are asking. In particular:
What does "all over the place" mean here?
What do you mean by "represent"?
What do you mean by "can be made out"? Do you mean that the members of the first basis are linear combinations of members of the second basis?
Are you talking here specifically about concatenating $n$-vectors to form a new vector?

It could help if you could provide an example of what you are thinking about. You could start with real or complex spaces $V$ and $W$ spanned by bases $\{v_1,\ldots,v_m\}$ and $\{w_1,\ldots,w_n\}$ and try to explain your question using symbols and well-defined terms.

Hi , sorry for being unclear.

My question with an example: take the Euclidean vector space $\mathbb{R}^2$ with the following two functions, $f(x) = x/2$ and $g(x) = 2x + 1$. These two functions occupy different areas spatially in $\mathbb{R}^2$. Now say we were to manipulate $\mathbb{R}^2$'s basis vectors, somehow, to create new basis vectors for another space $M$. Could we manipulate them in such a way that the two functions mentioned above coincide, i.e. occupy the same points within $M$? And what sort of manipulation would you need to perform? I take it that it would still be some sort of transformation matrix, but my question is essentially: could you represent any function as a transformation matrix, which seems to be necessary for that to happen? I.e. could the transformation matrix exist in its own space?
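To make the example concrete: a map like $x \mapsto 2x + 1$ is affine rather than linear, so it is not a plain $2 \times 2$ matrix, but it does become a single matrix in homogeneous coordinates $(x, y, 1)$. A minimal numpy sketch (the particular matrix below is one made-up choice that sends the graph of $f$ onto the graph of $g$):

```python
import numpy as np

# Affine maps of the plane move the origin, so they are not linear, but
# they become single matrices in homogeneous coordinates (x, y, 1).
# This particular matrix sends the graph of f(x) = x/2 onto the graph of
# g(x) = 2x + 1, via (t, t/2) -> (t, 2t + 1).
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 4.0, 1.0],
              [0.0, 0.0, 1.0]])

# Sample some points on the graph of f, in homogeneous coordinates.
t = np.linspace(-2.0, 2.0, 5)
pts_f = np.stack([t, t / 2, np.ones_like(t)])

# Apply the map and check that the images lie on the graph of g.
pts_g = M @ pts_f
assert np.allclose(pts_g[1], 2 * pts_g[0] + 1)
print(pts_g[:2].T)  # (x, 2x + 1) pairs
```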
 
  • #4
moyo said:
Hi , sorry for being unclear.

My question with an example: take the Euclidean vector space $\mathbb{R}^2$ with the following two functions, $f(x) = x/2$ and $g(x) = 2x + 1$. These two functions occupy different areas spatially in $\mathbb{R}^2$. Now say we were to manipulate $\mathbb{R}^2$'s basis vectors, somehow, to create new basis vectors for another space $M$. Could we manipulate them in such a way that the two functions mentioned above coincide, i.e. occupy the same points within $M$? And what sort of manipulation would you need to perform? I take it that it would still be some sort of transformation matrix, but my question is essentially: could you represent any function as a transformation matrix, which seems to be necessary for that to happen? I.e. could the transformation matrix exist in its own space?

I suppose we could map the two functions to each other and use that map as the basis for the transformation matrix. This would also scale to $n$ functions. Then, for different alignments of those functions in $M$, we could opt not to use the basis vectors of the transformation matrix's vector space as such, but composite vectors instead.
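If "composite vectors" means sums of the original basis vectors, one can at least check whether such composites still form a basis: they do exactly when the matrix with the composites as columns has nonzero determinant. A minimal numpy sketch (the composites below are made-up examples):

```python
import numpy as np

# "Composite" vectors built from the standard basis of R^3 (made-up choices).
e1, e2, e3 = np.eye(3)
composites = np.column_stack([e1 + e2, e2 + e3, e1 + e3])

# The composites form a basis exactly when this determinant is nonzero.
print(np.linalg.det(composites))  # 2.0, nonzero, so still a basis
```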
 
  • #5
Sorry if I sound naive, or am being naive...

I have the following scenario..

I have a number of vectors that when added together give one composite vector. The first vectors represent aspects of a premise while the composite vector represents the conclusion. I have arranged it that way.

Now we have a set of sets of vectors, or hypermatrices, consisting of "the aspects of the premise and the sum of all of them, i.e. the conclusion" vectors, concatenated.

There are many of these matrices.

Now all these matrices are representations of yet another aspect. I.e. they are all proofs. They share that equivalence.

So in some other vector space they occupy the same point, or are the same vector.

What would be an appropriate set of basis vectors for that last space, as a function of the basis vectors we started off with? And we have another kind of relationship between the first vector space and the second one, before we reach the third. How can we express it... the one where we concatenate the premise vectors to the conclusion vectors to form another set of vectors in another space.

This is a real problem I am trying to solve. I have omitted information that is irrelevant.
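Concretely, the construction looks something like this (a numpy sketch; the premise vectors are made-up placeholders, just to fix notation):

```python
import numpy as np

# Made-up "premise" vectors; the "conclusion" is their sum.
premises = np.array([[1.0, 0.0, 2.0],
                     [0.0, 3.0, 1.0]])   # one premise per row
conclusion = premises.sum(axis=0)         # the composite vector (1, 3, 3)

# One "proof": the premises and the conclusion concatenated into a
# single flat vector living in a higher-dimensional space.
proof = np.concatenate([premises.ravel(), conclusion])
print(proof.shape)  # (9,) -- two 3-vectors plus their 3-vector sum
```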
 
  • #6
I went back and tried to study a bit more... but with YouTube videos, so forgive me if I am still naive...

I have a few questions that i would appreciate answers for

Suppose I have a vector space of the following form. There is a multi-dimensional space that these vectors live in, and a particular matrix formed from some of the vectors has a determinant of zero.

Now, are we able to apply curvature to the vector space in order to increase the value of the determinant from zero to something positive? And how would we do this?
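To make the question concrete, here is what a zero determinant looks like numerically, and how changing the vectors themselves (which may not be what "curvature" would mean) changes it. A minimal numpy sketch:

```python
import numpy as np

# Two linearly dependent columns: the determinant is zero.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(A))   # 0.0

# Perturbing one entry breaks the dependence, so the determinant
# becomes nonzero. Note this changes the vectors, not the space.
A[1, 1] += 0.5
print(np.linalg.det(A))   # 0.5
```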
 
  • #7
Vector spaces do not have "curvature". They have vectors, scalars, and the operations of adding two vectors and multiplying a scalar by a vector.

If a matrix, corresponding to some linear transformation, has determinant 0, then the linear transformation has a non-trivial kernel. That is, there exists some subspace such that every vector in that subspace is mapped to the 0 vector. What you can do is restrict the linear transformation to the orthogonal complement of the kernel. That restricted linear transformation has trivial kernel, so the matrix corresponding to it (as a map onto the image) has non-zero determinant.
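As a concrete illustration, the kernel and its orthogonal complement can be computed numerically from the SVD. A minimal numpy sketch with a made-up singular matrix:

```python
import numpy as np

# A singular matrix: it kills the direction (1, -1).
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

# Right-singular vectors with (numerically) zero singular value span
# the kernel; the remaining ones span its orthogonal complement.
U, s, Vt = np.linalg.svd(A)
kernel = Vt[s < 1e-12]
complement = Vt[s >= 1e-12]

# A is injective on the orthogonal complement of its kernel.
for v in complement:
    print(np.linalg.norm(A @ v))  # nonzero for every complement direction
for v in kernel:
    print(np.linalg.norm(A @ v))  # ~0: these directions are collapsed
```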
 
  • #8
Country Boy said:
Vector spaces do not have "curvature". They have vectors, scalars, and the operations of adding two vectors and multiplying a scalar by a vector.

If a matrix, corresponding to some linear transformation, has determinant 0, then the linear transformation has a non-trivial kernel. That is, there exists some subspace such that every vector in that subspace is mapped to the 0 vector. What you can do is restrict the linear transformation to the orthogonal complement of the kernel. That restricted linear transformation has trivial kernel, so the matrix corresponding to it (as a map onto the image) has non-zero determinant.

I have these videos that seem to say there are such concepts:

https://www.youtube.com/watch?v=NlcvU67YWpQ&list=PLJ8OrXpbC-BNHmhFI4i_EK3vUmuMgZ6wb&index=54

https://www.youtube.com/watch?v=cq3Yf8OGZpo&list=PLJ8OrXpbC-BNHmhFI4i_EK3vUmuMgZ6wb&index=70
 
  • #9
I have the following problem. I need to track information. Once we complete a phrase... the representation must resolve to zero. Then another phrase is initiated by adjusting the representation until it is no longer zero, then doing something else to resolve it to zero once more.

In the system I had proposed, a determinant of zero shows closure of a phrase; then the vectors are transformed, by curling the space, so they don't have a determinant of zero anymore... then it is resolved by adding other vectors that will make the system resolve to a determinant of zero... I may just be in over my head :)
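Roughly, the bookkeeping I have in mind is the following (a numpy sketch with placeholder vectors; "curling the space" is replaced here by simply changing a column):

```python
import numpy as np

# Phrase closed: the column vectors are dependent, so the determinant is zero.
A = np.column_stack([[1.0, 2.0], [2.0, 4.0]])
print(np.linalg.det(A))   # 0.0 -> phrase resolved

# Open a new phrase: change a vector so the determinant is no longer zero.
A[:, 1] = [2.0, 5.0]
print(np.linalg.det(A))   # 1.0 -> phrase in progress

# Resolve again: replace a column with a multiple of the other,
# restoring the dependence and a zero determinant.
A[:, 1] = 3.0 * A[:, 0]
print(np.linalg.det(A))   # 0.0 -> phrase resolved again
```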
 
  • #10
Those videos are NOT talking about "vector spaces". They are talking about vector fields on surfaces or in space. They are completely different concepts. "Vector spaces" are dealt with in Linear Algebra. "Vector fields" are a topic in Differential Geometry.
 
  • #11
moyo said:
I have the following problem. I need to track information. Once we complete a phrase... the representation must resolve to zero. Then another phrase is initiated by adjusting the representation until it is no longer zero, then doing something else to resolve it to zero once more.

In the system I had proposed, a determinant of zero shows closure of a phrase; then the vectors are transformed, by curling the space, so they don't have a determinant of zero anymore... then it is resolved by adding other vectors that will make the system resolve to a determinant of zero... I may just be in over my head :)

I am trying to frame this problem that I have. Would you suggest that it IS possible or not, using vector fields from differential geometry? I know I have to do the research on my own, but a little direction would be nice.
 

FAQ: Vector Space Question: Basis Vectors and Relatedness Explained

What is a vector space?

A vector space is a mathematical structure that consists of a set of objects (vectors) that can be added together and multiplied by scalars (usually real or complex numbers). It follows certain axioms, such as closure under addition and scalar multiplication, and the existence of a zero vector and additive inverses.

What are the basic operations in a vector space?

The basic operations in a vector space include vector addition, scalar multiplication, and vector subtraction. Vector addition involves adding two vectors together to produce a new vector, while scalar multiplication involves multiplying a vector by a scalar to produce a new vector. Vector subtraction is simply adding the additive inverse of a vector to another vector.
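For numeric vectors, these operations look as follows (a small numpy illustration):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

print(u + v)     # vector addition
print(2.5 * u)   # scalar multiplication
print(u - v)     # subtraction: u + (-1) * v
```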

How is a vector space different from a Euclidean space?

A Euclidean space is a specific type of vector space that has a finite number of dimensions and follows the rules of Euclidean geometry. However, a vector space can have an infinite number of dimensions and does not necessarily follow the rules of Euclidean geometry. Additionally, a Euclidean space is typically used to represent physical space, while a vector space is an abstract mathematical concept.

What is the importance of vector spaces in mathematics?

Vector spaces are a fundamental concept in linear algebra and are used in many areas of mathematics, including geometry, physics, and computer science. They provide a powerful tool for representing and manipulating mathematical objects, and can be used to solve a wide range of problems in various fields.

How are vector spaces applied in real-world situations?

Vector spaces have many real-world applications, such as in physics (e.g. representing forces and velocities), engineering (e.g. analyzing systems of linear equations), and computer graphics (e.g. representing 3D objects). They are also used in machine learning and data analysis, where vectors can represent features of data points and vector operations can be used to perform calculations and make predictions.
