What Does a 2x2 Matrix Represent in Vector Spaces?

AI Thread Summary
A 2x2 matrix represents a linear transformation on a vector space, expressed with respect to a chosen coordinate system. It can be viewed as a combination of two vectors, each forming a row or a column of the matrix. The entries can be interpreted as the coefficients of a system of linear equations, but a matrix does not need to originate from such a system. Matrices provide a way to operate on vectors, producing transformations such as rotation or scaling. Ultimately, the structure of a 2x2 matrix encodes how two-dimensional vectors are mapped to one another.
RyozKidz
A given 2x2 matrix is $$\begin{pmatrix} A & B \\ C & D \end{pmatrix}$$

Can anyone please tell me what this matrix represents?
Or is it the coordinates of a vector, like a row vector or a column vector?
And where does a 2x2 matrix come from if there is no linear equation behind it?
 
Well, the matrix is not a vector, as vectors have only one row or one column, which this does not. You really can't say any more about this matrix without some information about what the entries are. The capital letters could imply the entries are matrices themselves.

Really, a matrix does not need to "come from" a system of linear equations, but if you wish you could interpret it as the coefficient matrix of the system of equations: Ax+By = X, Cx+Dy = Y.
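
Written out, that interpretation is just the matrix form of the same system:

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} Ax + By \\ Cx + Dy \end{pmatrix} = \begin{pmatrix} X \\ Y \end{pmatrix}$$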
 
RyozKidz said:
A given 2x2 matrix is $$\begin{pmatrix} A & B \\ C & D \end{pmatrix}$$

Can anyone please tell me what this matrix represents?
Or is it the coordinates of a vector, like a row vector or a column vector?
And where does a 2x2 matrix come from if there is no linear equation behind it?

Matrices really come from the idea of vector spaces. A matrix is a "representation of a linear transformation on a vector space, in a given system of coordinates". That's a bit of a mouthful, so in simplified terms:

Imagine your ordinary 2D plane with a cartesian system of co-ordinates. Any point (x,y) can be represented by a position vector from the origin to that point. So you can think of a plane as a set of points or a set of position vectors - let's do the latter.

A transformation of a vector space is just a rule that maps any vector to another vector in the space. So you could imagine a transformation that rotates all vectors clockwise by 37 degrees. Or one that adds (3,5) to each vector. Or one that doubles each vector. And so on.

Of all possible transformations we could consider, a linear transformation is one with a couple of special properties. Let's call the transformation A(), the vectors a, b, c and so on, and use m, n, ... for real numbers (I'm too lazy to use LaTeX). Then a linear transformation is one for which A(a + b) = A(a) + A(b) for all vectors, and A(ma) = mA(a) for all vectors and scalars. In words, 'the transform of a sum of vectors is the sum of the transforms of each vector' and 'the transform of a scalar multiple of a vector is the scalar multiple of the transform of the vector'.
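
As a small numerical sketch of those two checks (the function and variable names here are just for illustration), take the clockwise 37-degree rotation and the add-(3,5) shift mentioned above; the rotation passes both tests, while the shift fails:

import numpy as np

theta = np.radians(37)
# A clockwise rotation by 37 degrees -- a linear transformation.
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

def rotate(v):
    return R @ v

def shift(v):
    # Adds (3, 5) to every vector -- not linear, as the check below shows.
    return v + np.array([3.0, 5.0])

a = np.array([1.0, 2.0])
b = np.array([-4.0, 0.5])
m = 2.0

print(np.allclose(rotate(a + b), rotate(a) + rotate(b)))  # True
print(np.allclose(rotate(m * a), m * rotate(a)))          # True
print(np.allclose(shift(a + b), shift(a) + shift(b)))     # False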

Now here's what a matrix is: it's a way of representing a particular linear transform in a particular set of coordinates. Suppose we know that a linear transform takes i to mi + nj, and takes j to pi + qj (where i and j are the standard unit vectors along the x and y axes). Then we can write that transform as a matrix:

(m p)
(n q)

And multiplying any vector by that matrix (on the left, of course) now gives the transform of that vector. You can see this by doing the matrix multiplication explicitly, writing a general vector as, say, ri + sj, and using the linearity properties mentioned above.
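
Spelling that multiplication out for a general vector ri + sj (written as a column), with the matrix whose columns are the images of i and j:

$$\begin{pmatrix} m & p \\ n & q \end{pmatrix}\begin{pmatrix} r \\ s \end{pmatrix} = \begin{pmatrix} mr + ps \\ nr + qs \end{pmatrix} = r\begin{pmatrix} m \\ n \end{pmatrix} + s\begin{pmatrix} p \\ q \end{pmatrix} = r\,A(i) + s\,A(j) = A(ri + sj)$$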

Often matrices are first introduced to students as just 'tables of numbers'. This is all fine if you just want to crunch through numbers, and the definitions of matrix addition and multiplication by a scalar are 'intuitively obvious'. But then they teach you about matrix multiplication, and the obvious response is 'why on Earth is it such a complicated, apparently made-up rule'? And the answer is to do with what matrices really are: the definition of matrix multiplication comes out of considering them as transformations of vectors. It also helps when they introduce you to determinants, identity and inverse matrices.
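
One quick way to see that last point (again just a sketch, with an arbitrarily chosen rotation and scaling): applying two linear transforms one after the other corresponds to multiplying their matrices, and the 'complicated' multiplication rule is exactly what makes the single combined matrix agree with doing the steps separately.

import numpy as np

theta = np.radians(30)
# A counter-clockwise rotation by 30 degrees...
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# ...and a scaling that doubles x-components and triples y-components.
S = np.array([[2.0, 0.0],
              [0.0, 3.0]])

v = np.array([1.0, 2.0])

one_step_at_a_time = S @ (R @ v)   # rotate first, then scale
combined = S @ R                   # single matrix for "rotate, then scale"
print(np.allclose(one_step_at_a_time, combined @ v))  # True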
 
Nancarrow said:
Now here's what a matrix is: it's a way of representing a particular linear transform in a particular set of coordinates. Suppose we know that a linear transform takes i to mi + nj, and takes j to pi + qj (where i and j are the standard unit vectors along the x and y axes). Then we can write that transform as a matrix:

(m p)
(n q)


So a 2x2 matrix is formed from two vectors, each becoming a row or a column?
For example:

vector one: ai + bj
vector two: ci + dj

Conclusion: vectors one and two combine to become $$\begin{pmatrix} a & c \\ b & d \end{pmatrix}$$ or $$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$$
 