I'm so angry. You know basic matrix multiplication? Every book I've
looked in - & I've spent all frickin' day on Google Books & Amazon checking
this out - defines matrix multiplication either in the shorthand summation notation
or else as the standard row-column algorithm you're supposed to parrot off.
Every forum I've read defines matrix multiplication as these things -
I mean, it's the definition, who am I to question it? - or else says
'it's because it works, it's just convenient to define it this way',
or else uses some linear transformation explanation that I haven't studied
yet (but will in 1 chapter!), and that linear transformation thing doesn't look
convincing to me from what I understand of it. Basically the only person
who went to the trouble of explaining this properly was Sal of
khanacademy: http://www.khanacademy.org/video/linear-algebra--matrix-product-examples?playlist=Linear Algebra
Did you know about this (I'll explain in a minute)?
Where &/or in what book did you learn about it?
Well, not every book defines it as these things. This book by Lang
(http://books.google.ie/books?id=2w-...resnum=4&ved=0CDIQ6AEwAw#v=onepage&q&f=false)
gives a slightly better explanation in terms of dot products, but it wasn't
satisfying enough - it was enough of a hint at the right way to do this,
but he didn't explain it properly, unfortunately.
Basically I'm partly posting this to find out more about a specific operation
known as the transpose. I'm actually a little confused, because in one
book (http://books.google.ie/books?id=Gv4...resnum=1&ved=0CCwQuwUwAA#v=onepage&q&f=false) the author defines a vector in two ways:
X = (x,y,z)
or
    |x|
X = |y|
    |z|
which are equivalent, but then in the video I linked above Sal treats
the column form
    |x|
X = |y|
    |z|
as the normal one and calls X = (x,y,z) the transpose, Xᵀ = (x,y,z).
I remember from my old linear algebra notes (http://tutorial.math.lamar.edu/Classes/LinAlg/LinAlg.aspx)
- which I hated & quit because memorizing algorithms and faking my way through proofs made no sense -
that the transpose is somehow different and is used in inverting a matrix,
I think. But are a vector and its transpose the same thing, or what?
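(To pin down what I mean by row vs. column, here's a little numpy sketch - entirely my own illustration with made-up numbers, not from any of the books above:)

[CODE]
import numpy as np

# The same three numbers stored as a 3x1 column and as its 1x3 transpose.
x_col = np.array([[1], [2], [3]])   # the |x| |y| |z| column form, shape (3, 1)
x_row = x_col.T                     # the (x, y, z) row form, shape (1, 3)

print(x_col.shape, x_row.shape)     # (3, 1) (1, 3)

# The dot product x . x falls out as (row) times (column):
print(x_row @ x_col)                # [[14]] = 1*1 + 2*2 + 3*3
[/CODE]

So the vector and its transpose carry the same three numbers; the transpose just flips how they're laid out, which is exactly what makes the row-times-column pairing below work.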
Anyway, this idea of a transpose makes the whole concept of
matrix multiplication 100% completely, lovingly, passionately, painfully,
hatefully, relievingly intelligible. I think the picture makes it clear:
http://img155.imageshack.us/img155/2433/blaea.jpg
In part 3 you just take the transpose of each row of the 2x3 matrix &
dot it with the 3x1 matrix. I wrote in part 4 as well because
in that book, around page 4, he defines both forms of the dot product as
being the same, which they are.
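(Here's the picture's method as a numpy sketch - again just my own check with made-up numbers:)

[CODE]
import numpy as np

# A 2x3 matrix times a 3x1 matrix, as in the picture.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7], [8], [9]])

# Entry (i, j) of AB is the dot product of row i of A with column j of B.
C = np.zeros((2, 1))
for i in range(2):
    for j in range(1):
        C[i, j] = np.dot(A[i, :], B[:, j])

print(C)       # [[ 50.] [122.]]
print(A @ B)   # numpy's built-in product agrees: [[ 50] [122]]
[/CODE]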
It seems like a trick the way I've decomposed the matrix, though -
I mean, I could use these techniques to "multiply" matrices regardless
of their dimensions.
I've just written down a method using them to multiply two matrices
both of dimensions 2x3, i.e. (2x3)•(2x3), and gotten a logical-looking answer.
If we copy the exact algorithm I used in the picture, then multiplying
two matrices of equal size is indeed meaningless, since you'd be taking
the dot product of two differently sized vectors - but if I play with the
techniques used to decompose a big matrix, I can swing it so that I get
a logical-looking answer:
[1,2,3][a,b,c] = [1,2,3][a] [1,2,3][b] [1,2,3][c]
[4,5,6][d,e,g] = [4,5,6][d] [4,5,6][e] [4,5,6][g]
(This is the same as in the picture; then, instead of transposing
straight away, I just use more of this decomposition, only this time
decomposing the left matrix instead of the right one):
[1,2,3][a] [1,2,3][b] [1,2,3][c] = [1][a] [2][a] [1][b] [2][b] [3][b] ...
[4,5,6][d] [4,5,6][e] [4,5,6][g] = [4][d] [5][d] [6][d] [4][e] [5][e] ...
I can view this as a dot product:
[1]•[a] [2]•[a] [1]•[b] [2]•[b] [3]•[b] ...
[4]•[d] [5]•[d] [6]•[d] [4]•[e] [5]•[e] ...
(Obviously I just wrote 2 •'s @ each row vector to keep it neat ;))
and I end up with some ridiculously crazy matrix that's essentially
meaningless - but, still following the "rules", I came up with something.
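(Concretely, with numbers instead of letters - my own sketch of what goes wrong, nothing official:)

[CODE]
import numpy as np

# Two 2x3 matrices: the standard product is undefined for these shapes.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[10, 20, 30],
              [40, 50, 60]])

try:
    A @ B
except ValueError as e:
    print(e)   # numpy refuses: the inner dimensions (3 and 2) don't match

# My "crazy matrix" is just every pairwise product within matching rows:
crazy = np.array([[A[i, k] * B[i, j] for j in range(3) for k in range(3)]
                  for i in range(2)])
print(crazy)   # 2 rows of 9 numbers - legal arithmetic, but it doesn't
               # correspond to composing two maps, so no operation is defined this way
[/CODE]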
This is important: my little knowledge of Hamilton is that he just defined
ijk = -1 because it worked. Maybe this is true, and from what I know
this kind of algebra is useful in special relativity, but I think it's
literally a cheat - there's no explanation other than "it works".
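(Actually, from the little I've read, Hamilton's full definition was i² = j² = k² = ijk = -1, and the individual products then follow from it rather than being separate decrees. My own quick check, so correct me if it's wrong:

ijk = -1
=> i(ijk) = i(-1)
=> (i²)jk = -i
=> (-1)jk = -i
=> jk = i

so at least the multiplication table isn't conjured entry by entry once you grant that one equation.)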
Hopefully I'm wrong! But, with my crazy matrix up here, why is it wrong?
Is it really just that "it works" when we do it the way described in
the picture, but doesn't work (i.e. doesn't describe physical reality)
when I do it the way I did here? What does this say about mathematics
being independent from reality, when we ignore things like my
ridiculous matrix and focus on the ones that describe reality?
I know it's stupid but I don't know why :-p
I'm also worried because just magically defining these things seems to
be common. Looking at differential forms, it seems - & this is because I
haven't studied them properly - that you literally invoke this [B]witchcraft[/B]
when doing algebra with the dx's and dy's.
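(For what it's worth, the one rule I keep seeing quoted for forms is dx∧dy = -dy∧dx, which forces dx∧dx = 0. If that really is the only "spell", then - my own scratch-work, assuming just that rule:

(a dx + b dy)∧(c dx + e dy)
= ac dx∧dx + ae dx∧dy + bc dy∧dx + be dy∧dy
= ae dx∧dy - bc dx∧dy
= (ae - bc) dx∧dy

and ae - bc is exactly the determinant of the 2x2 coefficient matrix, which looks suspiciously like the same determinant that shows up in the cross product.)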
I seriously hope that there are reasons behind all of this. Thinking
about the thread I made ([URL]https://www.physicsforums.com/showthread.php?t=423992[/URL]), there was a perfect reason why
things like [B]i[/B]×[B]j[/B] = [B]k[/B] & [B]j[/B]×[B]i[/B] = -[B]k[/B] make sense, but here I'm worried.
In the cross product example the use of a determinant - an abuse of
notation - is a clear sign we're invoking magic spells to get the right
answers, but with matrix multiplication I haven't even located the
source of the sorcery yet & it's driving me crazy :blushing:
Honestly, tell me now: is there more of this to expect with
differential forms, or will I get a solid answer? :-p
[B]TL;DR[/B] - The method in the picture of multiplying matrices seems to me
to be the most logical explanation of matrix multiplication, but why is
it done that particular way & not the way I described in this part of the post:
[1,2,3][a,b,c] = [1,2,3][a] [...
Also, with differential forms, when you multiply differentials - the
dx's and dy's etc. - you're invoking magic sorcery, adding minuses yada
yada yada, by definition. How come? Is there a beautiful reason for
all of this, like the one described in the thread ([URL]https://www.physicsforums.com/showthread.php?t=423992[/URL])? Oh, and what's
the deal with transposes? Transposing vectors is the reason I can
use the method in the picture, but I mean I could stupidly take the matrix
[1,2,3]
[4,5,6]
as being either:
(1,2,3) & (4,5,6) transposed from their column vector forms, or
(1,4) & (2,5) & (3,6) as the vectors - it's so weird...
Also, I could have taken part 2 of the picture differently, multiplying
the Y matrix by 3 1x2 X vectors - again, it's so weird... :cry:
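(One consolation I found: the decompositions that are actually allowed all land on the same answer. A quick numpy check with my own made-up numbers:

[CODE]
import numpy as np

X = np.array([[1, 2, 3],
              [4, 5, 6]])    # 2x3
Y = np.array([[7, 10],
              [8, 11],
              [9, 12]])      # 3x2

# View 1: rows of X dotted with columns of Y (the picture's method).
rows_view = np.array([[np.dot(X[i, :], Y[:, j]) for j in range(2)]
                      for i in range(2)])

# View 2: X applied to each column of Y separately, results glued side by side.
cols_view = np.hstack([X @ Y[:, [j]] for j in range(2)])

print(np.array_equal(rows_view, X @ Y))   # True
print(np.array_equal(cols_view, X @ Y))   # True
[/CODE]

so however you slice the matrices up, as long as the inner dimensions match, every legal decomposition gives the same product.)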
[SIZE=1][I]/pent_up_rant...[/I][/SIZE]