Linear Algebra Texts Recommended for Rigorous Approach

  • Thread starter JasonRox
In summary, the text we are using sucks in my opinion. I would like a more rigorous approach to linear algebra. I recommend "Linear Algebra Done Right" by Sheldon Axler.
  • #36
I've never seen these details worked out, nor have I any desire to see them written out.

I am perfectly willing to accept that if one defines the det of a matrix by some cofactor expansion method, then one can prove these elementary facts, but I do not see 0rthodontist proving any of them. All I see is a reference to an equally horrible-to-prove fact. For instance, from the last post we now have to prove that row operations cannot turn nonzero determinants into zero determinants.

All that being said, even if we demonstrate that these pulled-from-nowhere cofactor definitions do satisfy these results, we still have not explained why on Earth this has any bearing on the idea of volume change.

If we adopt the exterior algebra point of view, all of this is absolutely trivial, and it is trivial to demonstrate that the determinant satisfies

[tex] {\rm det}(a_{ij})=\sum_{\sigma \in S_n}{\rm sign}(\sigma)\,a_{1\sigma(1)}\cdots a_{n\sigma(n)}[/tex]
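For anyone who wants to see that formula concretely, here is a minimal sketch in Python (my own illustration; the names sign and leibniz_det are just ones I chose):

[code]
from itertools import permutations

def sign(perm):
    # Sign of a permutation of (0, ..., n-1), computed by counting inversions.
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def leibniz_det(a):
    # det(a) = sum over permutations s in S_n of
    #          sign(s) * a[0][s(0)] * ... * a[n-1][s(n-1)]
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)
        for i in range(n):
            term *= a[i][perm[i]]
        total += term
    return total

print(leibniz_det([[1, 2], [3, 4]]))  # prints -2
[/code]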
 
  • #37
There are three types of elementary row operations:

1. Switch two rows
2. Multiply a row by a constant
3. Add one row to another

Each such operation has a very simple matrix representation (the matrices look "more or less" like the identity matrix), and it is easy to show that each such matrix is invertible. So row operations cannot turn nonzero determinants into zero determinants.
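To make that concrete, here is a quick sketch (mine, using NumPy, with n = 3 as an example) of the three kinds of elementary matrix, checking that each one is invertible:

[code]
import numpy as np

n = 3
I = np.eye(n)

# 1. Swap rows 0 and 1: the swap matrix is its own inverse.
swap = I[[1, 0, 2]]

# 2. Multiply row 0 by a nonzero constant c: the inverse scales by 1/c.
c = 5.0
scale = I.copy()
scale[0, 0] = c

# 3. Add row 1 to row 0: the inverse subtracts row 1 from row 0.
add = I.copy()
add[0, 1] = 1.0

for E in (swap, scale, add):
    assert np.allclose(E @ np.linalg.inv(E), I)
[/code]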

Anyways, det(A)det(B) = det(AB) is a result from a first course in linear algebra; in fact, I think I might have done it in high school algebra. Would it make sense to teach exterior algebras in high school?
 
  • #38
Would it make sense to teach nxn matrices for all n, and proper proofs, in high school? No: you offer a justification, wave your hands, and claim it is OK, like all maths at that level. There is a difference between using a result, justifying a result, and proving a result. We all use real numbers in high school, but who was taught that they are the unique complete totally ordered field? The question is about *proving* that determinants behave properly, not accepting and using the fact that they do. Besides, I still note that 0rthodontist has not given his definition of the determinant. Is my belief that it is the cofactor one correct? I also fail to see why your justification that multiplying by an invertible matrix doesn't make nonzero determinants zero is valid *without assuming that determinants behave properly*.
 
  • #39
matt grime said:
I also fail to see why your justification that multiplying by an invertible matrix doesn't make nonzero determinants zero is valid *without assuming that determinants behave properly*.
Oh, you're right, it doesn't. When I wrote that, I forgot that he was trying to prove det(A)det(B) = det(AB) in the first place. The rest of my post was written after I saw what he was trying to prove, but by that point I wasn't thinking about the elementary-matrix argument I had just written.
 
  • #40
Well, basically the only unproven points in the proof I gave are the row reduction properties of determinants. Everything else was clear.

Yes, the cofactor definition is the one I am using.

AKG's argument does not depend on determinants. If A and B are invertible then AB is invertible, because AB(B^-1)(A^-1) = I.

Anyway, can you prove that the exterior algebra view is equivalent to the cofactor view in a few words, or would that also take a page or two? (this is a rhetorical question since I likely would not understand your proof)
 
  • #41
ridiculous comment #509:

my favorite treatment of determinants is to prove first the formula for how an exterior power commutes with direct sums:

i.e. the rth wedge power of a direct sum of two modules is isomorphic to the direct sum of the tensor products of all pairs of lower wedge powers (s,t) of the two modules such that s+t = r.

this implies by induction the existence and uniqueness of determinants subject to the usual alternating axioms, a formula for them, their multiplicativity property, and the computation of the exterior powers of all finite free modules.
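in symbols (my transcription of the statement above, not notation copied from the notes):

[tex]\Lambda^r(M \oplus N) \cong \bigoplus_{s+t=r} \Lambda^s M \otimes \Lambda^t N[/tex]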

this treatment is contained in the notes for math 845-3, page 56, on my webpage, for free, for persons not sufficiently challenged by their own linear algebra courses.

these consequences of the theorem are proved there in 2 pages, and then the theorem itself is proved in 3 further pages.
 
  • #42
0rthodontist said:
Well, basically the only unproven points in the proof I gave are the row reduction properties of determinants. Everything else was clear.

but those are the only things you need to prove.

AKG's argument does not depend on determinants. If A and B are invertible then AB is invertible, because AB(B^-1)(A^-1) = I.

Read AKG's own reply to my post. Or consider the following: the position stated is that, given X and some invertible operation taking X to Y, X is not zero iff Y is not zero. Now take X = 1, let the operation be adding -1, and see what happens. If you don't prove that this invertible operation actually behaves properly with respect to the property of 'being zero', then you can't use it as a proof.

Anyway, can you prove that the exterior algebra view is equivalent to the cofactor view in a few words, or would that also take a page or two? (this is a rhetorical question since I likely would not understand your proof)

Look at the formula I gave for the determinant: it is a sum of degree-n monomials in the entries of the matrix, and S_n acts by changing signs (this is equivalent to swapping rows/cols), hence the two are the same quantity (look at mathwonk's uniqueness property in his notes).

The properties of the S_n action also tell you that elementary row ops do what you think, that det is a multiplicative homomorphism, and that the definition of volume means det corresponds to the scale factor.
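If an empirical check helps: here is a little Python sketch (mine, reusing the leibniz_det function from the sketch in post #36 above) confirming that the permutation-sum formula and the recursive cofactor definition agree on random small matrices:

[code]
import random

def cofactor_det(a):
    # Recursive cofactor (Laplace) expansion along the first row.
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * cofactor_det(minor)
    return total

# Assumes leibniz_det from the sketch in post #36 is in scope.
for _ in range(100):
    a = [[random.randint(-5, 5) for _ in range(4)] for _ in range(4)]
    assert cofactor_det(a) == leibniz_det(a)
[/code]

Of course, a check is not a proof; the identity in general is what the S_n argument above establishes.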
 
  • #43
matt grime said:
but those are the only things you need to prove.
I just said that.

Anyway, Lay proves it. I'll give his proof here when I have a little time.
Read AKG's own reply to my post. Or consider the following: the position stated is that, given X and some invertible operation taking X to Y, X is not zero iff Y is not zero. Now take X = 1, let the operation be adding -1, and see what happens. If you don't prove that this invertible operation actually behaves properly with respect to the property of 'being zero', then you can't use it as a proof.
Noninvertible matrices have zero determinants... well, perhaps that fact does depend on determinants. Anyway, I can use the row operation properties of determinants because those are not what I'm trying to prove. In that case, saying that row operations do not take nonzero determinants to zero determinants is just a matter of looking at the constant by which they multiply the determinant, which is what I originally intended.
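For the record, the constants in question are the standard ones (taking the row operation properties as given):

[tex]\det A \mapsto -\det A, \qquad \det A \mapsto c\,\det A \ \ (c \neq 0), \qquad \det A \mapsto \det A[/tex]

for swapping two rows, scaling a row by c, and adding a multiple of one row to another, respectively. Since none of these factors is zero, no row operation can turn a nonzero determinant into a zero one.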

Look at the formula I gave for the determinant: it is an expression of degree n monomials in the entries of the determinant and S_n acts by changing signs (this is equivalent to swapping rows/cols) hence they are the same quantity. (look at mathwonks uniqueness property in his notes).

The properties of S_n's action also tell you that elementary row ops do what you think, and that det is a multiplicative homomorphism, and the definition of volume means that det corresponds to the scale factor.
You lost me... What is S?
 
  • #44
S_n is the permutation group on n elements.

So, your proof of the row operation result is going to rest on some result you can't prove that is exactly equivalent to what you need to prove? It is perfectly reasonable to ask you to prove that, especially since you're making claims about its elementary nature. You cannot use something that is seemingly harder to prove, and a proof of which you cannot provide, to prove this in a manner that satisfies my curiosity about your position. If it is indeed all a simple matter of manipulating rows, let's see it.

Revisiting AKG's point (you did read his own reply, right?): you cannot say that since A is invertible, det(AB) is not zero iff det(B) is not zero, without assuming several things, not least that 'not invertible' is the same as 'det = 0' in your definition, and seemingly that det is multiplicative, since you're relying on the fact that xy = 0 iff x = 0 or y = 0. Thus we see this would fail in many ways for something other than matrices over a field (over Z, for instance, a matrix is invertible iff its det is ±1).
 
  • #45
matt grime said:
So, your proof of the row operation result is going to rest on some result you can't prove that is exactly equivalent to what you need to prove? It is perfectly reasonable to ask you to prove that, especially since you're making claims about its elementary nature. You cannot use something that is seemingly harder to prove, and a proof of which you cannot provide, to prove this in a manner that satisfies my curiosity about your position. If it is indeed all a simple matter of manipulating rows, let's see it.
No: it is my proof that rank(A) < n implies det A = 0, which is part of my proof that det(AB) = det(A)det(B) and not part of the row operation result, that rests on the row operation result.

Lay's proof of the row operation result is slightly tricky, not a simple matter of manipulating rows, and it rests on the theorem that you can expand a determinant along any row or column, which he does not prove because he states it would be a lengthy digression. So I have to find a proof of that before I can give you Lay's proof of the row operation result.
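(For reference, the theorem being invoked, in the notation I'd use, where A_{ij} is the submatrix obtained by deleting row i and column j: for every row i,

[tex]\det A = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} \det A_{ij}[/tex]

and similarly along any column. The nontrivial part is exactly that the value does not depend on which row or column you pick.)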

... GIVEN the row operation result, everything else is simple.
 
  • #46
0rthodontist said:
... it rests on the theorem that you can expand a determinant along any row or column, which he does not prove because he states it would be a lengthy digression.

I know it's in Nicholson's "Linear Algebra with Applications", if you are really interested. It's another horrid but not difficult thing that, while very important to this scheme of defining determinants, is not usually covered in detail. I know I skipped it when I had to teach determinants this way. While it's distasteful to force students to accept results on my word, nothing would have been gained by covering the details. If you hope to prove everything rigorously this way, you cannot avoid the horror.
 
  • #47
0rthodontist said:
... GIVEN the row operation result, everything else is simple.


to summarize: your position is that manipulating rows and cols gives you all the understanding you need in linear algebra, and that this is sufficient to prove an elementary result, det(AB) = det(A)det(B), provided we are willing to accept a result that is tricky and not provable by manipulating rows? And you wonder why I think your position on linear, and abstract, algebra is not tenable...
 
  • #48
I mean everything else in the proof is simple. I never asserted that row operations are all you need, and anyway I have left that whole discussion.
 