How much math does a math professor remember?

  • Thread starter andytoh
In summary, the conversation discusses whether math professors can remember and reproduce proofs of theorems they learned in undergraduate math courses. Some argue that understanding and problem-solving skills matter more than memorization, while others share examples of professors who have forgotten certain proofs. The consensus is that while mathematicians may not intentionally memorize proofs, they can remember and reconstruct them through understanding and problem-solving.
  • #36
andytoh said:
I've always wondered about this question. I've taken university math courses and gotten A+'s. But then years later, if I never used topics in that course again, I realize how much I have forgotten.

A math professor who does research in, say, number theory would essentially never use, say, the Gauss-Bonnet Theorem that he had learned many years ago in Differential Geometry. Would the number theorist be able to pick a textbook problem in the Gauss-Bonnet chapter and solve it off the top of his head? Are math professors so mentally powerful that the phrase "if you don't use it, you lose it" does not apply to them? Do they remember every math topic they have learned as well as they did just before walking into their final exam many years ago?

For example, how many math professors reading this post can prove the Inverse Function Theorem of second year calculus from scratch?

Welcome to the human race.
As far as I know, not one professor I had in university was able to reproduce high-school trig identities... some were positively worse than I was at elementary computations... most made stupid mistakes on the board from time to time. One or two even made logically flawed side remarks for the sake of interest that were later shot down by the students.

From a biological perspective, the brain simply downgrades dendritic connections that aren't being used. Repeated activation of the same synapses over time induces LTP, which will serve to keep the traces in your mind for some time.

I suspect that this natural forgetting imposes a natural limit on human intelligence in the long run. Some scientists at Princeton have been able to increase the intelligence of mice by up-regulating their LTP through increasing the number of NMDA receptors.

The truly intelligent are those who not merely do not forget, but somehow manage to integrate new knowledge with the old, seamlessly.

I think some psychologists have shown that too much knowledge reduces speed of retrieval and actually dampens creative problem-solving capacity.
 
  • #37
andytoh said:
True, but the more you know and remember, the more you can do as well. I'm sure many mathematicians got stuck in a problem because they were lacking some knowledge (or had forgotten some results) that would have helped immensely.

Oh well. Tough luck. Excessive knowledge hinders intuition and blunts your problem-solving skills. In this respect, some degree of 'forgetting' could be good, since it gives you a chance to rearrange your thinking.

I think 10,000 hours is a good benchmark for the time required to become a professional mathematician of respectable prowess. Then again, this is relative to existing players in the field. Only the best experts ever reach 10,000 hours of practice in their fields - regardless of whether this is music, chess, physics or whatever... You might want to think about whether this investment is really worth your time (you probably would have gotten 5,000 hours of advanced mathematics done by the time you reached your PhD) and count all the opportunity costs.
 
  • #38
The last university math course I taught was over 20 years ago, and the majority of my work since that time has been in engineering applications rather than mathematics. I remember basic mathematical theorems and their proofs in the same way I remember old friends. I can't reproduce the exact conversations we had, nor exactly how they looked, but if I run across them on the street the details immediately return. What I find important is not remembering specifics but knowing where to look if I do need to return to the theory. For the basics, once I read the theorem it quickly comes back to mind and I can sketch the proof fairly easily. For more advanced topics, I recognize the theorems but would have difficulty reproducing the proofs. I have a hard time even with my own thesis.
 
  • #39
andytoh said:
I personally intend to remember every definition and theorem I read, along with their proofs. To help myself in this regard, I type out the proofs of all the theorems, and where there are gaps (gaps in the sense that the omitted detail is obvious to the writer but not immediately to me) I fill in the details myself. In case I forget a proof later on, I can reread what I typed out.

This may sound time-consuming, but I found that I spend just as much time reading the proof and fully understanding it anyway, so it is no real time loss for me at all. Fully understanding and remembering the proofs has also helped me understand the definitions and theorems much better and apply them to solve new problems.

After seeing a sample of my typing, selfAdjoint responded in another thread of mine:

It's easy enough to understand a non-trivial proof from an undergraduate textbook and memorize the intuitions and general ideas behind them (15 minutes tops?!). That gives you about 4 undergraduate theorems per hour. Of course, it's difficult to go beyond 4 new theorems a day, since your mind will probably begin to mix them up if you tried. For most of us, memory lags behind understanding.

The difficult part is in translating the general idea into an explicit and rigorous mathematical statement. I presume this is the part where people need to practice, practice, practice.

Perhaps have a diary recording nothing else but your learning of theorems - when you read them, when you reviewed them, and where the gaps in the understanding were...

Sometimes I get too lazy to read up on an elementary result employed in a proof, especially when it seems intuitively plausible. This is where the weaknesses in the mathematical superstructure of an average student in mathematics probably lie. A little idleness here and there.

I also wonder whether the following would be an exercise in futility: trying not only to understand a proof but, more importantly, to understand the train of thought that led the author to construct that proof. If mathematicians were to write out how they stumbled on the insight that led to the solution of the problem... then perhaps mathematical thinking would be advanced significantly. Which mathematics student has not felt irked by the use of a particularly ad-hoc result that emerged seemingly out of nowhere?
 
  • #40
nightdove said:
I also wonder whether the following would be an exercise in futility: trying not only to understand a proof but, more importantly, to understand the train of thought that led the author to construct that proof. If mathematicians were to write out how they stumbled on the insight that led to the solution of the problem... then perhaps mathematical thinking would be advanced significantly.

I've thought about this as well. But I have yet to see one math book that shows a proof in this manner. Let me make my own example:

Prove: If A={a_1,...,a_n} spans a vector space V, then every linearly independent set in V contains at most n elements.

Thinking process:
Let B={b_1,b_2,...} be a linearly independent set. We want to show that B cannot have more than n elements. But how? Hmmm...well, because A spans V, each of the b_i's is a linear combination of the a_i's. What would happen if we took, say, b_1 and joined it with A? The new set A'={b_1, a_1,...,a_n} (with n+1 elements) would have to be linearly dependent, right? Yes indeed, but so what? Well, that would mean that one of the a_i is a linear combination of the other elements of A'. So we can remove this particular a_i, and the resulting set, which has n elements again, would still span V.
Hey! Why don't we repeat this process until all the a_i's are gone and we end up getting A'={b_1,...,b_n}, which would still span V? But what would we achieve by doing this? AHHHH! If there were another element b_(n+1) in B, then this element would have to be a linear combination of {b_1,...,b_n}, since {b_1,...,b_n} spans V. But that would contradict the assumption that B is a linearly independent set. There we go! Thus B cannot have more than n elements! Ok, let's write out the proof properly now...



Isn't this what coming up with a proof is really all about, gathering the ideas? The above is not a rigorous proof of course, but it captures the IDEAS and the THINKING PROCESS. Personally, I believe we should understand the ideas of a proof first (as in the above) before we study the formal proof itself. To be honest, I would for certain remember how to prove the above theorem after reading the above, whereas if I read a formal proof I may forget it in a few months.
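For completeness, here is one way the informal argument above could be written up formally. This is only a sketch of the standard exchange argument; the induction bookkeeping can be organized in several ways:

```latex
\begin{theorem}
If $A=\{a_1,\dots,a_n\}$ spans a vector space $V$, then every linearly
independent subset of $V$ has at most $n$ elements.
\end{theorem}

\begin{proof}[Proof sketch]
Let $b_1,\dots,b_m \in V$ be linearly independent; we show $m \le n$.
We claim that for each $k \le \min(m,n)$ there is a spanning set of the
form $\{b_1,\dots,b_k,\, a_{i_{k+1}},\dots,a_{i_n}\}$, i.e.\ each $b$
adjoined replaces some $a$. For $k=0$ this is the hypothesis. Given the
claim for $k$, the vector $b_{k+1}$ is a linear combination of the
spanning set, so $\{b_1,\dots,b_{k+1},\, a_{i_{k+1}},\dots,a_{i_n}\}$ is
linearly dependent. In a dependence relation some $a_{i_j}$ must occur
with nonzero coefficient (otherwise $b_1,\dots,b_{k+1}$ would be
dependent), and that $a_{i_j}$ can be removed without shrinking the span.
Now suppose $m > n$. After $n$ steps, $\{b_1,\dots,b_n\}$ spans $V$, so
$b_{n+1}$ is a linear combination of $b_1,\dots,b_n$, contradicting
independence. Hence $m \le n$.
\end{proof}
```

Notice that the formal write-up is just the "thinking process" above with the hand-waving ("repeat this process") replaced by an explicit induction.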
 
  • #41
Maybe a better question is: can a mathematician see something that he hasn't used in 10 years, recognize it for what it is, and quickly (a) find reference material related to the topic and (b) read that material with complete comprehension?
I would think that the concepts and broader ideas are more important than the minute details. I'm sure that although many of those professors mentioned may have forgotten some of the trig identities, if they wanted to, they could sit down and derive them. But, most would choose to simply open a text, and within a few seconds to a minute or so, could have complete recollection.

I think my thought is simpler to put into computer programming terms:
A computer programmer may learn 5 or even 10 different programming languages. But, if he's been working in java for 10 years, he may forget some of the syntax used in C++. If he needed to write a program in C++, he would be able to use the correct logic, but may need to glance at a reference book or two for the correct syntax.
 
  • #42
Here is a well-known description of the abilities of professors and others in academia. Warning: it is PG-rated.

ACADEMIA
The Dean leaps tall buildings in a single bound; is more powerful than a locomotive; is faster than a speeding bullet; walks on water; gives policy to God.

The Professor leaps short buildings in a single bound; is more powerful than a switch engine; is just as fast as a speeding bullet; walks on water if it is calm; talks with God.

The Associate Professor leaps short buildings with a running start and favorable winds; is almost as powerful as a switch engine; is faster than a speeding BB; walks on water if it is indoors; talks with God if special request is approved.

The Assistant Professor barely clears a quonset hut; loses a tug of war with a switch engine; can fire a speeding bullet; swims well; is occasionally addressed by God.

The Teaching Assistant runs into buildings; recognizes locomotives two out of three times; has trouble deciding which end of the gun is dangerous; stays afloat with a life jacket; thinks he/she is God.

The Department Secretary lifts buildings and walks under them in a single bound; kicks locomotives off the tracks; catches speeding bullets with his/her teeth and eats them; freezes water with a single glance; he/she is God.
 
  • #43
andytoh said:
I've thought about this as well. But I have yet to see one math book that shows a proof in this manner. Let me make my own example:

Prove: If A={a_1,...,a_n} spans a vector space V, then every linearly independent set in V contains at most n elements.

Thinking process:
Let B={b_1,b_2,...} be a linearly independent set. We want to show that B cannot have more than n elements. But how? Hmmm...well, because A spans V, each of the b_i's is a linear combination of the a_i's. What would happen if we took, say, b_1 and joined it with A? The new set A'={b_1, a_1,...,a_n} (with n+1 elements) would have to be linearly dependent, right? Yes indeed, but so what? Well, that would mean that one of the a_i is a linear combination of the other elements of A'. So we can remove this particular a_i, and the resulting set, which has n elements again, would still span V.
Hey! Why don't we repeat this process until all the a_i's are gone and we end up getting A'={b_1,...,b_n}, which would still span V? But what would we achieve by doing this? AHHHH! If there were another element b_(n+1) in B, then this element would have to be a linear combination of {b_1,...,b_n}, since {b_1,...,b_n} spans V. But that would contradict the assumption that B is a linearly independent set. There we go! Thus B cannot have more than n elements! Ok, let's write out the proof properly now...



Isn't this what coming up with a proof is really all about, gathering the ideas? The above is not a rigorous proof of course, but it captures the IDEAS and the THINKING PROCESS. Personally, I believe we should understand the ideas of a proof first (as in the above) before we study the formal proof itself. To be honest, I would for certain remember how to prove the above theorem after reading the above, whereas if I read a formal proof I may forget it in a few months.

That linear algebra proof was a rather simple one. Proofs in linear algebra are almost always very intuitive to me. But yes, no matter how easy I found them when I first studied them, I can safely say I have forgotten nearly all of them, now that I'm in the commercial world.

For example, I may have forgotten even some of the very simple proofs in probability theory about the Poisson and normal distributions, despite these having very straightforward geometrical insights.

For real analysis, I find that the proofs tend to have a very simple geometric logic, but the complicated descriptive structure necessitated by the rigour of the course tends to blind you to the very obvious implicit geometry. I have found that attempting to visualise the R^3 case almost always enables you to capture the "solution insight" for a proof problem in R^n. Sometimes even a discrete analogue may suggest how to prove a theorem - for instance, the theorem that a single accumulation point of a set implies infinitely many points of the set nearby, by logic from discrete mathematics (a limited number of pigeonholes, an infinite number of letters).
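One classic theorem where that pigeonhole flavour is explicit is Bolzano-Weierstrass, proved by repeated bisection; assuming a result of that kind is what is meant above, the argument sketches as:

```latex
\begin{theorem}[Bolzano--Weierstrass]
Every infinite subset $S$ of a closed interval $[a,b]$ has an
accumulation point in $[a,b]$.
\end{theorem}

\begin{proof}[Proof sketch]
Bisect $[a,b]$ into two halves. Since $S$ is infinite and there are only
two halves, at least one half contains infinitely many points of $S$
(finitely many pigeonholes, infinitely many letters). Bisect that half
and repeat, obtaining nested intervals $I_1 \supseteq I_2 \supseteq \cdots$
with $|I_k| = (b-a)/2^k$, each containing infinitely many points of $S$.
The nested-interval property gives a point $x \in \bigcap_k I_k$, and
every neighbourhood of $x$ contains some $I_k$, hence infinitely many
points of $S$, so $x$ is an accumulation point of $S$.
\end{proof}
```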
 
  • #44
andytoh said:
For example, how many math professors reading this post can prove the Inverse Function Theorem of second year calculus from scratch?
Being a good mathematician certainly isn't about being able to recite such-and-such theorem, or knowing every exercise from such-and-such's book.
 
  • #45
Nonetheless, essentially any senior university mathematician can fairly easily prove all the little examples people are giving, like the inverse function theorem, Riemann integrability, etc. That's what teaching them does for you.
 