Hello all,
I may get a contract to teach numerical analysis. I did quite a lot of numerical work during my PhD, but that was a while ago. Now when I look at most books on the topic, I get the feeling that a lot of it is outdated, and that much of what I knew is outdated as well, because of the possibility of doing arbitrary-precision computations.
I mean that most of the examples used to illustrate the danger of round-off errors are no longer an issue if one uses "arbitrary precision" (I know, that's not really an honest terminology). I can simply keep a lot of precision in Mathematica and all these examples become completely well behaved (and one can also use arbitrary precision in Python).
Now I realize that for large-scale computations, using a huge amount of precision may slow things down a lot, and that arbitrary precision still won't save you from truly catastrophic cancellations. But I feel uneasy because all the books I have looked at still work within the old paradigm of low-precision calculations. It's not clear to me how advantageous it is to teach more complex algorithms that avoid round-off errors when one can simply force the software to dramatically increase the precision of the numbers used.
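To illustrate what I mean, here is a toy example of my own in Python (just the classic quadratic-formula cancellation, using the standard-library decimal module; the particular quadratic and the 50-digit setting are arbitrary choices of mine): the naive formula for the small-magnitude root breaks down in double precision, but the very same formula behaves perfectly well once enough digits are carried.

```python
# Toy example (my own, not from any textbook): the small-magnitude root of
# x^2 + 1e8*x + 1 = 0 computed with the textbook quadratic formula.
# The two roots are roughly -1e8 and -1e-8.
import math
from decimal import Decimal, getcontext

def small_root(a, b, c, sqrt):
    """Small-magnitude root via (-b + sqrt(b^2 - 4ac)) / (2a).
    When b^2 >> 4ac, the two terms in the numerator nearly cancel."""
    return (-b + sqrt(b * b - 4 * a * c)) / (2 * a)

# Double precision: the leading digits of -b and sqrt(b^2 - 4ac) cancel,
# leaving mostly rounding noise (about -7.45e-9 instead of -1e-8).
print(small_root(1.0, 1e8, 1.0, math.sqrt))

# Same naive formula with 50 significant digits: the result comes out as
# approximately -1.0000000000000001e-8, which is correct.
getcontext().prec = 50
a, b, c = Decimal(1), Decimal(10) ** 8, Decimal(1)
print(small_root(a, b, c, lambda x: x.sqrt()))
```

This is exactly the kind of example I mean: no clever rewriting of the formula, just more digits, and the "danger" disappears.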
So my questions are:
a) Does anyone know of a textbook that is mindful of this fact and teaches a "modern" approach to numerical analysis, one that covers well the pros and cons of increasing the precision versus using better algorithms for different types of applications (interpolation, integration, differentiation, solution of nonlinear equations)?
b) Any advice from anyone who is teaching this material, taking classes on it, or using numerical analysis in their work?
Thank you in advance!