observer1
Good Day
Let's say I have developed a new method to extract, more efficiently (yes, "more efficiently" is ill-defined; but bear with me), the differential equations that describe a specific phenomenon (please just assume it).
So now I have a system of coupled second-order differential equations with non-constant coefficients.
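(Just to fix notation for the discussion below, and this is only a generic placeholder form, not my actual equations: such a system can typically be written as a matrix second-order ODE and recast in first-order state-space form, which is what single-step solvers such as Runge-Kutta expect.)

$$
M(t)\,\ddot{\mathbf{x}} + C(t)\,\dot{\mathbf{x}} + K(t)\,\mathbf{x} = \mathbf{f}(t)
\qquad\Longrightarrow\qquad
\frac{d}{dt}\begin{pmatrix}\mathbf{x}\\ \mathbf{v}\end{pmatrix}
=
\begin{pmatrix}\mathbf{v}\\ M(t)^{-1}\bigl[\mathbf{f}(t) - C(t)\,\mathbf{v} - K(t)\,\mathbf{x}\bigr]\end{pmatrix}.
$$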
I have a few tasks now: 1) teach this method to students; 2) demonstrate that it (IT = my method for extracting the equations, not solving them) works; 3) optimize the solution method.
With regard to 1) and 2), I must now create a test case, and I am confronted with all sorts of information about integration schemes: implicit vs. explicit, speed, cost, memory, and so many other issues.
Now, back in the day (30 years ago), when CPUs were slow and memory expensive, there were so many papers on finite element methods: reduced integration, Crout reduction and memory storage, hourglassing, etc. They all seem like a fart in a hurricane in today's world of fast CPUs and cheap memory.
Can the same thing be suggested about numerical methods (yes, I agree that, to a large extent, the finite element method itself is a glorified interpolation scheme -- but let's not go there) and their concomitant culture of research papers, each purporting to reveal an ever-faster method?
My question is this: assuming I have a stiff system, all the time in the world to take really tiny time steps, and plenty of memory, does the choice of integration scheme matter (Runge-Kutta, Newmark-beta, central difference, etc., with Newton-Raphson reserved for the implicit nonlinear solves)?
In reality, I know this should be an implicit method, but I don't have the time to implement one, since speed is not my focus right now.
In time, if I pass the first two hurdles, I can return to optimizing the integration scheme and seek collaborators. But should I be concerned about such issues at the start? Can one really pick a method so bad that it cannot be quickly "fixed" by making the time step even smaller?
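To make the stiffness point concrete, here is a tiny throwaway sketch (a standard stiff scalar toy problem solved with SciPy's solve_ivp, not my system, and the tolerances are arbitrary): an explicit Runge-Kutta method and an implicit BDF method both reach the right answer, but the explicit one has to take far more, far smaller steps.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Stiff scalar ODE: solutions decay very fast onto the slow solution y = cos(t).
    return -1000.0 * (y - np.cos(t)) - np.sin(t)

t_span = (0.0, 10.0)
y0 = [1.0]  # starts on the slow solution, so y(t) = cos(t) exactly

for method in ("RK45", "BDF"):  # explicit Runge-Kutta vs. implicit multistep
    sol = solve_ivp(rhs, t_span, y0, method=method, rtol=1e-6, atol=1e-9)
    err = abs(sol.y[0, -1] - np.cos(t_span[1]))
    print(f"{method}: {sol.t.size} time points, final error {err:.1e}")
```

In that sense, the "bad" (explicit) choice still works here; it just burns an enormous number of steps to stay stable.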