So I've always done simple ODEs by the method of separation of variables. You know,
dy/dx = A*y
dy/y = A*dx
∫ (1/y) dy = ∫ A dx
ln(y) = A*x + C
y = C'*e^(A*x)   (with C' = e^C)
It's easy to remember and it usually works. A lot of the PDEs I know how to do involve this process at some point.
The problem is, all my professors were quick to point out that this is an abuse of notation. That is, dy/dx isn't really a fraction and can't in general be treated as one; it just happens that pretending it is works out in a simple ODE.
So what's the rigorous way to do this ODE? What's really going on when you use the 'pretend derivatives are fractions' trick?
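For concreteness, here's my rough guess at what the rigorous version looks like, reading the division by y as an application of the chain rule rather than as fraction manipulation (I'd welcome confirmation or correction):

$$\frac{dy}{dx} = Ay \;\Longrightarrow\; \frac{1}{y}\frac{dy}{dx} = A \;\Longrightarrow\; \frac{d}{dx}\ln|y| = A,$$

and integrating both sides with respect to x then gives

$$\ln|y| = Ax + C \;\Longrightarrow\; |y| = e^{C}e^{Ax} \;\Longrightarrow\; y = C'e^{Ax},$$

where C' = ±e^C absorbs the sign.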