When Can Physicists Treat Symbols Like Ordinary Variables in Equations?

  • #1
cliowa
Physicists do it all the time: playing around with symbols like [tex]\frac{dx}{dt}[/tex]. They just treat them like ordinary variables, which they certainly are not. Let me give you an example: if they want to change an integration variable from, say, x to r and know that [tex]r^{2}=a^{2}+R^{2}-2Rx[/tex], they simply differentiate, rearrange, and write down [tex]r\cdot dr=-R\cdot dx[/tex].
Now, I don't believe this to be wrong, but I don't understand why (under what circumstances) one is allowed to operate like that. I didn't find any good answers so I thought I'd ask here.
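For what it's worth, the manipulation does come out right when I try it on a sample integral (a minimal sympy sketch; the constants a = 2, R = 1 and the integrand 1/r are arbitrary, chosen just for the check):

```python
import sympy as sp

x = sp.symbols('x', real=True)
r = sp.symbols('r', positive=True)
a, R = 2, 1  # arbitrary example constants

# r as a function of x, from r^2 = a^2 + R^2 - 2*R*x
r_of_x = sp.sqrt(a**2 + R**2 - 2*R*x)

# integrate a sample integrand 1/r over x directly...
I_x = sp.integrate(1 / r_of_x, (x, -1, 1))

# ...and again in the r variable, using dx = -(r/R) dr from r*dr = -R*dx
I_r = sp.integrate((1 / r) * (-r / R), (r, r_of_x.subs(x, -1), r_of_x.subs(x, 1)))

print(sp.simplify(I_x - I_r))  # prints 0
```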
Thanks a lot in advance. Best regards

Cliowa
 
  • #2
Hmm... maybe I haven't gotten far enough into physics yet to see the real playing around, but my physics teacher does things like multiplying one side of an equation by dx/dx to manipulate it.
I guess nothing like that r dr = -R dx thing you did, though. :)
 
  • #3
A lot of the time, they are using the implicit function theorem, along with the inverse function theorem of differentiation.
Let us rewrite what you have a bit, assuming a and R to be constants:
Define a function of two variables [itex]F(x,r)=r^{2}-(a^{2}+R^{2})+2Rx[/itex]

Now, regard the equation for the zeroes of F:
[tex]F(x,r)=0[/tex]
Evidently, only a few of the points in the (x,r)-PLANE will be the solutions of this equation. If we try to change the x-value from a known zero [itex](x_{0},r_{0})[/itex] we must in general ALSO change the value of "r" in order for F(x,r)=0 to remain true.

Now, the implicit function theorem states that (under fairly general conditions) in the vicinity of a zero [itex](x_{0},r_{0})[/itex] there will exist a zero-point curve [itex](x,\hat{r}(x))[/itex], so that we have:
[tex]F(x,\hat{r}(x))=0[/tex] over some x-interval.
But on this interval, this equation holds for ALL x, and we may differentiate it, with respect to x:
[tex]\frac{\partial{F}}{\partial{x}}+\frac{\partial{F}}{\partial{r}}\frac{d\hat{r}}{dx}=0,\to\frac{d\hat{r}}{dx}=-\frac{\frac{\partial{F}}{\partial{x}}}{\frac{\partial{F}}{\partial{r}}}[/tex]
That is, we've managed to express the slope of the zero-point curve [itex]\hat{r}[/itex] in terms of the negative ratio of F's partial derivatives!
In your case, we have:
[tex]\frac{\partial{F}}{\partial{x}}=2R,\frac{\partial{F}}{\partial{r}}=2r=2\hat{r}(x),\to\frac{d\hat{r}}{dx}=-\frac{R}{\hat{r}(x)}[/tex]

The condition for this to be legitimate in the vicinity of a zero [itex](x_{0},r_{0})[/itex] is that
[tex]\frac{\partial{F}}{\partial{r}}|_{(x_{0},r_{0})}\neq{0}[/tex]
In your case, it is therefore seen that this is legitimate at any zero where r is different from zero.
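Here is that calculation done symbolically, as a quick check (a minimal sympy sketch; a and R are kept as symbols):

```python
import sympy as sp

x, r, a, R = sp.symbols('x r a R')

# the function of two variables defined above
F = r**2 - (a**2 + R**2) + 2*R*x

# implicit function theorem: dr/dx = -F_x / F_r, valid where F_r != 0
dr_dx = -sp.diff(F, x) / sp.diff(F, r)

print(sp.simplify(dr_dx))  # -R/r, i.e. r*dr = -R*dx
```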

I'll let you digest this before proceeding.
 
  • #4
Strictly speaking, [itex]\frac{dy}{dx}[/itex] is not a fraction. But it is the limit, as h goes to 0, of a fraction: [itex]\frac{f(x+h)-f(x)}{h}[/itex]. That means we can treat it like a fraction: for example, the chain rule [itex]\frac{dz}{dx}= \frac{dz}{dy}\frac{dy}{dx}[/itex] looks like we are "cancelling" the "dy" terms. We aren't really, but we can go back before the limit, cancel terms there, and then take the limit. That's why, after defining the derivative [itex]\frac{dy}{dx}[/itex], we then define the differentials dx and dy separately, so that we have a notation that allows us to treat the derivative as if it were a fraction.
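To see the "cancelling" come out right on a concrete pair of functions, here is a minimal sympy sketch (the functions are arbitrary, chosen only for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')

g = x**2 + 1   # y = g(x), an arbitrary inner function
f = sp.sin(y)  # z = f(y), an arbitrary outer function

dz_dx_direct = sp.diff(f.subs(y, g), x)                 # differentiate z = f(g(x)) directly
dz_dx_chain = sp.diff(f, y).subs(y, g) * sp.diff(g, x)  # (dz/dy)*(dy/dx), "cancelling" the dy's

print(sp.simplify(dz_dx_direct - dz_dx_chain))  # prints 0
```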
 
  • #5
Now, let us regard the above from a somewhat different point of view, one that is closer to the way this was presented to you:
We start with the equation:
[tex]r^{2}=a^{2}+R^{2}-2Rx\qquad(A)[/tex]

Now we may ask ourselves:
If we look at two close-lying solutions of (A), how is the change in the x-value from solution 1 to solution 2 related to the change in the r-value from solution 1 to solution 2?

Let solution 1 be called [itex](x,r)[/itex] (for simplicity), solution 2 [itex](x+\bigtriangleup{x},r+\bigtriangleup{r})[/itex]

Since, by assumption, BOTH are solutions to (A), we have:
[tex]r^{2}=a^{2}+R^{2}-2Rx\qquad(1)[/tex]
and:
[tex](r+\bigtriangleup{r})^{2}=a^{2}+R^{2}-2R(x+\bigtriangleup{x})\qquad(2)[/tex]
(2)-(1) yields, when ignoring the quadratic term in the change of r:
[tex]2r\bigtriangleup{r}=-2R\bigtriangleup{x}[/tex]
Or, in the "limit", we get [itex]2rdr=-2Rdx[/itex]
This is seen to reproduce your cited result.
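The same bookkeeping can be done symbolically (a minimal sympy sketch; Deltax and Deltar stand for the finite changes above):

```python
import sympy as sp

x, r, a, R, Dx, Dr = sp.symbols('x r a R Deltax Deltar')

# subtract equation (1) from equation (2), side by side
lhs_diff = sp.expand((r + Dr)**2 - r**2)
rhs_diff = sp.expand((a**2 + R**2 - 2*R*(x + Dx)) - (a**2 + R**2 - 2*R*x))

# drop the quadratic term Dr**2 and compare the two sides
print(lhs_diff - Dr**2)  # left side:  2*r*Deltar
print(rhs_diff)          # right side: -2*R*Deltax
```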
It is crucial that you see that this follows when we restrict ourselves to a curve of solutions to (A), which is basically the same as saying there must exist a zero-point curve [itex]\hat{r}[/itex] so that [itex]F(x,\hat{r})=0[/itex]

The "physicists"' view gets away with a slightly less amount of notation than what I used in the previous post.
 
  • #6
Wow, thanks a lot guys. That was quite helpful.

arildno said:
It is crucial that you see that this follows when we restrict ourselves to a curve of solutions to (A), which is basically the same as saying there must exist a zero-point curve [itex]\hat{r}[/itex] so that [itex]F(x,\hat{r})=0[/itex]

I think I understand most of this, except for one thing: the existence of a zero-point curve is only guaranteed in a vicinity of [itex](x_{0},r_{0})[/itex], right? When I'm doing this change-of-variables thing later on, i.e. when integrating, I might not confine my integration limits to a vicinity of [itex](x_{0},r_{0})[/itex]. How can I be sure the whole thing still works then?
 
  • #7
HallsofIvy said:
Strictly speaking, [itex]\frac{dy}{dx}[/itex] is not a fraction. But it is the limit, as h goes to 0, of a fraction: [itex]\frac{f(x+h)-f(x)}{h}[/itex]. That means we can treat it like a fraction: for example, the chain rule [itex]\frac{dz}{dx}= \frac{dz}{dy}\frac{dy}{dx}[/itex] looks like we are "cancelling" the "dy" terms. We aren't really, but we can go back before the limit, cancel terms there, and then take the limit. That's why, after defining the derivative [itex]\frac{dy}{dx}[/itex], we then define the differentials dx and dy separately, so that we have a notation that allows us to treat the derivative as if it were a fraction.

You know, I get the general picture, but I feel what we're doing isn't quite as easy as you described: we're not looking at the fraction, rearranging it, and then taking the limit. Instead, we're taking the limit on one side of the equal sign and then rearranging the other side as though it weren't a limit, but as if it were made of ordinary variables. That's what bothers me: we don't seem to be treating the two sides of the equal sign the same way! Please do correct me if I'm wrong on this one.
 
  • #8
HallsOfIvy said:
we then define the differentials dx and dy separately, so that we have a notation that allows us to treat the derivative as if it were a fraction.
Since this thread is to bring out what we're "really doing" in this field, I feel it's necessary to nitpick this: the dx and dy in the expression dy/dx are not differentials. (and / isn't division either: it's just one giant symbol)

Although it is true that df = f' dx, that's just because df happens to be a multiple of dx. You can't really "divide" them. (very much like one vector can happen to be a multiple of another vector)
 

FAQ: When Can Physicists Treat Symbols Like Ordinary Variables in Equations?

What are dt and dx in scientific terms?

dt and dx are commonly used symbols in mathematics and physics to represent infinitesimal changes in time and position, respectively. They are often used in equations to describe rates of change, i.e. derivatives.

How are dt and dx related?

dt and dx are related through the concept of a derivative. In calculus, the derivative of a function represents its instantaneous rate of change at a specific point. For a position x that depends on time t, the derivative dx/dt can be thought of as the ratio of the infinitesimal change in the output (dx) to the infinitesimal change in the input (dt).
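For example, here is a small sympy sketch with a hypothetical position function x(t) (free fall, chosen only for illustration):

```python
import sympy as sp

t, g = sp.symbols('t g')

x = sp.Rational(1, 2) * g * t**2  # hypothetical position of a freely falling object
dx_dt = sp.diff(x, t)             # the derivative dx/dt, i.e. the velocity

print(dx_dt)  # g*t
```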

Why are dt and dx important in scientific research?

dt and dx are important because they allow us to mathematically describe and analyze changes in time and space. This is crucial in many scientific fields such as physics, engineering, and economics, where understanding and predicting rates of change is essential.

Can dt and dx be measured in experiments?

No, dt and dx cannot be directly measured in experiments, since they represent infinitesimal changes. However, the corresponding finite changes in time and position can be measured, and derivatives can be estimated from such data; this information can then be used to make predictions and draw conclusions.

How can dt and dx be applied to real-world problems?

dt and dx can be applied to real-world problems in a variety of ways. For example, in physics, they are used to model the motion of objects and predict their behavior. In economics, they are used to analyze market trends and predict future changes. In engineering, they are used to design and optimize systems and processes. Overall, dt and dx are powerful tools for understanding and solving complex problems.
