Why must ∂S/∂α_n = -β_n, where S is the complete integral?

  • #1
giraffe714
TL;DR Summary
I don't understand why the derivative of the Jacobi complete integral with respect to the constant ##\alpha## must be another constant, or why it appears with a minus sign.
As stated in the TL;DR, I don't understand why the derivative of the Jacobi complete integral with respect to the constant ##\alpha## must be another constant, or why it appears with a minus sign. The textbook I'm following, van Brunt's The Calculus of Variations, proves it by taking:
$$ \frac{\partial}{\partial \alpha_1} (H + \frac{\partial S}{\partial t}) = \frac{\partial^2 S}{\partial \alpha_1 \partial t} + \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k} \frac{\partial H}{\partial p_k} = 0 $$
and then just stating that
$$ \frac{\partial S}{\partial \alpha_1} = -\beta_1 $$
is satisfied identically, but I can't figure out (a) how those two equations are even related, and (b) from what I can tell, if ##\frac{\partial S}{\partial \alpha_1} = -\beta_1## where ##\beta_1## is constant, then ##\frac{\partial^2 S}{\partial \alpha_1 \partial t}## must be zero? But if it's zero, then in the original equation
$$ \frac{\partial}{\partial \alpha_1} (H + \frac{\partial S}{\partial t}) = \frac{\partial^2 S}{\partial \alpha_1 \partial t} + \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k} \frac{\partial H}{\partial p_k} = 0 $$
the term ##\sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k} \frac{\partial H}{\partial p_k}## must be zero, and I just don't see why that would be true.
 
  • #2
giraffe714 said:
The textbook I'm following, van Brunt's The Calculus of Variations, proves it by taking:
$$ \frac{\partial}{\partial \alpha_1} (H + \frac{\partial S}{\partial t}) = \frac{\partial^2 S}{\partial \alpha_1 \partial t} + \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k} \frac{\partial H}{\partial p_k} = 0 $$
and then just stating that
$$ \frac{\partial S}{\partial \alpha_1} = -\beta_1 $$
I happened to find online access to this text, so I will refer to specific pages and equations in the book.

The equation ##\dfrac{\partial S}{\partial \alpha_1} = -\beta_1## is just a specific example of the general equation ##P_k = -\dfrac{\partial S}{\partial Q_k}## found in (8.25) on page 173. Note that at the bottom of page 175 we have ##Q_k = \alpha_k## and ##P_k = \beta_k##.
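For reference, here is the standard generating-function setup written in the notation used in this thread (I believe it matches the book's (8.25) and the Hamilton-Jacobi construction): with ##S = S(t, q_1, \dots, q_n, Q_1, \dots, Q_n)##,
$$p_k = \frac{\partial S}{\partial q_k}, \qquad P_k = -\frac{\partial S}{\partial Q_k}, \qquad \hat H = H + \frac{\partial S}{\partial t}.$$
Setting ##Q_k = \alpha_k## and ##P_k = \beta_k##, the middle relation is exactly ##\dfrac{\partial S}{\partial \alpha_k} = -\beta_k##.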

giraffe714 said:
from what I can tell, if ## \frac{\partial S}{\partial \alpha_1} = -\beta_1 ## where ##\beta_1## is constant, that means ##\frac{\partial^2 S}{\partial \alpha_1 \partial t} ## must be zero?
No, ##\dfrac{\partial^2 S}{\partial \alpha_1 \partial t} ## does not have to be zero. This can be confusing. ##S(t, q, \alpha)## generally depends on ##t## both explicitly and also implicitly through the various ##q_k(t)##. So, ##\dfrac{\partial^2 S}{\partial \alpha_1 \partial t} ## will also generally depend on ##t## explicitly and implicitly through the ##q_k(t)##. As time progresses, ##t## and ##q(t)## change in such a way that ##\beta_1 = - \dfrac{\partial S}{\partial \alpha_1} ## remains constant during the time evolution of the system. Similarly for the other ##\beta_k##.

However, ##\dfrac{\partial^2 S}{\partial \alpha_1 \partial t} ## is generally not zero. This is because the notation ##\dfrac{\partial^2 S}{\partial t \partial \alpha_1} ## is interpreted as $$\frac{\partial^2 S}{\partial t \partial \alpha_1} = \left. \frac{\partial}{\partial t} \left( \frac{\partial S}{\partial \alpha_1} \right)\right|_{q_k, \alpha_k}$$ where the ##q_k## are held constant while taking the partial derivative with respect to ##t##. See footnote 7 on page 178.
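In fact, the nonzero mixed partial is exactly what makes the first display in post #1 work out. Along an actual trajectory, where Hamilton's equation ##\dot q_k = \dfrac{\partial H}{\partial p_k}## holds, that display is the statement that a total time derivative vanishes:
$$\frac{d}{dt}\left(\frac{\partial S}{\partial \alpha_1}\right) = \frac{\partial^2 S}{\partial \alpha_1 \partial t} + \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k}\,\dot q_k = \frac{\partial^2 S}{\partial \alpha_1 \partial t} + \sum_{k=1}^n \frac{\partial^2 S}{\partial \alpha_1 \partial q_k}\,\frac{\partial H}{\partial p_k} = 0.$$
So ##\dfrac{\partial S}{\partial \alpha_1}## is constant along the motion (and that constant is called ##-\beta_1##), even though neither term on the right is zero by itself.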

As an example, look at the expression for ##\beta_2## on page 181 for the geometrical optics example: $$\beta_2 =\frac{\alpha_2 A}{\mu^2} - t$$ where ##A## is the function of ##q_1(t)## given on page 180. ##\beta_2## is a constant of the motion. But $$\frac{\partial^2 S}{\partial t \partial \alpha_2} = \frac{\partial}{\partial t}(-\beta_2) = - \frac{\partial }{\partial t} \left(\frac{\alpha_2 A}{\mu^2} - t \right ) = 1$$ since ##A## is fixed while taking the partial with respect to ##t##.
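If it helps to see the same distinction in a system simple enough to do by hand, here is a minimal sympy sketch for a free particle (a toy example of mine, not the book's optics problem), taking ##H = \dfrac{p^2}{2m}## with complete integral ##S = \alpha q - \dfrac{\alpha^2 t}{2m}##: the mixed partial ##\dfrac{\partial^2 S}{\partial \alpha \partial t}## is nonzero, yet ##\beta = -\dfrac{\partial S}{\partial \alpha}## is constant once it is evaluated along a trajectory.

```python
# Toy free-particle example (not from van Brunt): H = p**2/(2*m),
# complete integral S = alpha*q - alpha**2*t/(2*m).
import sympy as sp

t, q, alpha, m, q0 = sp.symbols('t q alpha m q0', positive=True)

S = alpha*q - alpha**2*t/(2*m)

# Check that S solves the Hamilton-Jacobi equation H(q, dS/dq) + dS/dt = 0
print(sp.simplify(sp.diff(S, q)**2/(2*m) + sp.diff(S, t)))   # 0

# Mixed partial in the footnote-7 sense (q held fixed): generally nonzero
print(sp.diff(S, alpha, t))                                  # -alpha/m

# Evaluate beta = -dS/dalpha along the actual motion q(t) = q0 + (alpha/m)*t
q_traj = q0 + alpha/m*t
beta = -sp.diff(S, alpha).subs(q, q_traj)
print(sp.simplify(beta))                                     # -q0, a constant
print(sp.diff(beta, t))                                      # 0
```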

---------

As a side note, I get a slightly different result for ##\beta_1## and ##\beta_2## for the geometrical optics example. I get $$\beta_1 = \dfrac{2 \alpha_1 A}{\mu^2} - q_2$$ $$\beta_2 = \dfrac{2 \alpha_2 A}{\mu^2} - t.$$ These have a factor of 2 in the fractions that does not appear in the expressions in the book. But I could have made a mistake. My results for ##q_1(t)## and ##q_2(t)## are $$q_1 = \frac{\mu^2}{4\alpha_2^2}(\beta_2+t)^2 + \frac{\alpha_1^2+\alpha_2^2}{\mu^2}$$ $$q_2 = \frac{\alpha_1}{\alpha_2}t + \frac{\alpha_1}{\alpha_2}\beta_2 - \beta_1.$$
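As a quick sanity check on the algebra, here is a sympy snippet that inverts the two ##\beta## expressions above for ##q_1(t)## and ##q_2(t)##. It uses a trial form ##A = \sqrt{\mu^2 q_1 - \alpha_1^2 - \alpha_2^2}## purely as a placeholder (an assumption for this check, not necessarily the ##A## defined on page 180); with that trial form, the ##\beta##'s above do reproduce the ##q_1, q_2## I wrote down.

```python
# Consistency check of beta_1, beta_2 above, using a trial form for A
# (a placeholder assumption, not necessarily the book's A on p. 180).
import sympy as sp

t, q1, mu = sp.symbols('t q_1 mu', positive=True)
a1, a2, b1, b2 = sp.symbols('alpha_1 alpha_2 beta_1 beta_2', positive=True)

A = sp.sqrt(mu**2*q1 - a1**2 - a2**2)        # trial form of A(q1) (assumption)

# beta_2 = 2*alpha_2*A/mu**2 - t  =>  A = mu**2*(beta_2 + t)/(2*alpha_2)
A_val = mu**2*(b2 + t)/(2*a2)
q1_sol = sp.solve(sp.Eq(A**2, A_val**2), q1)[0]

# beta_1 = 2*alpha_1*A/mu**2 - q2  =>  q2 = 2*alpha_1*A/mu**2 - beta_1
q2_sol = 2*a1*A_val/mu**2 - b1

# the q1(t), q2(t) written above
q1_post = mu**2*(b2 + t)**2/(4*a2**2) + (a1**2 + a2**2)/mu**2
q2_post = a1/a2*t + a1/a2*b2 - b1

print(sp.simplify(q1_sol - q1_post))         # 0
print(sp.simplify(q2_sol - q2_post))         # 0
```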
 
  • #3
TSny said:
##S(t, q, \alpha)## generally depends on ##t## both explicitly and also implicitly through the various ##q_k(t)##.
That actually makes sense, thank you. I think I was just missing that ##S## is a function of the ##q_k##, which are themselves functions of ##t##, so that clears it up. One question, though, just to make sure: the fact that ##P_k = \beta_k = \text{const.}## doesn't *follow* from ##Q_k = \alpha_k = \text{const.}##; instead they're just *chosen* to be this way? And further, is there no actual guarantee that an ##S## satisfying this can be found or exists?
TSny said:
But $$\frac{\partial^2 S}{\partial t \partial \alpha_2} = \frac{\partial}{\partial t}(-\beta_2) = -\frac{\partial}{\partial t}\left(\frac{\alpha_2 A}{\mu^2} - t\right) = 1$$ since ##A## is fixed while taking the partial with respect to ##t##.
And just to be perfectly clear: when we're taking this partial derivative, we disregard the ##q_k(t)## dependence, which is why it doesn't have to be zero. If that is true, then I assume the total time derivative of ##S## would be constant, correct?
 
  • #4
giraffe714 said:
One question, though, just to make sure: the fact that ##P_k = \beta_k = \text{const.}## doesn't *follow* from ##Q_k = \alpha_k = \text{const.}##; instead they're just *chosen* to be this way?
I don't think the ##P_k##'s are "chosen" to be constants. See the first part of section 8.4.1, which explains why both the ##P_k##'s and the ##Q_k##'s are constants. They are constants because of Hamilton's equations $$\dot Q_k = \frac{\partial \hat H}{\partial P_k}$$ $$\dot P_k = -\frac{\partial \hat H}{\partial Q_k}$$ together with the requirement that ##\hat H = 0##. Thus ##\dot Q_k = 0## and ##\dot P_k = 0##.


giraffe714 said:
And further, is there no actual guarantee that an ##S## satisfying this can be found or exists?
##S## exists as long as the Hamilton-Jacobi equation (8.31) has a solution.

giraffe714 said:
And just to be perfectly clear: when we're taking this partial derivative, we disregard the ##q_k(t)## dependence, which is why it doesn't have to be zero.
Yes, that's right.

giraffe714 said:
If that is true, then I assume the total time derivative of ##S## would be constant, correct?
The total time derivative of ##S(q_k, \alpha_k, t)## is not a constant. The total time derivative of ##\dfrac{\partial S}{\partial \alpha_k}## is zero. See (8.38). And this corresponds to ##\dfrac{\partial S}{\partial \alpha_k} = -\beta_k## being constant.

The time derivative of ##S## is interesting: $$\frac{dS}{dt} = \sum_k \frac{\partial S}{\partial q_k} \dot q_k + \frac{\partial S}{\partial t} = \sum_k p_k \dot q_k - H(q_k, p_k, t) = L$$ where ##L## is the Lagrangian of the system in the original coordinates ##q_k##. Here, we used ##\frac{\partial S}{\partial q_k} = p_k## and ##\frac{\partial S}{\partial t} = -H##.
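As a quick check of that last identity, here is the free-particle toy example again (my own example, not from the book): along the trajectory, ##\dfrac{dS}{dt}## equals the Lagrangian ##L = \tfrac{1}{2} m \dot q^2##.

```python
# Free-particle toy check of dS/dt = L along the motion (not from the book):
# S = alpha*q - alpha**2*t/(2*m),  trajectory q(t) = q0 + (alpha/m)*t.
import sympy as sp

t, q, alpha, m, q0 = sp.symbols('t q alpha m q0', positive=True)

S = alpha*q - alpha**2*t/(2*m)
q_traj = q0 + alpha/m*t

dS_dt = sp.diff(S.subs(q, q_traj), t)        # total time derivative on the motion
Lag = m*sp.diff(q_traj, t)**2/2              # Lagrangian (1/2)*m*qdot**2

print(sp.simplify(dS_dt - Lag))              # 0
```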
 
