Intermediate Math Problem of the Week 9/18/2017

  • Challenge
  • Thread starter PF PotW Robot
  • Start date
In summary, this thread discusses an intermediate math problem of the week involving a first-order differential form in two variables. The participants note a symmetry in the equation and discuss different methods of solving it. They also mention the existence of a differentiable function that is conserved along any solution of the equation. However, there is some confusion about the roles of the variables x and y and whether they are independent or dependent in the solution. The conversation ends with a participant remarking that whenever they work on math they assumed they understood, they end up with unanswered questions.
  • #1
PF PotW Robot
Here is this week's intermediate math problem of the week. We have several members who will check solutions, but we also welcome the community in general to step in. We also encourage finding different methods to the solution. If one has been found, see if there is another way. Occasionally there will be prizes for extraordinary or clever methods.

Solve the ODE $$(y^3+xy^2+y) \, dx + (x^3+x^2y+x) \, dy=0$$

(PotW thanks to our friends at http://www.mathhelpboards.com/)
 
  • #2
I notice there's a symmetry in the equation: a ##y## can be extracted from the first term and an ##x## from the second, and the first term matches the second if you interchange ##x## and ##y##. However, I'm not sure where to go from here.
 
  • #3
PotW Tobor said:
Solve the ODE $$(y^3+xy^2+y) \, dx + (x^3+x^2y+x) \, dy=0$$
jedishrfu said:
I notice there's a symmetry in the equation: a ##y## can be extracted from the first term and an ##x## from the second, and the first term matches the second if you interchange ##x## and ##y##. However, I'm not sure where to go from here.
Let ##M(x,y) := y^3+xy^2+y## and ##N(x,y) := x^3+x^2y+x##. Then ##M_y \neq N_x##, where the subscripts denote partial derivatives. Hence the given ODE is inexact and we proceed to find an integrating factor ##\mu## which - by definition - has to satisfy the PDE
$$
\mu_y M - \mu_x N = \mu(N_x - M_y). \qquad (\ast)
$$
In general this is not very easy, but here the symmetry of the original equation helps. First note that ##N_x - M_y = 3(x^2 - y^2)##, then guess a (simple but non-trivial) functional form for ##\mu## that satisfies ##(\ast)##. Finally, solve the resulting exact ODE in the standard way.
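The inexactness claim and the formula ##N_x - M_y = 3(x^2 - y^2)## can be sanity-checked numerically. Here is a minimal sketch in Python, using central finite differences at an arbitrarily chosen sample point:

```python
# Check that M_y != N_x, and that N_x - M_y = 3(x^2 - y^2),
# using central finite differences at a sample point.
def M(x, y):
    return y**3 + x*y**2 + y

def N(x, y):
    return x**3 + x**2*y + x

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2*h)

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2*h)

x0, y0 = 1.3, 0.7  # arbitrary sample point away from the axes
My = d_dy(M, x0, y0)
Nx = d_dx(N, x0, y0)
print(abs(My - Nx) > 1e-3)                        # inexact: M_y != N_x here
print(abs((Nx - My) - 3*(x0**2 - y0**2)) < 1e-4)  # N_x - M_y = 3(x^2 - y^2)
```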
 
  • #4
Krylov said:
Let ##M(x,y) := y^3+xy^2+y## and ##N(x,y) := x^3+x^2y+x##. Then ##M_y \neq N_x##, where the subscripts denote partial derivatives. Hence the given ODE is inexact and we proceed to find an integrating factor ##\mu## which - by definition - has to satisfy the PDE
$$
\mu_y M - \mu_x N = \mu(N_x - M_y). \qquad (\ast)
$$
In general this is not very easy, but here the symmetry of the original equation helps. First note that ##N_x - M_y = 3(x^2 - y^2)##, then guess a (simple but non-trivial) functional form for ##\mu## that satisfies ##(\ast)##. Finally, solve the resulting exact ODE in the standard way.
Yeah, I had a question about this:
The contention is that ##N_x = 3x^2+2xy+1## and ##M_y = 3y^2+2xy+1## so that, when you subtract them, you get the expression you allude to. The problem I have with this is that presumably, ##y## is a function of ##x##. (After all, isn't that what we're trying to solve?) So why isn't ##N_x = 3x^2+2xy+x^2y'+1##? Why do we get to ignore ##y##'s dependence on ##x##?
 
  • #5
I looked at it as a first-order differential form in two variables.
 
  • #6
TeethWhitener said:
Yeah, I had a question about this:
The contention is that ##N_x = 3x^2+2xy+1## and ##M_y = 3y^2+2xy+1## so that, when you subtract them, you get the expression you allude to. The problem I have with this is that presumably, ##y## is a function of ##x##. (After all, isn't that what we're trying to solve?) So why isn't ##N_x = 3x^2+2xy+x^2y'+1##? Why do we get to ignore ##y##'s dependence on ##x##?
We are not ignoring the dependence of ##y## on ##x##. Consider - for ##C^1##-functions ##M## and ##N## on some open, simply connected domain ##\Omega## in the plane - the ODE
\begin{equation}\label{symm}
M(x,y)\,dx + N(x,y)\,dy = 0,
\end{equation}
which is a symmetric way of writing
\begin{equation}\label{asymm}
M(x,y) + N(x,y)\frac{dy}{dx} = 0.
\end{equation}
(If you are bothered by the differentials in \eqref{symm}, then just regard \eqref{symm} as formal notation for \eqref{asymm}. Otherwise, do as in post #5.) Exactness of \eqref{symm} just means that the LHS of \eqref{symm} is an "exact differential", i.e. there exists a differentiable function ##E## (the "energy") on ##\Omega## such that
\begin{equation}\label{energy}
E_x = M, \qquad E_y = N.
\end{equation}
It is a theorem that \eqref{symm} is exact if and only if
\begin{equation}\label{integ}
M_y = N_x.
\end{equation}
Now, suppose the existence of such a function ##E## for \eqref{symm} has been established. Then the chain rule shows that ##E## is conserved along any solution of \eqref{symm}. Conversely, if for some point ##(x_0,y_0)## in ##\Omega## it holds that ##E_y(x_0,y_0) \neq 0##, then the equation
$$
E(x, y) = E(x_0, y_0)
$$
defines a local ##C^1##-solution of \eqref{symm} through ##(x_0, y_0)##. So, the key to obtaining solutions of \eqref{symm} is to find ##E##, which amounts to solving \eqref{energy}, which in turn can only be done when \eqref{integ} holds. If it does not, then we better first transform ##M## and ##N## by multiplying with an integrating factor.
 
  • #7
Thank you both for responding. Maybe I should open up another thread about this. I understand how to solve the problem (i.e., I can crank through the calculation)
E.g., the integrating factor is ##\frac{1}{x^3y^3}##. Edit: I screwed up a negative sign in my original post.
and I understand that the solution works, but I don't understand why it works for the given problem (i.e., why we can ignore ##x##-dependence of ##y## when taking the partial derivative of ##N##).
jedishrfu said:
I looked at it as a first-order differential form in two variables.
I think this is probably ultimately the answer I'm looking for, but I don't think I have the math chops yet to understand it.
Krylov said:
there exists a differentiable function ##E## (the "energy") on ##\Omega## such that
$$E_x = M, \qquad E_y = N.$$
It is a theorem that \eqref{symm} is exact if and only if
$$M_y = N_x.$$
Yes, this follows from commutativity of partial derivatives.
Krylov said:
Now, suppose the existence of such a function ##E## for \eqref{symm} has been established. Then the chain rule shows that ##E## is conserved along any solution of \eqref{symm}.
This is fine: ##E_x+E_yy_x = 2E_x = 0 \therefore E = const \text{ along }x## and vice versa for ##y## dependence (not sure it's rigorous, but it's at least heuristic).
Krylov said:
Conversely, if for some point ##(x_0,y_0)## in ##\Omega## it holds that ##E_y(x_0,y_0) \neq 0##, then the equation
$$E(x, y) = E(x_0, y_0)$$
defines a local ##C^1##-solution of \eqref{symm} through ##(x_0, y_0)##.
I think maybe this is where I'm getting tripped up. This seems to indicate that the solution is not ##y:\mathbb{R} \rightarrow \mathbb{R}, x\mapsto y(x)##, but rather ##E: \Omega \subseteq \mathbb{R}^2 \rightarrow \mathbb{R}(const?), (x,y)\mapsto E(x_0,y_0)##, which would seem to indicate to me that ##x## and ##y## are both independent as long as they satisfy the constraint ##E(x,y) = E(x_0,y_0)##. I'll have to think a little harder about this.
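As a numerical sanity check of the integrating factor (a sketch; the sample point is arbitrary and finite differences stand in for the partials), multiplying by ##\mu = 1/(x^3y^3)## does make the form exact:

```python
# Verify exactness after multiplying by mu = 1/(x^3 y^3):
# we should find (mu*M)_y = (mu*N)_x.
def M(x, y):
    return y**3 + x*y**2 + y

def N(x, y):
    return x**3 + x**2*y + x

def mu(x, y):
    return 1.0 / (x**3 * y**3)

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2*h)

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2*h)

x0, y0 = 1.3, 0.7  # arbitrary sample point away from the axes
lhs = d_dy(lambda x, y: mu(x, y) * M(x, y), x0, y0)  # (mu*M)_y
rhs = d_dx(lambda x, y: mu(x, y) * N(x, y), x0, y0)  # (mu*N)_x
print(abs(lhs - rhs) < 1e-5)  # the transformed equation is exact
```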
 
  • #8
TeethWhitener said:
Edit: I screwed up a negative sign in my original post. and I understand that the solution works, but I don't understand why it works for the given problem (i.e., why we can ignore ##x##-dependence of ##y## when taking the partial derivative of ##N##).

That's just from the definition of a partial derivative, right? That is, if ##f=f(x,y),## then
$$\frac{\partial f}{\partial x}\equiv \lim_{h\to 0}\frac{f(x+h,y)-f(x,y)}{h}.$$
In this definition, ##y## just goes along for the ride. Or am I not understanding your question?
 
  • #9
Ackbach said:
That's just from the definition of a partial derivative, right? That is, if ##f=f(x,y),## then
$$\frac{\partial f}{\partial x}\equiv \lim_{h\to 0}\frac{f(x+h,y)-f(x,y)}{h}.$$
In this definition, ##y## just goes along for the ride. Or am I not understanding your question?
The point I'm trying to make is: if ##y## is a function of ##x##, then ##\frac{dF(y)}{dx} = \frac{dF(y)}{dy}\frac{dy}{dx}##.

Edit: Maybe this isn't right: Consider ##y(x)## and ##F(x,y) = xy##. Then, since ##y## is a function of ##x##, the partial derivative ##\partial_x F = y+x(dy/dx)## by the product rule. If we allow that ##y=y(x)##, we're assuming implicitly that ##dy/dx = 0##.
 
  • #10
TeethWhitener said:
The point I'm trying to make is: if ##y## is a function of ##x##, then ##\frac{dF(y)}{dx} = \frac{dF(y)}{dy}\frac{dy}{dx}##.

I'm with you now. I think the ##N## or ##M## functions in the integrating factor methods always view ##x## and ##y## as independent variables a priori. The fact that there is an ODE in the first place is going to force some sort of dependence between them, but we don't yet know what that is until we've made progress in solving the ODE.

As we can fairly straightforwardly check candidate solutions against the original DE, this might matter less. It seems to be the case often in ODE-solving that if we can find a solution, by whatever means, then the method was fine. Guess-and-check forms the backbone of a lot of methods! Of course, you're on more solid ground if you can prove existence and uniqueness of solutions (which is also useful for numerical solutions!).

And I'm probably not saying anything you don't already know.
 
  • #11
Ackbach said:
I'm with you now. I think the ##N## or ##M## functions in the integrating factor methods always view ##x## and ##y## as independent variables a priori.
That would have been my first guess, but for one small problem. But first, it requires me to answer the original problem of the week (basically following the method @Krylov outlined above):
The integrating factor is ##\mu(x,y)=\frac{1}{x^3y^3}##, as mentioned earlier. I can elaborate on how I guessed this solution if anyone wants to hear about it. Multiplying the integrating factor back into the original ODE gives:
$$\mu(x,y) M(x,y) dx + \mu(x,y) N(x,y) dy = 0$$
which is exact (i.e., there exists an ##F(x,y)## s.t. ##F_x = \mu(x,y) M(x,y)## and ##F_y = \mu(x,y) N(x,y)##).
Integrating, e.g., ##\mu(x,y) M(x,y) dx## gives
$$F(x,y) = \int \mu(x,y) M(x,y) dx = -\frac{1}{2x^2}-\frac{1}{2x^2y^2}-\frac{1}{xy}+f(y)$$
And now ##F_y(x,y) = \mu(x,y) N(x,y)## allows us to find ##f'(y) = 1/y^3##. Integrating and plugging back into the above expression gives the final answer:
$$F(x,y) = -\left(\frac{1}{2x^2} + \frac{1}{2y^2} + \frac{1}{2x^2y^2} + \frac{1}{xy}\right) = C$$
(I suppose you don't need the negative sign out front...)
Discussion:
The problem with saying that ##x## and ##y## are independent is seen when we take the expression above for ##F(x,y)## and implicitly differentiate it with respect to ##x##:
$$-\frac{1}{x^3}-\frac{1}{y^3}\frac{dy}{dx}-\frac{1}{x^3y^2}-\frac{1}{x^2y^3}\frac{dy}{dx}-\frac{1}{x^2y}-\frac{1}{y^2x}\frac{dy}{dx} = 0$$
Multiplying through by ##-x^3y^3\, dx## and collecting terms gives us our original:
$$(y^3+xy^2+y)\,dx + (x^3+x^2y+x)\,dy=0$$
So it's clear that this method works even assuming a dependence of ##y## on ##x##.
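The final answer can also be checked directly: the partial derivatives of ##F## should reproduce ##\mu M## and ##\mu N##. A minimal numerical sketch (sample point arbitrary):

```python
# Check that F_x = mu*M and F_y = mu*N for the answer
# F(x,y) = -(1/(2x^2) + 1/(2y^2) + 1/(2x^2 y^2) + 1/(xy)).
def F(x, y):
    return -(1/(2*x**2) + 1/(2*y**2) + 1/(2*x**2*y**2) + 1/(x*y))

def M(x, y):
    return y**3 + x*y**2 + y

def N(x, y):
    return x**3 + x**2*y + x

def mu(x, y):
    return 1.0 / (x**3 * y**3)

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2*h)

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2*h)

x0, y0 = 1.3, 0.7  # arbitrary sample point away from the axes
print(abs(d_dx(F, x0, y0) - mu(x0, y0)*M(x0, y0)) < 1e-5)  # F_x = mu*M
print(abs(d_dy(F, x0, y0) - mu(x0, y0)*N(x0, y0)) < 1e-5)  # F_y = mu*N
```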
I dunno, I'm finding more and more that whenever I work on math that I've assumed I understood, I always end up with questions that I just can't puzzle my way out of.
 
  • #12
TeethWhitener said:
Yes, this follows from commutativity of partial derivatives.
The existence of ##E## is equivalent to the equality ##M_y = N_x##. If either holds, then this implies that ##E## is a ##C^2##-function and, indeed, the second order partials commute.
TeethWhitener said:
This is fine: ##E_x+E_yy_x = 2E_x = 0 \therefore E = const \text{ along }x## and vice versa for ##y## dependence (not sure it's rigorous, but it's at least heuristic).
I would say: If ##(x, y(x))## is a solution, then
$$
\frac{d}{dx}E(x, y(x)) = E_x(x, y(x)) + E_y(x, y(x))\frac{dy}{dx} = M(x,y(x)) + N(x,y(x))\frac{dy}{dx} = 0.
$$
TeethWhitener said:
I think maybe this is where I'm getting tripped up. This seems to indicate that the solution is not ##y:\mathbb{R} \rightarrow \mathbb{R}, x\mapsto y(x)##, but rather ##E: \Omega \subseteq \mathbb{R}^2 \rightarrow \mathbb{R}(const?), (x,y)\mapsto E(x_0,y_0)##, which would seem to indicate to me that ##x## and ##y## are both independent as long as they satisfy the constraint ##E(x,y) = E(x_0,y_0)##. I'll have to think a little harder about this.
When I wrote that the equality ##E(x,y) = E(x_0, y_0)## defines a local ##C^1##-solution through ##(x_0, y_0)##, what I meant is that - provided ##E_y(x_0,y_0) \neq 0## and using the Implicit Function Theorem - this equality can be solved for ##y## as a ##C^1##-function of ##x## in a neighbourhood of ##(x_0, y_0)##.

This is the same thing as saying that the equation ##F(x, y) := x^2 + y^2 = 1## defines ##y## as a ##C^1##-function of ##x## in a neighbourhood of every point ##(x_0,y_0)## on the unit circle for which ##F_y(x_0,y_0) = 2 y_0 \neq 0##, i.e. for all points on the unit circle except ##(\pm 1, 0)##.

Also, it is the reason why a lot of people consider the function ##E## itself as the solution of the ODE, but in my opinion it is better to say that ##E## defines local solutions in a neighbourhood of every initial condition ##(x_0, y_0)##, provided that either ##E_y(x_0,y_0) \neq 0## - in which case ##y## can be written as a function of ##x## - or ##E_x(x_0,y_0) \neq 0## - in which case ##x## can be written as a function of ##y##.
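The unit-circle example can be made concrete with a short numerical sketch (the upper branch ##y = \sqrt{1-x^2}## is the local solution near any point with ##y_0 > 0##; the sample point is arbitrary):

```python
import math

def F(x, y):
    return x**2 + y**2

# Local solution guaranteed by the Implicit Function Theorem near
# (x0, y0) with F_y(x0, y0) = 2*y0 != 0: here the upper branch.
def y_branch(x):
    return math.sqrt(1 - x**2)

x0, y0 = 0.6, 0.8  # a point on the unit circle with y0 > 0
print(abs(F(x0, y_branch(x0)) - 1) < 1e-12)  # stays on the level set

# Its slope matches implicit differentiation: dy/dx = -x/y.
h = 1e-6
slope = (y_branch(x0 + h) - y_branch(x0 - h)) / (2*h)
print(abs(slope - (-x0 / y0)) < 1e-6)
```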
 
  • #13
TeethWhitener said:
Edit: Maybe this isn't right: Consider ##y(x)## and ##F(x,y) = xy##. Then, since ##y## is a function of ##x##, the partial derivative ##\partial_x F = y+x(dy/dx)## by the product rule. If we allow that ##y=y(x)##, we're assuming implicitly that ##dy/dx = 0##.
Be careful: In this case, the partial derivative of ##F## with respect to ##x## at the point ##(x,y(x))## is
$$
\frac{\partial F(x,y(x))}{\partial x} = y(x)
$$
but the total derivative of ##x \mapsto F(x,y(x))## at ##x## is
$$
\frac{d F(x,y(x))}{dx} = \frac{\partial F(x,y(x))}{\partial x} + \frac{\partial F(x,y(x))}{\partial y}\cdot \frac{dy}{dx} = y(x) + x \frac{dy}{dx}
$$
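The distinction can be illustrated numerically with ##F(x,y) = xy## and the concrete choice ##y(x) = x^2## (a sketch; the partial derivative holds ##y## fixed while the total derivative lets it vary):

```python
def F(x, y):
    return x * y

def yfun(x):
    return x**2  # a concrete choice of y(x), for illustration

h = 1e-6
x0 = 1.5  # arbitrary evaluation point

# Partial derivative of F in its first slot, evaluated at (x0, y(x0)):
# y is held fixed while x varies.
partial = (F(x0 + h, yfun(x0)) - F(x0 - h, yfun(x0))) / (2*h)

# Total derivative of x -> F(x, y(x)): y varies along with x.
total = (F(x0 + h, yfun(x0 + h)) - F(x0 - h, yfun(x0 - h))) / (2*h)

print(abs(partial - yfun(x0)) < 1e-6)  # partial equals y(x0) = x0^2
print(abs(total - 3*x0**2) < 1e-4)     # total equals d(x^3)/dx = 3 x0^2
```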
 
  • #14
Krylov said:
Be careful: In this case, the partial derivative of ##F## with respect to ##x## at the point ##(x,y(x))## is
$$
\frac{\partial F(x,y(x))}{\partial x} = y(x)
$$
but the total derivative of ##x \mapsto F(x,y(x))## at ##x## is
$$
\frac{d F(x,y(x))}{dx} = \frac{\partial F(x,y(x))}{\partial x} + \frac{\partial F(x,y(x))}{\partial y}\cdot \frac{dy}{dx} = y(x) + x \frac{dy}{dx}
$$
Is this true? Let ##y(x) = x^2##. Then ##F(x,y(x)) = xy(x) = x^3##. Doesn't your argument above claim that ##\partial_x (xy(x)) = \partial_x x^3 = y(x) = x^2##? If instead we have ##\partial_x (xy(x)) = y(x) + x(\partial_x y(x)) = x^2 + x\cdot 2x = 3x^2## is that not correct?
 
  • #15
TeethWhitener said:
Discussion:
The problem with saying that ##x## and ##y## are independent is seen when we take the expression above for ##F(x,y)## and implicitly differentiate it with respect to ##x##:
$$-\frac{1}{x^3}-\frac{1}{y^3}\frac{dy}{dx}-\frac{1}{x^3y^2}-\frac{1}{x^2y^3}\frac{dy}{dx}-\frac{1}{x^2y}-\frac{1}{y^2x}\frac{dy}{dx} = 0$$
Multiplying through by ##x^3y^3\, dx## and collecting terms gives us our original:
$$(y^3+xy^2+y)\,dx + (x^3+x^2y+x)\,dy=0$$
So it's clear that this method works even assuming a dependence of ##y## on ##x##.
But ##x## and ##y## are not independent: There is a relation imposed on them by the condition ##F(x,y) = \text{constant}##. However, ##F## itself is a function of two independent variables.

TeethWhitener said:
Is this true? Let ##y(x) = x^2##. Then ##F(x,y(x)) = xy(x) = x^3##. Doesn't your argument above claim that ##\partial_x (xy(x)) = \partial_x x^3 = y(x) = x^2##?
No, I claimed that the partial derivative of the function ##F## with respect to its first argument, evaluated at the point ##(x,y(x))## is equal to ##y(x)##. On the other hand, you first substitute for ##y(x)## and then you differentiate. This is not the same.

TeethWhitener said:
If instead we have ##\partial_x (xy(x)) = y(x) + x(\partial_x y(x)) = x^2 + x\cdot 2x = 3x^2## is that not correct?
Of course I understand what you mean, but I did not agree with your notation and terminology, specifically when you wrote
TeethWhitener said:
the partial derivative ##\partial_x F = y+x(dy/dx)##.
I do hope this is more helpful than irritating. At least it is intended as such. I appreciated the discussion very much.
 
  • #16
I'm down to a polynomial equation ##f(x,y)=0## of degree four in ##y## which in principle is solvable. It looks a bit better in polar coordinates as ##C=r^2(r^2+2)(1+\sin (2\varphi ))## but is still not really convenient. The funniest intermediate result I had was ##\frac{du}{dv}+\frac{v}{u}=0## which the other equations are a solution to. I'm curious whether there is a chance of an elegant expression.
 
  • #17
Krylov said:
I do hope this is more helpful than irritating. At least it is intended as such. I appreciated the discussion very much
I appreciate your patience with me. I've read your posts more closely and dived into a few of my old diff eq books, and I think I understand the implicit function theorem a little better. It seems like there's a tendency toward abuse of notation that was tripping me up.

So what I gather now is that we have a twice-differentiable function ##E:\mathbb{R}^2\to\mathbb{R}## in two independent variables, and the implicit function theorem says that if we choose a point ##(x_0,y_0)## where ##E(x_0,y_0)=0## (plus other appropriate assumptions), we are guaranteed the existence of a function ##f:U\subseteq\mathbb{R}\to\mathbb{R}## such that ##f(x_0)=y_0## and ##E(x,f(x))=0## for all ##x\in U##.

So I think I was confusing the independent variable ##y## with the function ##f(x)## that gets associated with ##y## via the implicit function theorem.
 
  • #18
Solution posted on MHB:
http://mathhelpboards.com/potw-university-students-34/problem-week-281-sep-18-2017-a-22325.html
 

