Is there a solution to this simple 1st order PDE?

  • Thread starter: docnet
In summary: the PDE has an analytic solution ##u(t,x)=f(t)e^x##, where ##f(t)## is an arbitrary function defined for ##t\in(-\infty,\infty)##. If ##x\in R^n## with ##n>1##, the description changes to account for the extra spatial variables.
  • #1
docnet
This isn't homework, but I was just wondering whether the following PDE has an analytic solution.

$$\partial_x u(t,x)=u(t,x)$$

where ##x\in R^n## and ##\partial_x## denotes a derivative with respect to the spatial variables.
 
  • #2
If ##\ u = f(t)e^x\ ## then ##\quad \partial_x u(t,x)=u(t,x)##
 
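As a quick sanity check of the solution above (my own addition, using SymPy rather than anything from the thread), one can verify symbolically that ##u=f(t)e^x## satisfies ##\partial_x u = u##:

```python
import sympy as sp

t, x = sp.symbols("t x")
f = sp.Function("f")

# Candidate solution from the post above: u(t, x) = f(t) * e^x,
# with f an arbitrary (differentiable) function of t only
u = f(t) * sp.exp(x)

# The PDE residual du/dx - u should simplify to zero
residual = sp.simplify(sp.diff(u, x) - u)
print(residual)  # prints 0
```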
  • #3
BvU said:
If ##\ u = f(t)e^x\ ## then ##\quad \partial_x u(t,x)=u(t,x)##
Thank you BvU for preventing my headache. Just so that I'm understanding this correctly, did you define ##f(t)## to be an arbitrary function defined for ##t\in(-\infty,\infty)##, and you defined ##u(t,x)## for all of ##R^n##?
 
  • #4
docnet said:
Thank you BvU for preventing my headache. Just so that I'm understanding this correctly, did you define ##f(t)## to be an arbitrary function defined for ##t\in(-\infty,\infty)##, and you defined ##u(t,x)## for all of ##R^n##?
##u(t, x)## would be defined on some subset of ##\mathbb R^2##, not ##\mathbb R^n##
 
  • #5
Mark44 said:
##u(t, x)## would be defined on some subset of ##\mathbb R^2##, not ##\mathbb R^n##

What about the case when ##x\in R^n##, so that ##u(t,x)## means ##u(t,x_1,x_2,\dots,x_n)##?
Can ##u(t,x)## be a shorthand way of writing the latter case?
 
  • #6
It can be, but I'm not sure exactly what ##\partial_x## would mean in that case.

If it's the partial derivative with respect to each variable consecutively, then ##e^{\sum x_i}## still works.
 
  • #7
Office_Shredder said:
It can be, but I'm not sure exactly what ##\partial_x## would mean in that case.

If it's the partial derivative with respect to each variable consecutively, then ##e^{\sum x_i}## still works.
Thank you. May I ask one more question?

Is the solution ##e^{\sum x_i}## to the PDE ##\partial_x u(t,x)=u(t,x)## derived from a theorem?

We know the solution ##v=Ce^t## to the ODE ##v'=v## is found by separating the variables ##v## and ##t## and integrating. Is the PDE case analogous, or much more complicated?

Edit reason: clarity
 
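For comparison with the ODE case mentioned above, here is a sketch I'm adding (not from the thread) that lets SymPy's `dsolve` recover the general solution of ##v'=v##:

```python
import sympy as sp

t = sp.symbols("t")
v = sp.Function("v")

# The ODE v' = v from the post above; dsolve finds the general solution
sol = sp.dsolve(sp.Eq(v(t).diff(t), v(t)), v(t))
print(sol)  # v(t) = C1*exp(t), i.e. v = C e^t

# Double-check: the right-hand side satisfies v' = v
assert sp.simplify(sol.rhs.diff(t) - sol.rhs) == 0
```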
  • #8
I was mostly just copying the idea of the solution for the two dimensional case.

In general, PDEs are much harder to solve than ODEs.
 
  • #9
docnet said:
What about the case when ##x\in R^n##, so that ##u(t,x)## means ##u(t,x_1,x_2,\dots,x_n)##?
Can ##u(t,x)## be a shorthand way of writing the latter case?
I suppose it could, but that would have to be given information. Since you didn't show any such information, it's really a stretch to make that assumption. And as Office_Shredder mentioned, it would be meaningless or at least ambiguous to talk about the partial with respect to x.
 
  • #10
:olduhh: Aren't you just bamboozling yourself with notation? Isn't this just ##\dfrac {du}{dx}=u## ? For which the solution is ##u=Ae^{x}## - just the familiar exponential function or curve of ##u## against ##x##. With ##A## an arbitrary constant, determined by the 'initial' value of ##u## at any particular ##x##. Then this curve may be magnified or contracted as any extraneous variable, call it ##t##, changes any which way. Think of this as changing the initial condition in any which way and represent it: ##u=A(t)e^{x}## .
 
  • #11
Mark44 said:
I suppose it could, but that would have to be given information. Since you didn't show any such information, it's really a stretch to make that assumption. And as Office_Shredder mentioned, it would be meaningless or at least ambiguous to talk about the partial with respect to x.

I'm sorry about that. Not making excuses, but my PDE professor likes to use the shorthand ##x \in R^n## to mean ##x=(x_1,x_2,\dots,x_n)##; ##u(t,x)## means ##u(t,x_1,x_2,\dots,x_n)##, and ##\partial_x## means partial derivatives with respect to each spatial variable consecutively. His notations have caused confusion for me and other classmates before, but we got familiar with them over time. The professor likes to think in abstract terms and use abstract notation whenever possible. I realize his notations are not usual.

epenguin said:
:olduhh: Aren't you just bamboozling yourself with notation? Isn't this just ##\dfrac {du}{dx}=u## ? For which the solution is ##u=Ae^{x}## - just the familiar exponential function or curve of ##u## against ##x##. With ##A## an arbitrary constant, determined by the 'initial' value of ##u## at any particular ##x##. Then this curve may be magnified or contracted as any extraneous variable, call it ##t##, changes any which way. Think of this as changing the initial condition in any which way and represent it: ##u=A(t)e^{x}## .

Wow, I think you make a great point. Suppose ##x\in R^n## where ##n>1##; then how will your description change to account for extra spatial variables? Thanks, I will chew on this overnight.
 
  • #12
We can make the change of variables ##x_1=u+v## and ##x_2=u-v##. Ignore t and pretend we're doing it in two dimensions.

Then ##\partial_{x_1}=\partial_u+\partial_v## and ##\partial_{x_2}=\partial_u-\partial_v##, so ##\partial_x =\partial^2_u+\partial^2_v##. This means once we have any particular solution, we can add homogeneous solutions, which are harmonic functions. There are a lot of them; for example, the real and imaginary parts of any complex differentiable function are both harmonic.
 
  • #13
docnet said:
Thank you BvU for preventing my headache. Just so that I'm understanding this correctly, did you define ##f(t)## to be an arbitrary function defined for ##t\in(-\infty,\infty)##, and you defined ##u(t,x)## for all of ##R^n##?
docnet said:
where ##x\in R^n## and ##\partial_x## implies a derivative with respect to the spatial variables.
I humbly concede that I totally overlooked the possible vector character of ##x## (and possibly also the vector character of ##u## :rolleyes: ?) because I am so used (spoiled rotten?) to a clear distinction in notation: either ##\bf x, \ \bf u## or ##\vec x, \ \vec u## -- used consistently -- denote vectors; ## \ \ x, \ u## denote scalars. Even then, ##\ \displaystyle {\partial\over\partial\vec x} ## is to be avoided in favor of ##\ \nabla\ ## or even ##\ \vec\nabla\ ##

##\nabla\cdot\vec u\ ## is unambiguously a scalar, and ##\nabla u\ ## is unambiguously a vector.
The Jacobian ##\nabla \vec u\ ## is a matrix and personally I prefer ##\ \vec\nabla\vec u\ ##
(but I still have to find a way to make it look reasonable : ##\vec \nabla \vec{\vphantom{\nabla}u}## is ugly too :biggrin: )

@Office_Shredder saves my skin in #6 and you explain how this can pop up in #11. I sympathize but can't help and can't force PDEprof to share my preferences.

What remains in my perception is that PDEprof does do harm using ##\partial _x## this way: acting on a scalar it yields a vector that doesn't look like one and, by the same token: acting on a vector that looks like a scalar it yields a matrix that looks like a scalar (or a vector) too.

And now I must chew on #12 because I don't understand it ... :cry:

##\ ##
 
  • #14
Office_Shredder said:
We can make the change of variables ##x_1=u+v## and ##x_2=u-v##. Ignore t and pretend we're doing it in two dimensions.

Then ##\partial_{x_1}=\partial_u+\partial_v## and ##\partial_{x_2}=\partial_u-\partial_v##, so ##\partial_x =\partial^2_u+\partial^2_v##. This means once we have any particular solution, we can add homogeneous solutions, which are harmonic functions. There are a lot of them; for example, the real and imaginary parts of any complex differentiable function are both harmonic.
Man, I don't understand this at all... Can you flesh it out for the slow kids? Sorry.
 
  • #15
Office_Shredder said:
It can be, but I'm not sure exactly what ##\partial_x## would mean in that case.

If it's the partial derivative with respect to each variable consecutively, then ##e^{\sum x_i}## still works.
In which case I would want to see an index: ##\ \partial_{x_i} \ ##
$$\partial_{x_i} u(t,x)=u(t,x)$$meaning
$${\partial u(t,\vec x)\over \partial x_i}=u(t,\vec x)$$
 
  • #16
Or does "derivative with respect to each spatial variable consecutively" mean [tex]\frac{\partial^n}{\partial x_1 \cdots \partial x_n}[/tex] which makes [tex]
\frac{\partial ^n u}{\partial x_1 \dots \partial x_n} = u[/tex] make sense for scalar or vector [itex]u[/itex]. This can be solved by separation of variables.
 
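Filling in the separation-of-variables step sketched above (my own working, under the usual product-solution assumption): write ##u## as a product of one-variable factors,

$$u(x_1,\dots,x_n)=\prod_{i=1}^n X_i(x_i) \quad\Longrightarrow\quad \frac{\partial^n u}{\partial x_1\cdots\partial x_n}=\prod_{i=1}^n X_i'(x_i)=\prod_{i=1}^n X_i(x_i).$$

Trying ##X_i(x_i)=e^{\lambda_i x_i}## gives ##\left(\prod_i\lambda_i\right)e^{\sum_i \lambda_i x_i}=e^{\sum_i \lambda_i x_i}##, so any exponents with ##\prod_i \lambda_i = 1## work; taking ##\lambda_i=1## for all ##i## recovers the earlier solution ##e^{\sum x_i}##.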
  • #17
hutchphd said:
Man, I don't understand this at all... Can you flesh it out for the slow kids? Sorry.

It's good that you don't, because it was full of mistakes.

Ok, consider the change of variables
##u=x_1+x_2##, ##v=x_1-x_2## (not the change of variables I said, I flipped it)

Then by the chain rule, ##\frac{\partial f}{\partial x_1}= \frac{\partial f}{\partial u} \frac{\partial u}{\partial x_1} + \frac{\partial f}{\partial v} \frac{\partial v}{\partial x_1}##

The partial derivatives of ##u## and ##v## are just 1 of course.

Which I will rewrite in the notation of the original post as ##\partial_{x_1} = (\partial_u+\partial_v)##

Similarly, ##\partial_{x_2} = (\partial_u-\partial_v)##

Notice that these partial derivative operators just multiply like ##\partial_x \partial_y## is the partial derivative with respect to y, then with respect to x (and these commute).

So ##\partial_{x_1} \partial_{x_2} = (\partial_u+\partial_v)(\partial_u-\partial_v)##.

Expanding the right hand side gives ##\partial^2_u+\partial_v\partial_u - \partial_u \partial_v - \partial^2_v##. The middle two terms cancel, leaving us with ##\partial^2_u - \partial^2_v## (not the plus sign I promised).

Then I made a mistake, since I forgot ##u## showed up on the right hand side. If you had ##\partial_x u = f(x)## and some solution ##u_f##, then given any solution ##u_0## of ##\partial_x u = 0##, ##u_f+u_0## is also a solution to ##\partial_x u =f(x)##. But this differential equation isn't in that form, so that's not a useful observation.

My apologies for the mistakes.

Edit to add:
I realize there are some simple solutions to this of the form ##f(u)+g(v)##. Then we get ##f''(u)=f(u)## and ##g''(v)=-g(v)##, so anything like ##Ae^u + Be^{-u} + C \cos(v) + D\sin(v)## is a solution. Translating back to ##x## coordinates, ##Ae^{x_1+x_2}+Be^{-(x_1+x_2)}+ C \cos(x_1-x_2)+D\sin(x_1-x_2)## are all solutions of the two-dimensional equation. (I'm still ignoring ##t## from the OP, since you can throw in functions of ##t## wherever you would have an arbitrary constant.)
 
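A symbolic check of the solution family above (my own addition, via SymPy) against the two-dimensional equation ##\partial_{x_1}\partial_{x_2}u=u##:

```python
import sympy as sp

x1, x2, A, B, C, D = sp.symbols("x1 x2 A B C D")

# Solution family from the post above, translated back to x coordinates
u = (A * sp.exp(x1 + x2) + B * sp.exp(-(x1 + x2))
     + C * sp.cos(x1 - x2) + D * sp.sin(x1 - x2))

# Residual of the mixed-derivative equation u_{x1 x2} = u
residual = sp.simplify(sp.diff(u, x1, x2) - u)
print(residual)  # prints 0
```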

FAQ: Is there a solution to this simple 1st order PDE?

What is a 1st order PDE?

A 1st order PDE (partial differential equation) is an equation involving only the first partial derivatives of an unknown function of several variables. It relates the values of the function at a given point to its first partial derivatives at that point.

What types of problems can be solved using 1st order PDEs?

1st order PDEs are commonly used to model physical phenomena, such as heat transfer, fluid flow, and electromagnetic fields. They can also be used in economics, finance, and other fields to describe relationships between variables.

Is there a general solution to all 1st order PDEs?

No, there is not a single general solution that can be applied to all 1st order PDEs. Each equation must be solved using specific techniques and methods depending on its form and boundary conditions.

How do you solve a 1st order PDE?

The method for solving a 1st order PDE depends on the type of equation and its boundary conditions. Common techniques include separation of variables and the method of characteristics (integrating along characteristic curves).

Are there any software programs available to solve 1st order PDEs?

Yes, there are many software programs that can solve 1st order PDEs, such as MATLAB, Mathematica, and Maple. These programs can find symbolic solutions in some cases and otherwise use numerical methods to approximate solutions.
