Exploring Nonlinear Solutions for PDEs of Functions f:\mathbb{R}^2\to\mathbb{R}

  • Thread starter jostpuur
In summary, the conversation discusses the partial differential equation (\partial_1 f(x_1,x_2))^2 + (\partial_2 f(x_1,x_2))^2 = 1 and its possible solutions f:\mathbb{R}^2\to\mathbb{R}. The only obvious solutions are affine, and at first it seems that nonlinear solutions should exist as well, although they are hard to pin down. A proof sketch then shows that every solution defined and differentiable on all of \mathbb{R}^2 is affine, while nonlinear solutions such as \sqrt{x_1^2+x_2^2} appear once the domain is restricted or differentiability fails at isolated points. Several explicit examples and a hyperbolic variant of the PDE are also discussed.
  • #1
jostpuur
I'm interested to know as much as possible about functions [itex]f:\mathbb{R}^2\to\mathbb{R}[/itex] that satisfy the PDE

[tex]
(\partial_1 f(x_1,x_2))^2 + (\partial_2 f(x_1,x_2))^2 = 1.
[/tex]

The only obvious solutions are

[tex]
f(x_1,x_2) = x_1\cos(\theta) + x_2\sin(\theta),
[/tex]

but this is a linear function with respect to the variables [itex]x_1,x_2[/itex].
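
These are indeed solutions, because their gradient is the constant unit vector [itex](\cos(\theta),\sin(\theta))[/itex]:

[tex]
(\partial_1 f)^2 + (\partial_2 f)^2 = \cos^2(\theta) + \sin^2(\theta) = 1.
[/tex]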

I was thinking that nonlinear solutions must exist too, but it seems extremely difficult to learn anything about them.

Having thought about it more, I'm also considering the possibility that nonlinear solutions don't exist (except for the affine solutions, which are essentially linear). But if they don't exist, how could such a claim be proven?
 
  • #2
Well, adding a constant is always possible, but not really interesting.

Here is an argument, which I think can be made more formal to get a proof:
Locally, the function always looks like an inclined plane whose gradient has magnitude 1; this is just what the equation tells us. Pick an arbitrary point in the (x1,x2)-plane. If we make a path starting there and always following the gradient, where can we get? After a path length of d, the value of f has increased by d. We must then be at a distance of exactly d from the starting point; otherwise there would be a shorter path between the same values of f, which would force a gradient larger than 1 somewhere. Therefore all those paths are straight lines. They cannot intersect, so they all have to be parallel, and you get your inclined plane as the only solution.
 
  • #3
So we define a mapping [itex][0,T]\to\mathbb{R}^2[/itex], [itex]t\mapsto\varphi(t)[/itex], that always moves in the direction of the gradient, so that

[tex]
\dot{\varphi}(t)\cdot \nabla f(\varphi(t)) = \|\dot{\varphi}(t)\|
[/tex]

Then

[tex]
f(\varphi(T))-f(\varphi(0)) = \int\limits_0^T D_t f(\varphi(t)) dt = \int\limits_0^T \|\dot{\varphi}(t)\| dt \geq \|\varphi(T) - \varphi(0)\|
[/tex]

Now you claim that we will have "[itex]=[/itex]" instead of "[itex]\geq[/itex]" in the last inequality?

"otherwise there would be a shorter path"

We could define

[tex]
\psi(t) = \varphi(0) + \frac{t}{T}\big(\varphi(T)-\varphi(0)\big)
[/tex]

[tex]
\dot{\psi}(t) = \frac{1}{T}\big(\varphi(T) - \varphi(0)\big)
[/tex]

[tex]
f(\psi(T))-f(\psi(0)) = \int\limits_0^T \dot{\psi}(t)\cdot\nabla f(\psi(t))\,dt = \int\limits_0^T \frac{1}{T}\big(\varphi(T)-\varphi(0)\big)\cdot\nabla f(\psi(t))\,dt
[/tex]

and since [itex]\|\nabla f(\psi(t))\|=1[/itex], the Cauchy-Schwarz inequality gives

[tex]
|f(\psi(T))-f(\psi(0))| \leq \frac{1}{T}\int\limits_0^T \big\|\varphi(T)-\varphi(0)\big\|\,dt = \|\varphi(T)-\varphi(0)\|
[/tex]

I see: since [itex]\psi(0)=\varphi(0)[/itex] and [itex]\psi(T)=\varphi(T)[/itex], the left-hand side equals [itex]f(\varphi(T))-f(\varphi(0))[/itex], so combined with the earlier inequality there will be equality!
 
  • #4
I'm getting tripped up by very simple things now...

So

[tex]
\int\limits_0^T \|\dot{\varphi}(t)\|dt = \|\varphi(T)-\varphi(0)\|
[/tex]

implies that the curve is a straight line? How do you prove that nicely?
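
Here is a sketch that seems to work, assuming [itex]\varphi[/itex] is [itex]C^1[/itex] and [itex]\varphi(T)\neq\varphi(0)[/itex]. Write [itex]e = \big(\varphi(T)-\varphi(0)\big)/\|\varphi(T)-\varphi(0)\|[/itex]. Then

[tex]
\|\varphi(T)-\varphi(0)\| = \big(\varphi(T)-\varphi(0)\big)\cdot e = \int\limits_0^T \dot{\varphi}(t)\cdot e\, dt \leq \int\limits_0^T \|\dot{\varphi}(t)\|\, dt,
[/tex]

so if the last inequality is an equality, the continuous nonnegative integrand [itex]\|\dot{\varphi}(t)\| - \dot{\varphi}(t)\cdot e[/itex] must vanish identically, which forces [itex]\dot{\varphi}(t)[/itex] to point in the fixed direction [itex]e[/itex] whenever it is nonzero. Hence the curve is a straight line segment.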
 
  • #5
Well, the straight-line question is a different problem, which probably has a solution not related to PDEs. So the original problem is mostly solved. A peculiar result! Only affine solutions...

I'd be slightly interested to know what happens if I define

[tex]
f(x_1,0) = \sqrt{1 + x_1^2}
[/tex]

and then demand

[tex]
(\partial_1 f(x_1,x_2))^2 + (\partial_2 f(x_1,x_2))^2 = 1.
[/tex]

How far can the function be extended from the line? What kind of problems eventually prevent the extension to the whole plane?
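
One immediate observation: on the initial line the data already determines [itex]\partial_1 f(x_1,0) = x_1/\sqrt{1+x_1^2}[/itex], so the PDE forces

[tex]
\partial_2 f(x_1,0) = \pm\frac{1}{\sqrt{1+x_1^2}},
[/tex]

i.e. there is already a sign choice to be made before the extension even starts.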
 
  • #6
Hey guys!

[tex]
f(x_1,x_2) = \sqrt{x_1^2 + x_2^2}
[/tex]

is a solution to the original PDE! Not very affine, I would say :wink:

The trick is that this is not differentiable at [itex](x_1,x_2)=(0,0)[/itex], which is where mfb's straight lines would intersect.
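
For the record, the direct check away from the origin:

[tex]
\nabla f(x_1,x_2) = \frac{(x_1,x_2)}{\sqrt{x_1^2+x_2^2}}, \qquad (\partial_1 f)^2+(\partial_2 f)^2 = \frac{x_1^2+x_2^2}{x_1^2+x_2^2} = 1.
[/tex]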
 
  • #7
But then the equation is not satisfied everywhere in R^2 ;).
 
  • #8
I couldn't know in advance what the theory would turn out to be. Now it seems more reasonable to study functions [itex]f:\Omega\to\mathbb{R}[/itex] where [itex]\Omega\subset\mathbb{R}^2[/itex].

Here's another example:

[tex]
\Omega = \;]-2,+2[\;\times\; ]-1,+1[
[/tex]

[tex]
\Omega_{-1} = \{x\in\Omega\;|\; -2<x_1<0,\quad 1-|x_1|<x_2\}
[/tex]
[tex]
\Omega_0 = \{x\in\Omega\;|\; x_2\leq 1 - |x_1|\}
[/tex]
[tex]
\Omega_{+1} = \{x\in\Omega\;|\; 0<x_1<+2,\quad 1-|x_1| < x_2\}
[/tex]

[tex]
f(x_1,x_2)=\left\{\begin{array}{ll}
\sqrt{(x_1+2)^2 + (x_2+1)^2} - \sqrt{2},\quad & x\in\Omega_{-1} \\
-\sqrt{x_1^2 + (x_2-1)^2} + \sqrt{2},\quad & x\in\Omega_0 \\
\sqrt{(x_1-2)^2+ (x_2+1)^2} - \sqrt{2},\quad & x\in\Omega_{+1}\\
\end{array}\right.
[/tex]

The idea of this example reveals that if [itex]f[/itex] is known in some small set, its extension to a larger domain isn't necessarily unique.
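
A minimal symbolic check (a SymPy sketch, not part of the argument above) that each branch has a gradient of unit length away from the center of its square root; continuity along the boundaries between the [itex]\Omega_i[/itex] is a separate, easy computation:

[code]
# Check that each branch of the piecewise function has |grad f| = 1
# away from the center of its square root (expected output: 1, 1, 1).
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

branches = [
    sp.sqrt((x1 + 2)**2 + (x2 + 1)**2) - sp.sqrt(2),   # on Omega_{-1}
    -sp.sqrt(x1**2 + (x2 - 1)**2) + sp.sqrt(2),        # on Omega_0
    sp.sqrt((x1 - 2)**2 + (x2 + 1)**2) - sp.sqrt(2),   # on Omega_{+1}
]

for f in branches:
    grad_sq = sp.diff(f, x1)**2 + sp.diff(f, x2)**2    # (d_1 f)^2 + (d_2 f)^2
    print(sp.simplify(grad_sq))
[/code]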
 
  • #9
Nice function.

$$f(x_1,x_2)=\left\{\begin{array}{ll}
\sqrt{(x_1+2)^2 + (x_2+1)^2} - \sqrt{2},\quad & x\in\Omega_{-1} \\
-\sqrt{(x_1-2)^2 + (x_2-3)^2} + 3\sqrt{2},\quad & x\in\Omega_0 \cup \Omega_{+1} \\
\end{array}\right.$$
I guess this will work, too (the critical point, the center [itex](2,3)[/itex] of the second square root, is not at the edge of Ω; it lies outside Ω entirely). And many other similar functions.
 
  • #10
This PDE is interesting because it could be related to the topics of the thread "Determine the function from a simple condition on its Jacobian matrix".

If [itex]\Omega[/itex] consists of those points [itex](x_0,x_1)[/itex] where [itex]x_0>0[/itex] and [itex]|x_1| < x_0[/itex], then the function [itex]f:\Omega\to\mathbb{R}[/itex]

[tex]
f(x_0,x_1) = \sqrt{x_0^2 - x_1^2}
[/tex]

satisfies a PDE

[tex]
(\partial_0 f)^2 - (\partial_1 f)^2 = 1.
[/tex]
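
The direct check on [itex]\Omega[/itex]:

[tex]
\partial_0 f = \frac{x_0}{\sqrt{x_0^2-x_1^2}}, \qquad \partial_1 f = \frac{-x_1}{\sqrt{x_0^2-x_1^2}}, \qquad (\partial_0 f)^2 - (\partial_1 f)^2 = \frac{x_0^2 - x_1^2}{x_0^2-x_1^2} = 1.
[/tex]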

In the end I didn't figure out how this would imply anything for the isometry discussion, but anyway, it is at least distantly interesting.
 
  • #11
For the PDE in #10, sinh and cosh give another set of solutions, e.g. [itex]f(x_0,x_1) = x_0\cosh(\theta) + x_1\sinh(\theta)[/itex], since [itex]\cosh^2(\theta) - \sinh^2(\theta) = 1[/itex].
[itex]f(x_0,x_1) = \pm\sqrt{c+1}\,x_0 \pm \sqrt{c}\,x_1[/itex] for arbitrary [itex]c \geq 0[/itex] works, too, since [itex](c+1) - c = 1[/itex].
 

FAQ: Exploring Nonlinear Solutions for PDEs of Functions f:\mathbb{R}^2\to\mathbb{R}

What does the equation (D_1 f)^2+(D_2 f)^2=1 represent?

The equation (D_1 f)^2+(D_2 f)^2=1 says that the gradient of the function f has length 1 at every point of its domain. It is a first-order nonlinear partial differential equation, known as the two-dimensional eikonal equation with unit speed.

How do you interpret the terms (D_1 f)^2 and (D_2 f)^2 in this equation?

The terms (D_1 f)^2 and (D_2 f)^2 are the squares of the first-order partial derivatives of f with respect to x_1 and x_2, i.e. the squares of the two components of the gradient ∇f. Their sum is the squared magnitude of the gradient.

What is the geometric interpretation of the equation (D_1 f)^2+(D_2 f)^2=1?

Geometrically, the equation says that at every point the gradient vector ∇f lies on the unit circle: its direction is arbitrary, but its length is always 1. Consequently f increases at unit rate in the direction of steepest ascent, its level curves are spaced one unit apart, and f behaves locally like a signed distance function.

How is the equation (D_1 f)^2+(D_2 f)^2=1 used in real-world applications?

The eikonal equation is used in geometrical optics and seismology, where its solutions describe wavefronts and travel times, and in level-set and fast-marching methods, where signed distance functions are computed by solving ||∇f||=1 numerically. Gradient-magnitude computations of this kind also appear in image processing, for example in edge detection.

Are there any other forms of the equation (D_1 f)^2+(D_2 f)^2=1?

Yes. Since (D_1 f)^2+(D_2 f)^2=||∇f||^2, where ||∇f|| is the magnitude of the gradient of f, the equation can be written compactly as ||∇f||=1. It extends directly to higher dimensions, e.g. (D_1 f)^2+(D_2 f)^2+(D_3 f)^2=1 in three-dimensional space, and the general eikonal equation replaces the constant 1 by a positive function of position, ||∇f||=n(x).
