Can Non-Continuous Functions Have Fixed Points on Compact Convex Sets?

In summary, convexity is a mathematical concept that is important in scientific research because it allows for the development of efficient algorithms and optimization techniques. Fixed points, which are points a function leaves unchanged, are significant in convexity because the classical fixed-point theorems require compact convex domains and because optimal solutions of convex problems can be characterized as fixed points of simple update maps. Convexity is closely related to optimization problems and is applied in real-world settings such as economics, physics, and computer science; in machine learning, it plays a crucial role in ensuring efficient optimization and reliable training.
  • #1
cateater2000
Suppose that K is a nonempty compact convex set in R^n. If f:K->K is not continuous, then f will not have any fixed point.


I believe this statement is false, but I cannot think of a function (not continuous) that maps a compact convex set to itself and still has a fixed point.

any tips would be appreciated
 
  • #2
There are a lot of non-continuous functions! For example, in R^1, define f(1) = 2, f(2) = 1, and f(x) = x for any x other than 1 or 2. That works, doesn't it?
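Here is a minimal Python check of that swap example; restricting it to the compact convex set K = [0, 3] is my own choice (any closed interval containing 1 and 2 would work just as well).

```python
# Minimal numerical check of the swap example above on K = [0, 3].

def f(x):
    """Swap 1 and 2, leave every other point alone."""
    if x == 1:
        return 2
    if x == 2:
        return 1
    return x

# Sample a few points of K and report which of them are fixed points of f.
for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:
    print(f"x = {x}: f(x) = {f(x)}, fixed point: {f(x) == x}")

# Every sampled point except 1.0 and 2.0 is a fixed point, even though f is
# discontinuous at 1 and 2.
```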
 
  • #3


There are indeed functions that are not continuous but still have fixed points on a compact convex set. One example on the interval [0,1] is the function f with f(0) = 0 and f(x) = 1 for x > 0. It is discontinuous at x = 0, yet it maps [0,1] into itself and has two fixed points, x = 0 and x = 1.

Another example is the swap map from post #2: on [0,3], define f(1) = 2, f(2) = 1, and f(x) = x otherwise. It is discontinuous at 1 and 2, but every other point of [0,3] is a fixed point.

So the statement is false: dropping continuity does not rule out fixed points, it only removes the guarantee. The Brouwer fixed-point theorem says that every continuous map of a nonempty compact convex set K in R^n into itself has a fixed point; without continuity, compactness and convexity of K are not enough. For instance, the map g on [0,1] with g(x) = 1 for x < 1/2 and g(x) = 0 for x >= 1/2 sends [0,1] into itself but has no fixed point at all.

In summary, a discontinuous self-map of a compact convex set may or may not have a fixed point; only continuity (via Brouwer's theorem) guarantees one.
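For anyone who wants to see this numerically, here is a small Python sketch (the two toy maps are my own, matching the examples above) that samples a grid in K = [0, 1] and reports the fixed points of each map.

```python
# Numerical check of two discontinuous self-maps of the compact convex set [0, 1].

def with_fixed_points(x):
    """Discontinuous at 0, maps [0,1] into itself, fixed points at 0 and 1."""
    return 0.0 if x == 0 else 1.0

def without_fixed_points(x):
    """Discontinuous at 1/2, maps [0,1] into itself, no fixed point at all."""
    return 1.0 if x < 0.5 else 0.0

grid = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0

for name, f in [("with", with_fixed_points), ("without", without_fixed_points)]:
    fixed = [x for x in grid if f(x) == x]
    print(f"{name} fixed points on the grid: {fixed}")

# Prints [0.0, 1.0] for the first map and [] for the second: Brouwer's
# guarantee really does depend on continuity, not just compactness and convexity.
```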
 

FAQ: Can Non-Continuous Functions Have Fixed Points on Compact Convex Sets?

1. What is convexity and why is it important in scientific research?

Convexity applies to both sets and functions: a set is convex if it contains the entire line segment between any two of its points, and a function is convex if its graph lies below every chord (equivalently, above its tangent lines). In scientific research, convexity is important because it allows for the development of efficient algorithms and optimization techniques that can be applied in various fields such as economics, physics, and computer science.
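As a concrete illustration (the unit disc and f(x) = x^2 are toy examples of my own, not from the original answer), the following Python snippet checks both definitions numerically at a few sample points.

```python
# Numerical illustration of the two convexity definitions above.

def in_unit_disc(p):
    """Membership test for the convex set {(x, y) : x^2 + y^2 <= 1}."""
    x, y = p
    return x * x + y * y <= 1.0

def f(x):
    """A convex function: f(x) = x^2."""
    return x * x

# A segment between two points of the disc stays inside the disc.
a, b = (1.0, 0.0), (0.0, 1.0)
segment_ok = all(
    in_unit_disc(((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1]))
    for t in [i / 20 for i in range(21)]
)
print("segment stays in the disc:", segment_ok)

# Chord inequality f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) for f(x) = x^2.
x, y = -1.0, 2.0
chord_ok = all(
    f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-12
    for t in [i / 20 for i in range(21)]
)
print("chord lies above the graph:", chord_ok)
```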

2. What are fixed points and why are they significant in convexity?

A fixed point of a function f is a point x with f(x) = x, i.e. a point the function leaves unchanged. Fixed points are significant in convexity for two reasons. First, the classical fixed-point theorems (Brouwer, Kakutani) require the domain to be a nonempty compact convex set. Second, in convex optimization the optimal solution can often be characterized as a fixed point of a simple update map: for example, a constrained minimizer x* satisfies x* = proj_K(x* - a * grad f(x*)) for any step size a > 0, so solving the optimization problem amounts to finding a fixed point.
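Here is a minimal Python sketch of that fixed-point view, using a toy problem of my own choosing: minimizing f(x) = (x - 2)^2 over the compact convex set K = [0, 1] with a projected-gradient update T.

```python
# The minimizer of f(x) = (x - 2)^2 over K = [0, 1] is x* = 1, and x* is
# exactly the point left unchanged by the projected-gradient update map T.

def grad(x):
    return 2.0 * (x - 2.0)          # derivative of (x - 2)^2

def proj(x):
    return min(max(x, 0.0), 1.0)    # projection onto K = [0, 1]

def T(x, alpha=0.1):
    return proj(x - alpha * grad(x))  # projected-gradient update

x = 0.0                              # start anywhere in K
for _ in range(100):
    x = T(x)

print("limit point:", x)                     # 1.0
print("is a fixed point of T:", T(x) == x)   # True: T(1) = proj(1.2) = 1
```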

3. How is convexity related to optimization problems?

Convexity is closely related to optimization problems because for a convex function any local minimum is also a global minimum. This means simple iterative methods, such as gradient descent and other convex optimization algorithms, can find the optimal solution efficiently without getting trapped at suboptimal points.
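A quick Python illustration (the convex function and starting points are my own toy choices): gradient descent on f(x) = (x - 3)^2 + 1 reaches the same minimizer from every starting point, which is exactly what the local-equals-global property buys you.

```python
# Gradient descent on a convex function converges to the same global minimum
# from any starting point.

def grad(x):
    return 2.0 * (x - 3.0)   # derivative of (x - 3)^2 + 1

def gradient_descent(x0, step=0.1, iters=200):
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

for x0 in [-10.0, 0.0, 3.0, 25.0]:
    print(f"start {x0:6.1f} -> minimizer {gradient_descent(x0):.6f}")
# Every run ends at 3.000000: no risk of getting stuck in a spurious local minimum.
```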

4. Can convexity be applied in real-world scenarios?

Yes, convexity can be applied in real-world scenarios. For example, in economics, convexity is used in the study of consumer preferences and production functions. In physics, it is used in the analysis of energy landscapes, and in computer science it is used in machine learning and data analysis.

5. How is convexity used in machine learning?

Convexity is crucial in machine learning because it allows efficient and reliable optimization algorithms to be used when training models. When the loss function is convex, methods such as gradient descent converge to a global minimum rather than a spurious local one, which makes training dependable and lets the model fit the data as well as its parameterization allows. It also justifies the use of standard convex optimization techniques to find the optimal parameters for a given problem.
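As a small, self-contained sketch (the synthetic data and hyperparameters are my own, not from the thread): fitting a line by gradient descent on the mean squared error, which is convex in the parameters, so the run converges to the global minimum.

```python
# Fit y ~ w*x + b by gradient descent on the (convex) mean squared error.

# Synthetic data generated from y = 2x + 1 with no noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]

w, b = 0.0, 0.0                     # start far from the true parameters
lr = 0.02                           # learning rate
n = len(xs)

for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw
    b -= lr * db

print(f"learned w = {w:.3f}, b = {b:.3f}")   # close to the true values 2 and 1
```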
