Description of simplex algorithm

In summary, the conversation discusses the Simplex algorithm and the process of finding a new basic feasible solution. It introduces the concept of non-degenerate basic feasible solutions and explains how to use them to find a new solution. The conversation also touches on the linear independence of certain vectors and how to show that they are linearly independent. Finally, the conversation concludes by discussing how to find a contradiction in order to prove linear independence.
  • #1
evinda
Hello! (Wave)

I am looking at the description of the Simplex algorithm.

Let $\overline{x_0}$ be a non-degenerate basic feasible solution.
We suppose that $\overline{x_0}=(x_{10}, x_{20}, \dots, x_{m0},0, \dots,0)$ with $x_{10}, x_{20}, \dots, x_{m0}>0$.
Thus the first $m$ columns of $A$, i.e. $P_1, P_2, \dots, P_m$ are linearly independent (so they are a basis of $\mathbb{R}^m$).
Thus for each $j$ it holds that $P_j= x_{1j}P_1+ x_{2j} P_2+ \dots + x_{mj} P_m$ for suitable $x_{1j}, x_{2j}, \dots, x_{mj} \in \mathbb{R}$. $(1)$
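Relation $(1)$ can be checked numerically: the coordinates $x_{ij}$ of $P_j$ in the basis are obtained by solving $Bx = P_j$, where $B = [P_1 \mid \dots \mid P_m]$. A minimal sketch with a made-up $2 \times 4$ matrix $A$ (all values illustrative):

```python
import numpy as np

# Hypothetical 2x4 matrix A (values made up for illustration);
# the first m = 2 columns P_1, P_2 form the basis B.
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
m = 2
B = A[:, :m]  # basis matrix [P_1 | P_2]

# Relation (1): P_j = x_{1j} P_1 + ... + x_{mj} P_m, i.e. B @ x_j = P_j.
for j in range(m, A.shape[1]):
    x_j = np.linalg.solve(B, A[:, j])
    print(f"P_{j + 1} has basis coordinates {x_j}")
```

In this toy example $B$ happens to be the identity, so the coordinates of $P_3$ and $P_4$ are just those columns themselves; in general `np.linalg.solve` recovers them from any nonsingular basis.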

We know that $A\overline{x_0}=b \Leftrightarrow P_1 x_{10}+ P_2 x_{20} + \dots + P_m x_{m0}=b$. $(2)$

We fix a $j \in \{ m+1, \dots, n \}$ and compute $(2)- \theta \cdot (1)$, where $\theta \in \mathbb{R}$:

$$(x_{10} - \theta x_{1j})P_1 + (x_{20}- \theta x_{2j}) P_2+ \dots +(x_{m0}- \theta x_{mj})P_m+ \theta P_j=b (3)$$

$(3) \Rightarrow \overline{x}(\theta)=(x_{10}- \theta x_{1j}, x_{20}- \theta x_{2j}, \dots, x_{m0}- \theta x_{mj},0, \dots,0, \theta, 0, \dots, 0)$, with $\theta$ in position $j$, is a solution of the system of constraints ($A \overline{x}(\theta)=b$). In order to have $\overline{x}(\theta) \in F$, it has to hold that

$$x_{10} - \theta x_{1j}>0,\ x_{20}- \theta x_{2j}>0,\ \dots,\ x_{m0}- \theta x_{mj}>0,\ \theta>0. \ (4)$$

We notice that for $\theta>0$ small enough, $(4)$ holds, so there are feasible solutions with $m+1$ non-zero coordinates.
So in order to find a new basic feasible solution we have to zero out one of the first $m$ coordinates of $\overline{x}(\theta)$ by an appropriate choice of $\theta$.

Then $\theta$ satisfies $(4)$ whenever $0< \theta< \theta_0$, where $\theta_0=\min_i \left\{ \frac{x_{i0}}{x_{ij}}: x_{ij}>0\right\}$ (this assumes that some $x_{ij}>0$; if all $x_{ij} \leq 0$, then every $\theta>0$ satisfies $(4)$).
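This minimum-ratio rule is the standard ratio test of the simplex method. A small numerical sketch (the vectors $x_{i0}$ and $x_{ij}$ below are made up for illustration):

```python
import numpy as np

# Hypothetical data: current basic values x_{i0} and the coordinates x_{ij}
# of the entering column P_j in the basis (both assumed, for illustration).
x0 = np.array([4.0, 6.0, 3.0])   # x_{10}, x_{20}, x_{30} > 0
xj = np.array([2.0, -1.0, 1.0])  # x_{1j}, x_{2j}, x_{3j}

# Ratio test: theta_0 = min over i with x_{ij} > 0 of x_{i0} / x_{ij}.
pos = xj > 0
theta0 = np.min(x0[pos] / xj[pos])
# 1-based index i at which the minimum is attained (the leaving variable):
leaving = np.argmin(np.where(pos, x0 / xj, np.inf)) + 1

print(theta0, leaving)  # theta_0 = min(4/2, 3/1) = 2.0, attained at i = 1
```

Rows with $x_{ij} \leq 0$ are excluded from the minimum, exactly as in the definition of $\theta_0$ above: for those rows $x_{i0} - \theta x_{ij}$ stays positive for every $\theta > 0$.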

We set $\theta=\theta_0$ and consider the corresponding vector $\overline{x}(\theta_0) \in F$.

Then one of the first $m$ components of $\overline{x}(\theta_0)$ is zero.

We suppose, without loss of generality, that $x_{10}- \theta_0 x_{1j}=0$.

Then $\overline{x}(\theta_0)$ is a candidate basic feasible solution; to verify that it is indeed one, it suffices to show that $P_2, P_3, \dots, P_m, P_j$ are linearly independent.

How can we show that $P_2, P_3, \dots, P_m, P_j$ are linearly independent?

If we suppose that $P_2, P_3, \dots, P_m, P_j$ are linearly dependent then there are $\lambda_2, \lambda_3, \dots, \lambda_m, \lambda_j \in \mathbb{R}$ with $|\lambda_2|+ |\lambda_3|+ \dots+ |\lambda_m|+ |\lambda_j| \neq 0$ such that

$$\lambda_2 P_2+ \lambda_3 P_3+ \dots + \lambda_m P_m+ \lambda_j P_j=0$$

We also know that $A \overline{x}(\theta)=b \Rightarrow P_2 (x_{20}- \theta x_{2j})+P_3 (x_{30}- \theta x_{3j})+ \dots+ P_m (x_{m0}- \theta x_{mj})+P_j \theta=b$

Can we find a contradiction from these two relations? Or do we have to use something else? (Thinking)
 
  • #2
Yes, these two relations lead to a contradiction, but the cleanest route uses the uniqueness of coordinates in the basis $P_1, \dots, P_m$. Suppose $P_2, \dots, P_m, P_j$ were linearly dependent. Since $P_2, \dots, P_m$ are linearly independent (they are part of the basis $P_1, \dots, P_m$), the coefficient $\lambda_j$ cannot be zero, so we can solve for $P_j$:$$P_j = -\frac{\lambda_2}{\lambda_j} P_2 - \dots - \frac{\lambda_m}{\lambda_j} P_m.$$On the other hand, by $(1)$ we have $P_j = x_{1j} P_1 + x_{2j} P_2 + \dots + x_{mj} P_m$, and here $x_{1j} > 0$: the minimum $\theta_0 = \frac{x_{10}}{x_{1j}}$ was attained at $i=1$, which requires $x_{1j} > 0$. But the representation of $P_j$ in the basis $P_1, \dots, P_m$ is unique, so comparing the two expressions (the first has coefficient $0$ on $P_1$) forces $x_{1j}=0$, a contradiction. Thus $P_2, P_3, \dots, P_m, P_j$ are linearly independent.
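The conclusion can also be sanity-checked numerically: after the pivot, the new set of columns must still have full rank. A sketch with a hypothetical $2 \times 4$ matrix ($m = 2$, entering column $P_3$, leaving column $P_1$; all values made up):

```python
import numpy as np

# Hypothetical 2x4 matrix A (m = 2); columns are P_1, P_2, P_3, P_4.
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])

# Suppose P_1 leaves and P_3 enters (here x_{13} = 2 > 0, as the argument requires).
new_basis = A[:, [1, 2]]  # columns [P_2 | P_3]
print(np.linalg.matrix_rank(new_basis))  # 2, i.e. linearly independent
```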
 

FAQ: Description of simplex algorithm

What is the simplex algorithm?

The simplex algorithm is a mathematical method for solving linear programming problems. It is used to find the optimal solution for a system of linear equations, where the objective is to maximize or minimize a linear function subject to linear constraints.

How does the simplex algorithm work?

The simplex algorithm works by starting at a feasible solution and then iteratively improving it until an optimal solution is reached. This is done by moving from one corner of the feasible region to another, while ensuring that the objective function is either maximized or minimized at each step.
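The iteration described above can be sketched as a small tableau implementation. This is an illustrative toy for problems of the form $\max c^T x$ subject to $Ax \le b$, $x \ge 0$ with $b \ge 0$; it uses the most-negative-reduced-cost entering rule and the ratio test, and omits anti-cycling safeguards such as Bland's rule:

```python
# Minimal tableau simplex for max c^T x, Ax <= b, x >= 0 (with b >= 0),
# using slack variables; an illustrative sketch, not production code.

def simplex(c, A, b):
    m, n = len(A), len(c)
    # Tableau rows: constraints with slack columns appended, then the
    # objective row (negated c) at the bottom.
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-cj for cj in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))          # slacks start in the basis
    while True:
        # Entering column: most negative reduced cost.
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-12:
            break                          # optimal
        # Ratio test (theta_0): leaving row among positive pivot entries.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-12]
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        # Pivot: normalize the pivot row, eliminate the column elsewhere.
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# Maximize 3x + 2y s.t. x + y <= 4, x <= 2: optimum (2, 2) with value 10.
x, val = simplex([3, 2], [[1, 1], [1, 0]], [4, 2])
print(x, val)
```

Each pass of the `while` loop is one move from a vertex to an adjacent vertex of the feasible region, exactly the step the preceding paragraph describes.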

What is the significance of the simplex algorithm?

The simplex algorithm is significant because it is one of the most efficient and widely used methods for solving linear programming problems. It allows for the optimization of complex systems with many variables and constraints, making it a valuable tool in various fields such as economics, engineering, and operations research.

What are the limitations of the simplex algorithm?

Although the simplex algorithm is a powerful method, it does have some limitations. It applies only to linear programming problems with continuous variables. In addition, while it always terminates at a globally optimal vertex when one exists (given an anti-cycling rule to handle degeneracy), its worst-case running time is exponential in the problem size.

How has the simplex algorithm evolved over time?

The simplex algorithm was first developed by George Dantzig in 1947 and has since undergone many improvements and modifications. These include the introduction of more efficient pivot rules, the development of more advanced variants such as the dual simplex algorithm, and the use of interior-point methods. These advancements have made the simplex algorithm even more efficient and applicable to a wider range of problems.
