Matrix theory question (to do with Linear Programming)

In summary, the conversation discusses a maximum LOP (linear optimization problem) and its dual, and how bounds on the objective function follow from matrix multiplication and transpose properties. The conversation also touches on transposing a matrix and why doing so does not flip the direction of an inequality.
  • #1
flyingpig

Homework Statement



Suppose we have a maximum LOP P

max [tex]z = c^t x[/tex]

s.t.

[tex]Ax \leq b[/tex]
[tex]x \geq 0[/tex]

and the dual to P is

min [tex]w = b^t y[/tex]

[tex]A^t y \geq c[/tex]

[tex]y \geq 0[/tex]

Then the bounds are

[tex]z = c^t x \leq y^t Ax \leq y^t b = b^t y = w[/tex]

question

Okay, where the heck did [tex]y^t Ax[/tex] come from? And what property of matrices are they using for [tex]y^t b = b^t y[/tex]? Neither y nor b is a number; are they points? Column vectors?
 
  • #2
flyingpig said:

Okay, where the heck did [tex]y^t Ax[/tex] come from? And what property of matrices are they using for [tex]y^t b = b^t y[/tex]? Neither y nor b is a number; are they points? Column vectors?
y and b are column vectors, or if you like, n x 1 matrices. So [itex]y^t b[/itex] can be thought of as a 1 x n matrix multiplying an n x 1 matrix, which produces a 1 x 1 matrix (a number). This is just like a dot product, for which we know that u [itex]\cdot[/itex] v = v [itex]\cdot[/itex] u. In terms of matrix multiplication, you can't change [itex]y^t b[/itex] to [itex]b y^t[/itex], but you can write it as [itex]b^t y[/itex].

From the constraints on the dual problem, you have [itex]c \leq A^t y[/itex], so [itex]c^t \leq (A^t y)^t = y^t A[/itex].

So [itex]c^t x \leq y^t A x[/itex], and so on.
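To make the chain concrete, here is a small numerical sanity check. The matrices A, b, c and the feasible points x, y below are made up purely for illustration; any primal-feasible x and dual-feasible y (with y ≥ 0) would do:

```python
import numpy as np

# Made-up example data, chosen so both feasibility checks pass.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([2.0, 3.0])
c = np.array([1.0, 1.0])

x = np.array([1.0, 1.0])  # primal feasible: A @ x <= b and x >= 0
y = np.array([1.0, 1.0])  # dual feasible:   A.T @ y >= c and y >= 0

assert np.all(A @ x <= b) and np.all(x >= 0)
assert np.all(A.T @ y >= c) and np.all(y >= 0)

z = c @ x          # c^t x
mid = y @ A @ x    # y^t A x
w = b @ y          # b^t y = y^t b

print(z, mid, w)   # 2.0 2.0 5.0
assert z <= mid <= w   # the weak-duality chain holds
```

Note that `b @ y` and `y @ b` give the same number here, which is exactly the [itex]y^t b = b^t y[/itex] point: a 1 x 1 product equals its own transpose.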
 
  • #3
Is [itex](y^t b)^t = b^t y[/itex]?
 
  • #5
Mark44 said:
y and b are column vectors, or if you like, n x 1 matrices. So [itex]y^t b[/itex] can be thought of as a 1 x n matrix multiplying an n x 1 matrix, which produces a 1 x 1 matrix (a number).

Does that mean the argument is the same for [itex]z = c^t x[/itex]?

My book defines it this way

[tex]c^T = (c_1,...,c_n), x = (x_1,...,x_n)^T[/tex]

Where it looks like (actually it is, lol) [itex]c^t[/itex] is 1 x n and x is n x 1

Why do we want the final results to be a 1 x 1 matrix? What's wrong with n x n?

Also, one a bit further.

Usually (and I think this might answer my own question above) we write solutions as [itex]y = (y_1, y_2)^t[/itex] (two elements for now), and by that product-and-transpose property we have

[tex]((y_1, y_2)^t)^t = (y_1, y_2)[/tex]

So [itex]y = (y_1, y_2)^t[/itex] is a 2 x 1 matrix, and transposing it gives back a 1 x 2 row so that [itex]y^t[/itex] can multiply out with A

Mark44 said:
Yes. In general, [itex](AB)^t = B^t A^t[/itex]. This wiki page (http://en.wikipedia.org/wiki/Transpose) lists several properties of matrix transposes.

Yeah I just couldn't see it in the beginning.
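Both transpose facts used in this thread are easy to check numerically. A quick sketch (the matrices here are random placeholders, nothing from the original problem):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((2, 4))

# Transposing a product reverses the order of the factors:
assert np.allclose((A @ B).T, B.T @ A.T)

# Transposing twice gives back the original matrix:
y = np.array([[1.0], [2.0]])   # a 2 x 1 column vector
assert np.array_equal(y.T.T, y)

print("transpose properties check out")
```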
 
Last edited:
  • #6
flyingpig said:
Does that mean the argument is the same for [itex]z = c^t x[/itex]?
Yes.
flyingpig said:
My book defines it this way

[tex]c^T = (c_1,...,c_n), x = (x_1,...,x_n)^T[/tex]

Where it looks like (actually it is, lol) [itex]c^t[/itex] is 1 x n and x is n x 1

Why do we want the final results to be a 1 x 1 matrix? What's wrong with n x n?
Because we want the objective function z to be a number. In typical problems, the objective function represents the profit that we're trying to maximize, or a cost that we're trying to minimize, or similar.
flyingpig said:
Yeah I just couldn't see it in the beginning.
 
  • #7
Mark44 said:
Yes.
Because we want the objective function z to be a number. In typical problems, the objective function represents the profit that we're trying to maximize, or a cost that we're trying to minimize, or similar.

So if z = [9], for instance, we can't just pull the 9 out of the matrix, right? (I remember we could do this for determinants, but we are talking about a matrix here.)
 
  • #8
Mark44 said:
So [itex]c^t x \leq y^t A x[/itex], and so on.

Actually I just noticed one thing, quite subtle and important too

When you took the transpose of both sides of the inequality, why wouldn't the inequality sign flip? Like, would transposing the matrix A suddenly make it into a negative matrix (a matrix with negative elements)?
 
  • #9
Mark, there is also something wrong with the multiplication.

Suppose A is 3 x 3 and that a D-feasible solution is [tex]y = (y_1, y_2, y_3)^t[/tex]

Then the product [tex]y^t A x[/tex] cannot be defined, since [tex]y^t = \begin{bmatrix}
y_1\\
y_2\\
y_3
\end{bmatrix}[/tex]
 
  • #10
Both x and y are column vectors, so [itex]y^t[/itex] is a row vector.

Suppose x and y are n x 1 (column vectors) and A is n x n. Then [itex]y^t A x[/itex] is [1 x n] * [n x n] * [n x 1], which works out to [1 x n] * [n x 1], or 1 x 1. IOW, a scalar, which is exactly what you want.
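That shape bookkeeping can be verified directly. A minimal sketch with made-up values (the entries of A, x, y are placeholders, not from the homework problem):

```python
import numpy as np

n = 3
A = np.arange(1.0, 10.0).reshape(n, n)   # an n x n matrix with entries 1..9
x = np.ones((n, 1))                      # n x 1 column vector
y = np.ones((n, 1))                      # n x 1 column vector

result = y.T @ A @ x                     # [1 x n] [n x n] [n x 1]
print(result.shape)                      # (1, 1) -- a single number
print(result.item())                     # .item() pulls the scalar out: 45.0
```

Incidentally, `.item()` also answers the earlier "z = [9]" question: numerically, a 1 x 1 matrix is routinely identified with the scalar inside it.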
 
  • #11
flyingpig said:
Actually I just noticed one thing, quite subtle and important too

When you took the tranpsoe of both sides of the inequality, why wouldn't the inequality sign flip?
You're basically comparing two numbers, that just happen to be obtained from the product of several matrices. The transpose of a number is just the number, so nothing weird happens, like changing the direction of the inequality.
flyingpig said:
Like would tranposing the matrix A suddenly make it into a negative matrix...(matrix with negative elements?)?

And when you transpose a matrix it switches the rows and columns. It doesn't turn it into a "negative" matrix, whatever that means.

IOW, [itex]A^t \neq -A[/itex], if that's what you're saying.
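A two-line check of both points, with an arbitrary made-up matrix: transposing only swaps rows and columns, and a 1 x 1 "number" is its own transpose, so comparing two such numbers is unaffected by transposing either side.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Transposing swaps rows and columns -- it negates nothing:
# A.T is [[1, 3], [2, 4]], not -A.
assert not np.array_equal(A.T, -A)

# A 1 x 1 matrix equals its own transpose, so an inequality
# between two 1 x 1 products keeps its direction under transpose.
s = np.array([[5.0]])
assert np.array_equal(s.T, s)

print("transpose does not flip signs")
```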
 
  • #12
Mark44 said:
You're basically comparing two numbers, that just happen to be obtained from the product of several matrices. The transpose of a number is just the number, so nothing weird happens, like changing the direction of the inequality.


And when you transpose a matrix it switches the rows and columns. It doesn't turn it into a "negative" matrix, whatever that means.

IOW, [itex]A^t \neq -A[/itex], if that's what you're saying.

Nah, I just think that the concept of transpose (other than swapping rows and columns) is similar to finding an "inverse", though not the "inverse of a matrix". I know you have no idea what I just said; it's just something I don't want to believe, but I have to.
 

Related to Matrix theory question (to do with Linear Programming)

1. What is matrix theory?

Matrix theory is a branch of mathematics that deals with the study of matrices, which are rectangular arrays of numbers or symbols arranged in rows and columns. It involves understanding the properties and operations of matrices, as well as their applications in various fields such as linear algebra and optimization.

2. How is matrix theory related to linear programming?

Matrix theory plays a crucial role in linear programming as it provides a mathematical framework for representing and solving linear programming problems. Matrices are used to represent the constraints and objectives of a linear programming problem, and their properties are utilized to develop efficient algorithms for finding optimal solutions.

3. What is the difference between a matrix and a vector in linear programming?

A matrix is a rectangular array of numbers or symbols, while a vector is a matrix with a single row or a single column. In linear programming, matrices are used to represent the constraints and objectives of a problem, while vectors are used to represent the decision variables and their coefficients in the objective function.

4. How are matrices used to solve linear programming problems?

In linear programming, matrices are used to represent the constraints and objectives of a problem, as well as to develop efficient algorithms for finding optimal solutions. By transforming the problem into a matrix form, it can be solved using techniques such as the simplex method or interior-point methods.

5. What are some real-life applications of matrix theory in linear programming?

Matrix theory has various applications in real-life problems, such as resource allocation, production planning, supply chain management, and financial portfolio optimization. It is also used in fields such as transportation, energy, telecommunications, and economics to solve complex optimization problems.
