Linear Algebra augmented matrix

  • #1
ultima9999
1. The augmented matrix for a system of linear equations in the variables x, y and z is given below:

[  1   -1    1   |  2 ]
[  0    2   a-1  |  4 ]
[ -1    3    1   |  b ]

*It's a 3x3 augmented matrix btw. Can't do the big square brackets, so I made do with the smaller ones...*

For which values of a and b does the system have:
a) no solutions;
b) exactly one solution;
c) infinitely many solutions?

For the values of a and b in c), find all solutions of the system.

2. a) Show that the set of all vectors (x, y, z) such that x + y + z = 0 is a subspace of R^3 (Euclidean space).

b) Let u = (2$, -1, -1), v = (-1, 2$, -1) and w = (-1, -1, 2$),

i) For what real values of $ do the vectors u, v and w form a linearly dependent set in R^3?

ii) For each of these values express one of the vectors as a linear combination of the other two.
 
  • #2
Where are you having trouble?
 
  • #3
For 1., I'm not sure where to start. I know it is an inhomogeneous system, with the left part of the matrix being A, the unknowns x, y, z forming the vector x, and the right part of the matrix being b.
Then if Rank(A) = Rank(A|b) and this is less than the number of unknowns, there are infinitely many solutions; if it equals the number of unknowns, there is exactly one solution. And if Rank(A) =/= Rank(A|b), there are no solutions.

But the problem for me is how to find the unknowns to get to that stage?
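That rank test can at least be checked numerically once values of a and b are plugged in. Here is a minimal sketch (the helper names `rank` and `classify` are my own, and exact Fractions are used to avoid floating-point issues):

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a copy of the matrix and count the nonzero rows."""
    m = [row[:] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

def classify(a, b):
    """Apply the rank criterion from question 1 to its augmented matrix."""
    a, b = Fraction(a), Fraction(b)
    A = [[Fraction(1), Fraction(-1), Fraction(1)],
         [Fraction(0), Fraction(2), a - 1],
         [Fraction(-1), Fraction(3), Fraction(1)]]
    rhs = [Fraction(2), Fraction(4), b]
    Ab = [row + [v] for row, v in zip(A, rhs)]
    if rank(A) != rank(Ab):
        return "no solutions"
    return "exactly one solution" if rank(A) == 3 else "infinitely many solutions"
```

Trying a few (a, b) pairs with `classify` shows which ranks occur, though it doesn't replace doing the row reduction symbolically.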

For 2a), I tried to go:

Let S be the set of all vectors of the form (0, 0, 0) and let u and v be vectors in S.

Therefore: u = (u, 0, 0) and v = (v, 0, 0), u, v є R

Now we test:
i) u + v = (u, 0, 0) + (v, 0, 0) = (u + v, 0, 0)
Therefore u + v є S (since u + v є R)

ii) ku = k(u, 0, 0), k є R
= (ku, 0, 0)

Therefore: ku є S

Therefore: S is a subspace of R^3

and for 2b), I'm not sure how to find the values. If there wasn't the unknown $ and it asked to determine whether the set was linearly dependent, then I could do it; but this is just puzzling me.
 
  • #4
How about starting by doing exactly what you would do to solve the matrix equation:
[tex]\left(\begin{array}{ccc|c} 1 & -1 & 1 & 2 \\ 0 & 2 & a-1 & 4 \\ -1 & 3 & 1 & b\end{array}\right)[/tex]

Since you already have a 1 in the "first column, first row", and a 0 in the "first column, second row", you need to get a 0 in the "first column, third row" and you can do that by adding the first row to the last row.

Continue row-reducing as far as you can. Of course, at some point you may need to divide by something involving a; you can do that and get a single unique solution as long as that "something" is not 0. For what value of a is that "something" 0? In that case, you wind up with a third row consisting of all 0s except possibly the fourth column. What does that mean? What happens if the fourth column is also 0, and for what values of a and b does that happen?
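To make the hint concrete, the two row operations described above give (assuming no arithmetic slips on my part):

[tex]\left(\begin{array}{ccc|c} 1 & -1 & 1 & 2 \\ 0 & 2 & a-1 & 4 \\ 0 & 2 & 2 & b+2\end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & -1 & 1 & 2 \\ 0 & 2 & a-1 & 4 \\ 0 & 0 & 3-a & b-2\end{array}\right)[/tex]

first adding row 1 to row 3, then subtracting row 2 from row 3. The "something involving a" is then the third-row entry 3 - a.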
 
  • #5

I do not really understand your working for question 2a. Does S represent the set of all vectors (x, y, z) such that x + y + z = 0? If yes, then your vectors u and v are not in S.
Instead, why not define u = [tex] (x_{1}, y_{1}, z_{1}) [/tex] and v = [tex] (x_{2}, y_{2}, z_{2}) [/tex], where [tex] x_{1} + y_{1} + z_{1} = 0 [/tex] and [tex] x_{2} + y_{2} + z_{2} = 0[/tex]. You can then carry on proving from here!

For question 2b, just do what you normally would had there not been any unknowns! After you finish the row operations, set the last row to be a zero row to determine the values of $ for linear dependence. Do you know why this is done?
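As a cross-check on the row-operation approach: three vectors in R^3 are linearly dependent exactly when the determinant of the matrix they form is zero. A small sketch (the function names here are mine) that evaluates that determinant for a candidate value of $:

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def dependence_det(lam):
    """Determinant of the matrix with rows u, v, w from question 2b."""
    lam = Fraction(lam)
    u = [2 * lam, -1, -1]
    v = [-1, 2 * lam, -1]
    w = [-1, -1, 2 * lam]
    return det3([u, v, w])
```

Any value of $ that makes this determinant zero gives a linearly dependent set, which is a quick way to confirm whatever the zero-row condition produces.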
 
  • #6
For 2a), I wrote:

let S be the set of all vectors of the form (x, y, z) such that x + y + z = 0; x, y, z є R; and let u and v be vectors in S

Therefore: u = (x1, y1, z1) and v = (x2, y2, z2), where x1 + y1 + z1 = 0 and x2 + y2 + z2 = 0, and x1, x2, y1, y2, z1, z2 є R

i) u + v = (x1 + x2, y1 + y2, z1 + z2)
Therefore u + v є S (since x1+x2, y1+y2, z1+z2 є R)

ii)ku = (kx1, ky1, kz1), k є R
Therefore: ku є S (since kx1, ky1, kz1 є R)

Therefore: S is a subspace of R^3
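The two closure conditions can also be spot-checked numerically. This is no substitute for the proof, just a sanity check; the helper names below are my own:

```python
import random

def in_S(v, tol=1e-9):
    """Membership test for S = {(x, y, z) : x + y + z = 0}, up to rounding."""
    return abs(sum(v)) < tol

def random_S_vector():
    """Pick x, y freely; z = -(x + y) forces the defining equation."""
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    return (x, y, -(x + y))

for _ in range(1000):
    u, v = random_S_vector(), random_S_vector()
    k = random.uniform(-10, 10)
    assert in_S(tuple(a + b for a, b in zip(u, v)))  # closure under addition
    assert in_S(tuple(k * a for a in u))             # closure under scaling
```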


Working on 2b). For 1, I row reduced, but then I get weird forms of a in the 3rd column.

Btw, where can I read a tutorial on how to use the mathematical text that you guys use?
 
  • #7
ultima9999 said:
For 2a), I wrote:

let S be the set of all vectors of the form (x, y, z) such that x + y + z = 0; x, y, z є R; and let u and v be vectors in S

Therefore: u = (x1, y1, z1) and v = (x2, y2, z2), where x1 + y1 + z1 = 0 and x2 + y2 + z2 = 0, and x1, x2, y1, y2, z1, z2 є R

i) u + v = (x1 + x2, y1 + y2, z1 + z2)
Therefore u + v є S (since x1+x2, y1+y2, z1+z2 є R)
It doesn't follow that the sum is in S just because the indvidual components are in R. You need to show that the sum vector also satisfies the definition of S: that (x1+ x2)+ (y1+y2)+ (z1+ z2)= 0.

ii)ku = (kx1, ky1, kz1), k є R
Therefore: ku є S (since kx1, ky1, kz1 є R)
Once again, just saying that the components are in R is not sufficient. You must show that kx1+ ky1+ kz1= 0.

Therefore: S is a subspace of R^3
Yes, provided you clean up (i) and (ii).


Working on 2b). For 1, I row reduced, but then I get weird forms of a in the 3rd column.
You have u = (2x, -1, -1), v = (-1, 2x, -1) and w = (-1, -1, 2x) and are asked for what values of x they are linearly dependent. Row reducing will work and can be simplified by putting (-1, -1, 2x) as the first row, (-1, 2x, -1) as the second row, and (2x, -1, -1) as the third row. Doing that I get 4x² - 2x - 2 as the remaining number in the third row, and these will be linearly dependent if that is 0.


Btw, where can I read a tutorial on how to use the mathematical text that you guys use?
There is a tutorial on LaTeX formatting in the "Tutorials" section under "Science Education":
https://www.physicsforums.com/showthread.php?t=8997
 
  • #8
HallsofIvy said:
It doesn't follow that the sum is in S just because the indvidual components are in R. You need to show that the sum vector also satisfies the definition of S: that (x1+ x2)+ (y1+y2)+ (z1+ z2)= 0. Once again, just saying that the components are in R is not sufficient. You must show that kx1+ ky1+ kz1= 0. Yes, provided you clean up (i) and (ii).

Ok, thanks!
HallsofIvy said:
You have u = (2x, -1, -1), v = (-1, 2x, -1) and w = (-1, -1, 2x) and are asked for what values of x they are linearly dependent. Row reducing will work and can be simplified by putting (-1, -1, 2x) as the first row, (-1, 2x, -1) as the second row, and (2x, -1, -1) as the third row. Doing that I get 4x² - 2x - 2 as the remaining number in the third row, and these will be linearly dependent if that is 0.

Yeah, I worked that out myself, and was just about to post an update. I apologize for not posting my remark clearly; it should have been:

"I'm working on 2b now.
However, for 1., I row reduced, but then I get weird forms of a in the 3rd column."

Anyway, I got $ = 1 or -1/2. Furthermore, when $ = -1/2, the second row is a zero row as well. My final matrix (with $ replaced by [tex]\lambda[/tex]) is:

[tex] \left(\begin{array}{ccc|c}1 & 1 & -2\lambda & 0\\0 & 2\lambda + 1 & -2\lambda -1 & 0\\0 & 0 & 4\lambda^2 -2\lambda -2 & 0\end{array}\right)[/tex]
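The third-row entry factors, which shows where those two values come from:

[tex]4\lambda^2 - 2\lambda - 2 = 2(2\lambda + 1)(\lambda - 1)[/tex]

so it vanishes exactly at [tex]\lambda = 1[/tex] and [tex]\lambda = -\frac{1}{2}[/tex]; and the second-row entry [tex]2\lambda + 1[/tex] also vanishes at [tex]\lambda = -\frac{1}{2}[/tex], which is why that row becomes zero too.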
HallsofIvy said:
There is a tutorial on LaTeX formatting in the "Tutorials" section under "Science Education":
https://www.physicsforums.com/showthread.php?t=8997

Thanks dude, you've been a great help!
 
  • #9
So for [tex]\lambda[/tex] = 1,

[tex] \left(\begin{array}{ccc|c}1 & 1 & -2 & 0\\0 & 3 & -3 & 0\\0 & 0 & 0 & 0\end{array}\right)[/tex]

[tex]c_{3}[/tex] is arbitrary; [tex]c_{3} = t, \ t \in \mathbb{R}[/tex]
[tex]3c_{2} = 3c_{3} \implies c_{2} = t[/tex]
[tex]c_{1} = 2c_{3} - c_{2} = 2t - t = t[/tex]

(because [tex]c_{1}\left(\begin{array}{ccc}2\lambda & -1 & -1\end{array}\right) + c_{2}\left(\begin{array}{ccc}-1 & 2\lambda & -1\end{array}\right) + c_{3}\left(\begin{array}{ccc}-1 & -1 & 2\lambda\end{array}\right) = \mathbf {0}[/tex])

solution space: [tex]\left(\begin{array}{ccc}c_{1} & c_{2} & c_{3}\end{array}\right) = t\left(\begin{array}{ccc}1 & 1 & 1\end{array}\right)[/tex] where [tex]t \in \mathbb{R}[/tex]

let [tex]t = 1[/tex], so [tex]\left(\begin{array}{ccc}c_{1} & c_{2} & c_{3}\end{array}\right) = \left(\begin{array}{ccc}1 & 1 & 1\end{array}\right)[/tex]. Substituting:
[tex]\left(\begin{array}{ccc}2\lambda & -1 & -1\end{array}\right) + \left(\begin{array}{ccc}-1 & 2\lambda & -1\end{array}\right) + \left(\begin{array}{ccc}-1 & -1 & 2\lambda\end{array}\right) = \mathbf{0}[/tex]
[tex]\left(\begin{array}{ccc}2\lambda & -1 & -1\end{array}\right) = - \left(\begin{array}{ccc}-1 & 2\lambda & -1\end{array}\right) - \left(\begin{array}{ccc}-1 & -1 & 2\lambda\end{array}\right)[/tex]

And I'll leave out the -1/2 case because it takes ages to type out all that LaTeX...
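That rearrangement can be verified mechanically; here's a tiny sketch (the function name is mine) checking u = -v - w componentwise at a given value of λ:

```python
def check_combination(lam):
    """At a dependence value lam, verify u = -v - w componentwise."""
    u = (2 * lam, -1, -1)
    v = (-1, 2 * lam, -1)
    w = (-1, -1, 2 * lam)
    return all(ui == -vi - wi for ui, vi, wi in zip(u, v, w))
```

`check_combination(1)` returns True, while a generic value such as 0 returns False, as expected.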
 

FAQ: Linear Algebra augmented matrix

What is an augmented matrix in linear algebra?

An augmented matrix in linear algebra is a matrix that represents a system of linear equations. It is created by combining the coefficients and constants of the equations into a single matrix, with a vertical line separating the coefficients from the constants.

How is an augmented matrix used to solve a system of linear equations?

An augmented matrix can be used to solve a system of linear equations through the process of row reduction, also known as Gaussian elimination. By performing row operations on the matrix, the system of equations can be simplified until the solutions can be easily found.
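For a system with a unique solution, the process described above can be sketched in a few lines using exact rational arithmetic. This is an illustrative implementation, not production code; it assumes a square system with exactly one solution:

```python
from fractions import Fraction

def solve(aug):
    """Gauss-Jordan elimination on an augmented matrix given as rows
    [coefficients..., constant]; returns the solution vector."""
    m = [[Fraction(x) for x in row] for row in aug]
    n = len(m)
    for c in range(n):
        # Find a pivot row and swap it into place.
        piv = next(i for i in range(c, n) if m[i][c] != 0)
        m[c], m[piv] = m[piv], m[c]
        # Scale the pivot row, then clear the column elsewhere.
        m[c] = [x / m[c][c] for x in m[c]]
        for i in range(n):
            if i != c:
                f = m[i][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[c])]
    return [row[-1] for row in m]

# solve([[2, 1, 5], [1, -1, 1]]) returns [2, 1], i.e. x = 2, y = 1.
```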

Can an augmented matrix have more than one solution?

Yes. This occurs when the system is consistent but the equations are not all independent, so the rank of the coefficient matrix is less than the number of unknowns. The system is then said to be underdetermined, and there are infinitely many solutions that satisfy the equations.

What happens if the augmented matrix has no solutions?

If the augmented matrix has no solutions, then the system of linear equations is said to be inconsistent. This means that there is no set of values that can satisfy all of the equations in the system. In other words, the equations are contradictory and cannot be solved simultaneously.

Can an augmented matrix be used for systems of equations with more than two variables?

Yes, an augmented matrix can be used for systems of equations with any number of variables and any number of equations. The system is represented as an augmented matrix in the same way and analyzed using row reduction, which reveals whether it has no solution, one solution, or infinitely many.
