Solution Sets of Linear Systems

In summary: because Ax=b has a unique solution, the homogeneous equation Ax=0 has only the trivial solution, so there is no vector V in Rm for which Ax=V has infinitely many solutions.
  • #1
Drakkith
Mentor
23,042
7,424

Homework Statement



Q. Let A be an m by n matrix. Assume that the equation Ax=b is consistent for some b≠0 in Rm. Moreover, assume that Ax=b has a unique solution. Does the equation Ax=0 have a non-trivial solution? Is it possible to find a vector V in Rm so that the system Ax=V has infinitely many solutions? Justify your answer.

Homework Equations


Theorem: Assume that the equation Ax=b is consistent. Then every solution to this equation has the form Vp+Vh where Vp is a particular solution to Ax=b and Vh is any solution to the associated homogeneous system Ax=0.

The Attempt at a Solution



Since Ax=b is consistent, every solution to this equation has the form Vp+Vh, where Vh is a solution to Ax=0. However, is Vh non-trivial? Well, the homogeneous equation Ax=0 has a nontrivial solution if and only if the system has at least one free variable. According to the existence and uniqueness theorem, a consistent linear system has infinitely many solutions if it has at least one free variable. But since we're told that Ax=b has a unique solution, there aren't any free variables, so Vh = 0 (the zero vector in Rn).
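As a quick numerical sanity check of that reasoning, here is a minimal sketch in Python (numpy); the particular matrix A and right-hand side b are made-up illustrations, not part of the assigned problem:

Python:
# Made-up example: a 3x2 matrix with linearly independent columns.
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0]])
b = A @ np.array([2.0, -1.0])        # choose b so that Ax = b is consistent

x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x)                             # approximately [ 2. -1.], the unique solution
print(rank == A.shape[1])            # True: rank equals the number of columns,
                                     # so there are no free variables and the only
                                     # solution of Ax = 0 is the zero vector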

Does that sound right?

Is it possible to find a vector V in Rm so that the system Ax=V has infinitely many solutions?

Assuming V≠b≠0, then I'm not sure.
 
  • #2
Part one seems about right. I'd suggest re-framing it from solutions of equations to linear independence tests for column vectors. I.e. perhaps the most important formula in linear algebra is

##\sum_{k=1}^n \gamma_k \mathbf u_k = \gamma_1 \mathbf u_1 + \gamma_2 \mathbf u_2 + ... + \gamma_n \mathbf u_n = \mathbf 0##

The above vectors ##\mathbf u_k## are linearly independent iff the only way this sum can equal ##\mathbf 0## is with each and every ##\gamma_k = 0##. (Why?)

In this problem, the idea really comes down to the linear independence of ##\mathbf A##'s columns. If they are linearly independent, then you have

##\mathbf A \mathbf x=
\bigg[\begin{array}{c|c|c|c|c}
\mathbf a_1 & \mathbf a_2 &\cdots & \mathbf a_{n-1} & \mathbf a_{n}
\end{array}\bigg]
\begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_{n-1}\\
x_n
\end{bmatrix}
= x_1 \mathbf a_1 + x_2 \mathbf a_2 + \cdots + x_{n-1}\mathbf a_{n-1} + x_n \mathbf a_n = \mathbf 0
##

iff ##\mathbf x = \mathbf 0##

So, using your answer for part one, we should be able to infer that ##\mathbf A## has full column rank (i.e. its columns are linearly independent).
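As a side note, a minimal numerical sketch of that full-column-rank test (Python/numpy, with a made-up matrix) might look like:

Python:
# The columns of A are linearly independent iff rank(A) equals the number of columns.
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])           # made-up 3x2 example

print(np.linalg.matrix_rank(A) == A.shape[1])   # True -> Ax = 0 forces x = 0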

For part two: if, for some vector ##\mathbf v## in ##\mathbb R^m##, there are infinitely many solutions, then the columns of ##\mathbf A## must be linearly dependent (why? -- because that vector is either in the span of the columns of ##\mathbf A## or it is not. If it is not, there is no exact solution... if it is, then...?).

- - - -
an alternative approach to part 2, since our scalars are in ##\mathbb R## and we have an inner product is:

Assume that there is an unbounded number of exact solutions for that ##\mathbf v##. Then we have, at a minimum, something like ##\mathbf A \big( \mathbf x_1 + \gamma\mathbf x_2\big) = \mathbf v##, where ##\mathbf 0 \neq \mathbf x_1 \neq \mathbf x_2 \neq \mathbf 0## and ##\gamma \neq 0##, but otherwise ##\gamma## can be any number we want. Using the normal equations:

##\mathbf A^T \mathbf A \big(\mathbf x_1 + \gamma \mathbf x_2\big) =\mathbf A^T \mathbf v##

Yet if ##\mathbf A## has full column rank, this system of equations has a unique solution (why?), which results in a contradiction. I wouldn't really suggest going this contradiction route, but the normal equations are extensively used, so I thought I'd throw it in.
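If you're curious, a tiny numerical illustration of that last step (a sketch in Python/numpy; A and v are arbitrary made-up choices, with A having full column rank):

Python:
# With full column rank, A^T A is invertible, so A^T A x = A^T v pins down one x.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
v = np.array([1.0, 2.0, 2.0])

x = np.linalg.solve(A.T @ A, A.T @ v)
print(x)                             # the single solution of the normal equations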
 
  • #3
Hmmm, linear independence is 2 sections ahead of where we are now, so I'm having difficulty following your post. I also don't know what the "normal equations" are.
 
  • #4
Hmmm, what have you covered so far? And what's the text? To my mind, this question is screaming for linear independence of vectors and an awful lot of texts would not throw this at you without first covering linear independence. (E.g. linear independence is introduced on page 8 of Linear Algebra Done Wrong and drilled heavily in chapter 2, called "Systems of Linear Equations". It's also a chapter 2 concept in Linear Algebra Done Right.)

- - - -
I suppose a simpler approach is this: in part 1 of your answer, which I liked, you in effect demonstrated that ##\mathbf A \mathbf x = \mathbf 0## iff ##\mathbf x = \mathbf 0##. Thus for part 2, to have infinitely many solutions you need a non-trivial nullspace, but part 1 of your answer says it does not exist... can I assume you've covered nullspace?
 
  • #5
StoneTemplePython said:
Hmmm, what have you covered so far? And what's the text?

The textbook is Linear Algebra and its Applications, by David C. Lay, Steven R. Lay, and Judi J. McDonald.
We've only covered the first 5 sections in the 1st chapter: Systems of Linear Equations, Row Reduction and Echelon Forms, Vector Equations, The Matrix Equation Ax=b, and Solution Sets of Linear Systems.

The theorem given in part 2 of my first post comes from section 1.5, Solution Sets of Linear Systems.

StoneTemplePython said:
can I assume you've covered nullspace?

Not yet. If the index is correct, it's first introduced about 100 pages further in, towards the end of chapter 2.
 
  • #6
Drakkith said:
The textbook is Linear Algebra and its Applications, by David C. Lay, Steven R. Lay, and Judi J. McDonald.
We've only covered the first 5 sections in the 1st chapter: Systems of Linear Equations, Row Reduction and Echelon Forms, Vector Equations, The Matrix Equation Ax=b, and Solution Sets of Linear Systems.

The theorem given in part 2 of my first post comes from section 1.5, Solution Sets of Linear Systems.
Not yet. If the index is correct, it's first introduced about 100 pages further in, towards the end of chapter 2.

Got it. So Linear Independence is introduced in Chapter 1, part 7, but thus far, you've only covered up through Chapter 1 part 5. Is reading ahead for 20 pages allowed? It is time well spent as Linear Independence is... absurdly important. It looks like that book covers some good stuff, you just haven't gotten far enough in yet.

You have the right idea for part 1, and just need to re-package it to answer part 2. The issue is so little has been covered thus far, and the progression is rather different than most texts I've seen, so it's kind of hard to do the repackaging for part 2 without having read that book.

I am thinking there may be a way to incorporate reduced row echelon form here (which you tacitly used in part 1) and answer part 2 with it.
 
  • #7
StoneTemplePython said:
Got it. So Linear Independence is introduced in Chapter 1, part 7, but thus far, you've only covered up through Chapter 1 part 5. Is reading ahead for 20 pages allowed?

I've done some of that already. Not enough to solve this problem though, it appears. :rolleyes:

StoneTemplePython said:
You have the right idea for part 1, and just need to re-package it to answer part 2.

Alright. I'll dive back into this later on, as I've been staring at that math book for the last 4 hours or so and I need a break.
 
  • #8
StoneTemplePython said:
So Linear Independence is introduced in Chapter 1, part 7, but thus far, you've only covered up through Chapter 1 part 5. Is reading ahead for 20 pages allowed?
I wouldn't think so.
 
  • #9
I'm sorry, but I've gone over this several times in the last week or so and I still don't know if you can find a vector V in Rm such that Ax=V has infinitely many solutions. I don't even understand how to approach this, despite having a Word document full of just about every theorem and definition from the book so far.

Edit: My apologies if this comes off as defeatist or something. I'm exhausted and pretty downhearted with school at the moment.
 
  • #10
It's okay. It took me a while to get the hang of linear algebra. (To be honest, reading from a second or third book that approached things in a complementary way was helpful for me, but I don't want you to go astray from the way your prof wants things done.)

Is chapter 1 part 7 covering Linear Independence fair game now? If so, I could probably try to re-cut and expand my original linear independence answer.
 
  • #11
StoneTemplePython said:
Is chapter 1 part 7 covering Linear Independence fair game now? If so, I could probably try to re-cut and expand my original linear independence answer.

Yes, we've gone through that section now.

Question: When we're solving Ax=b, we're solving for some vector x such that the product of A and x results in b, correct? If the system is consistent and the solution is unique, that means there is only one possible vector x that solves the equation, right? If that's right, then it doesn't seem to me that taking the product of A and x could ever result in a different vector V, much like how Ax≠0 since x isn't the zero vector.
 
  • #12
Drakkith said:
Yes, we've gone through that section now.

Question: When we're solving Ax=b, we're solving for some vector x such that the product of A and x results in b, correct? If the system is consistent and the solution is unique, that means there is only one possible vector x that solves the equation, right? If that's right, then it doesn't seem to me that taking the product of A and x could ever result in a different vector V, much like how Ax≠0 since x isn't the zero vector.

Indeed. If you solve for one consistent and unique ##\mathbf x## in the case where ##\mathbf b \neq \mathbf 0##, then it's the only possible solution vector ##\mathbf x## (otherwise we wouldn't say it's unique)... this means the columns of ##\mathbf A## cannot be linearly dependent.

Drakkith said:
then it doesn't seem to me that taking the product of A and x could ever result in a different vector V, much like how Ax≠0 since x isn't the zero vector.

The wording seems a touch off here, but I think I know what you are getting at... the main idea is: is there some ##\mathbf w \neq \mathbf 0## such that ##\mathbf {Aw} = \mathbf 0##? If so, then when you solve for some future ##\mathbf x## in the equation ##\mathbf{Ax} = \mathbf v##, I could also tell you:

##\mathbf {A}\big(\mathbf x + \mathbf w\big) = \mathbf {A}\mathbf x + \mathbf {A}\mathbf w = \mathbf {A}\mathbf x + \mathbf 0 = \mathbf {A}\mathbf x = \mathbf v##

but ##\big(\mathbf x + \mathbf w\big) \neq \mathbf x## because ##\mathbf w \neq \mathbf 0##, and hence you don't have a unique solution. So the goal is to make sure there is no ##\mathbf w## like that. The fact that we're told you found a consistent and unique solution for some ##\mathbf b \neq \mathbf 0## tells us that ##\mathbf w## can't exist, and hence can't cause you problems when you're solving ##\mathbf {A}\mathbf x = \mathbf v##.

- - - -

Note that the reason people say an infinite or unbounded number of solutions is that you could actually have

##\mathbf {A}\big(\mathbf x + \gamma \mathbf w\big) = \mathbf {A}\mathbf x + \gamma\mathbf {A}\mathbf w = \mathbf {A}\mathbf x + \mathbf 0 = \mathbf {A}\mathbf x = \mathbf v##

where ##\gamma## is an arbitrary real scalar that can be anything, and hence there is an unbounded number of solutions if ##\mathbf w## exists.
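A minimal numerical illustration of this (Python/numpy sketch; the A, x, and w below are made up, with deliberately dependent columns):

Python:
# If Aw = 0 for some w != 0, then x + gamma*w solves Ax = v for every real gamma.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # column 3 = column 1 + column 2
w = np.array([1.0, 1.0, -1.0])       # non-trivial vector with A @ w == 0
x = np.array([1.0, 0.0, 0.0])
v = A @ x

for gamma in (0.0, 1.0, -2.5, 100.0):
    print(np.allclose(A @ (x + gamma * w), v))   # True every time

Every value of gamma gives another exact solution, which is exactly the unbounded family described above.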
 
  • #13
Okay. So we're not really worried about what x is, but about the relationship between the columns of A. A unique solution means that no column depends on another column; otherwise we'd have something like column 5 = 3 × column 4 (##\mathbf a_5 = 3\mathbf a_4##).
 
  • #14
Drakkith said:
Okay. So we're not really worried about what x is, but about the relationship between the columns of A. A unique solution means that no column depends on another column; otherwise we'd have something like column 5 = 3 × column 4 (##\mathbf a_5 = 3\mathbf a_4##).
Yes.

that is what I was getting at when I said:

##\mathbf A \mathbf x=
\bigg[\begin{array}{c|c|c|c|c}
\mathbf a_1 & \mathbf a_2 &\cdots & \mathbf a_{n-1} & \mathbf a_{n}
\end{array}\bigg]
\begin{bmatrix}
x_1\\
x_2\\
\vdots\\
x_{n-1}\\
x_n
\end{bmatrix}
= x_1 \mathbf a_1 + x_2 \mathbf a_2 + \cdots + x_{n-1}\mathbf a_{n-1} + x_n \mathbf a_n = \mathbf 0##

and the columns of ##\mathbf A## are linearly independent if and only if that equation holds only when ##\mathbf x = \mathbf 0## (note this is equivalent to saying ##\mathbf w## does not exist... why?)

The idea behind it is that otherwise you could write at least one column, call it column k, i.e. ##\mathbf a_k##, as a linear combination of the other columns of ##\mathbf A##.

And for the avoidance of doubt, the steps you'd take are: subtract ##x_k \mathbf a_k## from both sides,

##\sum_{j\neq k} x_j \mathbf a_j = x_1 \mathbf a_1 + x_2 \mathbf a_2 + \cdots + x_{n-1}\mathbf a_{n-1} + x_n \mathbf a_n - x_k \mathbf a_k = -x_k \mathbf a_k##

and divide by the non-zero scalar ##-x_k##

## \mathbf a_k = \frac{-1}{x_k} \sum_{j\neq k}x_j \mathbf a_j ##
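For a concrete (made-up) instance: if ##2\mathbf a_1 - 3\mathbf a_2 = \mathbf 0##, so that ##x_1 = 2## and ##x_2 = -3 \neq 0##, then the formula with ##k=2## gives ##\mathbf a_2 = \frac{-1}{-3}\,(2\mathbf a_1) = \tfrac{2}{3}\mathbf a_1##, i.e. column 2 is just a multiple of column 1.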
 
  • #15
Okay. I talked to my professor about this and I finally understand. One of my problems was that I wasn't really understanding what was going on with A in regard to switching between Ax=b, Ax=0, and Ax=V. Since we've been told the equation Ax=b is consistent and has a unique solution, there are no free variables in the system, i.e. every column of A is a pivot column. And this doesn't change when we set Ax equal to b, 0, or V! Since there are no free variables, there is no possible way that a vector V can exist such that Ax=V has infinitely many solutions, as that would require the system to have at least one free variable.
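One way to see this concretely is via the reduced echelon form from the row-reduction section covered earlier (a sketch using sympy, with an arbitrary made-up matrix that has a pivot in every column):

Python:
# A pivot in every column of the echelon form means no free variables.
from sympy import Matrix

A = Matrix([[1, 2],
            [0, 1],
            [3, 1]])

rref_form, pivot_cols = A.rref()
print(rref_form)       # Matrix([[1, 0], [0, 1], [0, 0]])
print(pivot_cols)      # (0, 1): every column is a pivot column, so no choice of V
                       # can make Ax = V have infinitely many solutions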
 

Related to Solution Sets of Linear Systems

1. What are solution sets of linear systems?

A solution set of a linear system is the set of all possible solutions that satisfy the given system of equations. It can be represented by a set of ordered pairs, a vector, or a parametric representation.

2. How are solution sets of linear systems found?

Solution sets of linear systems can be found by using various methods such as substitution, elimination, or matrix operations. These methods involve manipulating the equations to isolate a single variable and then solving for its value.
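For instance, a minimal sketch of the matrix-operations approach in Python (numpy), for a made-up two-equation system:

Python:
# Solve:  x + 2y = 5
#        3x -  y = 1
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

print(np.linalg.solve(A, b))   # [1. 2.], i.e. x = 1, y = 2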

3. Can a linear system have more than one solution set?

Strictly speaking, a linear system has exactly one solution set, but that set can be empty (for an inconsistent system with contradictory equations), contain exactly one solution, or contain infinitely many solutions. Which case occurs depends on the relationships among the equations and variables, not just on how many of each there are.

4. What is the importance of solution sets of linear systems?

Solution sets of linear systems are important in various fields such as mathematics, engineering, and physics. They help in solving real-world problems by representing relationships between different variables and finding the values that satisfy these relationships. Solution sets are also used in optimization and modeling scenarios.

5. Can a solution set be graphed?

Yes, solution sets of systems in two or three variables can be graphed. Each equation corresponds to a line (in two dimensions) or a plane (in three dimensions), and the solution set is their common intersection. Depending on the system, that intersection can be empty, a single point, a line, or, in three dimensions, an entire plane.
