System no/infinitely many solution(s)

In summary, the conversation uses Gaussian elimination to decide whether the system Ax=bi has a solution for i=1,2, for a given matrix A and vectors b1 and b2. If the system is inconsistent, a solution is obtained by first projecting the vector bi onto the column space of A; if the system has infinitely many solutions, the unique solution lying in the row space of A is obtained by projecting any particular solution onto the row space. The correct formula for projecting onto a non-orthogonal basis is worked out, the solutions are verified, and the interpretation as finding the point of the range of the matrix closest to bi is discussed.
  • #1
mathmari
Hey! :eek:

We have the matrix $A=\begin{pmatrix}1 & -1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1\end{pmatrix}$ and the vectors $b_1=\begin{pmatrix}1 \\ 0 \\1\end{pmatrix}$ and $b_2=\begin{pmatrix}-1 \\ 1 \\2\end{pmatrix}$.
Check if the system $Ax=b_i$ for $i\in \{1,2\}$ has a solution.
If the system has no solution, find the solution obtained when the vector $b_i$ is projected onto the column space.
If the system has infinitely many solutions, find the solution that belongs to the row space.

I have done the following:

  • For $b_1$ the echelon form of the augmented matrix is $\left(\begin{array}{ccc|c}1 & -1 & 0 & 1 \\ 0 & 1 & 1 & -1 \\ 0 & 0 & 0 & 2\end{array}\right)$.

    That means that the system is inconsistent, i.e. it has no solution.

    A basis for the column space of $A$ is $\left \{\begin{pmatrix}1\\ 1\\ 0\end{pmatrix}, \begin{pmatrix}-1\\ 0\\ 1\end{pmatrix}\right \}$, right? (Wondering)

    Let them be $q_1$ and $q_2$ respectively.

    The projection of $b_1$ onto the column space of $A$ is \begin{equation*}b_{p1}=(b_1^Tq_1)q_1+(b_1^Tq_2)q_2=1\cdot q_1+0\cdot q_2=q_1\end{equation*}
    Applying the Gaussian elimination method again with the new right-hand side, we get the echelon form $\left(\begin{array}{ccc|c}1 & -1 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0\end{array}\right)$.

    So we get the solution \begin{equation*}\begin{pmatrix}x_1 \\ x_2\\ x_3\end{pmatrix}=\begin{pmatrix}1+x_2 \\ x_2\\ -x_2\end{pmatrix}=\begin{pmatrix}1 \\ 0\\ 0\end{pmatrix}+x_2\begin{pmatrix}1 \\ 1\\ -1\end{pmatrix}\end{equation*} Is everything correct? (Wondering)
  • For $b_2$ the echelon form of the augmented matrix is $\left(\begin{array}{ccc|c}1 & -1 & 0 & -1 \\ 0 & 1 & 1 & 2 \\ 0 & 0 & 0 & 0\end{array}\right)$.

    That means that the system has infinitely many solutions.

    A basis for the row space of $A$ is $\left \{\begin{pmatrix}1\\ -1\\ 0\end{pmatrix}, \begin{pmatrix}0\\ 1\\ 1\end{pmatrix}\right \}$, right? (Wondering)

    We get the solution \begin{equation*}\begin{pmatrix}x_1 \\ x_2\\ x_3\end{pmatrix}=\begin{pmatrix}x_1 \\ x_1+1\\ 1-x_1\end{pmatrix}=\begin{pmatrix}0 \\ 1\\ 1\end{pmatrix}+x_1\begin{pmatrix}1 \\ 1\\ -1\end{pmatrix}\end{equation*}

    How do we see that this belongs to the row space? Do we have to check whether it is a linear combination of the basis vectors of the row space? (Wondering) (A numerical check of both parts follows below.)
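A minimal NumPy sketch to double-check both consistency conclusions, comparing the rank of $A$ with the rank of each augmented matrix (the matrix and vectors are those of the exercise; the variable names are illustrative):

```python
import numpy as np

# Matrix and right-hand sides from the problem statement.
A = np.array([[1, -1, 0],
              [1,  0, 1],
              [0,  1, 1]], dtype=float)
b1 = np.array([1, 0, 1], dtype=float)
b2 = np.array([-1, 1, 2], dtype=float)

for name, b in (("b1", b1), ("b2", b2)):
    rank_A  = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_Ab > rank_A:
        verdict = "inconsistent (no solution)"
    elif rank_A < A.shape[1]:
        verdict = "infinitely many solutions"
    else:
        verdict = "unique solution"
    print(name, "->", verdict)
# Prints: b1 -> inconsistent (no solution)
#         b2 -> infinitely many solutions
```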
 
  • #2
mathmari said:
A basis for the column space of $A$ is $\left \{\begin{pmatrix}1\\ 1\\ 0\end{pmatrix}, \begin{pmatrix}-1\\ 0\\ 1\end{pmatrix}\right \}$, right?

Let them be $q_1$ and $q_2$ respectively.

The projection of $b_1$ onto the column space of $A$ is \begin{equation*}b_{p1}=(b_1^Tq_1)q_1+(b_1^Tq_2)q_2=1\cdot q_1+0\cdot q_2=q_1\end{equation*}

Hey mathmari!

The formula you are using assumes that $q_1$ and $q_2$ have unit length.
But that is not the case, is it? (Worried)

mathmari said:
A basis for the row space of $A$ is $\left \{\begin{pmatrix}1\\ -1\\ 0\end{pmatrix}, \begin{pmatrix}0\\ 1\\ 1\end{pmatrix}\right \}$, right?

We get the solution \begin{equation*}\begin{pmatrix}x_1 \\ x_2\\ x_3\end{pmatrix}=\begin{pmatrix}x_1 \\ x_1+1\\ 1-x_1\end{pmatrix}=\begin{pmatrix}0 \\ 1\\ 1\end{pmatrix}+x_1\begin{pmatrix}1 \\ 1\\ -1\end{pmatrix}\end{equation*}

How do we see that this belongs to the row space? Do we have to check if it is a linear combination of the basis vectors of the row space?

What happens if we find the projection of the solutions onto the row space? (Wondering)
 
  • #3
Klaas van Aarsen said:
The formula you are using assumes that $q_1$ and $q_2$ have unit length.
But that is not the case, is it? (Worried)

Ahh ok! Which is the correct formula that we have to use? Do we have to divide each term by the squared length, i.e. as follows?
\begin{equation*}b_{p1}=\frac{(b_1^Tq_1)}{q_1^Tq_1}q_1+\frac{(b_1^Tq_2)}{q_2^Tq_2}q_2\end{equation*}
(Wondering)
Klaas van Aarsen said:
What happens if we find the projection of the solutions onto the row space? (Wondering)

Why do we have to do that? I am stuck right now. (Wondering)
 
  • #4
mathmari said:
Ahh ok! Which is the correct formula that we have to use? Do we have to divide each term by the squared length, i.e. as follows?
\begin{equation*}b_{p1}=\frac{(b_1^Tq_1)}{q_1^Tq_1}q_1+\frac{(b_1^Tq_2)}{q_2^Tq_2}q_2\end{equation*}

Yep. (Nod)

mathmari said:
Why do we have to do that? I am stuck right now.

Well, we don't have to.
I merely wanted to show that you basically already have the answer. And also that the result is similar to the first part. (Emo)

Anyway, we are supposed to find the intersection of the line representing the solutions with the plane representing the row space. (Thinking)
 
  • #5
Klaas van Aarsen said:
Yep. (Nod)

Ok! So we get \begin{equation*}b_{p1}=\frac{(b_1^Tq_1)}{q_1^Tq_1}q_1+\frac{(b_1^Tq_2)}{q_2^Tq_2}q_2=\frac{1}{2}\cdot q_1+\frac{0}{2}\cdot q_2=\frac{1}{2}q_1\end{equation*}

Applying the Gaussian elimination method again with the new right-hand side, we get the echelon form $\left(\begin{array}{ccc|c}1 & -1 & 0 & \frac{1}{2} \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0\end{array}\right)$.

So we get the solution \begin{equation*}\begin{pmatrix}x_1 \\ x_2\\ x_3\end{pmatrix}=\begin{pmatrix}\frac{1}{2}+x_2 \\ x_2\\ -x_2\end{pmatrix}=\begin{pmatrix}\frac{1}{2} \\ 0\\ 0\end{pmatrix}+x_2\begin{pmatrix}1 \\ 1\\ -1\end{pmatrix}\end{equation*} Is everything correct? (Wondering)
Klaas van Aarsen said:
Well, we don't have to.
I merely wanted to show that you basically already have the answer. And also that the result is similar to the first part. (Emo)

Anyway, we are supposed to find the intersection of the line representing the solutions with the plane representing the row space. (Thinking)
Ok, so the intersection is $\begin{pmatrix}0\\ 1\\ 1\end{pmatrix}$. That means that the solution that belongs to the row space is $\begin{pmatrix}0\\ 1\\ 1\end{pmatrix}$, right? (Wondering)
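A quick numerical sanity check of that intersection, under the assumption that the general solution found above is correct: the row space is the orthogonal complement of the null space of $A$, so it suffices to pick the point on the solution line that is orthogonal to the null-space direction. A sketch in NumPy:

```python
import numpy as np

A = np.array([[1, -1, 0],
              [1,  0, 1],
              [0,  1, 1]], dtype=float)
b2 = np.array([-1, 1, 2], dtype=float)

p = np.array([0.0, 1.0, 1.0])    # particular solution of A x = b2
d = np.array([1.0, 1.0, -1.0])   # null-space direction of A

# The row space is the orthogonal complement of the null space, so the
# row-space solution is the point p + t*d with (p + t*d) . d = 0.
t = -np.dot(p, d) / np.dot(d, d)
x_row = p + t * d

print(x_row)                       # [0. 1. 1.]
print(np.allclose(A @ x_row, b2))  # True: it still solves the system
```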
 
  • #6
mathmari said:
Ok! So we get \begin{equation*}b_{p1}=\frac{(b_1^Tq_1)}{q_1^Tq_1}q_1+\frac{(b_1^Tq_2)}{q_2^Tq_2}q_2=\frac{1}{2}\cdot q_1+\frac{0}{2}\cdot q_2=\frac{1}{2}q_1\end{equation*}

Applying the Gaussian elimination method again with the new right-hand side, we get the echelon form $\left(\begin{array}{ccc|c}1 & -1 & 0 & \frac{1}{2} \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0\end{array}\right)$.

So we get the solution \begin{equation*}\begin{pmatrix}x_1 \\ x_2\\ x_3\end{pmatrix}=\begin{pmatrix}\frac{1}{2}+x_2 \\ x_2\\ -x_2\end{pmatrix}=\begin{pmatrix}\frac{1}{2} \\ 0\\ 0\end{pmatrix}+x_2\begin{pmatrix}1 \\ 1\\ -1\end{pmatrix}\end{equation*}

Is everything correct?

Ok, so the intersection is $\begin{pmatrix}0\\ 1\\ 1\end{pmatrix}$. That means that the solution that belongs to the row space is $\begin{pmatrix}0\\ 1\\ 1\end{pmatrix}$, right?

All correct. (Nod)

Note that in the first part we projected $b_1$ onto the column space and found a unique solution.
We have solved for the point in the range of the matrix that is closest to $b_1$.

And in the second part we effectively projected the solution onto the row space to find a unique solution.
We have the solution that is as close as possible to the row space, which is the range of the pseudoinverse, since $A$ itself is not invertible. (Nerd)
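Incidentally, this is exactly what the pseudoinverse computes: for a consistent system, $A^+b_2$ is the minimum-norm solution, which is the one lying in the row space. A quick NumPy check (a sketch, not part of the original exercise):

```python
import numpy as np

A = np.array([[1, -1, 0],
              [1,  0, 1],
              [0,  1, 1]], dtype=float)
b2 = np.array([-1, 1, 2], dtype=float)

# For a consistent system, pinv(A) @ b2 is the minimum-norm solution,
# i.e. the unique solution lying in the row space of A.
x = np.linalg.pinv(A) @ b2
print(np.round(x, 10))  # [0. 1. 1.]
```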
 
  • #7
Klaas van Aarsen said:
Note that in the first part we projected $b_1$ onto the column space and found a unique solution.

But we have a free variable $x_2$ so we don't have a unique solution. Or have I understood that wrongly? (Wondering)
 
  • #8
mathmari said:
But we have a free variable $x_2$ so we don't have a unique solution. Or have I understood that wrongly?

Ah yes, we still have a line there. (Blush)

Still, it is a unique solution if we intersect it with the row space. (Emo)
 
  • #9
Hi mathmari and Klaas van Aarsen,

I want to politely and respectfully suggest that the analysis for the solution to the system $Ax = b_{p1}$ -- i.e., where $b_{1}$ has been projected to the column space of $A$ -- should be re-examined. In particular, the projection of $b_{1}$ onto the column space is not given by \begin{equation*}b_{p1}=(b_1^Tq_1)q_1+(b_1^Tq_2)q_2=1\cdot q_1+0\cdot q_2=q_1.\end{equation*}

Please refer to the attached image.

The projection of $b_{1}$ onto the column space of $A$ is the "shadow" $b_{1}$ casts on the plane determined by $q_{1}$ and $q_{2}.$ Here, $b_{1}$ is shown in blue; $b_{p1}$ is shown in red; and $q_{1}$ and $q_{2}$ in black. As we can see, it is possible for $b_{1}$ to be orthogonal to $q_{2}$ in $\mathbb{R}^{3}$, yet its projection onto the column space will contain a $q_{2}$ component nevertheless. The equation we must consider is $$b_{p1} = c_{1}q_{1} + c_{2}q_{2}\qquad (1).$$ Visually speaking, the magnitudes of $c_{1}$ and $c_{2}$ are given by the lengths of the orange and purple line segments, respectively.

How do we solve for $c_{1}$ and $c_{2}$? This is done by taking dot products of (1) with $q_{1}$ and $q_{2}.$ It is very important to note here that $q_{1}$ and $q_{2}$ are not orthogonal, so $q_{1}\cdot q_{2}$ will not vanish. Doing this yields a system of two equations that can be solved for $c_{1}$ and $c_{2}$. From there you can proceed to solving $Ax=b_{p1}$ using the row echelon technique applied previously.
 

Attachment: Projection-Onto-Subspace10-April-2020.png (the projection of $b_{1}$ onto the $q_{1}$-$q_{2}$ plane)
  • #10
GJA said:
In particular, the projection of $b_{1}$ onto the column space is not given by \begin{equation*}b_{p1}=(b_1^Tq_1)q_1+(b_1^Tq_2)q_2=1\cdot q_1+0\cdot q_2=q_1.\end{equation*}

Oh yes, I forgot to take into account that $q_1$ and $q_2$ are not perpendicular in addition to not being of unit length. (Blush)
 
  • #11
So isn't there a general formula? Do we have to derive the formula by solving the system for $c_1$ and $c_2$? (Wondering)
 
  • #12
mathmari said:
So isn't there a general formula? Do we have to derive the formula by solving the system for $c_1$ and $c_2$? (Wondering)

As I see it, we can do one of:

1. Set up the system and solve it as GJA suggested.
2. Orthogonalize $q_1$ and $q_2$ and use the formula that you already used. We can use $\tilde q_2=q_2 - (q_2\cdot q_1)q_1 / \|q_1\|^2$ for that (Gram-Schmidt process).
3. Use the formula $\text{projection}(b)=b-(b\cdot n)n$, where $n$ is a vector of unit length that is orthogonal to both $q_1$ and $q_2$. This is the projection onto the plane normal to $n$, which is exactly the column space. (A sketch of options 2 and 3 follows below.)
(Thinking)
 
Last edited:
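Here is a sketch of options 2 and 3 in NumPy, taking $q_1=(1,1,0)^T$, $q_2=(-1,0,1)^T$ and $b_1=(1,0,1)^T$ from the thread; both routes should agree on the projection of $b_1$ onto the column space:

```python
import numpy as np

q1 = np.array([1.0, 1.0, 0.0])
q2 = np.array([-1.0, 0.0, 1.0])
b1 = np.array([1.0, 0.0, 1.0])

# Option 2: Gram-Schmidt q2 against q1, then apply the orthogonal-basis
# projection formula with the squared lengths as divisors.
q2t = q2 - (np.dot(q2, q1) / np.dot(q1, q1)) * q1
proj_gs = (np.dot(b1, q1) / np.dot(q1, q1)) * q1 \
        + (np.dot(b1, q2t) / np.dot(q2t, q2t)) * q2t

# Option 3: subtract the component of b1 along the unit normal n of the
# plane spanned by q1 and q2.
n = np.cross(q1, q2)
n = n / np.linalg.norm(n)
proj_n = b1 - np.dot(b1, n) * n

print(proj_gs)                       # [0.33333333 0.66666667 0.33333333]
print(np.allclose(proj_gs, proj_n))  # True
```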
  • #13
I would suggest using equation (1) from my previous post and taking dot products of it with $q_{1}$ and $q_{2}.$ If that is too vague, here are some details that will hopefully make clear what I am suggesting. Note that $$b_{1} = c_{1}q_{1} + c_{2}q_{2} +c_{3}n,$$ where $n$ is a non-zero vector orthogonal to the $q_{1}$-$q_{2}$ plane. This is the mathematical way to express the fact that in the picture I posted $b_{1}$ equals orange + purple + green. Since $$b_{p1} = b_{1} - c_{3}n,$$ (1) becomes $$b_{1} - c_{3}n = c_{1}q_{1} + c_{2}q_{2}\qquad (2).$$ Taking dot products of (2) with $q_{1}$ and $q_{2}$, we get $$b_{1}\cdot q_{1} = c_{1}q_{1}\cdot q_{1} + c_{2}q_{2}\cdot q_{1}$$
and $$b_{1}\cdot q_{2} = c_{1}q_{1}\cdot q_{2} + c_{2}q_{2}\cdot q_{2},$$ because $n\cdot q_{1} = 0 = n\cdot q_{2}$.

If you work out those dot products you can get the system of equations for $c_{1}$ and $c_{2}$. Solve this system for $c_{1}$ and $c_{2}$, then solve the $Ax=b_{p1}$ equation using the row echelon technique you previously applied.
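Worked out numerically, the computation looks as follows (a sketch in NumPy; the Gram-matrix layout mirrors the two dot-product equations above, and the final least-squares call returns the minimum-norm solution of the now-consistent system $Ax=b_{p1}$):

```python
import numpy as np

A = np.array([[1, -1, 0],
              [1,  0, 1],
              [0,  1, 1]], dtype=float)
q1 = np.array([1.0, 1.0, 0.0])
q2 = np.array([-1.0, 0.0, 1.0])
b1 = np.array([1.0, 0.0, 1.0])

# Dotting (2) with q1 and q2 gives the 2x2 system
#   [q1.q1  q2.q1] [c1]   [b1.q1]
#   [q1.q2  q2.q2] [c2] = [b1.q2]
gram = np.array([[np.dot(q1, q1), np.dot(q2, q1)],
                 [np.dot(q1, q2), np.dot(q2, q2)]])
rhs = np.array([np.dot(b1, q1), np.dot(b1, q2)])
c1, c2 = np.linalg.solve(gram, rhs)  # c1 = 2/3, c2 = 1/3

b_p1 = c1 * q1 + c2 * q2
print(b_p1)  # [0.33333333 0.66666667 0.33333333]

# b_p1 lies in the column space by construction, so A x = b_p1 is
# consistent; lstsq returns its minimum-norm solution.
x, *_ = np.linalg.lstsq(A, b_p1, rcond=None)
print(x)     # [0.33333333 0.         0.33333333]
```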
 
  • #14
I will try to solve the system as GJA suggested! First, though, I tried to use the second approach:
Klaas van Aarsen said:
2. Orthogonalize $q_1$ and $q_2$ and use the formula that you already used. We can use $\tilde q_2=q_2 - (q_2\cdot q_1)q_2 / \|q_2\|^2$ for that (Gram-Schmidt process).

Are the orthogonal vectors then:
\begin{align*}&\tilde q_1=q_1=\begin{pmatrix}1 \\ 1\\ 0\end{pmatrix} \\ &\tilde q_2=q_2 - \frac{q_2\cdot q_1 }{ \|q_1\|^2}q_1=\begin{pmatrix}-1 \\ 0\\ 1\end{pmatrix} - \frac{-1}{ 2}\begin{pmatrix}1 \\ 1\\ 0\end{pmatrix}=\begin{pmatrix}-1 \\ 0\\ 1\end{pmatrix} + \begin{pmatrix}\frac{1}{ 2} \\ \frac{1}{ 2}\\ 0\end{pmatrix}= \begin{pmatrix}-\frac{1}{ 2} \\ \frac{1}{ 2}\\ 1\end{pmatrix}\end{align*}
And so is the formula for the projection the following? \begin{equation*}b_{p1}=\frac{b_1\cdot \tilde q_1}{\|\tilde q_1\|^2}\tilde q_1+\frac{b_1\cdot \tilde q_2}{\|\tilde q_2\|^2}\tilde q_2\end{equation*}

(Wondering)
 
  • #15
Yep. (Nod)

And I see you fixed the mistake I made in the Gram-Schmidt formula. Good. :eek:
 
  • #16
Great! Thank you for your help! (Star)(Mmm)
 

FAQ: System no/infinitely many solution(s)

What does it mean when a system has no solutions?

When a system of equations has no solutions, it means that there is no set of values for the variables that can satisfy all of the equations simultaneously. In other words, the equations are inconsistent: their graphs have no common point.

How can you tell if a system has infinitely many solutions?

If a system of equations has infinitely many solutions, it means that infinitely many sets of values satisfy all of the equations simultaneously. This occurs when the equations are dependent; in the two-variable case, for example, they represent the same line and therefore coincide at every point on a graph.

Can a system have both no solutions and infinitely many solutions?

No, a system cannot have both no solutions and infinitely many solutions. These are two mutually exclusive scenarios. A system can only have one of these outcomes.

What are the possible outcomes of solving a system of equations?

There are three possible outcomes when solving a system of equations: one unique solution, no solutions, or infinitely many solutions. These outcomes depend on the relationships between the equations and the values of the variables.

How can you determine the number of solutions in a system of equations?

To determine the number of solutions in a system of equations, you can graph the equations and see whether they intersect at one point, at no points, or at every point. Alternatively, you can solve the system algebraically, for example by row-reducing it and comparing the rank of the coefficient matrix with the rank of the augmented matrix and with the number of unknowns, to see whether there is a unique solution, no solution, or infinitely many solutions.
