System of linear equations, all possible outcomes

  • #1
Yankel
Hello all,

I need your help. I am trying to put some order into all the possible outcomes of a system of linear equations; I want to create a diagram or a table for my convenience.

What I want to know, and am not sure of, are these kinds of things:

If the system is homogeneous, and there are more variables than equations, then...

If the system is NOT homogeneous, and there are more variables than equations, then...

If the system is homogeneous, and there are more equations than variables, then...

If the system is NOT homogeneous, and there are more equations than variables, then...

If the system is homogeneous, and there is the same number of equations and variables, then...

If the system is NOT homogeneous, and there is the same number of equations and variables, then...

the answer for each statement should be: no solution, single solution, infinite number of solutions, or two or three of the answers together.

Can you assist me with determining this? Are there situations I forgot?

Thank you !
 
  • #2
Here are the answers. Can you supply examples to show that each of the listed outcomes can occur, and proofs that none of the non-listed outcomes can occur?

I am using the abbreviations N for no solutions, S for single solution, I for infinite number of solutions. (For homogeneous systems, a single solution means the trivial solution where all the variables are zero.)

If the system is homogeneous, and there are more variables than equations, then... I

If the system is NOT homogeneous, and there are more variables than equations, then... N or I

If the system is homogeneous, and there are more equations than variables, then... S or I

If the system is NOT homogeneous, and there are more equations than variables, then... N or S or I

If the system is homogeneous, and there is the same number of equations and variables, then... S or I

If the system is NOT homogeneous, and there is the same number of equations and variables, then... N or S or I
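These outcomes all come down to comparing the rank of the coefficient matrix, the rank of the augmented matrix, and the number of variables (the Rouché–Capelli criterion). A minimal Python sketch (the helper name `classify` is mine, not from this thread):

```python
import numpy as np

def classify(A, b):
    """Classify the linear system Ax = b as 'N' (no solution),
    'S' (single solution), or 'I' (infinitely many solutions),
    by comparing rank(A), rank([A|b]), and the number of variables."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[1]                                    # number of variables
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_A < rank_Ab:
        return 'N'                                    # inconsistent
    return 'S' if rank_A == n else 'I'                # unique iff full column rank

# homogeneous, more variables than equations: always I (over the reals)
print(classify([[1, 2, 3]], [0]))                     # I
# non-homogeneous, square and invertible: S
print(classify([[1, 0], [0, 1]], [2, 3]))             # S
# non-homogeneous, inconsistent: N
print(classify([[1, 1], [1, 1]], [0, 1]))             # N
```
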
 
  • #3
thanks !

Examples should be fairly easy (I think). Proofs on the other hand...
 
  • #4
Some things I want to point out:

An infinite number of solutions can only occur when the vector space is infinite. Finite vector spaces DO exist (over finite fields), and for those, "infinitely many" becomes "more than one".

"Number of equations" is really a "bad yardstick" to use. Here is why:

$x + y = 0$
$2x + 2y = 0$

Those are two equations, but the second one doesn't tell us anything more than the first one does.

Here is another example:

$x = 2$
$y = 3$
$x + y = 5$

Those are three equations, but we only need two of them.
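The redundancy in both examples shows up as the rank of the coefficient matrix; a quick NumPy check (a sketch):

```python
import numpy as np

# x + y = 0 and 2x + 2y = 0: two equations, but only rank 1
r1 = np.linalg.matrix_rank(np.array([[1., 1.],
                                     [2., 2.]]))
print(r1)  # 1

# x = 2, y = 3, x + y = 5: three equations, but only rank 2
r2 = np.linalg.matrix_rank(np.array([[1., 0.],
                                     [0., 1.],
                                     [1., 1.]]))
print(r2)  # 2
```
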

Here is a really silly example:

$0 = 1$.

That's an equation...but NO variables. It also can never be true. In this case, the number of equations doesn't really tell us anything, except whoever wrote it (me) is a little soft in the head.

Here is a BETTER way to look at a system of equations:

$Ax = b$.

What we are really interested in is the FUNCTION:

$x \mapsto Ax$

Things we want to know:

Is this function (which we'll also call $A$, just to be confusing) one-to-one?
Is this function onto?
Is $b$ actually in the range of $A$?

Now the number of equations is really the dimension of the space $b$ lives in (the co-domain of $A$).
The number of variables is the dimension of the space $x$ lives in, which is the domain of $A$.
The rank of $A$ is the dimension of the range of $A$: it tells us what the dimension of the domain gets reduced to.
The nullity of $A$ is the dimension of the kernel of $A$: it tells us how much reduction takes place.

The rank-nullity theorem says these two things exactly balance: if the rank is $k$ and the domain has dimension $n$, then the nullity is $n-k$.
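This balance is easy to check numerically; a sketch in Python using SymPy, applied to the $3\times 3$ matrix analyzed later in this thread:

```python
import sympy as sp

# Rank-nullity check: for an m x n matrix, rank + nullity = n.
A = sp.Matrix([[1, 2, 1],
               [3, -1, 0],
               [9, 4, 3]])
rank = A.rank()                  # dimension of the image
nullity = len(A.nullspace())     # dimension of the kernel
print(rank, nullity)             # 2 1
assert rank + nullity == A.cols  # 2 + 1 = 3
```
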

In other words, it's clearer to stop thinking about "the number of ____ in the system of equations" and think about the properties of the matrix $A$. We want to know its size, and we want to find a basis for its image and kernel (sometimes all we need to know is "how big are these bases", which is what rank and nullity tell us).

See, what the equations are, are linear combinations of the variables. And that is what vector spaces are all about: linear combinations. We row-reduce to find linearly independent (non-redundant) linear combinations. This actually reduces the number of equations we have to the bare minimum, and THAT's what we want to know, not how many equations we started out with.

You should try to do what Opalg suggests, anyway. Let us know where you get stuck.
 
  • #5
Deveno, is there a linear algebra book that teaches this material from the point of view you just described? Any recommendations?
 
  • #6
Yankel said:
Deveno, is there a linear algebra book that teaches this material from the point of view you just described? Any recommendations?

The only two books I am familiar with enough to whole-heartedly recommend are:

Linear Algebra Done Right, Sheldon Axler

Linear Algebra (2nd ed.), Hoffman & Kunze

*************

That said, I suspect similar material can be found in a great many texts.

Here's the thing: there is a trend in mathematics, which you may or may not have noticed. For example, take sets: while sets are interesting things in their own right, often what we are mostly occupied with is functions BETWEEN sets. These functions may take many forms, so that we don't even realize there IS a function involved.

For example, say I have two sets with $A \subseteq B$. There is a natural function associated with this, the function:

$f: A \to B$ with $f(a) = a$, for all $a \in A$. This type of function is called an INCLUSION function. An identity function:

$1_A:A \to A$ with $1_A(a) = a$ (note the similarity with above) reflects the fact that $A$ is included in $A$.

Another example: if we have a set with an equivalence relation $\sim$, there is a natural function:

$f: A \to A/\sim$ given by $f(a) = [a]$ which maps every element of $A$ to the equivalence class that contains it.

I cannot stress how important these two examples are; they occur in many "disguises" in many areas. Now, in linear algebra, the focus in many courses is on "vectors": you learn how to add them, to calculate their dot product and cross product, and to test sets of vectors for linear independence. But the real "meat" of linear algebra isn't even about vectors (which are pretty simple things, actually), it's about LINEAR TRANSFORMATIONS.

Well, this is an "abstract concept", with very wide applicability. Often, people feel more comfortable with "things they can get their hands on", that they can visualize and relate to the world around them. Now the cool thing about linear transformations is: in a finite-dimensional vector space (such as the plane, or real 3-space), we actually have a "concrete realization" of what a linear transformation IS: we pick a basis (or two, if our domain space and co-domain space have different dimensions), and form the matrix for the linear transformation in that basis (or bases).

So, in a sense, matrices form the "arithmetic" of linear algebra, much like "numbers" form the arithmetic of high-school algebra. For every theorem about vector spaces and linear transformations, we get a corresponding theorem about tuples (essentially, matrices with "one column") and matrices.

I like to think of the dimension of a vector space as telling us the "size" of the space. A linear mapping $L$ will preserve linear combinations:

$L(c_1x_1 + c_2x_2 + \cdots + c_nx_n) = c_1L(x_1) + c_2L(x_2) + \cdots + c_nL(x_n)$.

Now there are basically "two kinds of things" $L$ can do:

1. Preserve the size,
2. Shrink the size.

(Functions can only send one domain element to one image element; they never "expand" the domain.)

As far as where $L$ sends the space (its image), there are also two things that could happen:

3. $L$ could "cover" all of the target space
4. $L$ covers only PART of the target space.

These 4 pieces of information are what we want, they tell us "the general behavior" of a linear transformation. Let's look at one matrix in detail, to see how this fits together:

Suppose:

$A = \begin{bmatrix}1&2&1\\3&-1&0\\9&4&3 \end{bmatrix}$

If we choose "the typical basis" $\{(1,0,0),(0,1,0),(0,0,1)\}$ for $\Bbb R^3$, this is the matrix for THIS linear transformation:

$L(x,y,z) = (x+2y+z,3x-y,9x+4y+3z)$

If we pick $b = (3,2,13)$, then the equation $Av = b$ is this system of linear equations:

$x + 2y + z = 3$
$3x - y = 2$
$9x + 4y + 3z = 13$.

So, the first thing we want to know is: "does $A$ do any shrinking"? Let's be clear about what this means:

Suppose $Av_1 = Av_2$, with $v_1 \neq v_2$. This means $A$ sends two different vectors to the same one. We can re-write this as:

$Av_1 - Av_2 = 0$, and then because $A$ is linear, $A(v_1 - v_2) = 0$, and $v_1 - v_2$ is non-zero, since $v_1 \neq v_2$.

On the other hand, if $A$ sends some non-zero vector $u$ to the 0-vector, then for any vector $v \neq u$, we have:

$A(u + v) = Au + Av = 0 + Av = Av$, and $u + v \neq v$ since $u \neq 0$. To summarize this:

A matrix $A$ collapses something (fails to be one-to-one) if, and only if, it sends a nonzero vector TO the zero vector.

Thus, if $A$ DOES NOT collapse anything, that is, for each $u$ we get a UNIQUE $Au$, then the only possible vector $A$ sends to 0 is 0.

$A$ is injective $\iff \text{ker}(A) = \{0\} \iff \text{nullity}(A) = 0$.

So which category does OUR $A$ fall into? To find out, we solve the HOMOGENEOUS system:

$x + 2y + z = 0$
$3x - y = 0$
$9x + 4y + 3z = 0$.

We can do this different ways, I will row-reduce $A$ to get:

$\text{rref}(A) = \begin{bmatrix}1&0&\frac{1}{7}\\0&1&\frac{3}{7}\\0&0&0 \end{bmatrix}$

which corresponds to the REDUCED SYSTEM OF EQUATIONS:

$x + \dfrac{z}{7} = 0$

$y + \dfrac{3z}{7} = 0$

This tells us if we pick any value for $z$, say $t$, that $(x,y,z) = \left(\dfrac{-t}{7},\dfrac{-3t}{7},t\right)$

$= t(-\frac{1}{7},-\frac{3}{7},1)$

Note that we have only one "free parameter" ($t$), so the nullity of $A$ is 1, and:

$\text{ker}(A) = \{t(-\frac{1}{7},-\frac{3}{7},1): t \in \Bbb R\}$ that is to say:

$\{(-\frac{1}{7},-\frac{3}{7},1)\}$ is a basis for the null space of $A$.

This null space is bigger than just the 0-vector (0,0,0), so $A$ falls into category 2: it collapses.

Since we know that $A$ "loses (collapses) one dimension", that leaves 2 of the 3 dimensions we started with, and indeed we see the rank of $A$ is 2 (its rref has 2 non-zero rows). This makes sense: 1 + 2 = 3.
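The computation above can be reproduced with SymPy, which finds the same rref, rank, and null-space basis (a sketch; note SymPy scales the free variable to 1, matching the basis vector found above):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 1],
               [3, -1, 0],
               [9, 4, 3]])
R, pivots = A.rref()
print(pivots)        # (0, 1): pivots in the first two columns, so rank 2
ns = A.nullspace()
print(ns[0].T)       # the basis vector (-1/7, -3/7, 1)
```
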

Since our target space has 3 dimensions, there's no way $A$ can "fill" all of it, so it also falls into category 4, it only covers part of the target space.

What is the range of $A$?

Well, $A(1,0,0) = (1,3,9)$ and $A(0,1,0) = (2,-1,4)$. So the image of $A$ contains at LEAST the span of these two vectors:

$\{a(1,3,9) + b(2,-1,4): a,b \in \Bbb R\} \subseteq \text{im}(A)$.

A basis is a minimal spanning set, so if these two vectors ((1,3,9) and (2,-1,4)) are linearly independent, they form a basis for the range of $A$, or the COLUMN SPACE. Let's check to see if they ARE linearly independent:

Suppose:

$a(1,3,9) + b(2,-1,4) = 0$, that is:

$(a+2b,3a-b,9a+4b) = (0,0,0)$, so that:

$a+2b = 0$
$3a-b = 0$
$9a+4b = 0$

We have from the first equation: $a = -2b$, so the second equation becomes:

$-6b - b = 0 \implies -7b = 0 \implies b = 0$, and it is then evident we have $a = 0$.

So the only solution is $a = b = 0$, so they are indeed linearly independent.

So we have found a basis for $\text{im}(A)$, namely $\{(1,3,9),(2,-1,4)\}$. These two vectors determine a plane in $\Bbb R^3$.
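SymPy's `columnspace` confirms this choice of basis (a sketch; it returns the pivot columns of $A$):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 1],
               [3, -1, 0],
               [9, 4, 3]])
basis = A.columnspace()   # pivot columns of A form a basis for im(A)
for v in basis:
    print(v.T)            # (1, 3, 9) and (2, -1, 4)
```
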

Now we are in a position to answer: is $(3,2,13)$ in the range of $A$?

If it is, we have:

$a+2b = 3$
$3a-b = 2$
$9a+4b = 13$

We could solve this system, or use the rref of the augmented matrix, which turns out to be:

$\text{rref}(A|b) = \begin{bmatrix}1&0&\frac{1}{7}&|&1\\0&1&\frac{3}{7}&|&1\\0&0&0&|&0 \end{bmatrix}$

We can pick any value for $z$ we like, so let's pick $z = 0$ to make life easier on us. Our reduced non-homogeneous system is equivalent to:

$x + \dfrac{z}{7} = 1$

$y + \dfrac{3z}{7} = 1$

which tells us that $(x,y,z) = (1,1,0)$ is a solution, or, equivalently:

$(1)(1,3,9) + (1)(2,-1,4) = (3,2,13)$, so $b$ is indeed in the range of $A$.
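As a check, SymPy's `linsolve` recovers the whole solution set of the non-homogeneous system, and setting the free variable $z = 0$ gives the particular solution $(1,1,0)$ found above (a sketch):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([[1, 2, 1],
               [3, -1, 0],
               [9, 4, 3]])
b = sp.Matrix([3, 2, 13])
sol = sp.linsolve((A, b), x, y, z)   # the full solution set, with z free
print(sol)
particular = sol.args[0].subs(z, 0)  # choose z = 0
print(particular)                    # (1, 1, 0)
assert A * sp.Matrix(list(particular)) == b
```
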

Note the crucial role the numbers 1, 2 and 3 (the nullity, rank, and dimension) played in the above. What has happened to 3-space, as $A$ transformed it, is that $A$ shrunk the entire line through the origin and $(1,3,-7)$ (I used $t = -7$ to clear the fractions) down to the origin, leaving only a plane behind. Any point not on that plane can never be reached by $A$.
 

FAQ: System of linear equations, all possible outcomes

What is a system of linear equations?

A system of linear equations is a set of two or more linear equations in the same variables. These equations are solved simultaneously to find the values of the variables that satisfy all of the equations at once.

How do you solve a system of linear equations?

There are several methods for solving a system of linear equations, including substitution, elimination, and graphing. The most common systematic method is Gaussian elimination, which uses row operations to transform the system into an equivalent system in row echelon (upper triangular) form. This allows for easy back substitution to find the values of the variables.
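As an illustration of the idea (not code from this thread), here is a minimal Gaussian-elimination-with-back-substitution sketch for a square invertible system; in practice one would call `numpy.linalg.solve`:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b for a square, invertible A by Gaussian elimination
    with partial pivoting, followed by back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # forward elimination to upper triangular (row echelon) form
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))        # partial pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

print(gauss_solve([[2, 1], [1, 3]], [3, 5]))   # [0.8 1.4]
```
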

What are the possible outcomes of a system of linear equations?

The possible outcomes of a system of linear equations are: no solution, exactly one solution, or infinitely many solutions. Which outcome occurs depends on the rank of the coefficient matrix and of the augmented matrix, not just on the raw counts of equations and variables: any of the three outcomes can occur for a square system, and a system with more equations than variables can likewise have no solution, a unique solution, or infinitely many solutions.

How do you know if a system of linear equations has a unique solution?

A system of linear equations has a unique solution exactly when it is consistent (the equations do not contradict each other) and the coefficient matrix has full column rank, i.e. rank equal to the number of variables. For a square system this means the coefficient matrix is invertible; geometrically, the equations then intersect at exactly one point.

Can a system of linear equations have no solution?

Yes, a system of linear equations can have no solution if the equations are inconsistent (they contradict each other). This means that there is no set of values satisfying all of the equations simultaneously. Overdetermined systems (more equations than variables) are often, though not always, inconsistent.
