Linear Dependency: Find x for a,b,c in R3

  • MHB
  • Thread starter Yankel
  • #1
Yankel
Let a, b, c be three vectors in R3.

Let A be a 3x3 matrix with a, b, c as its columns. It is known that there exists an x such that:

[tex]A^{17}\cdot \begin{pmatrix} 1\\ 2\\ x \end{pmatrix}= \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}[/tex]

Which statement is the correct one:

1) a,b and c are linearly independent
2) a,b and c are linearly dependent
3) transpose((1,2,x)) is a linear combination of a, b, c
4) the system:
[tex]A\cdot \begin{pmatrix} 1\\ 2\\ x \end{pmatrix}[/tex]
has a non-trivial solution

The correct answer is (2), but I don't understand why it is correct...

thanks.
 
  • #2
if a,b,c are linearly independent, then rank(A) = 3.

this means, in particular (by rank-nullity), that:

3 = dim(ker(A)) + rank(A) = dim(ker(A)) + 3, so:

dim(ker(A)) = 0, that is, the null space of A is {(0,0,0)}.

but if A^17(x,y,z) = (0,0,0), then A(A^16(x,y,z)) = (0,0,0), and since the null space of A is trivial, this forces:

A^16(x,y,z) = (0,0,0). repeating the same argument:

A^15(x,y,z) = A^14(x,y,z) =...= A(x,y,z) = (0,0,0),

so that (x,y,z) = (0,0,0).

since (1,2,x) ≠ (0,0,0) (no matter what we choose for x),

the columns of A cannot be linearly independent. this means (1) is not true.
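
here is a quick numerical sanity check (a sketch using numpy; the matrix below is just an illustrative choice, not from the problem): when the columns are independent, rank(A) = 3, and then rank(A^17) = 3 as well, so only the zero vector is sent to zero:

[code]
import numpy as np

# an invertible (rank 3) matrix: its columns are linearly independent
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 5.0]])

A17 = np.linalg.matrix_power(A, 17)
print(np.linalg.matrix_rank(A17))  # 3 -> ker(A^17) = {(0,0,0)}
[/code]

since (1,2,x) is never the zero vector, no invertible A can satisfy the hypothesis.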

let's look at (3). suppose that:

$A = \begin{bmatrix}0&0&0\\0&0&0\\0&0&1 \end{bmatrix}$

then for x = 0, we have:

A(1,2,0) = (0,0,0), so certainly A^17(1,2,0) = (0,0,0), but (1,2,0) is not in im(A), the column space of A (which it would have to be, were it a linear combination of a, b, c).

note that it IS possible to have SOME A such that:

A^17(1,2,x) = (0,0,0), with (1,2,x) in the column space of A. let:

$A = \begin{bmatrix}-2&1&0\\-4&2&0\\-2x&x&0 \end{bmatrix}$

clearly A(0,1,0) = (1,2,x) so that (1,2,x) = b (and is thus a linear combination of a,b, and c). but an easy calculation shows that:

A^2 = 0, for any choice of x, so that A^17(x,y,z) = A^15(A^2(x,y,z)) = A^15(0,0,0) = (0,0,0).

so (3) isn't ALWAYS true, but it MIGHT be true.
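
both claims about this example are easy to verify numerically (a numpy sketch; any value of x works):

[code]
import numpy as np

x = 7.0  # arbitrary choice of x

# the second example matrix above
A = np.array([[-2.0, 1.0, 0.0],
              [-4.0, 2.0, 0.0],
              [-2*x,   x, 0.0]])

print(A @ np.array([0.0, 1.0, 0.0]))  # (1, 2, x): the column b
print(A @ A)                          # the zero matrix, so A^17 = 0 too
print(np.linalg.matrix_power(A, 17) @ np.array([1.0, 2.0, x]))  # (0,0,0)
[/code]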

you have some typo in (4): as written, you haven't defined a system of equations (there is no "equals" sign), so until you rectify this, i cannot give a proper argument. however, the argument against (1) shows that indeed, {a,b,c} cannot be linearly independent, so must be linearly dependent.
 
  • #3
thanks for your help

I never studied transformations (yet), so I am struggling with im() and ker()...

I do understand why the columns of A^17 are dependent; the only part I'm missing is why, if the columns of A^17 are dependent, the columns of A are also dependent...

I need an explanation that doesn't use linear transformations knowledge...thanks !
 
  • #4
fix a basis for R^n, and another one for R^m. then there is a unique matrix relative to those bases for any linear transformation T, and every such matrix corresponds to some linear transformation T.

loosely, matrices and linear transformations are "the same things", they're just "written" differently.

you have probably studied null spaces and column spaces belonging to a matrix $A$. these ARE the direct analogues (for a linear transformation $v \to Av$) of ker(T) and im(T) for a general linear transformation T. there's nothing mysterious about this:

kernels are what maps to 0.
images are the entirety of what gets mapped TO.

kernels (or null spaces) measure "how much shrinkage we get". images measure "how much of the target space we actually reach". there's a natural trade-off here: bigger image means smaller kernel, and smaller image means bigger kernel. the way we keep score is called "dimension".
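
this trade-off is exactly the rank-nullity theorem, stated here for reference:

[tex]\dim(V) = \dim(\ker(T)) + \dim(\mathrm{im}(T))[/tex]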

linear independence is related to kernels
spanning is related to images

what this means for matrices is:

a matrix is 1-1 if the nullspace is {0}, which means ALL its columns are linearly independent. for square matrices, this means the matrix is invertible.

a matrix is onto if it has as many independent columns as the dimension of its co-domain (target space). in particular if it is an mxn matrix with m < n, the columns will be linearly dependent.
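
as a concrete check (a numpy sketch; the matrix is an arbitrary example chosen here), the rank counts independent columns, and rank-nullity then gives the dimension of the null space:

[code]
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])  # third column = first + second

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank      # rank-nullity
print(rank, nullity)             # 2 1 -> dependent columns, nontrivial kernel
[/code]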

linear transformations (think: matrices with a "fancy name". this is not quite accurate, but close enough for visualization) change one vector space to another. they preserve "vector-space-ness": that is they preserve sums:

T(u+v) = T(u) + T(v)

and scalar multiples:

T(cv) = c(T(v)).

since they are functions, they can't "enlarge" a vector space:

dim(T(V)) ≤ dim(V)

but "good ones" preserve dimension:

dim(T(V)) = dim(V) <---these are the invertible ones.

********

for any linear transformation T:V-->W, the set ker(T) = {u in V: T(u) = 0} is a SUBSPACE of V. this boils down to the following facts:

1) if u is in ker(T) and v is in ker(T), then:

T(u+v) = T(u) + T(v) (since T is linear)
= 0 + 0 = 0 (since T(u) = 0, and T(v) = 0).

2) if u is in ker(T), so is cu:

T(cu) = c(T(u)) = c(0) = 0

3) 0 is always in ker(T):

T(0) = T(0+0) = T(0) + T(0)
0 = T(0) (subtracting T(0) from both sides).

if T:V-->W is a linear transformation, then the set:

im(T) = T(V) = {w in W: w = T(v) for some v in V} is a subspace of W.

1) suppose w,x are in im(T).

then w = T(u), x = T(v) for some u,v in V.

thus w+x = T(u) + T(v) = T(u+v), so w+x is in im(T).

2) if w is in im(T), so is cw:

since w = T(u), cw = c(T(u)) = T(cu), so cw is in im(T).

3) 0 is in im(T):

0 = T(0), and 0 is always in V, for any vector space.
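
a concrete instance of both proofs (a numpy sketch; the matrix here is chosen purely for illustration), with T(v) = Av:

[code]
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])    # T(v) = A v

u = np.array([1.0, -1.0])     # in ker(T): A u = (0, 0)
print(A @ u, A @ (3 * u))     # both (0, 0): ker(T) closed under scaling

w = A @ np.array([1.0, 0.0])  # (1, 1), in im(T)
z = A @ np.array([0.0, 1.0])  # (1, 1), in im(T)
print(w + z)                  # (2, 2) = A(1, 1), so w + z is in im(T)
[/code]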

*********
now, basis vectors are useful: they let us use COORDINATES (numbers) to represent vectors. but bases aren't UNIQUE; we can have several different coordinate systems on the same space. so it's better not to get "too attached" to any particular basis. dimension is one of those things that stays the same no matter which basis we use, so theorems that say something about dimension are more POWERFUL than theorems which rely on numerical calculation.
 
  • #5


The correct answer is (2) because (1,2,x) is never the zero vector (its second entry is 2), so the null space of A^17 contains a non-zero vector and A^17 is singular. A power of a matrix is singular only if the matrix itself is singular, so A is singular, which means its columns a, b, and c are linearly dependent: at least one of them can be expressed as a linear combination of the others. Therefore, the correct statement is that a, b, and c are linearly dependent.
 

FAQ: Linear Dependency: Find x for a,b,c in R3

What is linear dependency?

Linear dependency occurs when one vector can be expressed as a linear combination of the other vectors. In other words, at least one vector carries no new direction beyond the others, making the set dependent.

What does it mean to find x for a,b,c in R3?

In R3, we are referring to a three-dimensional vector space. In this problem, finding x for a, b, c means finding a value of x so that the vector (1,2,x) satisfies the given equation; more generally, a dependency among a, b, c is a choice of scalars, not all zero, satisfying x1·a + x2·b + x3·c = 0. These coefficients express the relationship between the three vectors in three-dimensional space.

How can I determine if a set of vectors is linearly dependent?

One way to determine linear dependency is by using the determinant of the matrix formed by the vectors. If the determinant is equal to zero, the vectors are linearly dependent. Another way is by checking if one vector can be written as a linear combination of the other vectors.
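
For example (a minimal numpy sketch, with vectors chosen here for illustration):

[code]
import numpy as np

a = np.array([1.0, 0.0, 1.0])
b = np.array([2.0, 1.0, 3.0])
c = a + b                    # deliberately dependent on a and b

M = np.column_stack([a, b, c])
print(np.linalg.det(M))      # 0.0 (up to rounding) -> linearly dependent
[/code]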

What is the importance of understanding linear dependency?

Understanding linear dependency is important in many areas of mathematics and science, such as linear algebra and physics. It allows us to solve systems of linear equations and to understand the relationships between vectors in a vector space.

How can I solve for x in a linear dependency problem?

To solve for x, we can use various methods such as Gaussian elimination or finding the inverse of the matrix formed by the vectors. It is also important to note that there may be multiple solutions or no solution at all, depending on the specific problem.
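
One concrete method (a sketch using numpy's SVD, one of several possibilities; the matrix is the x = 0 case of the example from post #2) finds a null-space vector of a singular A directly:

[code]
import numpy as np

A = np.array([[-2.0, 1.0, 0.0],
              [-4.0, 2.0, 0.0],
              [ 0.0, 0.0, 0.0]])

# right-singular vectors for (near-)zero singular values span ker(A)
_, s, Vt = np.linalg.svd(A)
v = Vt[-1]
print(s)       # last singular value ~ 0 -> A is singular
print(A @ v)   # ~ (0, 0, 0): v is a non-trivial solution
[/code]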
