On proving linear independence

In summary, to show that {a, b, c} is linearly independent when c is not in the span of {a, b}, you need to prove that c1a + c2b + c3c = 0 implies c1 = c2 = c3 = 0. To show that w is a linear combination of the vectors in S, use the fact that each vi in T can be written as vi = ai1u1 + ai2u2 + ... + aikuk, substitute these expressions into the equation for w, and collect the coefficients of each uj.
  • #1
franz32
Hello. I want to ask questions... I hope you can guide me
in showing the proof.

1. Let a, b and c be vectors in a vector space such that {a, b} is linearly independent. Show that if c does not belong to span {a, b}
, then {a, b, c} is linearly independent.

I know that if {a, b} is linearly independent, then
c1a + c2b = 0 implies c1 = c2 = 0.
What does it mean (imply) when c is not in span {a,b}?
How will I show the essence of the proof?

2. Let S = {u1, u2, ..., uk} be a set of vectors in a vector space, and let T = {v1, v2, ..., vm}, where each vi, i = 1, 2, ..., m, is a linear combination of the vectors in S. Show that

w = b1v1 + b2v2 + ... + bmvm
is a linear combination of the vectors in S.

How will I show the essence of the proof? I don't understand the meaning (implication) of the first sentence.
 
  • #2
For question 1, the statement that c is not in the span of {a, b} means that c cannot be written as a linear combination of a and b. You can prove that {a, b, c} is linearly independent by showing that the only way to make the linear combination zero is with all coefficients zero: if c1a + c2b + c3c = 0, then c1 = c2 = c3 = 0.

For question 2, the hypothesis says that for each vector vi in T there exist scalars ai1, ai2, ..., aik such that vi = ai1u1 + ai2u2 + ... + aikuk. To show that w is a linear combination of the vectors in S, you need to exhibit scalars c1, c2, ..., ck such that w = c1u1 + c2u2 + ... + ckuk. Substitute the expression for each vi into w = b1v1 + b2v2 + ... + bmvm and collect the coefficient of each uj; this gives cj = b1a1j + b2a2j + ... + bmamj.
 
  • #3


Hello there! It's great that you're looking to understand the proof for linear independence. Let me try to guide you through the process:

1. When we say that {a, b} is linearly independent, it means that the only way to write c1a + c2b = 0 is with c1 = c2 = 0. Equivalently, there is no choice of c1 and c2, not both zero, satisfying c1a + c2b = 0. This is the definition of linear independence.

Now, if c is not in span {a, b}, it means that c cannot be written as a linear combination of a and b: there are no scalars c1 and c2 at all such that c1a + c2b = c.

To show that {a, b, c} is linearly independent, you need to show that c1a + c2b + c3c = 0 forces c1 = c2 = c3 = 0. This can be done by contradiction. Suppose the equation holds with the coefficients not all zero. If c3 ≠ 0, solve for c to get c = -(c1/c3)a - (c2/c3)b, which puts c in span {a, b}, a contradiction. So c3 = 0, and then c1a + c2b = 0 forces c1 = c2 = 0 by the linear independence of {a, b}.
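The argument above can be checked numerically: a set of vectors is linearly independent exactly when the matrix with those vectors as rows has rank equal to the number of vectors. The sketch below (a minimal illustration, assuming concrete vectors in R^3; the `rank` helper is written here for the example, not taken from any library) shows that appending a vector inside span{a, b} leaves the rank at 2, while a vector outside the span raises it to 3:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # Find a pivot at or below row r in this column.
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # Eliminate this column from every other row.
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

a, b = [1, 0, 0], [0, 1, 0]
c_in_span = [2, 3, 0]   # c = 2a + 3b, so {a, b, c} is dependent
c_outside = [0, 0, 1]   # c is not in span{a, b}

print(rank([a, b]))                 # 2: {a, b} independent
print(rank([a, b, c_in_span]))      # still 2: dependent set
print(rank([a, b, c_outside]))      # 3: independent set
```

Exact `Fraction` arithmetic is used so the rank computation has no floating-point round-off; this only illustrates the theorem in a concrete space, it is not a substitute for the general proof.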

2. The first sentence means that the set T is made up of vectors that are linear combinations of the vectors in S. In other words, each vector in T can be written as a linear combination of the vectors in S.

To show that w is a linear combination of the vectors in S, you need to find scalars c1, c2, ..., ck such that w = c1u1 + c2u2 + ... + ckuk. This can be done by using the fact that each vi is a linear combination of the vectors in S: write vi = ai1u1 + ai2u2 + ... + aikuk, substitute these expressions into w = b1v1 + b2v2 + ... + bmvm, and group the terms by uj. The coefficient of uj is cj = b1a1j + b2a2j + ... + bmamj, which gives the desired linear combination of the vectors in S representing w.
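The substitution step can be sketched in code. The matrix `A`, the coefficients `b`, and the basis choice `u` below are hypothetical illustration data, not from the thread; the point is that the collected coefficients cj = sum_i bi*aij reproduce w exactly:

```python
# S = {u1, u2}; each v_i is given by its coefficients: v_i = sum_j A[i][j] * u_j.
A = [[1, 2],    # v1 = 1*u1 + 2*u2
     [3, -1]]   # v2 = 3*u1 - 1*u2
b = [4, 5]      # w  = 4*v1 + 5*v2

# Collect the coefficient of each u_j: c_j = sum_i b_i * A[i][j]
c = [sum(b[i] * A[i][j] for i in range(len(b))) for j in range(len(A[0]))]
print(c)  # [19, 3]  ->  w = 19*u1 + 3*u2

# Sanity check with concrete vectors u1, u2 in R^2:
u = [[1, 0], [0, 1]]
v = [[sum(A[i][j] * u[j][k] for j in range(2)) for k in range(2)] for i in range(2)]
w_direct = [sum(b[i] * v[i][k] for i in range(2)) for k in range(2)]
w_via_c = [sum(c[j] * u[j][k] for j in range(2)) for k in range(2)]
print(w_direct == w_via_c)  # True
```

Note that computing w directly through the vi and computing it through the collected coefficients cj give the same vector, which is exactly what the proof asserts.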

I hope this helps to clarify the essence of the proof for you. Remember, the key is to understand the definitions and use them to guide your reasoning. Best of luck with your studies!
 

FAQ: On proving linear independence

What is linear independence?

Linear independence is a concept in linear algebra where a set of vectors is said to be linearly independent if no vector in the set can be written as a linear combination of the other vectors in the set. In other words, no vector can be formed by scaling and adding the other vectors in the set.

Why is it important to prove linear independence?

Proving linear independence is important in various areas of science and engineering, such as in linear systems analysis, signal processing, and statistics. It allows us to understand the relationships and dependencies between different variables or vectors and make accurate predictions or decisions based on this information.

What is the process for proving linear independence?

The process for proving linear independence involves setting up a linear combination of the vectors in the set and then solving for the coefficients that would make the combination equal to the zero vector. If the only solution is for all coefficients to be equal to zero, then the vectors are linearly independent.
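As a concrete illustration of this process (a standard textbook example, not taken from the thread above), take the vectors (1, 2) and (3, 4):

```latex
c_1 (1, 2) + c_2 (3, 4) = (0, 0)
\;\Longleftrightarrow\;
\begin{cases}
c_1 + 3c_2 = 0 \\
2c_1 + 4c_2 = 0
\end{cases}
```

Subtracting twice the first equation from the second gives -2c2 = 0, so c2 = 0, and then the first equation gives c1 = 0. Since the only solution is c1 = c2 = 0, the vectors (1, 2) and (3, 4) are linearly independent.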

Can a set of vectors be both linearly independent and dependent?

No, a set of vectors is either linearly independent or linearly dependent, never both. However, a linearly dependent set can contain a subset that is linearly independent.

What are some applications of linear independence?

Linear independence is used in various fields, such as computer graphics, physics, and economics. It is used to solve systems of linear equations, perform matrix operations, and determine the basis of a vector space. It is also important in machine learning and data analysis to identify and remove redundant features or variables.
