Proving that the two given functions are linearly independent

In summary, we are required to show that ##\alpha_1 \varphi_1(t) + \alpha_2 \varphi_2(t) \equiv 0##, with ##\alpha_1, \alpha_2 \in \mathbb{R}##, is only possible when ##\alpha_1 = \alpha_2 = 0##. This means that the function ##\alpha_1 \varphi_1(t) + \alpha_2 \varphi_2(t)## is identical to the function ##f(t)\equiv 0##, and in order for this to be true, both ##\alpha_1## and ##\alpha_2## must be equal to zero.
  • #1
brotherbobby
Summary:: I attach a picture of the given problem below, just before my attempt to solve it.

[Attached image: the problem statement. From the substitutions later in the thread, the functions appear to be ##\varphi_1(t) = 0## on ##[0,1]## and ##\varphi_1(t) = (t-1)^4## on ##[1,2]##, while ##\varphi_2(t) = (t-1)^4## on ##[0,1]## and ##\varphi_2(t) = 0## on ##[1,2]##, both in ##C[0,2]##.]


We are required to show that ##\alpha_1 \varphi_1(t) + \alpha_2 \varphi_2(t) = 0## for some ##\alpha_1, \alpha_2 \in \mathbb{R}## is only possible when both ##\alpha_1, \alpha_2 = 0##.

I don't know how to proceed from here. Surely in the interval ##[0,2]## there are infinitely many real points. It is impossible to show that a linear combination of the given functions is L.I. at every point.

Let me make an attempt and try two relevant points, one inside ##[0,1]## and the other inside ##[1,2]##.

##t = 1/2##

We have ##\alpha_1\times 0 + \alpha_2 (1/2 - 1)^4 = 0## only if ##\alpha_2 = 0## but ##\alpha_1## doesn't have to be zero.

Likewise, for the value ##t=3/2##, ##\alpha_2\neq 0## is admissible.

Hence, the two functions are linearly dependent by my reasoning!

A help would be welcome.

[Moderator's note: Moved from a technical forum and thus no template.]
 
  • #2
brotherbobby said:
Summary:: I attach a picture of the given problem below, just before my attempt to solve it.


We are required to show that ##\alpha_1 \varphi_1(t) + \alpha_2 \varphi_2(t) = 0## for some ##\alpha_1, \alpha_2 \in \mathbb{R}## is only possible when both ##\alpha_1, \alpha_2 = 0##.

I don't know how to proceed from here. Surely in the interval ##[0,2]## there are infinitely many real points. It is impossible to show that a linear combination of the given functions is L.I. at every point.

Let me make an attempt and try two relevant points, one inside ##[0,1]## and the other inside ##[1,2]##.

##t = 1/2##

We have ##\alpha_1\times 0 + \alpha_2 (1/2 - 1)^4 = 0## only if ##\alpha_2 = 0## but ##\alpha_1## doesn't have to be zero.

Likewise, for the value ##t=3/2##, ##\alpha_2\neq 0## is admissible.

Hence, the two functions are linearly dependent by my reasoning!

A help would be welcome.

First, you need to understand the difference between a function having a zero (i.e. a single point where it crosses the x-axis, or the t-axis in this case) and a function being identically zero (i.e. zero at all points). Compare:

##\sin x = 0## for ##x = n\pi##

This function has distinct zeroes or roots.

##\sin^2 x + \cos^2 x - 1 = 0## for all ##x##

This function is identically zero.

Second, in linear algebra the function that is identically zero is the zero vector in your vector space. If you say a set of functions is linearly dependent, it means you can find constants, not all zero, such that the linear sum is the zero function (i.e. zero at all points).

The example above shows that the set ##\{\sin^2 x, \cos^2 x, 1\}## is linearly dependent.

But the set ##\{\sin x, \cos x\}## is not linearly dependent, because you cannot find constants ##a, b##, except ##a = b = 0##, such that ##\forall x \ \ a\sin x + b\cos x = 0##, although you can find isolated points where, for example, ##\sin x + \cos x = 0##.
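To see the last claim concretely: if one fixed pair ##a, b## satisfied ##a\sin x + b\cos x = 0## for all ##x##, then evaluating at ##x = 0## would force ##b = 0##, and evaluating at ##x = \pi/2## would force ##a = 0##. The same two constants must survive every substitution simultaneously.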
 
  • #3
brotherbobby said:
It is impossible to show that a linear combination of the given functions is L.I. at every point.
There is no such thing as "linearly independent at every point". Linear independence is a property of vectors in a vector space. In this case that vector space is a function space.
 
  • #4
Orodruin said:
There is no such thing as "linearly independent at every point". Linear independence is a property of vectors in a vector space. In this case that vector space is a function space.

I do not understand what you mean when you said "there's no such thing as linearly independent at every point".

Let's take two functions of the variable ##x\in [a,b]##, ##f_1(x)## and ##f_2(x)##, and test whether they are linearly dependent or not.

We can determine this by taking the linear combination of the functions, ##c_1 f_1(x) + c_2 f_2(x)##, and seeing whether we can find values of ##c_1, c_2 \in \mathbb{R}##, not both zero, such that the sum is zero identically. If yes, then the two functions are L.D. And if not, they are L.I.

We don't need to evaluate the functions at a given point but keep them in terms of the variable ##x##. However, whatever conclusion we arrive at applies for every point in the domain on which the functions are defined.
 
  • #5
brotherbobby said:
I do not understand what you mean when you said "there's no such thing as linearly independent at every point".

Let's take two functions of the variable ##x\in [a,b]##, ##f_1(x)## and ##f_2(x)##, and test whether they are linearly dependent or not.

We can determine this by taking the linear combination of the functions, ##c_1 f_1(x) + c_2 f_2(x)##, and seeing whether we can find values of ##c_1, c_2 \in \mathbb{R}##, not both zero, such that the sum is zero identically. If yes, then the two functions are L.D. And if not, they are L.I.

We don't need to evaluate the functions at a given point but keep them in terms of the variable ##x##. However, whatever conclusion we arrive at applies for every point in the domain on which the functions are defined.
It was you who said "linearly independent at every point". This concept is irrelevant. At any single point you can find a linear combination of the functions that is zero at that point, so if you restrict the functions to that point they are not linearly independent, which is the only meaningful interpretation of "linearly independent at every point". This does not make the functions linearly dependent, so the "at every point" is meaningless.

The functions are either linearly independent or not; there is no "at every point". The defining condition does involve every point, since the linear combination must vanish identically, but linear independence is a statement about the function space, not about particular points.
 
  • #6
Actually, the equals sign should be an identity, ##\equiv##. That is, the function ##\alpha_1 \varphi_1(t) + \alpha_2 \varphi_2(t)## is identical to the function ##f(t)\equiv 0##.
brotherbobby said:
CORRECTED: We are required to show that ##\alpha_1 \varphi_1(t) + \alpha_2 \varphi_2(t) \equiv 0## for some ##\alpha_1, \alpha_2 \in \mathbb{R}## is only possible when both ##\alpha_1, \alpha_2 = 0##.
The significance of that is that you are free to pick a value of ##t## that makes it easy to prove ##\alpha_1 = 0## and another value of ##t## that makes it easy to prove ##\alpha_2 = 0##. Try ##t=1.5## for proving ##\alpha_1 = 0## and ##t=0.5## for proving ##\alpha_2 = 0##.
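Spelled out, with the function values as they appear in the substitutions elsewhere in this thread: for one fixed pair ##(\alpha_1, \alpha_2)##, setting ##t = 1.5## in ##\alpha_1 \varphi_1(t) + \alpha_2 \varphi_2(t) \equiv 0## gives $$\alpha_1 (1.5-1)^4 + \alpha_2 \cdot 0 = 0 \implies \alpha_1 = 0,$$ while setting ##t = 0.5## gives $$\alpha_1 \cdot 0 + \alpha_2 (0.5-1)^4 = 0 \implies \alpha_2 = 0.$$ The same pair must satisfy both equations at once, so only the trivial combination vanishes identically.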
 
  • #7
Yes, so let me attempt it as you say. I would like to copy the problem for easy reference.

[Attached image: the problem statement, reproduced from post #1.]


(1) I begin with the value of ##t = 0.5##. In this case, we have (from above), ##\alpha_1 \times 0 + \alpha_2 \times (0.5-1)^4 = 0##, a linear equation that can be satisfied with ##\alpha_2 = 0## but ##\boxed{\mathbf{\alpha_1\neq 0}}##.

(2) Likewise, I put the value of ##t = 1.5##. Putting this value in the equation for linear combination, we find ##\alpha_1(1.5-1)^4 + \alpha_2 \times 0 = 0##. This equation must have ##\alpha_1 = 0## but would be valid for ##\boxed{\mathbf{\alpha_2 \neq 0}}##.

Hence, as per my calculation above, the two functions are linearly dependent!

Please tell me where I am going wrong when you can.
 
  • #8
brotherbobby said:
Yes, so let me attempt it as you say. I would like to copy the problem for easy reference.


(1) I begin with the value of ##t = 0.5##. In this case, we have (from above), ##\alpha_1 \times 0 + \alpha_2 \times (0.5-1)^4 = 0##, a linear equation that can be satisfied with ##\alpha_2 = 0## but ##\boxed{\mathbf{\alpha_1\neq 0}}##.

(2) Likewise, I put the value of ##t = 1.5##. Putting this value in the equation for linear combination, we find ##\alpha_1(1.5-1)^4 + \alpha_2 \times 0 = 0##. This equation must have ##\alpha_1 = 0## but would be valid for ##\boxed{\mathbf{\alpha_2 \neq 0}}##.

Hence, as per my calculation above, the two functions are linearly dependent!

Please tell me where I am going wrong when you can.

What ##\alpha_1, \alpha_2## have you found such that $$\forall t: \ \alpha_1 \phi_1(t) + \alpha_2 \phi_2(t) = 0\,?$$
 
  • #9
brotherbobby said:
Yes, so let me attempt it as you say. I would like to copy the problem for easy reference.


(1) I begin with the value of ##t = 0.5##. In this case, we have (from above), ##\alpha_1 \times 0 + \alpha_2 \times (0.5-1)^4 = 0##, a linear equation that can be satisfied with ##\alpha_2 = 0## but ##\boxed{\mathbf{\alpha_1\neq 0}}##.

(2) Likewise, I put the value of ##t = 1.5##. Putting this value in the equation for linear combination, we find ##\alpha_1(1.5-1)^4 + \alpha_2 \times 0 = 0##. This equation must have ##\alpha_1 = 0## but would be valid for ##\boxed{\mathbf{\alpha_2 \neq 0}}##.

Hence, as per my calculation above, the two functions are linearly dependent!

Please tell me where I am going wrong when you can.
I think your problem is that you are treating ##\alpha_1## and ##\alpha_2## as if they were also functions; they are not! They are scalars (numbers), so whatever ##\alpha_i## is in case (1), it must be the same in case (2) (and in any other case you can think of).
Also, notice that it is not true that ##\alpha_1\neq 0## is required in (1), nor ##\alpha_2\neq 0## in (2); those values are merely allowed.
 
  • #10
Sorry, I fail to follow. Let me write equation one again, where a L.C. of the two functions adds to zero for the given value of ##t = 0.5##: ##\alpha_1 \times 0 + \alpha_2 \times (0.5-1)^4 = 0##. The first term is identically 0, so ##\alpha_1## can have any value. However, ##\alpha_2 = 0##; there is no choice there.

Likewise, in case (2), it is ##\alpha_1 = 0## though ##\alpha_2 \neq 0##.

Are you trying to say that once either of the ##\alpha##'s is fixed at 0 for one case, it cannot change for the other case?
 
  • #11
brotherbobby said:
Are you trying to say that once either of the ##\alpha##'s is fixed at 0 for one case, it cannot change for the other case?
Yes, this is the definition of linear independence. You were trying to make the condition pointwise, which is exactly what I said you should not do, and what you yourself argued against.
 
  • #12
Yes, thank you. Unfortunately, I haven't come across a book that makes this point explicit. It is straightforward if the functions are given as, say, ##f_1(x) = x## and ##f_2(x) = x^2##. These two functions are L.I., for no choice of scalar multipliers ##c_1## and ##c_2## can make their linear combination 0 unless the multipliers are zero themselves.

But it gets tricky, as I found out the hard way, if the functions are piecewise defined: ##f_1(x) = x## for ##0 \leq x < 2## and ##f_1(x) = x^2## for ##2 \leq x < 6##, with a different, also piecewise, definition for ##f_2(x)##.

The trick is to realize that if ##c_1 = 0## is forced on the first interval, then it is 0 on the second interval as well.

Thank you for your efforts.
 
  • #13
You can make an analogy to finite-dimensional vector spaces, where the components are labelled 1, 2, etc. If you have two vectors ##\vec v_1 = \vec e_1## and ##\vec v_2 = \vec e_2## and you want to know whether they are linearly independent, you first check the ##\vec e_1##-component of ##a\vec v_1 + b \vec v_2 = 0##, which gives
$$
a + 0 = a = 0.
$$
Doing the same for the ##\vec e_2##-component gives
$$
0 + b = b = 0.
$$
Hence, for the linear combination to be zero, you need all coefficients to be zero, i.e., you have linear independence. The only thing different here is that the "components" are labelled by a continuous variable ##x##.
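As a minimal numerical sketch of this analogy (the piecewise formulas below are an assumption, read off the substitutions earlier in the thread): sample both functions on a grid in ##[0,2]## and check the rank of the two-column matrix of samples. Rank 2 means no single pair ##(\alpha_1, \alpha_2) \neq (0,0)## kills the combination at every sample point, which already witnesses linear independence of the functions.

[CODE=python]
import numpy as np

# Assumed reconstruction of the thread's functions:
# phi1 vanishes on [0, 1] and equals (t - 1)^4 on [1, 2];
# phi2 equals (t - 1)^4 on [0, 1] and vanishes on [1, 2].
def phi1(t):
    return np.where(t >= 1.0, (t - 1.0) ** 4, 0.0)

def phi2(t):
    return np.where(t <= 1.0, (t - 1.0) ** 4, 0.0)

# Each sample point t plays the role of a component index.
t = np.linspace(0.0, 2.0, 201)
A = np.column_stack([phi1(t), phi2(t)])

# Rank 2 means the sampled columns are linearly independent,
# which is enough to conclude independence of the functions
# (a dependence of the functions would show up in the samples).
print(np.linalg.matrix_rank(A))  # prints 2
[/CODE]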
 
  • #14
brotherbobby said:
Yes, thank you. Unfortunately, I haven't come across a book that makes this point explicit. It is straightforward if the functions are given as, say, ##f_1(x) = x## and ##f_2(x) = x^2##. These two functions are L.I., for no choice of scalar multipliers ##c_1## and ##c_2## can make their linear combination 0 unless the multipliers are zero themselves.

But it gets tricky, as I found out the hard way, if the functions are piecewise defined: ##f_1(x) = x## for ##0 \leq x < 2## and ##f_1(x) = x^2## for ##2 \leq x < 6##, with a different, also piecewise, definition for ##f_2(x)##.

The trick is to realize that if ##c_1 = 0## is forced on the first interval, then it is 0 on the second interval as well.

Thank you for your efforts.

You are missing the point. It has nothing to do with the piecewise definition. Essentially you were still looking for zeroes of the function. What you showed was that ##\alpha_1 \phi_1 + \alpha_2 \phi_2## was zero at some points for one choice of ##\alpha_1, \alpha_2## and zero at other points for a different choice of ##\alpha_1, \alpha_2##.

As pointed out above, with your approach to linear dependence all functions are linearly dependent: you pick a point and find coefficients that make the linear combination zero at that point, which is easy to do. Linear dependence means that the same coefficients work for all points.

Note also that, in any vector space, two vectors are linearly dependent if and only if one is a scalar multiple of the other. From this, you can see immediately that your two functions are linearly independent.
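Concretely, with the values already computed in this thread: ##\varphi_1(0.5) = 0## while ##\varphi_2(0.5) = (0.5-1)^4 \neq 0##, so ##\varphi_2## is not a scalar multiple of ##\varphi_1##; and ##\varphi_1(1.5) = (1.5-1)^4 \neq 0## while ##\varphi_2(1.5) = 0##, so ##\varphi_1## is not a scalar multiple of ##\varphi_2##.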
 
  • #15
brotherbobby said:
Yes, thank you. Unfortunately, I haven't come across a book that makes this point explicit. It is straightforward if the functions are given as, say, ##f_1(x) = x## and ##f_2(x) = x^2##. These two functions are L.I., for no choice of scalar multipliers ##c_1## and ##c_2## can make their linear combination 0 unless the multipliers are zero themselves.

But it gets tricky, as I found out the hard way, if the functions are piecewise defined: ##f_1(x) = x## for ##0 \leq x < 2## and ##f_1(x) = x^2## for ##2 \leq x < 6##, with a different, also piecewise, definition for ##f_2(x)##.

The trick is to realize that if ##c_1 = 0## is forced on the first interval, then it is 0 on the second interval as well.

Thank you for your efforts.
Note that the problem has nothing to do with piecewise functions: in the case of ##x## and ##x^2## (which you say are LI) I can consider the combination ##f(x)=\alpha_1 x + \alpha_2 x^2##; then for ##x=1## I can choose ##\alpha_1=1## and ##\alpha_2=-1## and ##f## vanishes, while for ##x=2## I can choose ##\alpha_1=2##, ##\alpha_2=-1## and ##f## also vanishes, and I can do the same at any point you like.

The point of saying that ##x## and ##x^2## are LI is that I cannot find a SINGLE fixed pair ##\alpha_1,\alpha_2##, not both zero, that works at every point.
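To see why no single pair works here: the same ##(\alpha_1, \alpha_2)## would have to satisfy both ##\alpha_1 + \alpha_2 = 0## (from ##x=1##) and ##2\alpha_1 + 4\alpha_2 = 0## (from ##x=2##), and the only simultaneous solution of that system is ##\alpha_1 = \alpha_2 = 0##.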
 
  • #16
Summarizing a lot of comments: Ultimately, as many have pointed out, it comes down to definitions.

We have ## C[0,2]##. This is a vector space. Over what field?

For ##\phi_1, \phi_2 \in C[0,2]##, linear dependence means:

##\alpha_1 \phi_1 + \alpha_2 \phi_2 = 0## with ##\alpha_1, \alpha_2## not both zero.

This implies (assuming ##\alpha_1 \neq 0##):

##\phi_1 = \frac{-\alpha_2}{\alpha_1} \phi_2##. You can rename ##\frac{-\alpha_2}{\alpha_1} =: \alpha_3##, since the quotient operation in a field is well-defined and closed.

Then you end up with:

## \phi_1 =\alpha_3 \phi_2 ##.

Does that help?
 
  • #17
WWGD said:
Summarizing a lot of comments: Ultimately, as many have pointed out, it comes down to definitions.

We have ## C[0,2]##. This is a vector space. Over what field?

For ##\phi_1, \phi_2 \in C[0,2]##, linear dependence means:

##\alpha_1 \phi_1 + \alpha_2 \phi_2 = 0## with ##\alpha_1, \alpha_2## not both zero.

This implies (assuming ##\alpha_1 \neq 0##):

##\phi_1 = \frac{-\alpha_2}{\alpha_1} \phi_2##. You can rename ##\frac{-\alpha_2}{\alpha_1} =: \alpha_3##, since the quotient operation in a field is well-defined and closed.

Then you end up with:

## \phi_1 =\alpha_3 \phi_2 ##.

Does that help?
I am afraid I am confused at this point. It is not your fault but my own lack of understanding and, well, the failure of books to discuss matters like these. It's far from simple. I'd like to read the subject matter first from other texts and return to you.

Thank you all for your help. It's a help to know that my understanding of something is far from adequate.
 
  • #18
brotherbobby said:
I am afraid I am confused at this point. It is not your fault but my own lack of understanding and, well, the failure of books to discuss matters like these. It's far from simple. I'd like to read the subject matter first from other texts and return to you.

Thank you all for your help. It's a help to know that my understanding of something is far from adequate.
Maybe best to leave it aside for a short time and then go back to it. Good luck, and come back with other questions.
 
  • #19
brotherbobby said:
I am afraid I am confused at this point. It is not your fault but my own lack of understanding and, well, the failure of books to discuss matters like these. It's far from simple. I'd like to read the subject matter first from other texts and return to you.

Thank you all for your help. It's a help to know that my understanding of something is far from adequate.

Perhaps nothing is simple if you don't see it, but linear dependence of ##\phi_1## and ##\phi_2## would mean that ##\phi_1## is a scalar multiple of ##\phi_2##, and it must be clear that that is not the case here.
 
  • #20
Gaussian97 said:
The point of saying that ##x## and ##x^2## are LI is that I cannot find a SINGLE fixed pair ##\alpha_1,\alpha_2##, not both zero, that works at every point.
IMO, a better way to say that ##x## and ##x^2## are linearly independent, is that for any choice of x, the equation ##c_1x + c_2x^2 = 0## has only the trivial solution; i.e., ##c_1 = c_2 = 0##.

The whole business of linear independence or linear dependence of vectors or functions is confusing to many students. You can always write an equation such as ##c_1\vec {v_1} + c_2\vec{v_2} + \dots + c_n\vec{v_n} = \vec 0##, whether the vectors are dependent or independent. The crucial point is whether there are infinitely many sets of constants ##c_1, \dots, c_n## satisfying it (linearly dependent) or exactly one, with all of the constants being zero (linearly independent).
 
  • #21
Mark44 said:
IMO, a better way to say that ##x## and ##x^2## are linearly independent, is that for any choice of x, the equation ##c_1x + c_2x^2 = 0## has only the trivial solution; i.e., ##c_1 = c_2 = 0##.

Are you sure you mean that?

Another idea is not to use ##0## to mean the zero function. We could instead use the notation ##F_0## for the zero function. Then, linear dependence of functions means you can find ##c_1, c_2##, not both zero, such that:
$$c_1\phi_1 + c_2\phi_2 = F_0$$
Where
$$F_0: \mathbb{R} \rightarrow \mathbb{R}$$
Such that
$$\forall x: \ F_0(x) = 0$$
 
  • #22
PeroK said:
Are you sure you mean that?
Perhaps I should have said "for every choice of x ... "
 

FAQ: Proving that the two given functions are linearly independent

What does it mean for two functions to be linearly independent?

Linear independence of two functions means that neither function can be written as a scalar multiple of the other. Equivalently, the only linear combination of the two that is identically zero is the one with both coefficients equal to zero.

How can I prove that two functions are linearly independent?

To prove that two functions are linearly independent, use the definition directly: show that the only constants satisfying ##c_1 f_1(x) + c_2 f_2(x) = 0## for all ##x## in the domain are ##c_1 = c_2 = 0##.
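As an illustrative sketch of this recipe (using the pair ##f_1(x) = x## and ##f_2(x) = x^2## rather than the ##\varphi_1, \varphi_2## of this thread), a computer algebra system can collect the coefficients of the combination and confirm that only the trivial solution survives:

[CODE=python]
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
combo = c1 * x + c2 * x**2

# The combination is identically zero only if every polynomial
# coefficient in x vanishes.
coeffs = sp.Poly(combo, x).coeffs()  # [c2, c1]
print(sp.solve(coeffs, [c1, c2]))    # {c1: 0, c2: 0} -> independent
[/CODE]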

What is the importance of proving linear independence between two functions?

Proving linear independence is important because it helps us understand the relationship and behavior of the functions. It also allows us to determine if a set of functions is a basis for a vector space.

Can two functions be linearly independent on one interval but dependent on another interval?

Yes, it is possible for two functions to be linearly independent on one interval but dependent on another, because linear independence depends on the domain of the functions. For example, ##x## and ##|x|## are linearly dependent on ##[0,1]##, where they coincide, but linearly independent on ##[-1,1]##.

Are there any shortcuts or tricks to quickly determine if two functions are linearly independent?

For differentiable functions, the Wronskian offers a partial shortcut: if it is nonzero at even one point of the interval, the functions are linearly independent there. The converse fails, however, so in general one must go back to the definition and use algebraic manipulation.
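For example, for ##f_1(x) = x## and ##f_2(x) = x^2##, $$W(x) = \begin{vmatrix} x & x^2 \\ 1 & 2x \end{vmatrix} = 2x^2 - x^2 = x^2,$$ which is nonzero for every ##x \neq 0##, so the pair is linearly independent on any interval. By contrast, the piecewise pair discussed in this thread has Wronskian identically zero on ##[0,2]## (on each subinterval one of the two functions vanishes together with its derivative) and is nevertheless linearly independent, which is exactly why a vanishing Wronskian proves nothing.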
