Linear Independence of Two Functions

In summary, the functions ##y_1(t)=\cos(t)\sin(t)## and ##y_2(t)=\sin(2t)## are linearly dependent on the interval (0,1) because one is a constant multiple of the other. This follows from definition (1), which states that a pair of functions is linearly independent on an interval if and only if neither of them is a constant multiple of the other on all of the interval. Concretely, setting ##y_1(t)=x \cdot y_2(t)## and solving gives the constant ##x=1/2##, so the functions are dependent. It is also worth noting that linear (in)dependence can change with the scalar field being used, as seen in the example of ##\{1,i\}##, which is linearly independent over ##\mathbb{R}## but linearly dependent over ##\mathbb{C}##.
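The dependence comes down to the double-angle identity:

$$\sin(2t) = 2\sin(t)\cos(t) \quad\Longrightarrow\quad y_1(t) = \cos(t)\sin(t) = \tfrac{1}{2}\sin(2t) = \tfrac{1}{2}\,y_2(t) \quad\text{for all } t\in(0,1).$$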
  • #1
Drakkith
Mentor

Homework Statement


Use definition (1) to determine if the functions ##y_1## and ##y_2## are linearly dependent on the interval (0,1).
##y_1(t)=\cos(t)\sin(t)##
##y_2(t)=\sin(t)##

Homework Equations


(1) A pair of functions is said to be linearly independent on the interval ##I## if and only if neither of them is a constant multiple of the other on all of ##I##.

The Attempt at a Solution


My first thought was to put an ##x## in front of one of the functions, set them equal to each other, and solve for ##x##. That left me with ##x=\cos(t)## or ##x=\sec(t)##, depending on which function I scaled.
My thought was that since these expressions change value as ##t## changes, the two original functions are linearly independent. But that appears to be wrong.

Apparently my understanding of what definition (1) means is incorrect. Could someone enlighten me?
 
  • #2
I read (and know) the definition as: ##\{y_1,y_2\}## is linearly dependent iff there is a ##c \in \mathbb{R}## such that ##y_1(t) = c \cdot y_2(t)## for all ##t \in I##, or the other way around (in case ##y_2 \equiv 0##). This means linear (in)dependence over ##\mathbb{R}##.

Neither ##\cos(t)## nor ##\sin(t)## has zeroes in ##I##; we have ##y_1(t)=\cos(t) \cdot y_2(t)## and ##\cos(t) \not\equiv c## for any constant ##c##, so they are linearly independent. Am I missing something? Why do you think it's wrong? Or is it about linearity over a field of functions?
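A quick symbolic check of this argument (a minimal sympy sketch, not from the original post): if the pair were linearly dependent, the ratio ##y_1/y_2## would have to be a constant on ##I##.

```python
# Check whether y1/y2 is a constant on (0, 1).
import sympy as sp

t = sp.symbols('t')
y1 = sp.cos(t) * sp.sin(t)
y2 = sp.sin(t)  # the function as stated in post #1

ratio = sp.simplify(y1 / y2)
print(ratio)              # cos(t): varies with t, so no single constant works
print(sp.diff(ratio, t))  # -sin(t), nonzero on (0, 1) -> the ratio is not constant
```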
 
  • #3
I'm sorry, Fresh. Apparently ##y_2=\sin(2t)##, not ##\sin(t)##...
Solving for ##x## now yields ##x=1/2##, which means they are linearly dependent, right?
 
  • #4
Ayuh
 
  • #5
Drakkith said:
I'm sorry, Fresh. Apparently ##y_2=\sin(2t)##, not ##\sin(t)##...
Solving for ##x## now yields ##x=1/2##, which means they are linearly dependent, right?
... if I knew the formula by heart ... wait ... yes. Or ##2## if you take the other direction.
And if we consider the field of all continuous and bounded functions on ##I## which don't have zeroes in ##I##, then it would even be linearly dependent in the first version.
 
  • #6
vela said:
Ayuh
Gesundheit.
 
  • #7
fresh_42 said:
... if I knew the formula by heart ... wait ... yes. Or ##2## if you take the other direction.
And if we consider the field of all continuous and bounded functions on ##I## which don't have zeroes in ##I##, then it would even be linearly dependent in the first version.

We're not looking for integer values for ##x## here, are we?
 
  • #8
Drakkith said:
We're not looking for integer values for ##x## here, are we?
No, just elements of ##I=(0,1)##. The remark is meant to stress that linear dependence must always be stated over something. Here, over the reals.

If we consider continuous and bounded functions on ##I## which do not have a zero, then these functions form a field.
We define the multiplication pointwise, ##(y_1 \cdot y_2) \, : \, t \longmapsto y_1(t)\cdot y_2(t)##, and this gives us a field ##F##, which can serve as the scalar field of a vector space, e.g. a space of certain other functions.
Now ##\lambda = \cos## is an element of this field ##F##, which makes it a scalar, and ##y_1(t)= \cos(t)\cdot \sin(t) = \lambda(t) \cdot y_2(t)## is a linear dependence. So if we change the scalar field of a vector space, linear independence can turn into linear dependence.

An easier example is that ##\{1,i\}## is linearly independent over ##\mathbb{R}## but linearly dependent over ##\mathbb{C}##.
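Spelled out, the ##\{1,i\}## example reads:

$$a\cdot 1 + b\cdot i = 0 \text{ with } a,b\in\mathbb{R} \implies a=b=0 \quad(\text{independent over } \mathbb{R}), \qquad i = i\cdot 1 \quad(\text{dependent over } \mathbb{C}).$$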
 
  • #9
Drakkith said:
I'm sorry, Fresh. Apparently ##y_2=\sin(2t)##, not ##\sin(t)##...
Solving for ##x## now yields ##x=1/2##, which means they are linearly dependent, right?
##y_1(t) = \cos(t)\sin(t)## and ##y_2(t) = \sin(2t)## are linearly dependent; each function is some constant multiple of the other. I'm assuming that the revised version of ##y_2(t)## is the correct function in this problem.

BTW, it's easy to check two functions or two vectors for linear dependence/independence, but it's a bit more complicated when there are three or more vectors/functions. With three vectors or functions, it's possible that no two of them are multiples of each other, but the set of vectors/functions can still be linearly dependent.
 
  • #10
Mark44 said:
I'm assuming that the revised version of ##y_2(t)## is the correct function in this problem.

That's right.

Mark44 said:
BTW, it's easy to check two functions or two vectors for linear dependence/independence, but it's a bit more complicated when there are three or more vectors/functions. With three vectors or functions, it's possible that no two of them are multiples of each other, but the set of vectors/functions can still be linearly dependent.

Thanks Mark. Any idea if I'll get into that in an introduction to differential equations class?
 
  • #11
fresh_42 said:
No, just elements of ##I=(0,1)##. The remark is meant to stress that linear dependence must always be stated over something. Here, over the reals.

If we consider continuous and bounded functions on ##I## which do not have a zero, then these functions form a field.
We define the multiplication pointwise, ##(y_1 \cdot y_2) \, : \, t \longmapsto y_1(t)\cdot y_2(t)##, and this gives us a field ##F##, which can serve as the scalar field of a vector space, e.g. a space of certain other functions.
Now ##\lambda = \cos## is an element of this field ##F##, which makes it a scalar, and ##y_1(t)= \cos(t)\cdot \sin(t) = \lambda(t) \cdot y_2(t)## is a linear dependence. So if we change the scalar field of a vector space, linear independence can turn into linear dependence.

An easier example is that ##\{1,i\}## is linearly independent over ##\mathbb{R}## but linearly dependent over ##\mathbb{C}##.

You're a little above my knowledge level, Fresh. I'm just an undergrad taking Introduction to Differential Equations (Math 254). I've never even seen most of what you just said in a math class before.
 
  • #12
Drakkith said:
You're a little above my knowledge level, Fresh. I'm just an undergrad taking Introduction to Differential Equations (Math 254). I've never even seen most of what you just said in a math class before.
It's basically the same as with the good old arrows. If you have two, then linear independence means they point in two different directions; if they are linearly dependent, they point in the same (or exactly opposite) direction, i.e. one is a multiple of the other. If we have three vectors, they can form a coordinate system in ordinary space, which means they are linearly independent, or they only define a plane or a line, in which case they are linearly dependent, as we cannot get the entire space from them.

All I wanted to say is that one has to mention where the multiples are allowed to come from. ##\{1,i\}## point in two directions if we allow multiples from the reals, but in the same direction if we allow complex multiples, because ##i = i \cdot 1##, as in your definition (1). This happens because we cannot really draw complex numbers other than in two real dimensions. But in itself, ##\mathbb{C}## is a one-dimensional vector space, just a complex one.
 
  • #13
fresh_42 said:
It's basically the same as with the good old arrows. If you have two, then linear independence means they point in two different directions; if they are linearly dependent, they point in the same (or exactly opposite) direction, i.e. one is a multiple of the other. If we have three vectors, they can form a coordinate system in ordinary space, which means they are linearly independent, or they only define a plane or a line, in which case they are linearly dependent, as we cannot get the entire space from them.

That mostly makes sense. I understand that a vector is a multiple of another vector if you can multiply it by a scalar and have them equal each other, which requires that they point in the same direction or in opposite directions, right?

fresh_42 said:
All I wanted to say is that one has to mention where the multiples are allowed to come from. ##\{1,i\}## point in two directions if we allow multiples from the reals, but in the same direction if we allow complex multiples, because ##i = i \cdot 1##, as in your definition (1). This happens because we cannot really draw complex numbers other than in two real dimensions. But in itself, ##\mathbb{C}## is a one-dimensional vector space, just a complex one.

I can't say I understand this I'm afraid. It's been a while since I had to do anything with vectors and I haven't worked with complex vectors yet.
 
  • #14
Drakkith said:
Any idea if I'll get into that in an introduction to differential equations class?
You should. When you get into finding solutions of 2nd-order DEs, there will generally be an infinite number of solutions, but they are linear combinations of two basic solutions that form a pair of linearly independent functions. These functions "span" the solution space. You probably don't understand these ideas just yet, but they show up in both differential equations and linear algebra; there's a large overlap between these two areas of mathematics.
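As an illustration of that claim (a minimal sympy sketch, assuming sympy is available; the equation ##y''+y=0## is my choice of example, not from the thread):

```python
# The general solution of a 2nd-order linear ODE is a linear combination
# of two linearly independent basic solutions.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y'' + y = 0: the independent pair {cos(t), sin(t)} spans the solution space
sol = sp.dsolve(sp.Eq(y(t).diff(t, 2) + y(t), 0), y(t))
print(sol)  # Eq(y(t), C1*sin(t) + C2*cos(t))
```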

Drakkith said:
That mostly makes sense. I understand that a vector is a multiple of another vector if you can multiply it by a scalar and have them equal each other, which requires that they point in the same direction or in opposite directions, right?
Correct, if you're talking about linearly dependent vectors. The situation is the same with linearly dependent functions -- i.e., each one is some nonzero multiple of the other.
 
  • #15
Drakkith said:
That mostly makes sense. I understand that a vector is a multiple of another vector if you can multiply it by a scalar and have them equal each other, which requires that they point in the same direction or in opposite directions, right?
Yes. The opposite direction comes into play because we can multiply by negative numbers, which reverses the direction.
Drakkith said:
I can't say I understand this I'm afraid. It's been a while since I had to do anything with vectors and I haven't worked with complex vectors yet.
One can also consider the reals as an infinite-dimensional vector space over the rationals. ##\pi## and ##e## or ##\sqrt{2}## are not multiples of each other if we only allow rationals as the base field, as the reservoir for multiples. In this sense, and according to definition (1), they are linearly independent over ##\mathbb{Q}##. But if we allow real multiples in definition (1), i.e. real stretches and compressions, then they become multiples of one another. So it matters where the allowed multiples come from.
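For instance, with ##\sqrt{2}## and ##\pi## (a case where independence over ##\mathbb{Q}## is easy to prove, because ##\pi## is transcendental):

$$\pi = \frac{\pi}{\sqrt{2}}\cdot\sqrt{2} \text{ with } \frac{\pi}{\sqrt{2}}\in\mathbb{R}, \qquad\text{but}\qquad \pi = \frac{p}{q}\sqrt{2} \text{ is impossible for integers } p, q,$$

since that would make ##\pi## algebraic. So the pair is dependent over ##\mathbb{R}## but independent over ##\mathbb{Q}##.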
 
  • #16
Drakkith said:
I'm sorry, Fresh. Apparently ##y_2=\sin(2t)##, not ##\sin(t)##...
Solving for ##x## now yields ##x=1/2##, which means they are linearly dependent, right?
A good point to think about is whether ##c_1\sin(2t)+c_2\sin(t)\cos(t)=0##. But notice that the zero on the right is the zero _function_, not just the number 0; remember we are adding combinations of functions, so the result is a function, namely the one that is identically 0, i.e. ##f## with ##f(t)=0## for all ##t##.
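A symbolic way to carry that out (a sympy sketch; the variable names are my own):

```python
# Look for constants c1, c2, not both zero, that make
# c1*sin(2t) + c2*sin(t)*cos(t) the zero function.
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
expr = sp.expand_trig(c1 * sp.sin(2*t)) + c2 * sp.sin(t) * sp.cos(t)
print(sp.factor(expr))           # (2*c1 + c2)*sin(t)*cos(t)
print(sp.solve(2*c1 + c2, c2))   # [-2*c1]: nontrivial solutions exist -> dependent
```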
 
  • #17
WWGD said:
A good point to think about is whether ##c_1\sin(2t)+c_2\sin(t)\cos(t)=0##. But notice that the zero on the right is the zero _function_, not just the number 0; remember we are adding combinations of functions, so the result is a function, namely the one that is identically 0, i.e. ##f## with ##f(t)=0## for all ##t##.

Excellent post. Are you aware of the definition of linear independence, Drakk? Or what it means? Maybe I can help clarify a few things.
When I was first learning these ideas, I had a hard time understanding what linear independence, basis, etc. were.
I read Serge Lang's Introduction to Linear Algebra and finally understood these ideas. I recommend having this book on your bookshelf.
 
  • #18
MidgetDwarf said:
Are you aware of the definition of linear independence, Drakk? Or what it means?

Only a very basic idea regarding two functions not being constant multiples of each other, and only in the context of two functions at a time. Most of the math notation and terminology above is just too far beyond my knowledge level right now.
 
  • #19
Definition of linear independence:

Let V be a vector space, and let v1,...,vn (this means n vectors) be elements of V. Let a1,...,an be numbers such that
a1v1+...+anvn=0 (1)
then ai=0 for all i=1,...,n.

Note: The 1, n, and i should be subscripts (I need to relearn LaTeX). This is an easier definition to understand.

The main idea is that if the vectors are linearly independent, the only way to make (1) true is if the numbers themselves are all equal to 0.

You cannot write these vectors as linear combinations of one another if they are linearly independent.

Let me know if this makes sense. I can explain further.
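A numeric illustration of this definition (a hedged numpy sketch with example vectors of my own; it also echoes Mark44's earlier remark that three vectors can be dependent even when no two are multiples of each other):

```python
# The columns of A are linearly independent iff rank(A) equals the number of
# columns, i.e. iff Ax = 0 has only the trivial solution.
import numpy as np

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 3.0, 1.0])   # v3 = v1 + v2: no two are multiples, yet dependent

A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 2, less than 3 -> linearly dependent
```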
 
  • #20
MidgetDwarf said:
Definition of linear independence:

Let V be a vector space, and let v1,...,vn (this means n vectors) be elements of V. Let a1,...,an be numbers such that
a1v1+...+anvn=0 (1)
then ai=0 for all i=1,...,n.

There is some subtlety here that escapes many students. For example, consider ##\vec{u} = <1, 2>## and ##\vec{v} = <2, 4>##.
I notice that ##a_1<1, 2> + a_2<2, 4> = 0## when ##a_1 = 0## and ##a_2 = 0##, so I conclude (wrongly) that ##\vec u## and ##\vec v## are linearly independent. (I am repeating the reasoning that I've seen many students display.)

Whether a set of vectors is linearly dependent or linearly independent, we can always write the equation ##a_1\vec{v_1} + a_2\vec{v_2} + \dots + a_n\vec{v_n} = \vec 0##. For a linearly independent set, there is only one solution to this equation: ##a_1 = a_2 = \dots = a_n = 0##; i.e., the trivial solution.
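The same example can be checked symbolically (a small sympy sketch):

```python
# Solve a1*<1, 2> + a2*<2, 4> = <0, 0> for a1 and a2.
import sympy as sp

a1, a2 = sp.symbols('a1 a2')
print(sp.solve([a1 + 2*a2, 2*a1 + 4*a2], [a1, a2], dict=True))
# [{a1: -2*a2}]: a one-parameter family of solutions, not just the trivial one,
# so the vectors are linearly dependent
```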

MidgetDwarf said:
Note: The 1, n, and i should be subscripts (I need to relearn LaTeX).
This would take less than one minute to learn. See https://www.physicsforums.com/help/latexhelp/, in the section titled Superscripts and subscripts.

@Drakkith, we're talking about linear dependence/independence of vectors here. The situation is almost exactly the same for linear dependence/independence of functions.
 
  • #21
There is another subtlety in the case of two vectors. If one vector is zero, the set is always linearly dependent, as there is then always a non-trivial solution of the equation: the coefficient in front of the zero vector can be arbitrary. Here the word "neither" in definition (1) becomes important. Let ##y_2= 0##. Then ##\{y_1,0\}## is linearly dependent, although ##0 \neq y_1 \neq c \cdot 0 = c \cdot y_2## whatever we choose ##c## to be, whereas ##y_2 = 0 = 0 \cdot y_1## does satisfy the condition. For this reason the defining equation ##c_1y_1+c_2y_2=0## is to be preferred over the special version in definition (1): it covers this special case and avoids having to mention the symmetry.
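In symbols: for any ##y_1## whatsoever,

$$0\cdot y_1 + 1\cdot 0 = 0$$

is a relation with a nonzero coefficient, so ##\{y_1,\,0\}## is linearly dependent by the equation-based definition, with no case distinction needed.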
 
  • #22
So given my two functions above, we have ##(1)##: ##A_1\cos(t)\sin(t)+A_2\sin(2t)=0##. When I graph the functions, they become identical when ##A_1## and ##A_2## are both 0, when ##A_1=2## and ##A_2=1##, or when ##A_1=1## and ##A_2=1/2##. Actually, it appears they become identical when ##\frac{A_1}{A_2}=2##. Linearly independent functions would only be solutions to ##(1)## when ##A_1=0## and ##A_2 = 0##, correct?
 
  • #23
Drakkith said:
So given my two functions above, we have ##(1)##: ##A_1\cos(t)\sin(t)+A_2\sin(t)=0##. When I graph the functions, they become identical when ##A_1## and ##A_2## are both 0, when ##A_1=2## and ##A_2=1##, or when ##A_1=1## and ##A_2=1/2##. Actually, it appears they become identical when ##\frac{A_1}{A_2}=2##. Linearly independent functions would only be solutions to ##(1)## when ##A_1=0## and ##A_2 = 0##, correct?
Correct, but don't you mean ##A_2 \sin(2t)##?
 
  • #24
WWGD said:
Correct, but don't you mean ##A_2 \sin(2t)##?
Yes I do! I even messed that up on my graph and then forgot to correct my post!
Edit: Post corrected.
 
  • #25
Thanks, Mark. I forgot to mention that: if the only solution is the trivial solution, the vectors are linearly independent; if other solutions besides the trivial one exist, they are linearly dependent.

@drakk. If you want to understand what those symbols mean:

You can read a short book on set theory. A finer way, and I believe more useful, would be to read Hubbard & Hubbard, Vector Calculus, Linear Algebra, and Differential Forms.
Mark44 said:
There is some subtlety here that escapes many students. For example, consider ##\vec{u} = <1, 2>## and ##\vec{v} = <2, 4>##.
I notice that ##a_1<1, 2> + a_2<2, 4> = 0## when ##a_1 = 0## and ##a_2 = 0##, so I conclude (wrongly) that ##\vec u## and ##\vec v## are linearly independent. (I am repeating the reasoning that I've seen many students display.)

Whether a set of vectors is linearly dependent or linearly independent, we can always write the equation ##a_1\vec{v_1} + a_2\vec{v_2} + \dots + a_n\vec{v_n} = \vec 0##. For a linearly independent set, there is only one solution to this equation: ##a_1 = a_2 = \dots = a_n = 0##; i.e., the trivial solution.

This would take less than one minute to learn. See https://www.physicsforums.com/help/latexhelp/, in the section titled Superscripts and subscripts.

@Drakkith, we're talking about linear dependence/independence of vectors here. The situation is almost exactly the same for linear dependence/independence of functions.


@Dark Light
Later, or in your DE class, you will see another method for checking whether 2 or more functions are linearly independent. It's called the Wronskian: you form the matrix of the functions and their derivatives and take its determinant. Of course, there are a limitation or two to this method. It requires the set of functions to be differentiable n-1 times, where n is the number of elements in the set, and when the determinant is equal to 0, we cannot conclude whether the functions are dependent or independent.
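Here is what that looks like for the pair from this thread (a minimal sketch assuming sympy and its built-in wronskian helper):

```python
# Wronskian of y1 = sin(2t) and y2 = sin(t)*cos(t): W = y1*y2' - y1'*y2.
import sympy as sp

t = sp.symbols('t')
W = sp.wronskian([sp.sin(2*t), sp.sin(t)*sp.cos(t)], t)
print(sp.simplify(W))  # 0 everywhere, consistent with linear dependence
```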
 
  • #26
MidgetDwarf said:
@Dark Light
Later, or in your DE class, you will see another method for checking whether 2 or more functions are linearly independent. It's called the Wronskian: you form the matrix of the functions and their derivatives and take its determinant. Of course, there are a limitation or two to this method. It requires the set of functions to be differentiable n-1 times, where n is the number of elements in the set, and when the determinant is equal to 0, we cannot conclude whether the functions are dependent or independent.

I assume you're referring to me, not Dark Light? Anyways, yes we've gone over the Wronskian briefly. I haven't used it much yet though.
I know that for any real numbers ##a≠0, b,## and ##c##, if ##y_1(t)## and ##y_2(t)## are any two solutions of ##ay''+by'+cy=0## on ##(-∞,∞)## and if the equality ##y_1(\tau)y'_2(\tau)-y'_1(\tau)y_2(\tau)=0## holds at any point ##\tau##, then ##y_1## and ##y_2## are linearly dependent on ##(-∞,∞)##.

Though I admit I'm not quite sure why this is true. I've been going over the proofs, but I'm having difficulty getting through them. My medication certainly isn't helping.
 
  • #27
Drakkith said:
Anyways, yes we've gone over the Wronskian briefly. I haven't used it much yet though.
I know that for any real numbers ##a≠0, b,## and ##c##, if ##y_1(t)## and ##y_2(t)## are any two solutions of ##ay''+by'+cy=0## on ##(-∞,∞)## and if the equality ##y_1(\tau)y'_2(\tau)-y'_1(\tau)y_2(\tau)=0## holds at any point ##\tau##, then ##y_1## and ##y_2## are linearly dependent on ##(-∞,∞)##.

Though I admit I'm not quite sure why this is true. I've been going over the proofs, but I'm having difficulty getting through them.
What you wrote is the Wronskian of two functions.
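A sketch of the key step in those proofs, via Abel's identity (the standard argument, not quoted from the book): if ##y_1## and ##y_2## solve ##ay''+by'+cy=0## and ##W=y_1y_2'-y_1'y_2##, then

$$aW' = y_1(ay_2'') - (ay_1'')y_2 = y_1(-by_2'-cy_2) - (-by_1'-cy_1)y_2 = -bW,$$

so ##W(t) = W(\tau)\,e^{-\frac{b}{a}(t-\tau)}##. An exponential never vanishes, so if ##W(\tau)=0## at one point, then ##W\equiv 0## everywhere, and from there one shows the two solutions are linearly dependent.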
 
  • #28
Mark44 said:
What you wrote is the Wronskian of two functions.

Yep. That's the quote right out of my book.
 
  • #29
Drakkith said:
I assume you're referring to me, not Dark Light? Anyways, yes we've gone over the Wronskian briefly. I haven't used it much yet though.
I know that for any real numbers ##a≠0, b,## and ##c##, if ##y_1(t)## and ##y_2(t)## are any two solutions of ##ay''+by'+cy=0## on ##(-∞,∞)## and if the equality ##y_1(\tau)y'_2(\tau)-y'_1(\tau)y_2(\tau)=0## holds at any point ##\tau##, then ##y_1## and ##y_2## are linearly dependent on ##(-∞,∞)##.

Though I admit I'm not quite sure why this is true. I've been going over the proofs, but I'm having difficulty getting through them. My medication certainly isn't helping.

Sorry about that. I have a slight level of dyslexia. Typing makes my dyslexia more apparent. I am embarrassed by my spelling mistakes.
 

FAQ: Linear Independence of Two Functions

What does it mean for two functions to be linearly independent?

Linear independence of two functions means that neither function can be expressed as a scalar multiple of the other. In other words, one function cannot be obtained by multiplying the other function by a constant.

How do you determine if two functions are linearly independent?

To determine if two functions are linearly independent, you can use the Wronskian: if the Wronskian is nonzero at some point of the interval, then the two functions are linearly independent there. (The converse is not automatic: an identically zero Wronskian does not by itself prove dependence.) Another method is to check the definition of linear independence directly, i.e. verify that neither function is a constant multiple of the other.

What is the significance of linear independence in mathematics and science?

Linear independence is an important concept in mathematics and science because it allows us to express a wide range of functions as linear combinations of independent functions. It also helps us to solve systems of linear equations and understand the behavior of linear systems.

Can two functions be linearly independent if they share the same set of zeros?

Not necessarily. Two functions with the same zeros can still be linearly independent. For example, ##\sin(t)## and ##\sin(t)(2+\cos(t))## have exactly the same zeros, yet neither is a constant multiple of the other, so they are linearly independent. (Multiplying a function by 0 yields the identically zero function, not a nonzero function with the same zeros.)

How does linear independence relate to linear dependence?

Linear independence and linear dependence are opposite concepts: linear dependence means one function can be obtained by multiplying the other by a constant, while linear independence means no such constant exists. Every pair of functions is one or the other, never both.
