Linear Dependency Check: {e^x, e^{2x}}

  • #1
evagelos
Prove whether the following set is linearly independent or not:

{[tex]e^x,e^{2x}[/tex]}
 
  • #2
If not, then there must exist two scalars, a and b (they can't both be zero), such that

[tex]a e^x + b e^{2x} = 0[/tex]

for all x. Can you find such scalars?
 
  • #3
Since that is to be true for all x, you can get two simple equations to solve for a and b by choosing two values of x.
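For a quick numerical sanity check of exactly that idea, here is a minimal Python sketch (an illustration only, assuming NumPy): plug in x = 0 and x = 1 and solve the resulting 2x2 system.

[code]
import numpy as np

# Plugging x = 0 and x = 1 into a*e^x + b*e^(2x) = 0 gives two equations:
#   a + b       = 0   (from x = 0)
#   a*e + b*e^2 = 0   (from x = 1)
A = np.array([[1.0, 1.0],
              [np.e, np.e**2]])

print(np.linalg.det(A))                 # about 4.67, nonzero
print(np.linalg.solve(A, np.zeros(2)))  # [0. 0.]: only a = b = 0 works
[/code]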
 
  • #4
I think it's possible:
[tex]a e^x + b e^{2x} = 0[/tex] <==> [tex]a e^x = - b e^{2x}[/tex] <==> [tex]ln (a e^x) = ln (- b e^{2x})[/tex] <==> ax = -2bx

So if we carefully choose a, we can definitely show your given set is linearly dependent.
 
  • #5
jeff1evesque said:
I think it's possible:
[tex]a e^x + b e^{2x} = 0[/tex] <==> [tex]a e^x = - b e^{2x}[/tex] <==> [tex]ln (a e^x) = ln (- b e^{2x})[/tex] <==> ax = -2bx

So above, if we carefully choose a, we can definitely show your given set is linearly dependent.

Except

[tex]\ln(a e^x) = \ln(a) + x[/tex]
[tex]\ln(-b e^{2x}) = \ln(-b) + 2x[/tex]

so you need

[tex]\ln(a) - \ln(-b) = x[/tex] for all x!
 
  • #6
jbunniii said:
Except

[tex]\ln(a e^x) = \ln(a) + x[/tex]
[tex]\ln(-b e^{2x}) = \ln(-b) + 2x[/tex]

so you need

[tex]\ln(a) - \ln(-b) = x[/tex] for all x!

Bummer, so sorry.
 
  • #7
jbunniii said:
If not, then there must exist two scalars, a and b (they can't both be zero), such that

[tex]a e^x + b e^{2x} = 0[/tex]

for all x. Can you find such scalars?

Put x=0, then a+b=0 ====> a=-b

Hence the above are linearly dependent

CORRECT?

THANKS
 
  • #8
No, not correct. You need to show that a=b=0. That is the DEFINITION of linear dependence.
 
  • #9
matt grime said:
No, not correct. You need to show that a=b=0. That is the DEFINITION of linear dependence.


Matt grime, please write down for me the definition of when a set of n vectors is linearly independent.
 
  • #10
Apologies, I missed out an 'in'. The functions e^x and e^2x are clearly linearly independent (over R); I misunderstood what you were saying. (If you don't see it, then just rearrange ae^x + be^2x = 0 to see that your claim of linear dependence implies that e^x = -a/b for all x.)
 
  • #11
evagelos said:
Put x=0, then a+b=0 ====> a=-b

Hence the above are linearly dependent

CORRECT?

THANKS
No, that is incorrect. You have shown that taking a= -b makes [itex]ae^x+ be^{2x}= 0[/itex] for x= 0. To show they are linearly dependent, you would have to find a and b, not both zero, so that it was true for ALL x. And you can't. Taking x= 0 gives a+ b= 0, so a= -b, while taking x= 1 gives ae+ be^2= 0, so a= -be. Those two equations are only possible if a= b= 0.

Another way to prove it is this: if [itex]ae^x+ be^{2x}= 0[/itex] for all x, then it is a constant and its derivative must be 0 for all x: [itex]ae^x+ 2be^{2x}= 0[/itex]. Taking x= 0 in both of those, a+ b= 0 and a+ 2b= 0. Again, those two equations give a= b= 0.
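That derivative trick is essentially the Wronskian test. For anyone who wants a machine check, here is a minimal SymPy sketch of the same computation (an illustration, assuming stock SymPy):

[code]
import sympy as sp

x = sp.symbols('x')
f, g = sp.exp(x), sp.exp(2 * x)

# Wronskian W(f, g) = f*g' - f'*g; if W is nonzero at any point,
# the two functions are linearly independent.
W = sp.simplify(f * sp.diff(g, x) - sp.diff(f, x) * g)
print(W)  # exp(3*x): never zero, so e^x and e^{2x} are independent
[/code]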
 
  • #12
HallsofIvy said:
No, that is incorrect. You have shown that taking a= -b makes [itex]ae^x+ be^{2x}= 0[/itex] for x= 0. To show they are linearly dependent, you would have to find a and b, not both zero, so that it was true for ALL x. And you can't. Taking x= 0 gives a+ b= 0, so a= -b, while taking x= 1 gives ae+ be^2= 0, so a= -be. Those two equations are only possible if a= b= 0.

Another way to prove it is this: if [itex]ae^x+ be^{2x}= 0[/itex] for all x, then it is a constant and its derivative must be 0 for all x: [itex]ae^x+ 2be^{2x}= 0[/itex]. Taking x= 0 in both of those, a+ b= 0 and a+ 2b= 0. Again, those two equations give a= b= 0.

Is not the definition of linear independence the following:

for all a, b and x, real numbers: [tex]ae^x + be^{2x}=0\Longrightarrow a=0=b[/tex]
 
  • #13
Yes, that is why you can't "prove [itex]e^x[/itex] and [itex]e^{2x}[/itex] are dependent": they are independent.
 
  • #14
Remember this is equal to the zero *function* in the vector space of real valued functions. The zero function is the one that is zero for all its inputs.
 
  • #15
evagelos said:
Is not the definition of linear independence the following:

for all a, b and x, real numbers: [tex]ae^x + be^{2x}=0\Longrightarrow a=0=b[/tex]

HallsofIvy said:
Yes, that is why you can't "prove [itex]e^x[/itex] and [itex]e^{2x}[/itex] are dependent": they are independent.



Then the negation of the above definition should give linear dependence.

But the negation of the above definition is:

There exist a, b, and x such that [tex]ae^x + be^{2x} = 0[/tex] and [tex]a\neq 0[/tex] and [tex]b\neq 0[/tex].

HENCE if we put a=2, b=-2, x=0, we satisfy the linear dependence definition.
 
  • #16
That isn't right. You have your quantifiers all kludged up into one. It should have read

(for all a,b)(ae^x + be^2x=0 for all x => a=b=0)

the negation of which is

(there exists a,b)(ae^x + be^2x=0 for all x AND NOT(a=b=0))

The for all x is not part of the definition of linear (in)dependence, it is part of the definition of what it means for the function to be zero.
 
  • #17
matt grime said:
That isn't right. You have your quantifiers all kludged up into one. It should have read

(for all a,b)(ae^x + be^2x=0 for all x => a=b=0)


Can you put that into a quantifier form??
 
  • #18
In what way is that not in a useful quantifier form (for those who like things like that)? Really, we shouldn't even bother with the quantifier "for all a,b", and putting things in the full on formal abstract quantifier notation just makes things far more opaque than they need to be.

You're making a very easy question seem very hard: can you find real numbers a and b not both equal to 0 so that

ae^x + be^2x

is the zero function? No - it has been proven several times in this thread.
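To make that 'no' concrete: substituting u = e^x turns the question into whether the polynomial au + bu^2 can vanish for every u > 0, and a polynomial with infinitely many roots has all coefficients zero. A minimal SymPy sketch of that reduction (an illustration only):

[code]
import sympy as sp

a, b, u = sp.symbols('a b u', real=True)

# With u = e^x ranging over all positive reals, a*u + b*u**2 must vanish
# for every u > 0, which forces every polynomial coefficient to be zero.
coeffs = sp.Poly(a * u + b * u**2, u).coeffs()  # [b, a]
print(sp.solve(coeffs, [a, b]))  # {a: 0, b: 0}: only the trivial combination
[/code]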
 
  • #19
The point is that you are working in a vector space of functions. When we say "af(x)+ bg(x)= 0" we mean it is equal to the 0 function- 0 for all x.
 
  • #20
matt grime said:
That isn't right. You have your quantifiers all kludged up into one. It should have read

(for all a,b)(ae^x + be^2x=0 for all x => a=b=0)

the negation of which is

(there exists a,b)(ae^x + be^2x=0 for all x AND NOT(a=b=0))

The for all x is not part of the definition of linear (in)dependence, it is part of the definition of what it means for the function to be zero.

The negation of:

(for all a,b)(ae^x + be^2x = 0 for all x ===> a=b=0) is

(there exist a,b) [~(ae^x + be^2x = 0 for all x ===> a=b=0)] ===>

(there exist a,b) [ae^x + be^2x = 0 for some x AND a=/=0, b=/=0]

Whether you put the quantifier in front, like I did, or at the end, like you did, we still have the same result.

The negation of [tex]\forall xPx[/tex] is: [tex]\exists x\neg Px[/tex].

In our case Px is (ae^x + be^2x = 0 ===> a=b=0) and the negation of Px is:

ae^x + be^2x = 0 and a=/=0, b=/=0

When you want to show that the function ae^x + be^2x is zero for all xεR, YOU write:

for all xεR, ae^x + be^2x = 0, OR: ae^x + be^2x = 0, for all x.

To say that 'for all x' is not part of the definition, then what is it part of??
 
  • #21
evagelos said:
The negation of:

(for all a,b)(ae^x + be^2x = 0 for all x ===> a=b=0) is

(there exist a,b) [~(ae^x + be^2x = 0 for all x ===> a=b=0)] ===>

(there exist a,b) [ae^x + be^2x = 0 for some x AND a=/=0, b=/=0]

Whether you put the quantifier in front, like I did, or at the end, like you did, we still have the same result.

The negation of [tex]\forall xPx[/tex] is: [tex]\exists x\neg Px[/tex].

In our case Px is (ae^x + be^2x = 0 ===> a=b=0) and the negation of Px is:

ae^x + be^2x = 0 and a=/=0, b=/=0

When you want to show that the function ae^x + be^2x is zero for all xεR, YOU write:

for all xεR, ae^x + be^2x = 0, OR: ae^x + be^2x = 0, for all x.

To say that 'for all x' is not part of the definition, then what is it part of??
You did not quote all of what he said. He said "The for all x is not part of the definition of linear (in)dependence" and then followed with "it is part of the definition of what it means for the function to be zero" and that was the point of my last response:

We are talking about functions of x. Saying that "[itex]ae^x+ be^{2x}= 0[/itex]" means that the function [itex]f(x)= ae^x+ be^{2x}[/itex] is equal to the "0 function": g(x)= 0.
 
  • #22
To add to what Halls said: you have not taken the negation of A => B correctly. The positioning of the quantifiers is very important: the negation of A => B is A and not(B), so the "for all" in there is not changed. You have negated A => B and gotten, well, goodness knows what in relation to A and B.
 
  • #23
If [tex]\{e^x, e^{2x}\}[/tex] is linearly dependent, then [tex]e^{2x}=ke^{x}[/tex] for some constant [tex]k\in R[/tex], which gives [tex]e^x=k[/tex]; but [tex]e^x[/tex] is not a constant, so [tex]\{e^x, e^{2x}\}[/tex] is linearly independent.
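A quick SymPy check of that ratio argument (a sketch, assuming stock SymPy):

[code]
import sympy as sp

x = sp.symbols('x')
ratio = sp.simplify(sp.exp(2 * x) / sp.exp(x))
print(ratio)                   # exp(x): the ratio still depends on x
print(sp.diff(ratio, x) == 0)  # False: the ratio is not a constant k
[/code]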
 
  • #24
HallsofIvy said:
You did not quote all of what he said. He said "The for all x is not part of the definition of linear (in)dependence" and then followed with "it is part of the definition of what it means for the function to be zero" and that was the point of my last response:

We are talking about functions of x. Saying that "[itex]ae^x+ be^{2x}= 0[/itex]" means that the function [itex]f(x)= ae^x+ be^{2x}[/itex] is equal to the "0 function": g(x)= 0.

What is your definition of linear independence, in symbolic form or not??
 
  • #25
Sigh: A set of vectors
[tex]\{v_1, v_2, \cdot\cdot\cdot, v_n\}[/tex]
is independent
if the only set of scalars
[tex]\{a_1, a_2, \cdot\cdot\cdot, a_n\}[/tex]
such that
[tex]a_1v_1+ a_2v_2+ \cdot\cdot\cdot+ a_nv_n= 0[/tex]
is
[tex]a_1= a_2= \cdot\cdot\cdot= a_n= 0[/tex]

But the point you seem to be having trouble with is this: Since the left side of the first equation is a linear combination of vectors, the "0" on the right side is the 0 vector!

When we talk about functions being "independent" or "dependent" we are talking about the functions as members of some vector space- the set of all polynomials, continuous functions, differentiable functions, infinitely differentiable functions, etc. and the 0 vector is the 0 function- i.e. the function f such that f(x)= 0 for all x.

Given almost any set of functions, you can always find numbers such that the linear combination is 0 at one value of x. That has nothing to do with them being "independent".
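Concretely, here is a tiny numerical sketch of that last point (the choice a = 1, b = -1 is just one example):

[code]
import numpy as np

a, b = 1.0, -1.0  # chosen so the combination vanishes at x = 0

f = lambda x: a * np.exp(x) + b * np.exp(2 * x)
print(f(0.0))  # 0.0: zero at the single point x = 0 ...
print(f(1.0))  # about -4.67: ... but not the zero function
[/code]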
 
  • #26


O.K. here is a proof having the 'for all x' part:


Let b=1

Let a= -e^x

AND [tex]ae^x + be^{2x} = e^x(a + be^x) = e^x(-e^x + e^x) = e^x\cdot 0 = 0[/tex] for all [tex]x\in R[/tex].

Hence we have proved there exist [tex]a\neq 0, b\neq 0[/tex] such that:

[tex]ae^x + be^{2x} = 0[/tex] for all x
 
  • #27
matt grime said:
Can you find real numbers a and b not both equal to 0 so that

ae^x + be^2x

is the zero function? No - it has been proven several times in this thread.

If you cannot find them, it does not mean they do not exist.
 
  • #28


evagelos said:
O.K here is a proof having the for all x part:


Let b=1

Let a= -e^x

AND [tex]ae^x + be^{2x} = e^x(a + be^x) = e^x(-e^x + e^x)[/tex]
How do you get that last step? Did you set [itex]a= -e^x[/itex] and b= 1? Of course, you can't do that. [itex]-e^x[/itex] is not a constant.

[tex]= e^x\cdot 0 = 0[/tex] for all [tex]x\in R[/tex].

Hence we have proved there exist [tex]a\neq 0, b\neq 0[/tex] such that:

[tex] ae^x + be^{2x} = 0[/tex] for all x
 
  • #29
matt grime said:
In what way is that not in a useful quantifier form (for those who like things like that)? Really, we shouldn't even bother with the quantifier "for all a,b", and putting things in the full on formal abstract quantifier notation just makes things far more opaque than they need to be.

You're making a very easy question seem very hard: can you find real numbers a and b not both equal to 0 so that

ae^x + be^2x

is the zero function? No - it has been proven several times in this thread.

evagelos said:
If you cannot find them, it does not mean they do not exist.
I can only conclude that you have not understood anything anyone has said here. matt grime is saying clearly here that it has been proven several times in this thread that it is impossible to find such a and b. Your statement here is not at all responsive to that.
 
  • #30
HallsofIvy said:
I can only conclude that you have not understood anything anyone has said here. matt grime is saying clearly here that it has been proven several times in this thread that it is impossible to find such a and b. Your statement here is not at all responsive to that.


Proofs in forums, universities, books, and in general in the mathematical literature can be wrong as well as right; they are not ALWAYS RIGHT.

AND checking whether they are correct or wrong is an impossible matter, because every line can be disputed, since they are not written in a formal way where every line of the proof is justified and hence can be checked.

CAN anybody in this forum produce a formal proof establishing the linear independence of the said functions??

IF not, then linear independence can be doubted.

How can a and b be constants when they can be quantified?
 

FAQ: Linear Dependency Check: {e^x, e^{2x}}

What is linear dependency?

Linear dependence refers to a relationship among two or more vectors (here, functions) in which one can be expressed as a linear combination of the others, that is, as a weighted sum of the other vectors.

How is linear dependency checked?

For two functions, linear dependence can be checked by determining whether one is a constant multiple of the other for all x. In the case of {e^x, e^{2x}}, the ratio e^{2x}/e^x = e^x is not constant, so neither function is a constant multiple of the other, and the set is linearly independent.

What is the purpose of checking for linear dependency?

Checking for linear dependence is important in linear algebra and other mathematical applications because it identifies relationships between vectors and can be used to simplify equations and solve problems.

Can linear dependency be checked for more than two variables?

Yes, linear dependence can be checked for any number of vectors. The same principle applies: if one vector can be expressed as a linear combination of the others, then the set is linearly dependent. For functions, the Wronskian test extends to any number of functions, as in the sketch below.
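Here is a minimal SymPy sketch of that n-function Wronskian test, using the extended set {e^x, e^{2x}, e^{3x}} as an illustration:

[code]
import sympy as sp

x = sp.symbols('x')
funcs = [sp.exp(x), sp.exp(2 * x), sp.exp(3 * x)]

# Wronskian matrix: row i holds the i-th derivative of each function.
W = sp.Matrix([[sp.diff(f, x, i) for f in funcs]
               for i in range(len(funcs))])
print(sp.simplify(W.det()))  # 2*exp(6*x): nonzero, so all three are independent
[/code]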

What does it mean if two variables are linearly dependent?

If two functions (or vectors) are linearly dependent, it means that one can be written as a constant multiple of the other, and therefore they are not truly independent. This has implications in mathematical calculations and modeling, as well as in real-world applications.
