Orthogonal polynomials are perpendicular?

In summary: the thread asks what is wrong with defining orthogonality of two polynomials by requiring that their tangents be perpendicular at every x, by analogy with perpendicular vectors or planes. That definition turns out not to be useful: it would force both polynomials to be straight lines.
  • #1
A Dhingra

Hi,

As the title suggests, I have a query regarding orthogonal polynomials.
What is the problem with defining orthogonality of two polynomials by requiring that their tangents at each x be perpendicular to each other? This would simply follow the notion of perpendicular vectors, planes, etc.

Why was orthogonality instead defined by the inner product of the two polynomials being zero? The inner product is given as
∫w(x)*f1(x)*f2(x) dx = 0 over a defined interval (which determines the limits of integration),
where w(x) is called the weight function, with w(x) > 0 for all x in the given interval.

Can someone explain what this inner product being zero means geometrically, if possible?
(Please pardon me for asking for a geometrical explanation on this Abstract & Linear Maths forum.)
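A numerical sketch of that condition may help (an assumption for concreteness: take w(x) = 1 on the interval [-1, 1], where the Legendre polynomials P1 = x and P2 = (3x² - 1)/2 belong to the standard orthogonal family):

```python
def inner(f, g, a=-1.0, b=1.0, n=1000):
    """Approximate the inner product integral of f(x)*g(x) (weight w = 1)
    over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

P1 = lambda x: x                    # Legendre P1
P2 = lambda x: (3 * x**2 - 1) / 2   # Legendre P2

print(inner(P1, P2))  # essentially 0: P1 and P2 are orthogonal
print(inner(P1, P1))  # about 2/3: a nonzero "length squared"
```

So the inner product plays the role the dot product plays for ordinary vectors: zero means orthogonal, and the product of a function with itself gives a squared length.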
 
  • #2
What is the problem with defining orthogonality of two polynomials by requiring that their tangents at each x be perpendicular to each other?
That would give a very boring definition of "perpendicular" - only linear, non-constant polynomials could be perpendicular to each other.
 
  • #3
Think about why we care about perpendicular vectors in the first place.
 
  • #4
mfb said:
That would give a very boring definition of "perpendicular" - only linear, non-constant polynomials could be perpendicular to each other.
Can we not have polynomials whose tangents are always perpendicular to each other, apart from straight lines (formed by linear polynomials)?
 
  • #5
pwsnafu said:
Think about why we care about perpendicular vector in the first place.
You are presumably pointing out that being at 90° to each other (geometrically nothing special) is not why we care about perpendicular vectors: what matters is that they are linearly independent, i.e. they can form a basis of a vector space. The same is true for orthogonal polynomials. (I am reminded that I studied vector spaces this year.)

Can you explain how the definition of the inner product leads to this linear independence of orthogonal polynomials (if it does)?
 
  • #6
A Dhingra said:
Can you explain how the definition of the inner product leads to this linear independence of orthogonal polynomials (if it does)?

It doesn't matter what inner product you use. It only depends on the general properties of inner products.

Write "." to mean "inner product", and (because I'm too lazy to type the general case in LaTeX!) consider just three orthogonal polynomials ##p_1##, ##p_2##, ##p_3##.

Suppose ##p_3## is linearly dependent on ##p_1## and ##p_2##.

##p_3 = a_1 p_1 + a_2 p_2##
So
##p_3.p_1 = a_1 p_1.p_1 + a_2 p_2.p_1##

If the polynomials are orthogonal,

##p_3.p_1 = 0## and ##p_2.p_1 = 0##

So

##0 = a_1 p_1.p_1 + 0##, and since ##p_1.p_1 > 0## for nonzero ##p_1##, ##a_1 = 0##.

Similarly ##a_2## = 0.

So ##p_3 = 0##, which is a contradiction, since the members of an orthogonal set are nonzero.
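The argument above can be illustrated numerically (a sketch: the Legendre polynomials on [-1, 1] with weight 1 serve as the orthogonal set, and the coefficients are recovered by taking inner products exactly as in the proof):

```python
def inner(f, g, a=-1.0, b=1.0, n=1000):
    """Approximate the inner product integral of f(x)*g(x) with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

p1 = lambda x: x                       # Legendre P1
p2 = lambda x: (3 * x**2 - 1) / 2      # Legendre P2
p3 = lambda x: (5 * x**3 - 3 * x) / 2  # Legendre P3

# If p3 = a1*p1 + a2*p2, taking inner products with p1 and p2 gives
# a1 = (p3.p1)/(p1.p1) and a2 = (p3.p2)/(p2.p2).  Both come out 0,
# so the only possible "combination" is the zero polynomial.
a1 = inner(p3, p1) / inner(p1, p1)
a2 = inner(p3, p2) / inner(p2, p2)
print(a1, a2)  # both essentially 0
```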
 
  • #7
A Dhingra said:
Can we not have polynomials whose tangents are always perpendicular to each other, apart from straight lines (formed by linear polynomials)?
The slopes of the tangents to f and g are the derivatives f' and g'. Perpendicularity requires that the product of those slopes is -1: ##f'(x)g'(x)=-1##, i.e. ##f'(x)=-\frac{1}{g'(x)}##. The left side is a polynomial, therefore the right side has to be one as well. 1/(a polynomial) is a polynomial only if g' is constant, therefore g is linear. By the same argument, f is linear as well.
To be orthogonal with your proposed definition, both polynomials have to be straight, perpendicular lines.
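A quick check of the slope condition, with hypothetical example lines f(x) = 2x + 1 and g(x) = -x/2 + 3, and the quadratic h(x) = x² for contrast:

```python
def fprime(x): return 2.0        # slope of f(x) = 2x + 1
def gprime(x): return -0.5       # slope of g(x) = -x/2 + 3
def hprime(x): return 2.0 * x    # slope of h(x) = x**2 varies with x

# For the two lines the product of slopes is -1 at every x ...
for x in (-1.0, 0.5, 3.0):
    assert fprime(x) * gprime(x) == -1.0

# ... but a partner perpendicular to h would need slope -1/(2x),
# which blows up at x = 0 and is not a polynomial.
print(hprime(0.0))  # 0.0: the tangent is horizontal here, so no
                    # finite slope can be perpendicular to it
```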
 
  • #8
Maybe he meant that the tangent vectors need to be orthogonal only where the two functions intersect. That would certainly be a very geometric thing to do. But it's also not very useful, since there would be very few applications of such a thing.

The reason that we care about orthogonality as defined by integrals is because it is a useful concept. It shows up a lot in solving differential equations and in many other situations.

I'll admit that it is confusing to call f and g "orthogonal" since that would imply some geometric picture about f and g. The only reason we call them orthogonal is because of the analogy in the finite-dimensional case where it really does have a geometric picture. We keep the same name in the infinite-dimensional case. It's maybe not the best choice of words, but you'll have to get used to it.
 
  • #9
mfb said:
The left side is a polynomial, therefore the right side has to be one as well. 1/(a polynomial) is a polynomial only if g' is constant, therefore g is linear. By the same argument, f is linear as well.
To be orthogonal with your proposed definition, both polynomials have to be straight, perpendicular lines.

As you have argued, 1/g' can be a polynomial when g' is constant. But can't we use series expansions with negative powers (something like the binomial expansion for negative exponents), so that 1/g' can be a polynomial too?
For example, say g' = e^x (or rather a polynomial approximating it); then 1/g' = e^(-x), which can also be approximated by a polynomial. Can this work?
(Please pardon me if this is a silly argument.)
 
  • #10
The thread title says "polynomials". You are now talking about functions that are not polynomials.

Sure, orthogonal functions which are not polynomials are often used in math (for example Fourier series), but we can only read what you wrote, not what you meant!
 
  • #11
The Maclaurin series for e^x is

##\sum_{k=0}^{\infty} \frac{x^k}{k!}##

and for e^(-x) it is

##\sum_{k=0}^{\infty} \frac{(-1)^k x^k}{k!}##

These are both polynomials, aren't they?
Can't we use such approximating polynomials when talking about regular polynomials?
 
  • #12
A Dhingra said:
The Maclaurin series for e^x is

##\sum_{k=0}^{\infty} \frac{x^k}{k!}##

and for e^(-x) it is

##\sum_{k=0}^{\infty} \frac{(-1)^k x^k}{k!}##

These are both polynomials, aren't they?

Those are series, not polynomials. A polynomial has a finite number of terms.
 
  • #13
A Dhingra said:
The Maclaurin series for e^x is

##\sum_{k=0}^{\infty} \frac{x^k}{k!}##

and for e^(-x) it is

##\sum_{k=0}^{\infty} \frac{(-1)^k x^k}{k!}##

These are both polynomials, aren't they?
Can't we use such approximating polynomials when talking about regular polynomials?

A polynomial is a finite sum. So we can approximate ##e^x## by polynomials to arbitrary accuracy, but it's not a polynomial itself, because the sum is infinite.
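That distinction can be seen numerically: every truncation of the Maclaurin series is a genuine polynomial, and the approximation improves with the degree, but no single truncation equals e^x (the degrees below are arbitrary choices for illustration):

```python
import math

def maclaurin_exp(x, n):
    """Degree-n Taylor polynomial of e^x about 0: a finite, hence genuine, polynomial."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

for n in (2, 5, 10):
    # the error at x = 1 shrinks as the degree grows
    print(n, maclaurin_exp(1.0, n), "vs", math.exp(1.0))
```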
 
  • #14
micromass said:
A polynomial would be a finite sum. So we can approximate ##e^x## by polynomials to arbitrary accuracy, but it's not a polynomial itself because the sum is infinite.
Thanks for mentioning this. I thought a sum with any number of terms (finite or infinite) could be called a polynomial, which is not so.

So the conclusion is that orthogonal polynomials are just linearly independent, with their inner product being zero, and in general they do not have much of a geometrical interpretation.

Thanks everyone.
 

Related to Orthogonal polynomials are perpendicular?

1. What are orthogonal polynomials?

Orthogonal polynomials are families of polynomials defined on a specific interval with the property that the inner product of any two distinct members is zero. They are commonly used in physics, engineering, and other fields to solve problems involving functions with specific properties.

2. Why are orthogonal polynomials important?

Orthogonal polynomials have many applications in various fields, including signal processing, numerical analysis, and statistics. They also have important theoretical properties and can be used to approximate other functions with high accuracy.

3. How are orthogonal polynomials different from other types of polynomials?

The main difference is that orthogonal polynomials satisfy the orthogonality property: the inner product of any two distinct polynomials in the family is zero. This allows for more efficient calculations and has important implications for their use in solving problems.

4. How do you determine if a set of polynomials is orthogonal?

To determine if a set of polynomials is orthogonal, compute the inner product of each pair, i.e. the integral of the weighted product of the two polynomials over the interval. If the result is zero for every pair of distinct polynomials, the set is orthogonal.
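A sketch of this check with a nontrivial weight (the Chebyshev polynomials T1(x) = x and T2(x) = 2x² - 1 with w(x) = 1/√(1 - x²) on (-1, 1); substituting x = cos t turns the weighted integral into a plain one over [0, π] and avoids the endpoint singularity):

```python
import math

def weighted_inner(Tm, Tn, n=2000):
    """Chebyshev inner product via the substitution x = cos(t):
    integral over [0, pi] of Tm(cos t) * Tn(cos t) dt, midpoint rule."""
    h = math.pi / n
    return sum(Tm(math.cos((i + 0.5) * h)) * Tn(math.cos((i + 0.5) * h))
               for i in range(n)) * h

T1 = lambda x: x             # Chebyshev T1
T2 = lambda x: 2 * x**2 - 1  # Chebyshev T2

print(weighted_inner(T1, T2))  # essentially 0: orthogonal
print(weighted_inner(T1, T1))  # about pi/2: nonzero
```

Different weight functions single out different classical families (Legendre has w = 1, Chebyshev has w = 1/√(1 - x²), and so on).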

5. Can orthogonal polynomials be used for any type of function?

No, orthogonal polynomials are typically only defined on a specific interval and have specific properties that must be satisfied in order to be considered orthogonal. They are often used to approximate other functions, but may not be suitable for all types of functions.
