Fourier Series as (Generalized) Least Squares?

In summary: instead of starting with a given subspace and finding the point of that subspace closest to a given vector, the statistical least-squares problem starts with the data points and looks for the subspace that best fits them, which is why it is described as a "reverse" problem. The thread below asks whether the Fourier series of a function f can be viewed the other way around: as the orthogonal projection of f onto the span of the standard trigonometric basis.
  • #1
Bacle
Hi, All:

Given a normed vector space (X, ||.||) and an inconsistent system Ax = b, the generalized
least-squares solution x^ of Ax = b is the point of the range of A that is closest to b, i.e.,
given a fixed matrix A, we define AX = {Ax : x in X}, and then:

x^ := { x in AX : ||x - b|| ≤ ||x' - b|| for all x' in AX }

In an inner-product space, x^ is the orthogonal projection of b onto AX. The value
x^ that minimizes ||x - b|| also minimizes ||x - b||^2.

(The least-squares problem in statistics is a sort of reverse problem: there we look for the
subspace that minimizes the sum of squared distances to the given data points.)
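To make this concrete, here is a small numerical sketch (numpy; the particular A and b are made up purely for illustration) showing that the image A x^ of the least-squares solution is exactly the orthogonal projection of b onto AX:

[code]
# Sketch: for an inconsistent system Ax = b, the least-squares solution
# x^ minimizes ||Ax - b||, and A x^ is the orthogonal projection of b
# onto AX (the column space of A).  The numbers are illustrative only.
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])           # 3 equations, 2 unknowns
b = np.array([1.0, 0.0, 2.0])        # chosen so Ax = b has no exact solution

# Least-squares solution: minimizes ||Ax - b||
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

# Orthogonal projection of b onto the column space AX
P = A @ np.linalg.inv(A.T @ A) @ A.T
proj_b = P @ b

print(np.allclose(A @ x_hat, proj_b))   # True: A x^ is the projection of b
[/code]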

I am trying to express the Fourier series for f with the standard orthonormal basis
in this format. Is it accurate to say that the Fourier series for f is the orthogonal
projection of f onto the span of the basis { 1/(2π), ±cos(nx), ±sin(nx), n = 1, 2, ... }?

I am having some trouble with the fact that we are using an infinite-dimensional
space; if we cut off the series at some value N, then I think an argument is easier.
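For the truncated version, the statement I have in mind is just ordinary finite-dimensional orthogonal projection (a sketch, writing e_k for the normalized trigonometric functions and <.,.> for the L^2 inner product on [-π, π]):

[tex]
S_N f \;=\; \sum_{k=0}^{2N} \langle f, e_k\rangle\, e_k,
\qquad
e_0 = \frac{1}{\sqrt{2\pi}},\quad
e_{2n-1} = \frac{\cos(nx)}{\sqrt{\pi}},\quad
e_{2n} = \frac{\sin(nx)}{\sqrt{\pi}} \quad (1 \le n \le N),
[/tex]

so S_N f is the orthogonal projection of f onto the (2N+1)-dimensional span of these functions, and the finite-dimensional least-squares picture applies directly.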

Any Ideas?

Thanks.
 
  • #2
Bacle said:
Hi, All:

Given a normed vector space (X, ||.||) and an inconsistent system Ax = b, the generalized
least-squares solution x^ of Ax = b is the point of the range of A that is closest to b, i.e.,
given a fixed matrix A, we define AX = {Ax : x in X}, and then:

x^ := { x in AX : ||x - b|| ≤ ||x' - b|| for all x' in AX }

In an inner-product space, x^ is the orthogonal projection of b onto AX. The value
x^ that minimizes ||x - b|| also minimizes ||x - b||^2.

(The least-squares problem in statistics is a sort of reverse problem: there we look for the
subspace that minimizes the sum of squared distances to the given data points.)

I don't understand in what sense it is a "reverse" problem.

If I want to solve a whole class of least-squares problems, I can understand that as defining (in some sense) a subspace of the space of curves. For example, if I am fitting quadratics, the sum of two quadratics is a quadratic, a scalar multiple of a quadratic is a quadratic, etc. On a finite interval [a, b], one can define an inner product of two quadratics f and g by [itex]\int_a^b f(x) g(x)\, dx[/itex]. Is that what you mean?
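To make that concrete, here is a minimal numerical sketch (numpy; the function f and the interval are made up purely for illustration) of the kind of projection I have in mind: projecting a function onto the quadratics under that integral inner product.

[code]
# Sketch of the "subspace of curves" idea: the best quadratic approximation
# (in the integral inner product on [a, b]) to a function f is its
# orthogonal projection onto span{1, x, x^2}.  Everything here is illustrative.
import numpy as np

a_end, b_end = 0.0, 1.0
x = np.linspace(a_end, b_end, 10001)
dx = x[1] - x[0]
f = np.sin(np.pi * x)                  # some function to approximate

basis = [np.ones_like(x), x, x**2]     # sampled basis functions 1, x, x^2

def inner(u, v):
    # <u, v> = integral of u*v over [a, b], approximated by a Riemann sum
    return np.sum(u * v) * dx

# Normal equations G c = m, with Gram matrix G_ij = <p_i, p_j>
G = np.array([[inner(p, q) for q in basis] for p in basis])
m = np.array([inner(p, f) for p in basis])
c = np.linalg.solve(G, m)              # coefficients of the projection

best_quadratic = c[0] + c[1] * x + c[2] * x**2
print(c)                                              # best-fit coefficients
print(inner(f - best_quadratic, f - best_quadratic))  # minimized squared error
[/code]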

I am trying to express the Fourier Series for f with the standard orthonormal basis
in this format.

I'm not sure what you mean by "in this format". I'll interpret it to mean that you want to look at finding the Fourier series for a function as projecting the function onto a subspace of functions, the subspace defined by all possible Fourier series.


Is it accurate to say that the Fourier series for f is the orthogonal
projection of f onto the span of the basis { 1/(2π), ±cos(nx), ±sin(nx), n = 1, 2, ... }?

I think it is correct. There might be some technicalities in defining terms for infinite-dimensional spaces that would need to be addressed before we could say it was "accurate".

I am having some trouble with the fact that we are using an infinite-dimensional
space; if we cut off the series at some value N, then I think an argument is easier.

You are correct that infinite-dimensional vector spaces require methods of proof that finite-dimensional spaces do not, and some things that are true in finite-dimensional vector spaces don't hold in infinite-dimensional ones. For example, a "vector" given by its "components" in a finite-dimensional space is unremarkable, but a vector given as an infinite series of basis functions might be a divergent series. The general setting for studying such things is "functional analysis". Look up "Banach spaces" and "Hilbert spaces".

I've often asked experts on functional analysis about analogies between finite-dimensional vector spaces and matrices on the one hand and infinite-dimensional vector spaces and operators on the other. A few say "Yes, of course" and others say "No, no, no!". I think the ones who say "No, no, no!" are thinking of the technicalities of convergence, etc. The ones who say "Yes, of course" are thinking in terms of The Big Picture. From the point of view of The Big Picture, expressing a function in a Fourier series, or in terms of various kinds of orthogonal polynomials, is an attempt to project a vector onto a countably infinite set of basis functions.
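As a small illustration of that Big Picture (a numpy sketch; the test function and the cutoff N are arbitrary choices of mine), the classical Fourier-coefficient formulas and an ordinary least-squares fit against the sampled trigonometric basis give the same coefficients, precisely because that basis is orthogonal:

[code]
# Sketch: for a fixed cutoff N, the Fourier coefficients from the usual
# inner-product formulas agree with an ordinary least-squares fit of f
# against the sampled trig basis, because the basis is orthogonal.
import numpy as np

N = 5
x = np.linspace(-np.pi, np.pi, 4001, endpoint=False)   # one full period
dx = x[1] - x[0]
f = np.sign(x)                       # a square-wave-like test function

# (i) Classical formulas: a_n = (1/pi) * integral of f(x) cos(nx), etc.
a = [np.sum(f * np.cos(n * x)) * dx / np.pi for n in range(1, N + 1)]
b = [np.sum(f * np.sin(n * x)) * dx / np.pi for n in range(1, N + 1)]

# (ii) Least-squares fit of f onto the columns {1, cos(nx), sin(nx)}
cols = [np.ones_like(x)]
for n in range(1, N + 1):
    cols += [np.cos(n * x), np.sin(n * x)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, f, rcond=None)

print(np.allclose(coef[1::2], a))    # True: cosine coefficients match
print(np.allclose(coef[2::2], b))    # True: sine coefficients match
[/code]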
 
  • #3
Hi, Stephen; unfortunately, the quoting function is not working too well; I'll try my best, though. I will use """" to start and finish quotes.


I wrote:
""""
Hi, All:

Given a normed vector space (X, ||.||) and an inconsistent system Ax = b, the generalized
least-squares solution x^ of Ax = b is the point of the range of A that is closest to b, i.e.,
given a fixed matrix A, we define AX = {Ax : x in X}, and then:

x^ := { x in AX : ||x - b|| ≤ ||x' - b|| for all x' in AX }

In an inner-product space, x^ is the orthogonal projection of b onto AX. The value
x^ that minimizes ||x - b|| also minimizes ||x - b||^2.

(The least-squares problem in statistics is a sort of reverse problem: there we look for the
subspace that minimizes the sum of squared distances to the given data points.)""""

Stephen Tashi wrote:
""""I don't understand in what sense it is a "reverse" problem.""""

A correction: what I should have written is x^ := { x in X : ||Ax - b|| ≤ ||Ax' - b|| for all x' in X }; that is, x^ is the element of X whose image Ax^ is the point of AX closest to b.

I mean that the standard setup is one in which we are given a specific subspace and
a point b that is not in the subspace, and we want to minimize the distance/norm
between b and the subspace: we are given a map A : V → W, for V, W normed vector spaces,
AV is the subspace, and some b not in AV is the point. In the case of statistical
(linear) least squares, we are instead given a collection of data points (in R^n in general, in R^2 for simple linear regression) and we want to find the line/subspace of R^n for which the sum of the squared residuals is minimal.
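As a sketch of that statistical version (numpy; the data below are made up purely for illustration):

[code]
# Sketch of the statistical version: given points (x_i, y_i) in R^2,
# find the line y = m*x + c minimizing the sum of squared residuals.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)   # noisy line

# Design matrix [x, 1]; lstsq minimizes ||A @ [m, c] - y||^2
A = np.column_stack([x, np.ones_like(x)])
(m, c), *_ = np.linalg.lstsq(A, y, rcond=None)

residuals = y - (m * x + c)
print(m, c, np.sum(residuals**2))   # slope, intercept, minimized sum of squares
[/code]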


I wrote:
""""
I am trying to express the Fourier Series for f with the standard orthonormal basis
in this format. """"

Stephen Tashi wrote:
""""I'm not sure what you mean by "in this format". I'll interpret it to mean that you want to look at finding the Fourier series for a function as projecting the function onto a subspace of functions, the subspace defined by all possible Fourier series.""""

I mean that I am trying to describe the Fourier series for f as the best least-squares
approximation to f itself: the Fourier series for f is the projection of f onto the
span of the standard orthogonal basis, and so the Fourier series minimizes the
squared residuals.
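In symbols, the property I am after is the standard best-approximation statement for the partial sums (writing S_N f for the N-th partial sum and ||.|| for the L^2 norm on [-π, π]):

[tex]
\| f - S_N f \| \;\le\; \| f - T_N \|
\quad \text{for every trigonometric polynomial } T_N \text{ of degree at most } N,
[/tex]

with equality only when T_N = S_N f; this is exactly the least-squares characterization of the truncated Fourier series.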

Sorry, I got to go, I will write the rest later.
 

FAQ: Fourier Series as (Generalized) Least Squares?

What does it mean to view a Fourier series as a (generalized) least-squares problem?

It means approximating a function by a sum of sine and cosine terms whose coefficients are chosen to minimize the integrated squared error between the function and the approximation. In other words, it is a form of regression in which the trigonometric basis functions play the role of the regressors.
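Schematically, for a cutoff at N the quantity being minimized over the coefficients is the integrated squared error

[tex]
E(a_0,\dots,a_N,b_1,\dots,b_N)=\int_{-\pi}^{\pi}\Bigl(f(x)-\tfrac{a_0}{2}-\sum_{n=1}^{N}\bigl(a_n\cos nx+b_n\sin nx\bigr)\Bigr)^{2}\,dx,
[/tex]

and because the trigonometric functions are orthogonal over [-π, π], the minimizing coefficients are exactly the classical Fourier coefficients of f.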

What is the purpose of viewing a Fourier series as a least-squares fit?

The least-squares view makes precise in what sense the (truncated) Fourier series is the best fit to the given function: it is the closest element of the trigonometric subspace in the squared-error sense. It also lets us represent a complicated function by a simpler finite set of sines and cosines, which is easier to analyze and manipulate.

What are the main applications of this least-squares view of Fourier series in science?

It appears throughout signal processing, image reconstruction, and data compression, and it is used in fields such as physics, engineering, and finance to model and analyze complex phenomena.

What are the limitations of approximating a function by a truncated Fourier series?

A Fourier series only represents periodic functions (or functions on a finite interval, extended periodically). The approximation also struggles near sharp changes or discontinuities (the Gibbs phenomenon), and its accuracy depends on the number of terms kept, which can become computationally expensive.

How does the least-squares view differ from the standard Fourier series?

For the standard orthogonal trigonometric basis the two coincide: the classical Fourier coefficients are exactly the coefficients that minimize the squared error, so the truncated Fourier series is the least-squares fit. The "generalized" least-squares view becomes genuinely different only when the basis is not orthogonal or a weighted inner product is used; the coefficients then come from solving the normal equations with the Gram matrix rather than from the simple integral formulas.
