Finite differences on scalar? Matrix?

In summary, the conversation is about the [itex]\Delta^K[/itex] finite difference operator and its application to a sequence of values. The goal is to calculate the matrix [itex]\mathbf{V}[/itex] that results from applying the finite difference operator to a polynomial expression involving the sequence. Algorithm 3.1 in the linked thesis describes how to compute [itex]\mathbf{V}[/itex], but the original poster is unsure how to actually carry out the calculation and asks for clarification. They also suggest a possible solution: taking discrete differences of the sequence.
  • #1
divB
Hi,

In a paper I have

[tex]v_{n,k} = \Delta^K ( (-1)^n n^k y_n )[/tex]

with [tex]n = K, \dots , N-1[/tex], [tex]k = 0, \dots, K[/tex] and [tex]N = 2K[/tex]

where [tex]\Delta^K[/tex] is the Kth finite difference operator.

As you can see, all [tex]v_{n,k}[/tex] constitute an [tex](N-K) \times (K+1)[/tex] matrix.

So without the [tex]\Delta[/tex]'s, each [tex]v_{n,k}[/tex] would be a scalar. I do not see how to calculate the finite difference of a scalar?!

Well, probably it is not a finite difference. But can anybody tell me what could be meant with that?

Regards,
divB
 
  • #2
I don't understand. You say that [itex]\Delta^K[/itex] is the "Kth finite difference operator" and ask what [itex]v_{n, k}[/itex] would be without the [itex]\Delta[/itex]? It would no longer be a finite difference!

(The finite difference of a scalar function, at some n, would be f(n+1) - f(n).)
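
In terms of a finite sequence that just means [itex]\Delta y_n = y_{n+1} - y_n[/itex], and [itex]\Delta^K[/itex] is that operation applied K times. A minimal NumPy sketch (the sample values here are arbitrary, just for illustration):

[code]
import numpy as np

y = np.array([2.0, 5.0, 3.0, 7.0])   # any finite sequence

d1 = np.diff(y)         # Delta y:   y[n+1] - y[n]       -> [ 3. -2.  4.]
d2 = np.diff(y, n=2)    # Delta^2 y: Delta applied twice -> [-5.  6.]
[/code]

Each application of [itex]\Delta[/itex] shortens the sequence by one entry, so the Kth difference of an N-point sequence has N - K entries.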
 
  • #3
Hi,

Yes, this is exactly my question! I do not understand what is meant by it!

Maybe you can take a look at http://biblion.epfl.ch/EPFL/theses/2001/2369/EPFL_TH2369.pdf, page 55-56.

Let me try to explain again: suppose you have a sequence [tex]y_n[/tex] consisting of N values. Now consider the expression

[tex](-1)^n P(n) y_n[/tex], where [tex]P(n)[/tex] is a polynomial of order K in the variable n. The authors argue that this term vanishes when K finite differences are applied. This yields equations 3.14-3.16 in the link above and can be written as a matrix equation:

[tex]
\Delta^K ((-1)^n P(n) y_n) = \sum_{k=0}^K p_k \underbrace{\Delta^K ((-1)^n n^k y_n)}_{v_{n,k}} = \mathbf{V} \cdot \mathbf{p} = 0
[/tex]

My question is how to calculate the matrix [tex]\mathbf{V}[/tex]. Algorithm 3.1 in the link above states exactly what I asked about in the original post, but I do not know how to actually compute it.

It would be easier to understand if there were just one dimension.

But this brings me to an idea: the finite difference is taken with respect to n, isn't it?

So for the first column I set k=0, which gives the sequence [tex](-1)^n n^0 y_n[/tex], and the column entries are just the discrete differences of that sequence?

E.g. if [tex](-1)^n n^0 y_n = \left\{1, 4, 9, 8, 10\right\}[/tex] then the first column would be [tex]\left[3 , 5 , -1 , 2\right]^T[/tex]?

And the same for the columns [tex]k = 1, \dots, K[/tex]?

Maybe this is the solution?

Regards, divB
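
For reference, here is a rough NumPy sketch of that idea (my own illustration of this reading of Algorithm 3.1, not code from the thesis; the index conventions there may differ slightly). Each column of [tex]\mathbf{V}[/tex] is built by applying the first difference K times to the sequence [tex](-1)^n n^k y_n[/tex]:

[code]
import numpy as np

def build_V(y, K):
    """Column k of V: the Kth finite difference of (-1)^n n^k y_n."""
    y = np.asarray(y, dtype=float)
    N = len(y)                          # in the paper N = 2K, but any N > K works here
    n = np.arange(N)
    V = np.empty((N - K, K + 1))
    for k in range(K + 1):
        s = (-1.0) ** n * n ** k * y    # the sequence (-1)^n n^k y_n
        for _ in range(K):
            s = s[1:] - s[:-1]          # one forward difference: s[n+1] - s[n]
        V[:, k] = s                      # after K differences, len(s) == N - K
    return V

# The coefficient vector p would then be a null vector of V, e.g. the right
# singular vector belonging to the smallest singular value:
# p = np.linalg.svd(build_V(y, K))[2][-1]
[/code]

Note that [itex]\Delta^K[/itex] means K repeated differences, so for K = 1 a single difference of the product sequence (as in the example above, [tex]\left\{1, 4, 9, 8, 10\right\} \rightarrow \left[3, 5, -1, 2\right]^T[/tex]) is exactly right, while for K > 1 each column has to be differenced K times.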
 

FAQ: Finite differences on scalar? Matrix?

What is the definition of a finite difference on a scalar?

A finite difference on a scalar is a mathematical technique used to approximate the derivative of a function at a specific point. It involves calculating the difference between the function values at two nearby points and dividing by the distance between those points.

How are finite differences on scalars used in real-world applications?

Finite differences on scalars are commonly used in physics, engineering, and finance to approximate the rate of change of a system or process. They can also be used to solve differential equations numerically.

What is the formula for a finite difference on a scalar?

The general formula for a finite difference on a scalar is:
f'(x) ≈ (f(x+h) - f(x))/h
where h is the distance between the two points and f(x) is the value of the function at the point of interest.
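
For instance, a tiny Python illustration of that formula (the function and step size are arbitrary):

[code]
import math

def forward_difference(f, x, h=1e-5):
    """Approximate f'(x) by (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

print(forward_difference(math.sin, 1.0))   # ~0.54030, close to cos(1.0) = 0.54030...
[/code]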

What are the advantages of using finite differences on scalars?

One advantage of using finite differences on scalars is that it is a relatively simple and straightforward method for approximating derivatives. It also requires only sampled function values rather than a closed-form expression for the function, which makes it applicable to a wide range of problems.

How do finite differences on a scalar differ from finite differences on a matrix?

Finite differences on a scalar approximate the derivative of a single-variable function, while finite differences on a matrix approximate the partial derivatives of a multi-variable function. In the matrix case, the differences between function values are taken along the different axes of the matrix.
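
For example, a short NumPy sketch of differencing along different axes (the matrix values are arbitrary):

[code]
import numpy as np

A = np.array([[1, 2, 4],
              [3, 5, 9]])

print(np.diff(A, axis=0))   # differences between consecutive rows: [[2 3 5]]
print(np.diff(A, axis=1))   # differences within each row: [1 2] and [2 4]
[/code]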
