Numerical Methods: Solving Exam Pickle with Factorization & Stability

In summary, the poster is seeking help understanding factorization and numerical stability in preparation for an upcoming exam. They are struggling to grasp the concepts and have not found any helpful materials. They also ask how eigenvalues and eigenfunctions come into play in numerical stability.
  • #1
Thorra
Okay, I'm in a bit of a pickle here. Got the exam on Thursday and (surprise) I am utterly clueless.

I cannot grasp a lot of concepts, but here's some I'd like to at least get an idea of:

Factorization method. I only gathered that it is a special case of Gauss' elimination method: you take a 2nd order difference and turn it into three 1st order differences (or a 4th order difference into five 2nd order differences, I guess). But I simply could not follow the math. It's very much playing with coefficients and this "recurrence equation" that goes like this: $$y_i=\alpha_{i+1}y_{i+1}+\beta_{i+1}$$
and they're put into some mysterious $$A_iy_{i-1}-C_i y_i + B_i y_{i+1}=-F_i; i=1,2,...,N-1$$
And later on some big importance is given to this: $$|C_i|\geqslant|A_i|+|B_i|$$
And I really don't know anything about stability. It's the first thread I made here, too. But yeah, I can't find any materials that would actually explain to me what stability is and how the various elements like those eigenvalues $\lambda$ and eigenfunctions and whatnot come into play with it.

I guess that's all I can muster to write now.
 
  • #2
I like Serena
Thorra said:
Okay, I'm in a bit of a pickle here. Got the exam on Thursday and (surprise) I am utterly clueless.

I cannot grasp a lot of concepts, but here's some I'd like to at least get an idea of:

Well, we'll just start with the first. ;)

Factorization method. I only gathered that it is a special case of Gauss' elimination method: you take a 2nd order difference and turn it into three 1st order differences (or a 4th order difference into five 2nd order differences, I guess). But I simply could not follow the math.

No clue what you mean here.
Generally, factorization means writing an expression as a product of factors.
For instance $a^2-b^2=(a-b)(a+b)$.
There you go, we have factorized $a^2-b^2$.

It's very much playing with coefficients and this "recurrence equation" that goes like this: $$y_i=\alpha_{i+1}y_{i+1}+\beta_{i+1}$$
and they're put into some mysterious $$A_iy_{i-1}-C_i y_i + B_i y_{i+1}=-F_i; i=1,2,...,N-1$$

Those look like two different recurrence equations that have nothing in common.
A recurrence equation is just an equation that has stuff like $y_i$ and $y_{i+1}$ in it.
And later on some big importance is given to this: $$|C_i|\geqslant|A_i|+|B_i|$$

Yes. This is leading to numerical stability.
It is related to the second recurrence equation you gave.
For the recurrence equation to be numerically stable, that condition should hold.
Presumably your textbook gives some more explanation why that is the case.
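For what it's worth, that inequality is a diagonal dominance condition: if it holds, one can show every $|\alpha_i| \le 1$ in the sweep, so errors are not amplified in the back substitution. Here's a minimal Python sketch of the whole factorization (sweep) method as I read it off your two formulas; the function name and array layout are my own invention, so treat it as an illustration rather than your course's exact algorithm:

```python
import numpy as np

def sweep(A, C, B, F, y0, yN):
    """Solve A[i]*y[i-1] - C[i]*y[i] + B[i]*y[i+1] = -F[i] for i = 1..N-1,
    given boundary values y[0] = y0 and y[N] = yN.
    All arrays have length N+1; only entries 1..N-1 are used."""
    N = len(F) - 1
    alpha = np.zeros(N + 1)
    beta = np.zeros(N + 1)
    alpha[1], beta[1] = 0.0, y0        # encodes y[0] = y0
    for i in range(1, N):              # forward sweep for the coefficients
        denom = C[i] - A[i] * alpha[i]
        alpha[i + 1] = B[i] / denom
        beta[i + 1] = (F[i] + A[i] * beta[i]) / denom
    y = np.zeros(N + 1)
    y[N] = yN
    for i in range(N - 1, -1, -1):     # back substitution: y[i] = alpha[i+1]*y[i+1] + beta[i+1]
        y[i] = alpha[i + 1] * y[i + 1] + beta[i + 1]
    return y

# Example: y'' = 0 with y(0) = 0, y(1) = 1 discretizes to y[i-1] - 2*y[i] + y[i+1] = 0,
# so A = B = 1, C = 2, F = 0, and |C_i| >= |A_i| + |B_i| holds.
N = 10
print(sweep(np.ones(N + 1), 2 * np.ones(N + 1), np.ones(N + 1), np.zeros(N + 1), 0.0, 1.0))
# -> the straight line 0.0, 0.1, ..., 1.0
```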
And I really don't know anything about stability. It's the first thread I made here, too. But yeah, I can't find any materials that would actually explain to me what stability is and how the various elements like those eigenvalues $\lambda$ and eigenfunctions and whatnot come into play with it.

From WolframMathWorld:

Numerical stability refers to how a malformed input affects the execution of an algorithm. In a numerically stable algorithm, errors in the input lessen in significance as the algorithm executes, having little effect on the final output.
On the other hand, in a numerically unstable algorithm, errors in the input cause a considerably larger error in the final output.

So suppose we have a recurrence equation of the form:
$$y_{i+1} = \frac 1 2 y_i$$
There you go: a numerically stable algorithm! (Angel)
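If you want to see that in action, here's a tiny Python experiment (my own toy, nothing from a textbook) that perturbs the input of $y_{i+1} = r\, y_i$ and watches the error:

```python
# Propagate a small input error through y_{i+1} = r*y_i for 20 steps.
def iterate(r, y0, steps):
    y = y0
    for _ in range(steps):
        y = r * y
    return y

for r in (0.5, 2.0):
    err = abs(iterate(r, 1.0 + 1e-6, 20) - iterate(r, 1.0, 20))
    print(f"r = {r}: input error 1e-06 becomes {err:.2e} after 20 steps")
```

With $r = \frac 1 2$ the input error shrinks to about $10^{-12}$; with $r = 2$ the very same error grows to about $1$.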
 
  • #3
I like Serena
Thorra said:
It's very much playing with coefficients and this "recurrence equation" that goes like this: $$y_i=\alpha_{i+1}y_{i+1}+\beta_{i+1}$$

This would probably be intended to explain numerical stability.
If $|\alpha_{i+1}| < 1$ the recurrence equation is unstable (assuming we use a forward algorithm running from $y_i$ to $y_{i+1}$), since solving for $y_{i+1}$ multiplies the error in $y_i$ by $1/\alpha_{i+1}$, making the error in $y_{i+1}$ greater than the error in $y_i$.
In particular $\beta_{i+1}$ has no influence on the stability.
If $|\alpha_{i+1}| > 1$ the recurrence equation is stable (in a forward algorithm).

In the (forward) algorithm
$$y_{i+1}=A_{i+1}y_{i}+\beta_{i+1}$$
where $y_i$ would be a vector, and $A_{i+1}$ is a matrix, the same principle holds.
If $||A_{i+1}|| > 1$ the algorithm is unstable, and if $||A_{i+1}|| < 1$ it is stable.
Here $|| \cdot ||$ is the matrix norm.
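To make that concrete, here's a small (made-up) numerical experiment: iterate $y_{i+1} = A y_i + b$ with a fixed matrix $A$ and watch what an input perturbation does for $||A|| < 1$ versus $||A|| > 1$:

```python
import numpy as np

def run(A, b, y0, steps):
    """Iterate y_{i+1} = A @ y_i + b for a number of steps."""
    y = y0.copy()
    for _ in range(steps):
        y = A @ y + b
    return y

b = np.array([1.0, -1.0])
y0 = np.array([2.0, 3.0])
delta = 1e-8 * np.array([1.0, 1.0])    # small error on the input

for scale in (0.5, 1.5):               # ||A|| = scale when A = scale * I
    A = scale * np.eye(2)
    err = np.linalg.norm(run(A, b, y0 + delta, 30) - run(A, b, y0, 30))
    print(f"||A|| = {scale}: input error after 30 steps = {err:.2e}")
```

With $||A|| = 0.5$ the perturbation has shrunk below $10^{-16}$ after 30 steps; with $||A|| = 1.5$ it has grown by a factor of roughly $1.5^{30} \approx 2 \cdot 10^5$.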
 
  • #4
Thorra
I like Serena said:
No clue what you mean here.
Generally, factorization means writing an expression as a product of factors.
For instance $a^2-b^2=(a-b)(a+b)$.
There you go, we have factorized $a^2-b^2$.
Well, I'm sure knowing this as the base idea won't hurt me.

I like Serena said:
Those look like 2 different recurrence equations that have nothing in common.
A recurrence equation is just an equation that has stuff like $y_i$ and $y_{i+1}$ in it.
Ah
I like Serena said:
Yes. This is leading to numerical stability.
It is related to the second recurrence equation you gave.
For the recurrence equation to be numerically stable, that condition should hold.
Presumably your textbook gives some more explanation why that is the case.
Aah
I like Serena said:
From WolframMathWorld:

Numerical stability refers to how a malformed input affects the execution of an algorithm. In a numerically stable algorithm, errors in the input lessen in significance as the algorithm executes, having little effect on the final output.
On the other hand, in a numerically unstable algorithm, errors in the input cause a considerably larger error in the final output.
Yeeeah?
I like Serena said:
So suppose we have a recurrence equation of the form:
$$y_{i+1} = \frac 1 2 y_i$$
There you go: a numerically stable algorithm! (Angel)
say what?
$i$ is going from N to 1, right? So $y_i=2y_{i+1}$: it gets bigger, so the error probably gets bigger as well. Or am I supposed to understand it like this: the error is constant, and the bigger $y_i$ gets, the smaller the part of the result the error will make up?

I like Serena said:
This would probably be intended to explain numerical stability.
If $|\alpha_{i+1}| < 1$ the recurrence equation is unstable (assuming we use a forward algorithm running from $y_i$ to $y_{i+1}$), since solving for $y_{i+1}$ multiplies the error in $y_i$ by $1/\alpha_{i+1}$, making the error in $y_{i+1}$ greater than the error in $y_i$.
In particular $\beta_{i+1}$ has no influence on the stability.
If $|\alpha_{i+1}| > 1$ the recurrence equation is stable (in a forward algorithm).
It does go together with this.

I like Serena said:
In the (forward) algorithm
$$y_{i+1}=A_{i+1}y_{i}+\beta_{i+1}$$
where $y_i$ would be a vector, and $A_{i+1}$ is a matrix, the same principle holds.
If $||A_{i+1}|| > 1$ the algorithm is unstable, and if $||A_{i+1}|| < 1$ it is stable.
Here $|| \cdot ||$ is the matrix norm.
By matrix norm you mean that A includes both the coefficients and the divisions by steps?
I find the $A_{i+1}y_{i}$ bit strange. Why have a coefficient from a further spot directly affect your current spot rather than the current spot's coefficient?
 
  • #5
Thorra
On a different note: do you know the differences and advantages between one-step and multi-step methods for the Cauchy problem? And the difference between closed schemes $\phi(t_i,h_i,\omega_i,\omega_{i+1})$ and open schemes $\phi(t_i,h_i,\omega_i)$, besides just the fact that $\omega_{i+1}$ is easier to determine in an open scheme?

Edit: I meant to write that symbol that is often used in place of $f$ when it's about an approximate form. I thought it was called phi. Oh well.
 
  • #6
I like Serena
Thorra said:
say what?
$i$ is going from N to 1, right? So $y_i=2y_{i+1}$: it gets bigger, so the error probably gets bigger as well. Or am I supposed to understand it like this: the error is constant, and the bigger $y_i$ gets, the smaller the part of the result the error will make up?

In a forward algorithm, $i$ would be going from 1 to N.
Or that is at least what I intended.
By matrix norm you mean that A includes both the coefficients and the divisions by steps?

Huh?
The matrix norm $||A||$ is defined as the greatest value $\frac{||Ax||}{||x||}$ can attain for $x \ne 0$, where $||\cdot||$ is the regular vector norm.

Formally:
$$||A|| \overset{def}{=} \max_{x \ne 0} \frac{||Ax||}{||x||} = \max_{||x||=1} ||Ax||$$
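If it helps, here's an illustrative numerical check of that definition (numpy's `np.linalg.norm(A, 2)` computes the induced 2-norm; the random sampling is just my quick way of approximating the maximum):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((3, 3))

norm_A = np.linalg.norm(A, 2)    # induced 2-norm = largest singular value of A
sampled = max(np.linalg.norm(A @ x) / np.linalg.norm(x)
              for x in rng.standard_normal((10000, 3)))

print(norm_A, sampled)           # the sampled maximum approaches ||A|| from below
```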

I find the $A_{i+1}y_{i}$ bit strange. Why have a coefficient from a further spot directly affect your current spot rather than the current spot's coefficient?

I haven't given any thought to whether it is realistic or not.
The matrix $A_{i+1}$ is merely intended to be a matrix that can be accurately calculated and does not depend on the solution $y$.
Thorra said:
On a different note: do you know the differences and advantages between one-step and multiple-step methods of Cauchy problem?

From wiki:
Multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as Euler's method) refer to only one previous point and its derivative to determine the current value. Methods such as Runge-Kutta take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of linear multistep methods, a linear combination of the previous points and derivative values is used.

What do you think that the advantages and disadvantages are?
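To make that quote concrete, here's a minimal sketch of a two-step linear multistep method, Adams-Bashforth; the names are my own, and it's only meant to show how the previous derivative value is kept and reused:

```python
# Two-step Adams-Bashforth: y_{n+1} = y_n + h*(3/2*f(t_n, y_n) - 1/2*f(t_{n-1}, y_{n-1})).
# A single-step method would discard f(t_{n-1}, y_{n-1}); here it is reused.
def adams_bashforth2(f, t0, y0, h, n_steps):
    ys = [y0, y0 + h * f(t0, y0)]    # bootstrap the second point with one Euler step
    for n in range(1, n_steps):
        t_n, t_prev = t0 + n * h, t0 + (n - 1) * h
        ys.append(ys[n] + h * (1.5 * f(t_n, ys[n]) - 0.5 * f(t_prev, ys[n - 1])))
    return ys

print(adams_bashforth2(lambda t, y: -y, 0.0, 1.0, 0.1, 10)[-1])  # ~ exp(-1) for y' = -y
```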
And the difference between closed schemes $\phi(t_i,h_i,\omega_i,\omega_{i+1})$ and open schemes $\phi(t_i,h_i,\omega_i)$, besides just the fact that $\omega_{i+1}$ is easier to determine in an open scheme?

Say what?
Give some context perhaps?
Edit: I meant to write that symbol that is often used in place of $f$ when it's about an approximate form. I thought it was called phi. Oh well.

An approximation of $f$ is often denoted as $\tilde f$.
Is that what you mean?
 
  • #7
Thorra
I like Serena said:
From wiki:
Multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as Euler's method) refer to only one previous point and its derivative to determine the current value. Methods such as Runge-Kutta take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of linear multistep methods, a linear combination of the previous points and derivative values is used.

What do you think that the advantages and disadvantages are?
Sounds like multi-step methods are better. They take into account the previous points in case one point happens to be a bit off. What possible disadvantage? Maybe if the function changes rapidly at one point, the multistep method goes off track because of the previous points trying to keep it on its usual course? I'm just using my imagination, really.
I like Serena said:
Say what?
Give some context perhaps?
I remember the context being that we need to find $\omega_{i+1}$, and in a closed scheme it is part of the function itself, and thus it's harder to determine.
I like Serena said:
An approximation of $f$ is often denoted as $\tilde f$.
Is that what you mean?
Oh, I just meant this guy: $\varphi$. I'm pretty sure it's used as the difference-approximation counterpart to $f$ ($f$ belonging to the analytic solution).
 
  • #8
I like Serena
Thorra said:
Sounds like multi-step methods are better. They take into account the previous points in case one point happens to be a bit off. What possible disadvantage? Maybe if the function changes rapidly at one point, the multistep method goes off track because of the previous points trying to keep it on its usual course? I'm just using my imagination, really.

Off the top of my head:
advantages:
- multistep methods are more accurate (higher order)
- give an opportunity to cancel out forward and backward errors
disadvantages:
- multistep methods take more evaluations
- multistep methods may use steps that are not "on the grid"

To illustrate, Runge-Kutta is a favored method for solving differential equations.
It is a 4-stage method with order $\mathcal O(h^4)$, which is much better than, for instance, the 1-step Euler method with order $\mathcal O(h)$.

In addition, the 1-step Euler method gives errors that usually accumulate in the same direction, while Runge-Kutta gives errors that can cancel each other out.

In principle we could also use methods with more stages per iteration that achieve an even higher order of accuracy, but the additional computations cost more than simply running Runge-Kutta with a smaller step size to reach the same accuracy.

The main disadvantage of Runge-Kutta is that its steps are not "on the grid". If you have to solve a differential equation of which only specific values are given at specific distances, you won't be able to use it.
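For a rough feel of those orders, here's a small comparison I threw together (assuming the classical 4-stage Runge-Kutta formulas; the helper names are mine):

```python
import numpy as np

def euler_step(f, t, y, h):
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    """One classical 4-stage Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Solve y' = -y, y(0) = 1 up to t = 1; the exact answer is exp(-1).
f = lambda t, y: -y
h, n = 0.1, 10
for name, step in (("Euler", euler_step), ("RK4", rk4_step)):
    y = 1.0
    for i in range(n):
        y = step(f, i * h, y, h)
    print(name, abs(y - np.exp(-1.0)))   # Euler error ~ 2e-2, RK4 error ~ 3e-7
```

Halving $h$ roughly halves the Euler error, but divides the RK4 error by about 16.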
 
  • #9
Thorra
Hey, I know it's been a while, but could you explain in a bit more detail?
I like Serena said:
advantages:
- give an opportunity to cancel out forward and backward errors
disadvantages:
- multistep methods take more evaluations
- multistep methods may use steps that are not "on the grid"
The main disadvantage of Runge-Kutta is that its steps are not "on the grid". If you have to solve a differential equation of which only specific values are given at specific distances, you won't be able to use it.
Though Runge-Kutta also has a 2-stage version, right? With order of accuracy $\mathcal O(h^2)$.

Btw, could you explain the difference between local and global truncation errors (LTE and GTE)? As I understand it, the LTE is the $\mathcal O(h^n)$ that we see at the end of any Taylor series expansion. But in the end result we use the GTE, right? Like when we did differential approximation errors, were they GTEs? I just keep reading that the GTE is one order of approximation lower than the LTE. And I don't know if that is supposed to only mean something for the single/multistep methods or if it applies to the later things too, with differential approximation and so forth. I have some really chaotic learning materials, so it's problematic.

Thanks.
 

FAQ: Numerical Methods: Solving Exam Pickle with Factorization & Stability

1. What are numerical methods?

Numerical methods are techniques used to solve mathematical problems that cannot be solved analytically. They involve approximating solutions using a series of calculations instead of exact solutions.

2. What is factorization in numerical methods?

Factorization is a technique used in numerical methods to break a complex problem into smaller, more manageable parts. In this thread, for example, the factorization (sweep) method reduces a second-order difference equation to a pair of first-order recurrences that can be solved one after the other.

3. How can numerical methods help with exam problems?

Numerical methods can be used to solve such exam problems by approximating the solution using techniques such as interpolation, root finding, or integration. These methods help find the most accurate solution possible within a given time frame.

4. What is stability in numerical methods?

Stability in numerical methods refers to the ability of a method to produce consistent and accurate results, even when there are small changes in the input parameters. A stable method is one that produces reliable solutions that are not greatly affected by small variations in the problem.

5. What are the advantages of using numerical methods in problem-solving?

Numerical methods offer several advantages in problem-solving, including the ability to handle complex problems that cannot be solved analytically, the ability to find approximate solutions within a reasonable time frame, and the ability to handle problems with variables that are difficult to quantify.
