- #1
pamparana
Hi everyone,
This is more of a numerical question but I felt this would be the most appropriate forum. I apologise if it is not.
I have a gradient descent problem of the following form:
[tex]\psi_{n+1}=\psi_{n}+\alpha\left(\nabla\psi_{n}\cdot D^{2}\psi_{n}\right)[/tex]
I am trying this on a 256x256 image grid where everything is spaced uniformly, with dx = dy = 1. I am using a step size of 0.5 with plain gradient descent.
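In case it helps, here is a minimal sketch of what I am doing. My notation above is a bit loose, so to be clear: in this sketch I am assuming the update multiplies the gradient magnitude |∇ψ| by the 5-point Laplacian, elementwise, with periodic boundaries for simplicity (the boundary handling in my real code is different):

```python
import numpy as np

def laplacian(psi):
    # 5-point Laplacian, dx = dy = 1, periodic boundaries for simplicity
    return (np.roll(psi, 1, 0) + np.roll(psi, -1, 0)
            + np.roll(psi, 1, 1) + np.roll(psi, -1, 1) - 4.0 * psi)

def grad_mag(psi):
    # central-difference gradient magnitude, dx = dy = 1
    gx = 0.5 * (np.roll(psi, -1, 0) - np.roll(psi, 1, 0))
    gy = 0.5 * (np.roll(psi, -1, 1) - np.roll(psi, 1, 1))
    return np.sqrt(gx ** 2 + gy ** 2)

# smooth stand-in for my image: a Gaussian bump on a 256x256 grid
x = np.arange(256)
X, Y = np.meshgrid(x, x)
psi = np.exp(-((X - 128.0) ** 2 + (Y - 128.0) ** 2) / (2.0 * 40.0 ** 2))

alpha = 0.5  # the fixed step size I am currently using
for _ in range(10):
    psi = psi + alpha * grad_mag(psi) * laplacian(psi)
```

If I view this as an explicit (forward-Euler) step of a diffusion equation with a spatially varying coefficient g = |∇ψ|, then the usual 2D FTCS stability bound with dx = dy = 1 would be roughly α·max(g) ≤ 1/4, which might explain why a fixed α = 0.5 eventually blows up once the gradients grow. But I am not sure this analysis is right for my nonlinear case.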
Somewhere down the line the algorithm becomes very unstable: artefacts start appearing, the whole thing falls apart, and it never converges.
Looking through the internet, people recommend using the Crank-Nicolson scheme to solve this kind of system. However, I am having trouble formulating my problem in that scheme.
Would anyone know how I can structure this problem using the CN scheme? Also, is there a way to determine the optimal step size so as not to cause instability at each iteration?
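For what it's worth, here is the direction I have been trying, in case someone can confirm or correct it. My assumption (not something I have seen stated for this exact problem) is to freeze the nonlinear coefficient g = |∇ψₙ| at the current iterate, which turns each step into a linear system that Crank-Nicolson can advance: solve (I − (α/2)·g·L)ψₙ₊₁ = (I + (α/2)·g·L)ψₙ, where L is the discrete Laplacian. A sketch with scipy's sparse solvers:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_matrix(n):
    # 1D second-difference matrix, extended to 2D by a Kronecker sum (dx = dy = 1)
    e = np.ones(n)
    L1 = sp.diags([e[:-1], -2.0 * e, e[:-1]], [-1, 0, 1], format="csr")
    I = sp.identity(n, format="csr")
    return sp.kron(I, L1) + sp.kron(L1, I)

def cn_step(psi, alpha, L):
    n2 = psi.size
    # freeze g = |grad psi| at time n (lagged-coefficient linearization)
    gx = 0.5 * (np.roll(psi, -1, 0) - np.roll(psi, 1, 0))
    gy = 0.5 * (np.roll(psi, -1, 1) - np.roll(psi, 1, 1))
    g = sp.diags(np.sqrt(gx ** 2 + gy ** 2).ravel())
    # Crank-Nicolson: implicit and explicit halves of the frozen operator
    A = sp.identity(n2) - 0.5 * alpha * (g @ L)
    B = sp.identity(n2) + 0.5 * alpha * (g @ L)
    return spla.spsolve(A.tocsc(), B @ psi.ravel()).reshape(psi.shape)

# one step on a small smooth test field
n = 32
L = laplacian_matrix(n)
x = np.arange(n)
X, Y = np.meshgrid(x, x)
psi = np.exp(-((X - n / 2.0) ** 2 + (Y - n / 2.0) ** 2) / (2.0 * 5.0 ** 2))
psi = cn_step(psi, 0.5, L)
```

If this lagged linearization is acceptable, the implicit half should remove the hard step-size restriction, at the cost of a sparse solve per iteration (for 256x256 the system is 65536x65536 but very sparse). I would still appreciate knowing whether freezing g like this is a legitimate way to cast my update into the CN framework.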
Thanks,
Luca