kelly0303
Hello! I have a matrix (about 20 x 20) corresponding to a given Hamiltonian, and I would like to write an optimization code that matches the eigenvalues of this matrix to some experimentally measured energies. I wanted to use gradient descent, but that does not seem to work in a straightforward manner, and I was wondering if someone has any advice on how to proceed.

In my case, the diagonal terms are mainly of the form ##ax^2+bx^4##, where a and b are the values I want to fit for, and x is around 20. Based on some theoretical calculations, I expect a to be around 5000 and b to be around 0.005, so the first term is on the order of ##5000 \times 20^2 = 2000000## and the second term is on the order of ##0.005\times 20^4 = 800##. The off-diagonal terms are much smaller, on the order of ~1.

The main problem is that the gradient of the function with respect to b is huge, i.e. ##x^4##, while b itself is very small. Moreover, when doing the diagonalization, the ##bx^4## term gets mixed nonlinearly with the other terms of the matrix, so in the end the gradient is not simply ##x^4##; for example, going from b = 0.0055 to b = 0.0056 changes the gradient of the eigenvalues with respect to b by almost 5 orders of magnitude. Is there a way to deal with this? (For context, this is for fitting rotational parameters to a molecular spectrum.) Thank you!
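To make the setup concrete, here is a minimal Python/NumPy sketch of the kind of fit I mean. Everything in it is a stand-in for my real problem: the helper names (build_hamiltonian, residuals), the nearest-neighbour couplings of order 1, and the "measured" energies, which are generated from the assumed true values a = 5000, b = 0.005 rather than taken from my actual data.

```python
import numpy as np

# Toy stand-in for the real 20 x 20 problem: diagonal entries a*x^2 + b*x^4
# with x around 20, off-diagonal couplings of order ~1, and "measured"
# energies generated from assumed true values a = 5000, b = 0.005.

def build_hamiltonian(a, b, x, coupling=1.0):
    """Diagonal a*x^2 + b*x^4 plus constant nearest-neighbour couplings."""
    H = np.diag(a * x**2 + b * x**4)
    i = np.arange(len(x) - 1)
    H[i, i + 1] = H[i + 1, i] = coupling
    return H

def residuals(params, x, measured):
    """Sorted eigenvalues of the trial Hamiltonian minus the measured energies."""
    a, b = params
    return np.sort(np.linalg.eigvalsh(build_hamiltonian(a, b, x))) - measured

def loss(params, x, measured):
    """Sum of squared eigenvalue residuals (the quantity I try to minimize)."""
    return np.sum(residuals(params, x, measured) ** 2)

x = np.linspace(18.0, 22.0, 20)                      # "x is around 20"
measured = np.sort(np.linalg.eigvalsh(build_hamiltonian(5000.0, 0.005, x)))

# Finite-difference gradient of the loss at a trial point (a, b), to show
# the scale mismatch: dL/db is enormous even though b itself is ~0.005.
p = np.array([4900.0, 0.004])
eps = np.array([1e-3, 1e-9])                         # per-parameter step sizes
grad = [(loss(p + e * np.eye(2)[k], x, measured) - loss(p, x, measured)) / e
        for k, e in enumerate(eps)]
print("dL/da ~ %.3g, dL/db ~ %.3g" % (grad[0], grad[1]))
```

On this toy version the gradient with respect to b comes out several orders of magnitude larger than the one with respect to a (roughly a factor of ##x^2##), which is the behaviour that makes a plain gradient-descent step impossible to scale sensibly for both parameters at once.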