How to use the Newton-Raphson method while keeping the sum constant?

  • #1
K-Manu
Question: How do I find solutions using the Newton-Raphson method while keeping the sum of the solutions fixed?

I am currently working on obtaining accurate solutions of a high-dimensional system of equations using the Newton-Raphson method, which I have implemented in Python.
However, my problem requires that the sum of the solutions must equal a constant value, z, which I set. Despite trying several approaches to enforce this constraint, I have not yet achieved the desired result.
For example, I attempted the following methods:
  1. Ensuring that the sum of the updates vanishes, i.e. sum(dx) = 0
  2. Normalizing the solution vector via x → x/sum(x)*z
  3. Extending the matrix to incorporate the constraint (as shown in my Python code, which I discussed here: Stack Overflow link).
Unfortunately, these approaches have not yielded the expected results (a simplified sketch of attempts 1 and 2 is included below).
Any mathematical insights or tips would be greatly appreciated.
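Roughly, attempts 1 and 2 amount to something like the sketch below (simplified placeholder code rather than my real implementation; f, jacobian, x0, and the tolerances stand in for the actual problem):

[code=python]
import numpy as np

# Simplified placeholder code, not my real implementation:
# f(x) returns the n residuals, jacobian(x) their Jacobian, x0 is the initial guess.
def constrained_newton_attempt(f, jacobian, x0, z, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    x = x / x.sum() * z                        # attempt 2: normalise the initial guess
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(jacobian(x), -r)  # ordinary Newton update
        dx -= dx.mean()                        # attempt 1: force sum(dx) = 0
        x = x + dx
        x = x / x.sum() * z                    # attempt 2: re-normalise after the step
    return x
[/code]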
 
  • #2
Please explain "the sum of the solutions". If you mean [tex]
\left.\begin{array}{c}
f_1(x_1, \dots, x_n) = 0 \\
\vdots \\
f_n(x_1, \dots, x_n) = 0
\end{array}\right\}\mbox{ subject to }\sum_{i=1}^n x_i = z[/tex] then that is [itex]n + 1[/itex] equations in [itex]n[/itex] unknowns, which might not have any solutions. You can avoid that issue by adding [itex]z[/itex] as an unknown and starting from an initial guess in which [itex]z[/itex] equals your chosen value. But again, there may simply not be any solutions for that value of [itex]z[/itex], or the initial guess for the other [itex]x_i[/itex] may lie in the domain of attraction of a fixed point which does not satisfy your constraint. Adding [itex]z[/itex] as an unknown is equivalent to solving [tex]
g_i(x_1, \dots, x_{n-1}, z) = f_i(x_1, \dots, x_{n-1}, z - (x_1 + \dots + x_{n-1})) = 0[/tex] which can be done using standard methods.
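A minimal sketch of that substitution, with a plain Newton iteration and a forward-difference Jacobian (the function names, tolerance, and step h below are illustrative, not a reference implementation):

[code=python]
import numpy as np

# Minimal sketch (names are illustrative): f maps the full vector (x_1, ..., x_n)
# to n residuals.  The unknowns are y = (x_1, ..., x_{n-1}, z); recovering
# x_n = z - (x_1 + ... + x_{n-1}) makes sum(x) = z hold by construction.
def g(f, y):
    x_rest, z = y[:-1], y[-1]
    x = np.append(x_rest, z - x_rest.sum())
    return f(x)

def newton_with_sum(f, x0, z0, tol=1e-10, max_iter=50, h=1e-7):
    y = np.append(np.asarray(x0, dtype=float)[:-1], z0)  # start z at the chosen value
    for _ in range(max_iter):
        r = g(f, y)
        if np.linalg.norm(r) < tol:
            break
        # forward-difference Jacobian of g with respect to y
        J = np.empty((len(r), len(y)))
        for j in range(len(y)):
            yh = y.copy()
            yh[j] += h
            J[:, j] = (g(f, yh) - r) / h
        y = y - np.linalg.solve(J, r)                     # Newton step
    x_rest, z = y[:-1], y[-1]
    return np.append(x_rest, z - x_rest.sum()), z
[/code]

The iteration starts with [itex]z[/itex] at your chosen value, but [itex]z[/itex] is free to move; whatever value it converges to, the returned x sums to it exactly.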

SciPy already has a library of root-finding methods (see scipy.optimize); you don't need to write your own. The methods that require the Jacobian let you either supply a function that computes it analytically, indicate that the objective function returns both the function value and the Jacobian, or supply nothing and let the method fall back on its own numerical approximation. Or you can use one of the methods that don't require the Jacobian at all.
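For instance, with scipy.optimize.root and the same substitution, on a made-up three-variable system (the residuals below are only a toy example):

[code=python]
import numpy as np
from scipy.optimize import root

# Toy example only (the residuals are made up): three equations in x = (x1, x2, x3).
def f(x):
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[2]**2 - 2.0,
                     x[1]*x[2] - 1.0])

def g(y):
    # unknowns are (x1, x2, z); x3 = z - x1 - x2 enforces the sum constraint
    x = np.append(y[:-1], y[-1] - y[:-1].sum())
    return f(x)

y0 = np.array([1.0, 1.0, 2.5])             # initial guess, with z started at 2.5
sol = root(g, y0, method='hybr')           # MINPACK hybrid; no Jacobian required
x = np.append(sol.x[:-1], sol.x[-1] - sol.x[:-1].sum())
print(sol.success, x, x.sum(), sol.x[-1])  # x.sum() equals the solved z
[/code]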

If you tell us the exact problem you are trying to solve, we may be able to provide better advice.
 
  • #3
Have you played with the step size?

Finer steps often lead to better results.
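In a Newton-Raphson context, "step size" usually means scaling the full update by a damping factor; a rough sketch, with f, jacobian, and x0 as placeholders:

[code=python]
import numpy as np

# Rough sketch of damped Newton (f, jacobian, x0 are placeholders): scaling
# the full update by 0 < alpha <= 1 trades speed for robustness when the
# full step overshoots.
def damped_newton(f, jacobian, x0, alpha=0.5, tol=1e-10, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        x = x + alpha * np.linalg.solve(jacobian(x), -r)  # damped update
    return x
[/code]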

Does the problem meet the criteria for using a Newton-Raphson approach?

I know that in the case of Euler-type numerical integration schemes, we choose the method based on whether the system oscillates, since some methods accumulate error over time, and that error shows up as energy being added to or removed from the system.
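As a toy illustration of that last point (not related to the constrained-root problem itself): explicit Euler applied to a simple harmonic oscillator steadily adds energy.

[code=python]
# Toy illustration: explicit Euler on a harmonic oscillator (x'' = -x)
# multiplies the energy by (1 + dt**2) every step, so it grows steadily.
dt, steps = 0.1, 1000
x, v = 1.0, 0.0                    # initial energy is 0.5
for _ in range(steps):
    x, v = x + dt * v, v - dt * x  # explicit Euler update
print(0.5 * (x**2 + v**2))         # far larger than 0.5 after 1000 steps
[/code]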
 