I understand that for many iterative methods, convergence rates can be shown to depend on the condition number of the coefficient matrix A in the linear equation
$$Ax=y.$$
Therefore, if a preconditioner satisfies
$$P \approx A,$$
then by solving the transformed linear equation
$$(AP^{-1})(Px)=y,$$
the new coefficient matrix AP^{-1} will have more favorable spectral properties, and hence better convergence can be achieved.
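As a quick sanity check of this claim, here is a small toy experiment (entirely my own construction, not part of any standard method): build an ill-conditioned A, let P be a perturbed copy of A, and compare condition numbers before and after preconditioning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: A with singular values spanning 1..1e6,
# so kappa(A) = 1e6.
n = 100
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, 6, n)) @ V.T

# A preconditioner that only approximates A (A plus a small perturbation).
P = A + 1e-3 * rng.standard_normal((n, n))

print(np.linalg.cond(A))                     # ~1e6
print(np.linalg.cond(A @ np.linalg.inv(P)))  # close to 1
```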
Besides the above approximation property, one of the main requirements for a good preconditioner is that its inverse should be cheap to apply. Preconditioners are therefore usually sought within some restricted class of structured matrices; typical examples are the incomplete Cholesky and incomplete LU factorizations of the matrix A.
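To make the "cheap to apply" point concrete, here is a minimal sketch using SciPy's incomplete LU as a preconditioner for GMRES. The 2-D Poisson matrix is just a hypothetical test problem; applying the ILU preconditioner costs only two sparse triangular solves.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical test problem: 2-D Poisson matrix (standard 5-point stencil).
n = 50
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU: P = L U approximates A, and applying P^{-1} amounts to
# two cheap sparse triangular solves.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

# Count GMRES iterations with and without the preconditioner.
counts = {"plain": 0, "ilu": 0}

def counter(key):
    def cb(_):
        counts[key] += 1
    return cb

spla.gmres(A, b, callback=counter("plain"))
spla.gmres(A, b, M=M, callback=counter("ilu"))
print(counts)  # the preconditioned run typically needs far fewer iterations
```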
My question is: why do we want P to approximate A? Or, put more directly, why do we formulate the search for a preconditioner as:
$$
\min_{P} \left\| AP^{-1} - I \right\|_F,
$$
where the subscript F denotes the Frobenius norm? The identity matrix is not the only matrix with a condition number of 1; would it not be better to formulate the problem as:
$$
\min_{P,Q} \left\| AP^{-1} - Q \right\|_F,
$$
with Q constrained to be orthogonal? Given a certain structural restriction on P, I imagine this could lead to better preconditioning than the first formulation. Yet I have not come across any such examples.
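For reference, here is how the standard formulation above is typically attacked in practice: writing M = P^{-1} and fixing a sparsity pattern for M, the objective min ||AM - I||_F decouples into one small least-squares problem per column of M (the idea behind SPAI-style methods). A minimal dense sketch, my own illustration rather than any particular library's API; the `pattern` argument is hypothetical:

```python
import numpy as np

def frobenius_sketch(A, pattern):
    # Sketch of the Frobenius-norm formulation: minimize ||A M - I||_F
    # over M with a fixed sparsity pattern.  The objective decouples
    # column by column, so each column of M comes from an independent
    # least-squares problem.  `pattern[j]` lists the allowed nonzero
    # rows of column j.  (Dense and unoptimized -- illustration only.)
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        rows = pattern[j]
        e_j = np.zeros(n)
        e_j[j] = 1.0
        # min_x ||A[:, rows] x - e_j||_2, then scatter x into column j.
        x, *_ = np.linalg.lstsq(A[:, rows], e_j, rcond=None)
        M[rows, j] = x
    return M
```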