A discrepancy-based parameter adaptation and stopping rule for minimization algorithms aiming at Tikhonov-type regularization
We present a discrepancy-based parameter choice and stopping rule for iterative algorithms performing approximate Tikhonov-functional minimization, which adapts the regularization parameter value during the optimization procedure. The suggested parameter choice and stopping rule can be applied to a wide class of penalty terms and to iterative algorithms that aim at Tikhonov regularization with a fixed parameter value. In particular, it leads to computable guaranteed estimates for the regularized exact discrepancy in terms of numerical approximations. Based on these estimates, convergence to a solution is shown. As an example, the developed theory and the algorithm are applied to the case of sparse regularization. We prove order-optimal convergence rates in the case of sparse regularization, i.e. weighted ℓ^p norms, which turn out to be the same as for the a priori parameter choice rule already obtained in the literature, as well as for Morozov's principle applied to exact regularized solutions. Finally, numerical results for two different minimization techniques, the iterative soft thresholding algorithm and the monotone fast iterative soft thresholding algorithm, are presented, confirming, in particular, the results from the theory.
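To illustrate the setting, the following is a minimal sketch of the iterative soft thresholding algorithm (ISTA) for the ℓ^1 special case of a weighted ℓ^p penalty, i.e. minimizing 0.5·‖Ax − y‖² + α‖x‖₁ with a fixed regularization parameter α. This is a generic textbook sketch, not the paper's adaptive parameter rule; all names, the step size choice, and the demo data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Componentwise soft-thresholding operator S_tau(v)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, alpha, n_iter=500):
    """ISTA for the Tikhonov-type functional 0.5*||A x - y||^2 + alpha*||x||_1.

    Each step is a gradient step on the data term followed by soft
    thresholding, the proximal map of the l^1 penalty. The step size
    t <= 1/||A||^2 is a standard sufficient condition for convergence.
    """
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - t * A.T @ (A @ x - y), t * alpha)
    return x

# Illustrative demo: recover a sparse vector from noisy linear data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = ista(A, y, alpha=0.1)
```

A discrepancy-based rule, in contrast, would not keep α fixed but adjust it between iterations until the residual ‖A x − y‖ matches the noise level; the paper's contribution is to do this adaptation inside the minimization loop with computable guarantees.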