Italian FIRB Project on
 Parallel Algorithms and Numerical Nonlinear Optimization

First year research advances


Research activities
  1. Perturbed damped Newton and SQP-IP methods for discretized optimal control problems  [description]
  2. Variable projection methods: a successful approach to SVMs and large-scale simply constrained QP  [description]
  3. Adaptive diagonal space-filling curves: linking geometric Lipschitz and Bayesian approaches for non-differentiable global optimization  [description]
  4. Symbolic calculus and parallel computing for the automatic control of error propagation  [description]
  5. Parallel domain decomposition techniques for the restoration of space-variant blurred images  [description]
  6. A parallel computing approach to dynamic magnetic resonance imaging  [description]

Dissemination and training activities


Research Advances


Perturbed damped Newton and SQP-IP methods for discretized optimal control problems

Researchers. V. Ruggiero, S. Bonettini, F. Tinti.

Advances. The research activity of the first year started from the initial idea that the Interior Point (IP) Newton method can be thought of as an inexact Newton method for the solution of the Karush-Kuhn-Tucker (KKT) nonlinear system with sign constraints on a subset of the variables.
This remark allows one to study the convergence of the IP method with respect to the perturbation parameter, the solver of the inner linear system arising at each step, and the steplength reduction rules.
From this viewpoint, it is possible to employ an iterative inner solver with an adaptive stopping rule, which avoids useless inner iterations when the iterates are far from the solution and yields an efficient as well as globally convergent scheme.
The global convergence as well as the superlinear local convergence of the inexact scheme, already known for quadratic programming (QP) problems, was generalized in [DR-03a] to nonlinear programming (NLP) problems.
The assumptions required for the convergence of the method were pointed out in [DR-04], as a generalization of those given by Tapia et al. for the exact IP Newton scheme.
In particular, by checking one of these assumptions at the software level, one can automatically detect critical situations where no convergence can be achieved due to a badly chosen starting point.
The convergence theory of inexact Newton methods can be generalized to include non-monotone backtracking strategies [B-05].
Hence, given the theoretical analogies with the IP method, these techniques can be included in the IP method itself, thus improving its efficiency.
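
In formulas (with notation chosen here only for illustration, not taken from the cited papers), for an NLP of the form min f(x) subject to g(x) >= 0, the perturbed KKT system and the inexact IP Newton condition read

    H_\mu(v) \;=\;
    \begin{pmatrix}
      \nabla f(x) - \nabla g(x)^T w \\
      g(x) - s \\
      S W e - \mu e
    \end{pmatrix} = 0,
    \qquad s \ge 0,\; w \ge 0,

where v = (x, s, w), s are the slack variables, w the multipliers, S = diag(s), W = diag(w) and e = (1, ..., 1)^T. At the k-th outer iteration the inner solver is stopped as soon as the step \Delta v_k satisfies

    \| H_\mu(v_k) + H'_\mu(v_k)\, \Delta v_k \| \;\le\; \eta_k\, \| H_\mu(v_k) \|,

with an adaptive forcing term \eta_k \in [0, 1): loose when the iterate is far from the solution and tighter as it approaches it.
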
For the case of QP problems with a sparse Hessian and a specially structured constraint matrix (such as problems arising from the discretization of linear-quadratic optimal control problems, from network flow problems, or from stochastic programming problems), a parallel version of the inexact IP method was developed [DR-03b].
For the numerical implementation of this scheme, the preconditioned conjugate gradient method appears to be a suitable inner solver, both for the normal equations and for the indefinite reduced KKT system obtained from the inner linear system.
In this context, some parallel preconditioners suitable for linear-quadratic optimal control problems were identified, and interesting results were obtained on the spectrum of the resulting indefinite preconditioned reduced KKT systems [DRZ-03, DR-02], generalizing those obtained by Luksan (1998).
The proposed approach (IP coupled with generalized PCG, named GIPPCG) was implemented in parallel FORTRAN for the Cray T3E and SGI Origin platforms, using MPI communication routines.
The efficiency and the scalability of this code were evaluated on asset allocation under uncertainty problems and on discrete optimal control problems, sized up to 10^5 [DR-03b].
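
For reference, a textbook (serial, dense) preconditioned conjugate gradient routine of the kind employed as inner solver is sketched below in Python; this is a generic PCG, not the parallel GIPPCG code, and the preconditioner application M_inv is a user-supplied assumption.

    import numpy as np

    def pcg(A, b, M_inv, x0=None, tol=1e-8, maxit=200):
        """Generic preconditioned CG for SPD A; M_inv applies the preconditioner inverse."""
        x = np.zeros_like(b, dtype=float) if x0 is None else x0.copy()
        r = b - A @ x                 # residual
        z = M_inv(r)                  # preconditioned residual
        p = z.copy()
        rz = r @ z
        for k in range(maxit):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, k

    # Usage sketch with a diagonal (Jacobi) preconditioner:
    # A, b = ...  # SPD matrix and right-hand side, e.g. from the normal equations
    # x, its = pcg(A, b, lambda r: r / np.diag(A))
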
For more general NLP problems, the interpretation of the reduced KKT system as the Lagrange necessary conditions for the minimum of a QP problem suggests combining the inexact IP Newton method with the Hestenes multipliers method [BGR-05].
In this way, each inner solver step requires solving a positive definite system, whose dimension is the number of the problem variables, where the powerful techniques for the Cholesky factorization of sparse matrices (by Ng and Peyton) can be fruitfully employed. In [BGR-04] this approach is compared with other direct methods known in the literature. In particular, the proposed method shows its efficiency in the solution of NLP problems coming from the finite difference discretization of semielliptic optimal control problems (Mittelmann, Maurer), sized up to 4·10^4 and at times unsolved by the well-known codes MINOS, Lancelot and SNOPT.
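
To illustrate the structure of the inner scheme, the following Python sketch implements the classical Hestenes (augmented Lagrangian) multiplier iteration for an equality-constrained QP; every iteration solves a single positive definite system of the size of the primal variables. The data and the dense solve are hypothetical: they stand in for the sparse Cholesky factorization used in practice, and this is the textbook method rather than the implementation of [BGR-05].

    import numpy as np

    def hestenes_qp(G, c, A, b, rho=10.0, tol=1e-8, maxit=100):
        """min 1/2 x'Gx + c'x  s.t.  Ax = b, via the Hestenes multipliers method."""
        lam = np.zeros(A.shape[0])          # multiplier estimate
        K = G + rho * A.T @ A               # positive definite for suitable G, A, rho
        for k in range(maxit):
            rhs = -c + A.T @ (rho * b - lam)
            x = np.linalg.solve(K, rhs)     # dense solve here; sparse Cholesky in practice
            r = A @ x - b                   # constraint residual
            if np.linalg.norm(r) <= tol:
                break
            lam = lam + rho * r             # multiplier update
        return x, lam

    # Example: minimize ||x||^2/2 subject to x1 + x2 = 1  ->  x ~ [0.5, 0.5]
    # G = np.eye(2); c = np.zeros(2); A = np.array([[1.0, 1.0]]); b = np.array([1.0])
    # x, lam = hestenes_qp(G, c, A, b)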

We also analyzed, from the numerical viewpoint, the class of projection methods for solving pseudomonotone variational inequality problems. We focused on some specific extragradient-type methods that do not require differentiability of the operator, paying particular attention to the steplength choice. Subsequently, we analyzed the hyperplane projection methods, in which an appropriate hyperplane is constructed that strictly separates the current iterate from the solutions of the problem. Finally, in order to illustrate the effectiveness of the proposed methods, we reported the results of numerical experiments at http://dm.unife.it/~tinti. Future research will still involve the study of projection methods for solving the quasi-variational inequality problems proposed by Noor in [A.M. Noor, New Zealand Journal of Mathematics, 1997].
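
A minimal Python sketch of the classical extragradient step for a variational inequality VI(F, C) over a box is reported below as a point of reference; the fixed steplength is a simplification of the adaptive steplength rules investigated here, and the operator F in the example is hypothetical.

    import numpy as np

    def project_box(x, lo, hi):
        """Euclidean projection onto the box [lo, hi]."""
        return np.minimum(np.maximum(x, lo), hi)

    def extragradient(F, x0, lo, hi, tau=0.1, tol=1e-8, maxit=1000):
        """Korpelevich extragradient method for VI(F, [lo, hi]) with fixed steplength tau."""
        x = x0.copy()
        for k in range(maxit):
            y = project_box(x - tau * F(x), lo, hi)      # predictor step
            x_new = project_box(x - tau * F(y), lo, hi)  # corrector step, using F at the predictor
            if np.linalg.norm(x_new - x) <= tol:
                return x_new, k
            x = x_new
        return x, maxit

    # Example (hypothetical affine monotone operator F(x) = M x + q):
    # M = np.array([[2.0, 1.0], [-1.0, 2.0]]); q = np.array([-1.0, -1.0])
    # x_star, its = extragradient(lambda x: M @ x + q, np.zeros(2), np.zeros(2), np.ones(2))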

[Top]


Variable projection methods: a successful approach to SVMs and large-scale simply constrained QP

Researchers. L. Zanni, G. Zanghirati.

Advances. A Generalized Variable Projection Method (GVPM) for Quadratic Programming (QP) was developed, based on an adaptive alternation of the two Barzilai-Borwein rules for the steplength selection [SZZ-05a].
Its convergence properties were analyzed both theoretically and experimentally [SZZ-04b].
A parallel version of the GVPM was implemented on multiprocessor systems, as the inner QP solver in a decomposition technique designed to solve large-scale QP problems arising from the training of support vector machines (SVMs) [SZZ-04b].
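
As an illustration of the ingredients named above, here is a minimal Python sketch of a gradient projection iteration for a box-constrained QP with a plain alternation of the two Barzilai-Borwein steplengths; the actual GVPM employs an adaptive switching criterion together with a line search, both omitted in this sketch.

    import numpy as np

    def gpm_bb(G, c, lo, hi, x0, maxit=500, tol=1e-6):
        """Gradient projection for min 1/2 x'Gx + c'x on the box [lo, hi],
        plainly alternating the two Barzilai-Borwein steplength rules."""
        x = np.clip(x0, lo, hi)
        g = G @ x + c
        alpha = 1.0
        for k in range(maxit):
            x_new = np.clip(x - alpha * g, lo, hi)      # projected gradient step
            s = x_new - x
            if np.linalg.norm(s) <= tol:
                return x_new, k
            g_new = G @ x_new + c
            y = g_new - g
            sy = s @ y
            if sy > 0:
                # plain alternation of BB1 and BB2 (the GVPM switches adaptively)
                alpha = (s @ s) / sy if k % 2 == 0 else sy / (y @ y)
                alpha = min(max(alpha, 1e-10), 1e10)    # safeguard
            x, g = x_new, g_new
        return x, maxit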

[Top]


Adaptive diagonal space-filling curves: linking geometric Lipschitz and Bayesian approaches for non-differentiable global optimization

Researchers. Ya. D. Sergeyev, D. Lera, D. Kvasov.

Advances. Concerning the "information" class of global optimization methods:

  • sufficient convergence conditions were established [MPS-02] for a new multi-dimensional diagonal algorithm based on the "local tuning" technique to adaptively estimate the local Lipschitz constants;
  • an effective mono-dimensional method, which applies the "local tuning" technique to the behaviour of the objective function, was generalized to the multi-dimensional case through the diagonal approach with two different search domain partitioning strategies, thus obtaining geometric-like diagonal methods whose convergence conditions were proved (a minimal one-dimensional sketch of the underlying geometric approach is given after this list). The results of a wide numerical experimentation have clearly shown that the new methods, based on the adaptive estimation of the local Lipschitz constants in each search subdomain, outperform the traditional diagonal optimization methods [KPS-03];
  • the convergence theory was developed for a new algorithm, based on adaptive diagonal curves, joining the ideas of diagonal methods and of Peano curves. The numerical experimentation confirms that the advantages of the new method increase with the problem size, as theoretically predicted [KS-03];
  • multi-dimensional constrained global optimization problems were considered on a feasible region constituted by a finite number of robust and non-convex sub-regions, where both the objective function and the constraints are multi-extremal, not necessarily differentiable functions satisfying the Lipschitz condition with unknown Lipschitz constants. To solve such problems, a new method was proposed, based on space-filling Peano curves in the search domain. The convergence properties of this new method were obtained and its good performance was shown by numerical tests [SPF-03].
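
As anticipated above, here is a minimal one-dimensional Pijavskii-type sketch in Python that illustrates the geometric approach common to these methods, with a user-supplied global Lipschitz constant; the cited algorithms replace this global constant with adaptive local estimates ("local tuning") and extend the scheme to several dimensions via diagonal partitions and Peano curves.

    import numpy as np

    def pijavskii(f, a, b, L, maxit=100, tol=1e-4):
        """One-dimensional Pijavskii (saw-tooth cover) method with known Lipschitz constant L."""
        xs = [a, b]
        fs = [f(a), f(b)]
        for k in range(maxit):
            # lower bound (characteristic value) of each subinterval [x_i, x_{i+1}]
            R = [(fs[i] + fs[i + 1]) / 2 - L * (xs[i + 1] - xs[i]) / 2 for i in range(len(xs) - 1)]
            i = int(np.argmin(R))                      # most promising subinterval
            if min(fs) - R[i] <= tol:                  # lower bound close to the record value
                break
            # new trial point: minimizer of the saw-tooth minorant on that subinterval
            t = (xs[i] + xs[i + 1]) / 2 - (fs[i + 1] - fs[i]) / (2 * L)
            xs.insert(i + 1, t)
            fs.insert(i + 1, f(t))
        j = int(np.argmin(fs))
        return xs[j], fs[j]

    # Example (hypothetical multi-extremal test function, Lipschitz constant 3.5):
    # x_best, f_best = pijavskii(lambda x: np.sin(3 * x) + 0.5 * x, 0.0, 6.0, L=3.5)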

Some innovative global optimization techniques are discussed in [StS-03], including the use of fractals to reduce the size of the optimization problem, the "index scheme" for constrained optimization problems and the non-redundant parallel approach to accelerate the search for the optimum.

New domain decomposition algorithms for Lipschitzian functions were also considered, introducing a new uniform-norm-based decomposition technique that adaptively selects the sub-domains containing the global optima of a multi-extremal continuous real function, Lipschitzian with respect to the maximum norm, on a compact set S. A new algorithm was proposed, based on a combination of a local minimization technique with a size-reducing procedure, able to decrease the measure of the region containing the global optima. By using the Lipschitz condition on the function, at each step k the algorithm selects a point x(k) from a uniform probability distribution and constructs a set sequence S(k) (a union of intervals) that does not contain the global optimum. Such a procedure satisfies probabilistic convergence conditions [GL-02].
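
A toy one-dimensional Python illustration of the exclusion idea described above, with rejection sampling standing in for the explicit construction of the interval-union sequence S(k): around each evaluated point a ball is discarded where, by the Lipschitz condition, the function cannot improve the current record by more than a prescribed accuracy. All names and parameters are chosen here for illustration only.

    import numpy as np

    def stochastic_lipschitz_min(f, a, b, L, eps=1e-3, budget=500, rng=None):
        """Random search with Lipschitz-based exclusion balls (1-D toy version)."""
        rng = np.random.default_rng() if rng is None else rng
        pts, vals = [], []
        f_best, x_best = np.inf, None
        for _ in range(budget):
            x = rng.uniform(a, b)
            # discard x if it lies in an exclusion ball: there f >= f_best - eps by Lipschitz continuity
            if any(abs(x - p) <= (v - (f_best - eps)) / L for p, v in zip(pts, vals)):
                continue
            v = f(x)
            pts.append(x); vals.append(v)
            if v < f_best:
                f_best, x_best = v, x
        return x_best, f_best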

Concerning the interval analysis methods:

  • a new method was proposed to construct mono-dimensional support functions that are more powerful with respect to the number of evaluations and to the selection and elimination of search intervals, thus obtaining a substantial improvement of the optimization procedure. A wide numerical comparison on a large set of multi-extremal test functions has shown that the new technique is on average almost twice as fast as the "interval analysis" methods currently used for global optimization [CGMS-03];
  • an effective multi-dimensional algorithm was proposed, which uses the whole information available during the search procedure to fully estimate the objective function behaviour [MCGST-04];
  • three new algorithms, which are based on "interval analysis" and "branch-and-bound" global optimization approaches, were proposed to solve the minimal root-finding problem for a set of mono-dimensional multi-extremal non-differentiable functions.

The novelty of these algorithms lies essentially in improved interval elimination criteria and in a better choice of the order in which the function is evaluated at points and intervals. These techniques accelerate the search with respect to the "interval analysis" methods traditionally used to solve this problem [CGS-02] (a schematic branch-and-bound sketch is given below). The numerical solution of the minimal root-finding problem is also the subject of [S-06b].
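
The sketch below (Python) gives a schematic branch-and-bound search for the minimal root in which, purely for illustration, the interval bounds of the cited works are replaced by Lipschitz bounds: subintervals on which the function provably does not change sign are discarded, and the remaining ones are scanned from left to right.

    import numpy as np

    def leftmost_root(f, a, b, L, tol=1e-8):
        """Branch-and-bound search for the minimal root of f on [a, b], f Lipschitz with constant L.
        Intervals that provably contain no sign change are pruned; the left child is always
        explored first, so the first surviving tiny interval is the candidate for the minimal root."""
        stack = [(a, b)]
        while stack:
            lo, hi = stack.pop()
            flo, fhi = f(lo), f(hi)
            half = L * (hi - lo) / 2.0
            if min(flo, fhi) - half > 0 or max(flo, fhi) + half < 0:
                continue                              # f cannot vanish on [lo, hi]: prune
            if hi - lo <= tol:
                return 0.5 * (lo + hi)                # candidate minimal root (certain if a sign change occurs)
            mid = 0.5 * (lo + hi)
            stack.append((mid, hi))                   # push the right half first ...
            stack.append((lo, mid))                   # ... so the left half is examined next
        return None                                   # no root detected on [a, b]

    # Example: f(x) = sin(x) on [2, 8] with L = 1  ->  leftmost root ~ pi
    # root = leftmost_root(np.sin, 2.0, 8.0, L=1.0)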

Two sets of test functions were introduced to verify the performance of numerical methods for Lipschitzian mono-dimensional constrained global optimization problems, where both the objective function and the constraints can be multi-extremal. Furthermore, both the differentiable and the non-differentiable cases were considered. These test functions were used in the numerical testing of a Pijavskii-type method combined with a non-differentiable penalty function [FSP-02].
A procedure was also proposed to generate three classes of test functions (non-differentiable, continuously differentiable and twice continuously differentiable) for "black-box"-like multi-dimensional global optimization; this procedure was also implemented as a package of C routines [GKLS-03].

The computational complexity of local and global minimization algorithms for real functions was studied. In particular, the complexity of "local search"-based "multistart"-type algorithms for the global minimization of real functions on compact sets was considered: in this context, it was proven that the number of function evaluations needed to solve the problem up to a given accuracy depends linearly on the problem size, as well as on other parameters connected to the objective function. Particular attention was devoted to a definition introduced by Stephens and Baritompa (1998) of the "local information" that an algorithm uses during its execution: three new definitions were then given of "local information", "global information" and "function structure information" [GL-07, GL-05].

[Top]

Symbolic calculus and parallel computing for the automatic control of error propagation

Researchers. G. Spaletta.

Advances. An analysis is proposed of the feasibility of controlling error propagation at the software level, by employing the arbitrarily high precision available in symbolic calculus systems together with the large computing and memory resources of parallel architectures.
Tools are studied and implemented to analyze and improve the stability features of available methods within a particular computer algebra environment, namely Mathematica. The possibility of employing "Significance Arithmetic" in increasingly complex flows of computations is also investigated. The focus is on the numerical geometric integration of differential equations: within the above environment, known or novel methods are implemented according to criteria that allow, inside such a framework, the control of error propagation and hence the stability of the methods.

The geometric integration of differential equations was originally designed, and is attracting increasing interest among researchers, to answer the quest for numerical solutions that maintain the qualitative properties of the flow. Many functionalities have been, and are being, developed in Mathematica for the analysis and the automatic derivation of high-order numerical solvers that retain certain stability characteristics. The numeric, symbolic, graphics and parallel capabilities of Mathematica are exploited.

In [SS-02a] a new orthogonal projection method is described that solves differential systems while retaining orthonormality, while in [SS-02b] a solver is presented that is suitable for separable Hamiltonian systems. In both cases the finite representation of "Significance Arithmetic" is employed, as it is also in [SS-03a], where algorithms for structured differential systems are implemented via a novel "increment" formulation, combined with the known technique of "compensated summation", in order to automatically control and minimize error propagation.
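
By way of illustration, the following Python/NumPy sketch (not Mathematica, and without Significance Arithmetic) shows the general orthogonal-projection idea: an ordinary integration step is followed by a projection of the numerical solution back onto the manifold of matrices with orthonormal columns, here simply through a QR factorization; the method of [SS-02a] is of course more sophisticated.

    import numpy as np

    def projected_euler_orthogonal(A, Y0, h, n_steps):
        """Integrate Y' = A(t) Y with an explicit Euler step followed by projection
        onto the matrices with orthonormal columns (via QR). If A(t) is skew-symmetric
        the exact flow preserves orthonormality, and the projection keeps the
        numerical solution on the manifold as well."""
        Y = Y0.copy()
        t = 0.0
        for _ in range(n_steps):
            Y = Y + h * (A(t) @ Y)        # ordinary (non-geometric) Euler step
            Q, R = np.linalg.qr(Y)        # the Q factor has orthonormal columns
            Y = Q * np.sign(np.diag(R))   # fix column signs so the projection varies continuously
            t += h
        return Y

    # Quick check (hypothetical skew-symmetric field):
    # A = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])
    # Y = projected_euler_orthogonal(A, np.eye(2), h=0.01, n_steps=1000)
    # print(np.linalg.norm(Y.T @ Y - np.eye(2)))   # ~ machine precision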

[Top]

Parallel domain decomposition techniques for the restoration of space-variant blurred images

Researchers. F. Zama, E. Loli Piccolomini, G. Landi, M. Bertaja.

Advances. Linear and nonlinear iterative regularization methods were studied. An efficient descent algorithm for computing both the solution and the regularization parameter of the Tikhonov method is proposed in [ZL-05]. A "Total Variation"-based iterative regularization method is studied in [LL-05] for the dynamic magnetic resonance image reconstruction problem.
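
For reference, a minimal Python sketch of Tikhonov regularization with a discrepancy-principle choice of the parameter by bisection is given below; it is a generic textbook scheme, not the descent algorithm of [ZL-05] (which computes the solution and the parameter simultaneously) nor the Total Variation method of [LL-05], and the noise level is assumed to be known.

    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Solve the Tikhonov normal equations (A'A + lam I) x = A'b."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    def tikhonov_discrepancy(A, b, noise_level, lam_lo=1e-10, lam_hi=1e2, iters=60):
        """Pick lam by bisection so that ||A x(lam) - b|| ~ noise_level (discrepancy principle)."""
        for _ in range(iters):
            lam = np.sqrt(lam_lo * lam_hi)                 # bisect on a logarithmic scale
            res = np.linalg.norm(A @ tikhonov_solve(A, b, lam) - b)
            if res > noise_level:
                lam_hi = lam                               # too much regularization: decrease lam
            else:
                lam_lo = lam                               # residual below the noise level: increase lam
        return lam, tikhonov_solve(A, b, lam)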

[Top]

A parallel computing approach to dynamic magnetic resonance imaging

Researchers. F. Zama, E. Loli Piccolomini, G. Landi, M. Bertaja.

Advances. The domain decomposition method was applied in a parallel MATLAB environment for the reconstruction of row blocks in sequences of dynamic magnetic resonance images [LLZ-03], using open source software for PC-Linux clusters.
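
Schematically (in Python with multiprocessing, rather than the parallel MATLAB environment of [LLZ-03], and with a purely hypothetical per-block operation), the row-block domain decomposition amounts to splitting each image into horizontal strips, processing the strips independently in parallel, and stacking the results:

    import numpy as np
    from multiprocessing import Pool

    def reconstruct_block(block):
        """Placeholder per-block operation (hypothetical): rescale the block to [0, 1].
        The actual per-block regularized reconstruction of [LLZ-03] is not reproduced here."""
        lo, hi = block.min(), block.max()
        return (block - lo) / (hi - lo + 1e-12)

    def reconstruct_image(data, n_blocks=4, n_workers=4):
        """Split the data into row blocks and process them independently in parallel."""
        blocks = np.array_split(data, n_blocks, axis=0)   # horizontal strips of rows
        with Pool(n_workers) as pool:
            parts = pool.map(reconstruct_block, blocks)
        return np.vstack(parts)                           # reassemble the full image

    # if __name__ == "__main__":
    #     img = reconstruct_image(np.random.randn(256, 256))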

 

Dissemination and Training Activities

V. Ruggiero, Analysis of the Newton Inexact Interior-Point Method for Large Scale Nonlinear Optimization Problems, tutorial at the INdAM scientific meeting "OPT2003 - Numerical Methods for Local and Global Optimization: Sequential and Parallel Algorithms", Cortona (Italy), July 14-20, 2003.

Ya.D. Sergeyev, Non-redundant parallelism and adaptive schemes for univariate and multivariate Lipschitz global optimization problems, tutorial at the INdAM scientific meeting "OPT2003 - Numerical Methods for Local and Global Optimization: Sequential and Parallel Algorithms", Cortona (Italy), July 14-20, 2003.

L. Zanni, Metodi del gradiente proiettato per l'Ottimizzazione non lineare (Projected gradient methods for nonlinear optimization), 20-hour minicourse for the Mathematics Doctorate, University of Modena and Reggio Emilia (Italy), Spring 2003.

 

 

 




Status: completed.
Info: g.zanghirati@unife.it

 Last updated: 15/3/2007.