Publications
Preface
This section collects the papers published by the members of the project research units. The full list is also available for download in BibTeX format [pn2o.bib]. Some papers relate to software implementations: in those cases, please follow the link to the Software section.
Papers and Reports
 [GL07]
M. Gaviano, D. Lera
(2007),
A global minimization algorithm for Lipschitz functions, to appear in Optimization Letters.
Abstract.
The global optimization problem min f(x) subject to x in S = [a,b], with a,b in R^n, where f satisfies the Lipschitz condition (i) |f(x) - f(y)| <= L ||x - y||_infty for all x,y in S, L > 0, is considered. To solve it, a new algorithm is introduced. This combines a local minimization algorithm with a procedure that decreases the measure of the region where the global minimum is located. Specifically, by making use of condition (i), at each iteration the algorithm constructs a sequence of intervals s_j, with s_j a subset of S, having the property that no global minimum is contained in the union of the s_j. A convergence property of the algorithm is given.
[Printed version available on request from the authors]
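The interval-exclusion step described in the abstract above can be illustrated with a short sketch. Everything here (the sample function, the constant L, the grid) is an invented toy example, not code from the paper:

```python
import numpy as np

def excluded_intervals(samples, f, L, f_best):
    """For each sample x_i, the Lipschitz bound f(y) >= f(x_i) - L|y - x_i|
    shows that no point y with |y - x_i| < (f(x_i) - f_best)/L can improve
    on f_best, so that interval cannot contain the global minimizer."""
    out = []
    for x in samples:
        r = (f(x) - f_best) / L
        if r > 0:
            out.append((x - r, x + r))
    return out

# Toy example: f(x) = sin(3x) + 0.5x on [0, 4]; |f'| <= 3.5 on this interval
f = lambda x: np.sin(3 * x) + 0.5 * x
xs = np.linspace(0.0, 4.0, 9)
f_best = min(f(x) for x in xs)          # best value found so far
regions = excluded_intervals(xs, f, 3.5, f_best)
```

Any interval returned cannot contain the global minimizer, so the search can be restricted to the remaining part of [a, b].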
 [L07]
G. Landi
(2007),
The Lagrange Method for the Regularization of Discrete Ill-Posed Problems, to appear in Computational Optimization and Applications (special issue).
Abstract.
In many science and engineering applications, the discretization of linear ill-posed problems gives rise to large ill-conditioned linear systems with the right-hand side degraded by noise. The solution of such linear systems requires the solution of minimization problems with one quadratic constraint, depending on an estimate of the variance of the noise. This strategy is known as regularization. In this work, we propose a modification of the Lagrange method for the solution of the noise constrained regularization problem. We present the numerical results of test problems, image restoration and medical imaging denoising. Our results indicate that the proposed Lagrange method is effective and efficient in computing good regularized solutions of ill-conditioned linear systems and in computing the corresponding Lagrange multipliers. Moreover, our numerical experiments show that the Lagrange method is computationally convenient. Therefore, the Lagrange method is a promising approach for dealing with ill-posed problems.
[Printed version available on request from the authors]
 [S07a]
Ya.D. Sergeyev
(2007),
Measuring fractals by infinite and infinitesimal numbers, to appear in Mathematical Methods, Physical Models & Simulation in Science and Technology.
Abstract. Traditional mathematical tools used for the analysis of fractals allow one to distinguish the results of self-similarity processes after a finite number of iterations. For example, the result of the procedure constructing Cantor's set after two steps is different from that obtained after three steps. However, we are not able to make such a distinction at infinity. It is shown in this paper that infinite and infinitesimal numbers proposed recently allow one to measure the results of fractal processes at different iterations at infinity, too. First, the new technique is used to measure at infinity sets resulting from Cantor's procedure. Second, it is applied to calculate the lengths of polygonal geometric spirals at different points of infinity.
[Printed version available on request from the authors]
 [S07b]
Ya.D. Sergeyev
(2007),
Blinking fractals and their quantitative analysis using infinite and infinitesimal numbers, Chaos, Solitons & Fractals 33, 50-75.
Abstract.
The paper considers a new type of objects, blinking fractals, that are not covered by traditional theories studying the dynamics of self-similarity processes. It is shown that the new approach allows one to give various quantitative characteristics of the newly introduced and traditional fractals using infinite and infinitesimal numbers proposed recently. In this connection, the problem of the mathematical modelling of continuity is discussed in detail. A strong advantage of the introduced computational paradigm is its well-marked numerical character and its own instrument, the Infinity Computer, able to execute operations with infinite and infinitesimal numbers.
[Printed version available on request from the authors]
 [SKK07b]
Ya.D. Sergeyev, F.M.H. Khalaf and D.E. Kvasov
(2007),
Univariate Algorithms for Solving Global Optimization Problems with Multiextremal Non-Differentiable Constraints, in "Models and Algorithms for Global Optimization", A. Törn, J. Zilinskas, Eds., Springer Optimization and Its Applications 4, Springer, 123-140.
Abstract.
In this paper, Lipschitz univariate constrained global optimization problems where both the objective function and constraints can be multiextremal and non-differentiable are considered. The constrained problem is reduced to a discontinuous unconstrained problem by the index scheme without introducing additional parameters or variables. It is shown that the index approach proposed by R.G. Strongin for solving these problems in the framework of stochastic information algorithms can be successfully extended to geometric algorithms constructing non-differentiable discontinuous minorants for the reduced problem. A new geometric method using adaptive estimates of Lipschitz constants is described and its convergence conditions are established. Numerical experiments, including a comparison of the new algorithm with methods using the penalty approach, are presented.
[Printed version available on request from the authors]
 [SKK07a]
Ya.D. Sergeyev, D.E. Kvasov and Falah M.H. Khalaf
(2007),
A One-Dimensional Local Tuning Algorithm for Solving GO Problems with Partially Defined Constraints, Optimization Letters 1(1), 85-99.
Abstract.
Lipschitz one-dimensional constrained global optimization (GO) problems where both the objective function and constraints can be multiextremal and non-differentiable are considered in this paper. Problems where the constraints are verified in an a priori given order, fixed by the nature of the problem, are studied. Moreover, if a constraint is not satisfied at a point, then the remaining constraints and the objective function can be undefined at this point. The constrained problem is reduced to a discontinuous unconstrained problem by the index scheme without introducing additional parameters or variables. A new geometric method using adaptive estimates of local Lipschitz constants is introduced. The estimates are calculated by using the local tuning technique proposed recently. Numerical experiments show quite a satisfactory performance of the new method in comparison with the penalty approach and a method using a priori given Lipschitz constants.
[Printed version available on request from the authors]
 [B06]
S. Bonettini (2006),
Some Preconditioned Conjugate Gradient Algorithms for the Solution of Equality Constrained Quadratic Programming Problems, submitted.
Abstract.
In this paper we consider the application of the preconditioned conjugate gradient (PCG) method to the solution of equality constrained quadratic programming problems. In particular, we consider three different PCG algorithms and two indefinite preconditioners. Special attention is given to the choice of the factorization method for the preconditioner. The numerical experiments compare the effectiveness of the proposed variants. Furthermore, we show the behaviour of the PCG method for the solution of the inner subproblem arising at each step of an interior point algorithm for the solution of nonlinear programming problems.
[Printed version available on request from the authors]
 [BGR06]
S. Bonettini, E. Galligani, V. Ruggiero (2006),
Inner solvers for interior point methods for large scale nonlinear programming, Tech. Rep. 64, University of Modena and Reggio Emilia, Italy. To appear in Computational Optimization and Applications.
Abstract.
This paper deals with the solution of nonlinear programming problems arising from elliptic control problems by an interior point scheme. At each step of the scheme, we have to solve a large scale symmetric and indefinite system; inner iterative solvers, with an adaptive stopping rule, can be used in order to avoid unnecessary inner iterations, especially when the current outer iterate is far from the solution. In this work, we analyze the method of multipliers and the preconditioned conjugate gradient method as inner solvers for interior point schemes. We discuss the convergence of the whole approach and the implementation details, and we report the results of numerical experiments on a set of large scale test problems arising from the discretization of elliptic control problems. A comparison with other interior point codes is also reported.
[Printed version available on request from the authors]
 [BRT06]
S. Bonettini, V. Ruggiero, F. Tinti (2006),
On the solution of indefinite systems arising in nonlinear optimization, Tech. Rep. 359, Department of Mathematics, University of Ferrara, Italy. Submitted.
Abstract.
We consider the application of the preconditioned conjugate gradient (PCG) method to the solution of indefinite linear systems arising in nonlinear optimization. Our approach is based on the choice of quasi-definite preconditioners and of a suitable factorization routine. Some theoretical and numerical results about these preconditioners are obtained. Furthermore, we show the behaviour of the PCG method for different representations of the indefinite systems and we compare the effectiveness of the proposed variants.
[Printed version available on request from the authors]
 [BT06]
S. Bonettini, F. Tinti (2006),
A nonmonotone semismooth inexact Newton method, to appear in Optimization Methods and Software.
Abstract.
In this work we propose a variant of the inexact Newton method for the solution of semismooth nonlinear systems of equations. We introduce a nonmonotone scheme, which couples the inexact features with nonmonotone strategies. For the nonmonotone scheme, we present convergence theorems. Finally, we show how these strategies can be applied in the variational inequality context and we present some numerical examples.
[Printed version available on request from the authors]
 [CPS06]
L. Carotenuto, P. Pugliese, Ya.D. Sergeyev
(2006),
Maximizing performance and robustness of PI and PID controllers by global optimization, International Journal of Control and Intelligent Systems 34(3), 225-235.
Abstract.
This paper reports a new algorithm for global constrained
optimization and shows its application to the design of PI and PID
controllers. The algorithm is described in detail, and the features
that make it suited for controller design are emphasized. Various
design criteria and constraints are considered, for some plant models
currently used as benchmarks. The numerical results show that the
algorithm performs very well in all the tested cases: its flexibility
and ease of use make it a valuable alternative to classical design
procedures.
[Printed version available on request from the authors]
 [KKS06]
F.M.H. Khalaf, D.E. Kvasov, Ya.D. Sergeyev
(2006),
One-dimensional global optimization problems with multiextremal constraints, Proc. of the VIII Congress of the Italian Society for Applied and Industrial Mathematics SIMAI 2006, 184.
Abstract.
Lipschitz univariate constrained global optimization problems where both the objective function and constraints can be multiextremal and non-differentiable are considered. The constrained problem is reduced to a discontinuous unconstrained problem by the index scheme without introducing additional parameters or variables. A new geometric method using adaptive estimates of Lipschitz constants is described and its convergence conditions are established. Numerical experiments, including a comparison of the new algorithm with methods using the penalty approach, are given. Algorithms with a local tuning technique based on the behavior of both the objective function and constraints are also considered.
[Printed version available on request from the authors]
 [KS06]
D.E. Kvasov, Ya.D. Sergeyev
(2006),
Global search based on efficient diagonal partitions and several techniques based on Lipschitz condition, Proceedings of the 3rd International Conference on Applied Mathematics TICAM 2006, 2(3), D. Bainov, S. Nenov, Eds., Plovdiv, Bulgaria, 164-165.
Abstract.
In this talk, we consider a global optimization problem of a multidimensional "black-box" function satisfying the Lipschitz condition over a hyperinterval with an unknown Lipschitz constant. Several techniques for obtaining the Lipschitz information are discussed and a novel technique balancing the usage of local and global information during the search is proposed. A new efficient algorithm for solving this problem is presented.
[Printed version available on request from the authors]
 [L06]
G. Landi
(2006),
A Fast Truncated Lagrange Method for Large-Scale Image Restoration Problems, to appear in Applied Mathematics and Computation.
Abstract.
In this work, we present a new method for the restoration of images degraded by noise and spatially invariant blur. In the proposed method, the original image restoration problem is replaced by an equality constrained minimization problem. A quasi-Newton method is applied to the first-order optimality conditions of the constrained problem. In each Newton iteration, the Hessian of the Lagrangian is approximated by a circulant matrix and the Fast Fourier Transform is used to compute the Newton step. The quasi-Newton iteration is terminated according to the discrepancy principle. Results of numerical experiments are presented to illustrate the effectiveness and usefulness of the proposed method.
[Printed version available on request from the authors]
 [LL06]
G. Landi, E. Loli Piccolomini
(2006),
Representation of high resolution images from low sampled Fourier data. Applications to dynamic MRI, Journal of Mathematical Imaging and Vision 26(1-2), 27-40.
Abstract.
In this work we propose the use of B-spline functions for the parametric representation of high resolution images from low sampled data in the Fourier domain. Traditionally, exponential basis functions are employed in this situation, but they produce artifacts and amplify the noise in the data. We present the method in an algorithmic form and carefully consider the problem of solving the ill-conditioned linear system arising from the method by an efficient regularization method. Two applications of the proposed method to dynamic Magnetic Resonance images are considered. Dynamic Magnetic Resonance acquires a time series of images of the same slice of the body; in order to speed up the acquisition, the data are low sampled in the Fourier space. Numerical experiments have been performed both on simulated and real Magnetic Resonance data. They show that the B-splines reduce the artifacts and the noise in the representation of high resolution Magnetic Resonance images from low sampled data.
[Printed version available on request from the authors]
 [LZ06]
G. Landi, F. Zama
(2006),
The active-set method for nonnegative regularization of linear ill-posed problems, Applied Mathematics and Computation 175(1), 715-729.
Abstract.
In this work, we analyze the behavior of the Active-Set method for the nonnegative regularization of discrete ill-posed problems. In many applications, the solution of a linear ill-posed problem is known to be nonnegative. Standard Tikhonov regularization often provides an approximated solution with negative entries. We apply the Active-Set method to find a nonnegative approximate solution of the linear system starting from the Tikhonov regularized one. Our numerical experiments show that the Active-Set method is effective in reducing the oscillations in the Tikhonov regularized solution and in providing a nonnegative regularized solution of the original linear system.
[Printed version available on request from the authors]
 [MRSZ06]
S. Morigi, L. Reichel, F. Sgallari, F. Zama
(2006),
Iterative methods for ill-posed problems and semiconvergent sequences, Journal of Computational and Applied Mathematics 193, 157-167.
Abstract.
Iterative schemes, such as LSQR and RRGMRES, are among the most efficient methods for the solution of large-scale ill-posed problems. The iterates generated by these methods form semiconvergent sequences. A meaningful approximation of the desired solution of an ill-posed problem often can be obtained by choosing a suitable member of this sequence. However, it is not always a simple matter to decide which member to choose. Semiconvergent sequences also arise when approximating integrals by asymptotic expansions, and considerable experience and analysis of how to choose a suitable member of a semiconvergent sequence in this context are available. The present note explores how the guidelines developed within the context of asymptotic expansions can be applied to iterative methods for ill-posed problems.
[Printed version available on request from the authors]
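A minimal sketch of how a member of a semiconvergent sequence can be chosen in practice. Landweber iteration stands in for LSQR/RRGMRES purely for brevity, and the discrepancy-principle stopping rule is one common choice, not necessarily the guideline developed in the paper; the test problem is invented:

```python
import numpy as np

def landweber_discrepancy(A, b, delta, tau=1.01, step=None, max_iter=500):
    """Run the Landweber iteration x <- x + w * A^T (b - A x) and stop at
    the first iterate with ||A x - b|| <= tau * delta (discrepancy
    principle), i.e. pick that member of the semiconvergent sequence."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:
            return x, k
        x = x + step * (A.T @ r)
    return x, max_iter

# Ill-conditioned toy problem with a noisy right-hand side
rng = np.random.default_rng(0)
n = 20
A = np.vander(np.linspace(0, 1, n), n, increasing=True)  # ill-conditioned
x_true = np.ones(n)
noise = 1e-3 * rng.standard_normal(n)
b = A @ x_true + noise
x_reg, stops = landweber_discrepancy(A, b, delta=np.linalg.norm(noise))
```

Iterating past the stopping index would let the amplified noise dominate, which is exactly the semiconvergence phenomenon the abstract describes.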
 [RT06]
V. Ruggiero, F. Tinti (2006),
A preconditioner for solving large scale variational inequality problems, International Journal of Computer Mathematics 83(10), 723-739.
Abstract.
The numerical solution of a large scale variational inequality problem can be obtained using the generalisation of an Inexact Newton method applied to a semismooth nonlinear system. This approach requires a sparse and large linear system to be solved at each step. In this work we obtain an approximate solution of this system by the LSQR algorithm of Paige and Saunders combined with a convenient preconditioner that is a variant of the incomplete LU factorization. Since the computation of the factorization of the preconditioning matrix can be very expensive and memory consuming, we propose a preconditioner that admits a block factorization. Thus the direct factorization is only applied to submatrices of smaller sizes. Numerical experiments on a set of test problems arising from the literature show the effectiveness of this approach.
[Printed version available on request from the authors]
 [S06a]
Ya.D. Sergeyev
(2006),
Univariate global optimization with multiextremal non-differentiable constraints without penalty functions, Computational Optimization and Applications 34(1), 229-248.
Abstract.
This paper proposes a new algorithm for solving constrained global optimization problems where both the objective function and constraints are one-dimensional non-differentiable multiextremal Lipschitz functions. Multiextremal constraints can lead to complex feasible regions that are collections of isolated points and intervals having positive lengths. The case is considered where the order in which the constraints are evaluated is fixed by the nature of the problem, and a constraint i is defined only over the set where the constraint i-1 is satisfied. The objective function is defined only over the set where all the constraints are satisfied. In contrast to traditional approaches, the new algorithm does not use any additional parameters or variables. Not all the constraints are evaluated during every iteration of the algorithm, providing a significant acceleration of the search. The new algorithm either finds lower and upper bounds for the global optimum or establishes that the problem is infeasible. Convergence properties and numerical experiments showing a good performance of the new method in comparison with the penalty approach are given.
[Printed version available on request from the authors]
 [S06b]
Ya.D. Sergeyev
(2006),
Finding the minimal root of an equation: applications and algorithms based on Lipschitz condition, in "Global Optimization: Scientific and Engineering Case Studies", invited survey, J.D. Pintér, Ed., Springer, 441-460.
Abstract.
In this survey the problem of finding the minimal root of an equation is discussed. It is supposed that the equation under consideration can have many roots. In the case when the Lipschitz constant for the objective function or its first derivative is known a priori, two methods based on global optimization ideas are presented. The algorithms either find the minimal root or determine the global minimizers (in the case when the objective function has no roots). If the Lipschitz constants are unknown, two methods that adaptively estimate local Lipschitz constants during the search are introduced. This approach allows us to accelerate the search in comparison with the case of a priori known Lipschitz constants. Sufficient conditions for convergence of the new methods to the desired solution are established.
[Printed version available on request from the authors]
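The a priori known Lipschitz constant case from the abstract above admits a very compact sketch: |f(x)| > 0 rules out a root closer than |f(x)|/L, so a left-to-right search can jump that far ahead each step. The function, interval, and constant below are an invented toy example, not taken from the survey:

```python
def minimal_root(f, a, b, L, tol=1e-6):
    """Left-to-right search for the minimal (leftmost) root of f on [a, b],
    assuming |f(x) - f(y)| <= L |x - y|.  Since |f(x)| > 0 excludes a root
    in [x, x + |f(x)|/L), jumping that far ahead cannot skip the minimal
    root.  Returns an approximate root, or None if none is found."""
    x = a
    while x <= b:
        fx = f(x)
        if abs(fx) <= tol:
            return x
        x += abs(fx) / L   # no root can lie in [x, x + |f(x)|/L)
    return None

# Toy example: f(x) = x**2 - 1 on [-3, 3]; |f'| <= 6 there, minimal root -1
root = minimal_root(lambda x: x * x - 1, -3.0, 3.0, L=6.0)
```

Near the root the step length shrinks geometrically, so the search terminates quickly on this example; the survey's adaptive variants replace the fixed L by local estimates.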
 [SK06]
Ya.D. Sergeyev, D.E. Kvasov
(2006),
Global search based on efficient diagonal partitions and a set of Lipschitz constants, SIAM Journal on Optimization 16(3), 910-937.
Abstract.
In the paper, the global optimization problem of a multidimensional "black-box" function satisfying the Lipschitz condition over a hyperinterval with an unknown Lipschitz constant is considered. A new efficient algorithm for solving this problem is presented. At each iteration of the method a number of possible
presented. At each iteration of the method a number of possible
Lipschitz constants are chosen from a set of values varying from zero
to infinity. This idea is unified with an efficient diagonal partition
strategy. A novel technique balancing usage of local and global
information during partitioning is proposed. A new procedure for
finding lower bounds of the objective function over hyperintervals is
also considered. It is demonstrated by extensive numerical experiments
performed on more than 1600 multidimensional test functions that the
new algorithm shows a very promising performance.
[Printed version available on request from the authors]
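The idea of trying several Lipschitz constants at once can be sketched in a few lines. This is a DIRECT-style selection rule under invented data, not the paper's actual partitioning algorithm:

```python
import numpy as np

def promising_intervals(centers_f, half_diags, lipschitz_grid, f_best, eps=1e-4):
    """Interval i with center value f_i and half-diagonal d_i has the
    Lipschitz lower bound f_i - L * d_i.  Interval i is kept if, for at
    least one candidate constant L, its bound is the smallest one and
    improves on the incumbent value f_best by more than eps."""
    centers_f = np.asarray(centers_f)
    half_diags = np.asarray(half_diags)
    keep = set()
    for L in lipschitz_grid:
        bounds = centers_f - L * half_diags
        i = int(np.argmin(bounds))
        if bounds[i] < f_best - eps:
            keep.add(i)
    return sorted(keep)

# Three toy hyperintervals: small/good, large/mediocre, medium/poor center
sel = promising_intervals(
    centers_f=[1.0, 2.0, 5.0],
    half_diags=[0.1, 1.0, 0.5],
    lipschitz_grid=[0.5, 2.0, 10.0],
    f_best=1.0,
)
```

Small constants favor intervals with good center values (local search), while large constants favor large intervals (global search), which is the balancing the abstract refers to.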
 [SS06a]
M. Sofroniou, G. Spaletta (2006),
Hybrid solvers for splitting and
composition methods,
Journal of Computational and Applied Mathematics 185(2), 278-291.
Abstract.
Composition and splitting are useful techniques for constructing
special purpose integration methods for numerically solving many
types of differential equations.
In this article we will review these methods and summarise the
essential ingredients of an implementation that has recently
been added to a framework for solving differential equations
in Mathematica.
[Printed version available on request from the authors]
 [SS06b]
M. Sofroniou, G. Spaletta (2006),
Rounding error reduction in extrapolation methods
,
invited review,
special issue "Numerical Computing in Problem Solving Environments
with Emphasis on Differential Equations",
edited by L. Shampine,
to appear in Journal of Numerical Analysis, Industrial and Applied Mathematics.
Abstract.
Extrapolation methods are very efficient when high accuracy is desired
in a numerical solution of an ordinary differential equation.
High order methods can suffer from cumulative rounding error propagation.
A new formulation for reducing the effect of cumulative rounding errors
is outlined here and numerical examples are given to illustrate the
benefits over the standard formalism.
[Printed version available on request from the authors]
 [SS06c]
M. Sofroniou, G. Spaletta (2006),
Efficient computation of Krylov approximations to the matrix
exponential and related functions
,
invited paper, special issue
"Structural Dynamical Systems: Computational Aspects",
edited by L. Lopez,
IMACS Journal.
Abstract.
In recent years there has been growing interest in the use of
exponential integrators for solving large scale problems.
Exponential integrators involve approximations to the exponential
and related phi functions.
Krylov subspace approximations to the exponential typically exhibit
better convergence than those used in solving the linear systems
that arise in conventional stiff integrators.
In this work, we begin by revising the ideas used in EXPOKIT [Sidje R.B., EXPOKIT: A software package for computing matrix exponentials, ACM Trans. Math. Soft. 24(1) (1998), 130-156], which is perhaps the most complete software for computing the matrix exponential.
We will then outline some new results which allow the computation
of linear sums of phi functions acting on vectors at virtually
the same cost as the computation of the matrix exponential itself.
We then describe issues relating to error control
and describe an enhancement that is particularly relevant to
applications involving exponential integrators.
[Printed version available on request from the authors]
 [Sp06]
G. Spaletta (2006),
Reconstruction in Space and Visualization of a Planar
Image: a Mathematical and Computational Introduction
,
to appear in Acta Biomedica.
Abstract.
The aim of this paper is to provide an introduction to the fundamental mathematical ideas involved in three-dimensional image reconstruction. Particular attention is given to the case in which the two-dimensional input data consist of one or very few echographic or scintigraphic images.
[Printed version available on request from the authors]
 [SC06]
G. Spaletta, L. Caucci (2006),
Constrained Iterations for Blind Deconvolution and Convexity Issues
,
Journal of Computational and Applied Mathematics 197(1), 29-43.
Abstract.
The need for image restoration arises in many applications of various
scientific disciplines, such as medicine and astronomy and, in general,
whenever an unknown image must be recovered from blurred and noisy data.
The algorithm studied in this work restores the image without the
knowledge of the blur, using little a priori information and
a blind inverse filter iteration.
The problem of interest here is an inverse one, which cannot be solved by simple filtering since it is ill-posed. The imaging system is assumed to be linear and space invariant: this allows a simplified relationship between the unknown and observed images, described by a 'point spread function' modeling the distortion. The blurring, though, makes the restoration ill-conditioned: regularization is therefore also needed, obtained by adding constraints to the formulation.
The restoration is modeled as a constrained minimization: particular
attention is given here to the analysis of the objective function
and on establishing whether or not it is a convex function, whose
minima can be located by classic optimization techniques and descent
methods.
[Printed version available on request from the authors]
 [T06]
F. Tinti (2006),
Comparing two numerical approaches for solving variational inequality problems, submitted.
Abstract.
The numerical solution of a large scale variational inequality problem can be obtained using an Inexact Newton method, or a generalization of it, applied to a nonlinear system. In this work we compare two approaches: in the first one we solve a smooth nonlinear system using an interior point approach, while in the second one we solve a nonsmooth nonlinear system using a semismooth approach. Both approaches require a sparse and large linear system to be solved at each step. We obtain an approximate solution of these systems by the LSQR algorithm of Paige and Saunders combined with a convenient preconditioner. We compare the two algorithms related to the two approaches. Numerical experiments on a set of test problems arising from the literature show the effectiveness of the two methods.
[Printed version available on request from the authors]
 [Z06]
L. Zanni (2006),
An Improved Gradient Projection-based Decomposition Technique for Support Vector Machines, Computational Management Science 3(2), 131-145.
Abstract.
In this paper we propose some improvements to a recent decomposition technique for the large quadratic program arising in training Support Vector Machines. Like standard decomposition approaches, the technique we consider is based on the idea of optimizing, at each iteration, a subset of the variables through the solution of a quadratic programming subproblem. The innovative features of this approach consist in using a very effective gradient projection method for the inner subproblems and a special rule for selecting the variables to be optimized at each step. These features make it possible to obtain promising performance by decomposing the problem into few large subproblems instead of many small subproblems as usually done by other decomposition schemes. We improve this technique by introducing a new inner solver and a simple strategy for reducing the computational cost of each iteration. We evaluate the effectiveness of these improvements by solving large-scale benchmark problems and by comparison with a widely used decomposition package.
[Printed version available on request from the authors]
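The inner gradient projection solver mentioned in the abstract can be sketched for the simplest case. Note that the real SVM subproblem also carries a single linear equality constraint, which is omitted here so that the projection reduces to a clip; the matrix and vectors are an invented toy example:

```python
import numpy as np

def gradient_projection_box(Q, c, lo, hi, x0, step=None, iters=200):
    """Projected-gradient sketch for min 0.5 x'Qx + c'x s.t. lo <= x <= hi:
    take a gradient step, then project back onto the box by clipping."""
    if step is None:
        step = 1.0 / np.linalg.norm(Q, 2)        # safe step for convex Q
    x = x0.copy()
    for _ in range(iters):
        g = Q @ x + c                            # gradient of the quadratic
        x = np.clip(x - step * g, lo, hi)        # project onto the box
    return x

Q = np.array([[2.0, 0.0], [0.0, 4.0]])
c = np.array([-2.0, -8.0])                       # unconstrained min at (1, 2)
x = gradient_projection_box(Q, c, lo=0.0, hi=1.5, x0=np.zeros(2))
```

With the box [0, 1.5] the second coordinate hits its upper bound, so the constrained solution is (1, 1.5).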
 [ZSZ06a]
L. Zanni, T. Serafini, G. Zanghirati
(2006),
Parallel Training of Large-Scale Kernel Machines, Science and Supercomputing at CINECA, Report 2005, M. Voli, P. Coluccia, Eds., Monograf, Bologna, Italy, 415-419.
Abstract.
A parallel software package to train linear and nonlinear SVMs for classification problems is presented, which is suitable for distributed memory multiprocessor systems. It solves the SVM quadratic programming problem by an iterative decomposition technique based on a well parallelizable gradient-projection solver for the inner subproblems. The numerical experience shows the significant speedup that the proposed parallel package can achieve in training large-scale SVMs. The package is available at http://dm.unife.it/gpdt.
[Printed version available on request from the authors]
 [ZSZ06b]
L. Zanni, T. Serafini, G. Zanghirati (2006),
Parallel Software for Training Large Scale Support Vector
Machines on Multiprocessor Systems
,
Journal of Machine Learning Research 7, 1467-1492.
Abstract.
Parallel software for solving the quadratic program arising in training support vector machines for classification problems is introduced. The software implements an iterative decomposition technique and exploits both the storage and the computing resources available on multiprocessor systems, by distributing the heaviest computational tasks of each decomposition iteration. Based on a wide range of recent theoretical advances, relevant decomposition issues, such as the quadratic subproblem solution, the gradient updating and the working set selection, are systematically described, and their careful combination to get an effective parallel tool is discussed. A comparison with state-of-the-art packages on benchmark problems demonstrates the good accuracy and the remarkable time saving achieved by the proposed software. Furthermore, challenging experiments on real-world data sets with millions of training samples highlight how the software makes large scale standard nonlinear support vector machines effectively tractable on common multiprocessor systems. This feature is not shown by any of the available codes.
[Printed version available on request from the authors]
 [B05]
S. Bonettini (2005),
A nonmonotone inexact Newton method, Optimization Methods and Software 20, 475-491.
Abstract.
In this paper we describe a variant of the Inexact Newton method for solving nonlinear systems of equations. We define a nonmonotone Inexact Newton step and a nonmonotone backtracking strategy. For this nonmonotone Inexact Newton scheme we present convergence theorems. Finally, we show how these strategies can be applied to the Inexact Newton Interior-Point method and we present some numerical examples.
[Printed version available on request from the authors]
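The nonmonotone backtracking idea can be sketched as follows. The acceptance test against the maximum of the last M residual norms is the nonmonotone ingredient; everything else (the test system, tolerances, and the fact that the Newton step is solved exactly rather than inexactly) is an invented illustration, not the paper's method:

```python
import numpy as np

def nonmonotone_newton(F, J, x0, M=5, t=1e-4, tol=1e-10, max_iter=50):
    """Newton's method with a nonmonotone backtracking rule: a step is
    accepted when ||F(x + lam*d)|| <= (1 - t*lam) * max of the last M
    residual norms, instead of comparing against the current norm only."""
    x = np.asarray(x0, dtype=float)
    history = [np.linalg.norm(F(x))]
    for _ in range(max_iter):
        if history[-1] <= tol:
            break
        d = np.linalg.solve(J(x), -F(x))          # exact Newton direction
        lam, ref = 1.0, max(history[-M:])         # nonmonotone reference
        while np.linalg.norm(F(x + lam * d)) > (1 - t * lam) * ref:
            lam *= 0.5
            if lam < 1e-12:
                break
        x = x + lam * d
        history.append(np.linalg.norm(F(x)))
    return x

# Toy system: x0^2 + x1^2 = 4, x0 - x1 = 0  ->  x = (sqrt(2), sqrt(2))
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
sol = nonmonotone_newton(F, J, np.array([1.0, 0.5]))
```

Because the reference value is a maximum over recent residuals, an occasional increase in ||F|| is tolerated, which is what allows full Newton steps to be accepted more often than under a strictly monotone rule.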
 [BGR05]
S. Bonettini, E. Galligani, V. Ruggiero
(2005),
An inexact Newton method combined with Hestenes multipliers' scheme for the solution of Karush-Kuhn-Tucker systems, Applied Mathematics and Computation 168, 651-676.
Abstract.
In this work a Newton interior-point method for the solution of Karush-Kuhn-Tucker systems is presented. A crucial feature of this iterative method is the solution, at each iteration, of the inner subproblem. This subproblem is a linear-quadratic programming problem that can be solved approximately by an inner iterative method such as the Hestenes multipliers' scheme. A deep analysis of the choices of the parameters of the method (perturbation and damping parameters) has been carried out. The global convergence of the Newton interior-point method is proved when it is viewed as an inexact Newton method for the solution of nonlinear systems with restrictions on the sign of some variables.
^{[
Printed version available on request from the authors
]}
[Top]
 [BR05]
S. Bonettini, V. Ruggiero (2005),
Some iterative methods for the solution of a symmetric indefinite KKT system
, to appear in Computational Optimization and Applications.
Abstract.
This paper is concerned with the numerical solution of a Karush-Kuhn-Tucker system. Such a
symmetric indefinite system arises when we solve a nonlinear programming problem by an
Interior Point (IP) approach. In this framework, we discuss the effectiveness of two inner
iterative solvers: the method of multipliers and the preconditioned conjugate gradient method.
We discuss the implementation details of these algorithms in an IP scheme and we report the results
of a numerical comparison on a set of large scale test problems arising from the discretization of
elliptic control problems.
^{[
Printed version available on request from the authors
]}
[Top]
 [GL05]
M. Gaviano, D. Lera
(2005),
Complexity
of general continuous minimization
problems: a survey,
Optimization Methods and Software
20, 525-544.
Abstract.
The problem of finding a local or a
global minimum of a real function f on
a subset S of R^{n}
occurs in many real-world problems.
When f is a general
nonlinear function, deterministic as
well as stochastic algorithms have
been proposed. The numerical cost
of these algorithms, which can be the
number of function evaluations or
the number of elementary operations
required during execution, has been
investigated in the last few years.
In this paper we survey the main
results by classifying them
according to the field they refer to:
local minimization, global
minimization and checking the
optimality conditions. Further, some
results concerning the information
that an algorithm may use are
surveyed.
[Top]
 [LL05]
G. Landi, E. Loli Piccolomini
(2005),
A Total Variation
Regularization Strategy in Dynamic
MRI, Optimization
Methods and Software 20, 545-558.
Abstract.
Recently, the application of
Magnetic Resonance Imaging to the
study of dynamic processes has
intensified the search for ever-faster
dynamic imaging methods. The Keyhole
method is a popular technique for
fast dynamic imaging that
unfortunately produces images
affected by artifacts. In this work,
we propose a Total Variation
regularization strategy for
postprocessing the dynamic Keyhole
images. The strategy consists of the
solution of an unconstrained
minimization problem whose
Euler-Lagrange equation is solved by
the Newton method. An outline of the
algorithm is given. Results of
numerical experiments on simulated
and real data are presented to
illustrate the effectiveness of the
proposed method. Our experimental
results indicate that the method
reduces the degrading artifacts and
improves the quality of the final
images.
^{[
Printed version available on request from the authors
]}
[Top]
 [LLZ05]
E. Loli Piccolomini, G. Landi, F. Zama
(2005),
A B-spline
parametric model for high resolution
dynamic Magnetic Resonance imaging,
Applied Mathematics and Computation 164,
133-148.
Abstract.
In this paper we propose a numerical method
for the reconstruction of dynamic Magnetic Resonance images
from limited data in the Fourier space. In some medical dynamic
applications, temporal sequences of Magnetic Resonance data of the
same slice of the body are acquired. In order to hasten the
acquisition process, only the low frequencies are acquired for all
the data sets, with the exception of the first one, which is
completely acquired. Hence, it is not possible to use a Fourier
technique to obtain high resolution images from the limited data. In the
proposed method, the images of the sequence are represented via a
B-spline parametric model using a priori information from the
first image. B-splines are used because of their computational
efficiency and their good behaviour in image fitting and
representation. This parametric model makes it possible to obtain
high resolution images of good quality and to zoom into regions of interest
without losing important details. Mathematical and numerical
aspects of the method are investigated and an outline
of the algorithm is given. Reconstructions from simulated
and experimental data are presented and analyzed.
^{[
Printed version available on request from the authors
]}
[Top]
 [S05]
Ya.D. Sergeyev
(2005),
Efficient partition of
N-dimensional intervals in the
framework of one-point-based
algorithms, to appear in
Journal of Optimization Theory and
Applications, 124(2).
Abstract.
In this paper, the problem of
the minimal description of the
structure of a vector function f(x) over
an N-dimensional interval is
studied. Methods adaptively
subdividing the original interval in
smaller subintervals and evaluating f(x)
at only one point within each
subinterval are considered. Two
partition strategies traditionally
used for solving this problem are
analyzed. A new partition strategy
based on an efficient technique
developed for diagonal algorithms is
proposed and studied.
^{[
Printed version available on request from the authors
]}
[Top]
 [SC05]
G. Spaletta, L. Caucci (2005),
Blind Restoration of Astronomical Images
,
MathSource, Wolfram Media Inc.,
paper and software package available for download at library.wolfram.com.
Abstract.
The atmosphere of the Earth and the instruments of observation both
constitute a source of distortion in astronomical imaging.
Assuming linearity and space invariance of the image formation system,
the observed image can be expressed as the convolution of the original
image with the blurring function: if the latter is known, then we are
faced with the inverse problem of classical deconvolution.
The restoration algorithms considered here try to remove the blur using
a priori constraints, without knowledge of the blurring function;
this approach is often referred to as blind deconvolution.
It represents a variation of the methods proposed in
[Kundur and Hatzinakos, A Novel Blind Deconvolution Scheme for Image
Restoration using Recursive Filtering, IEEE Transactions on Signal
Processing 46(2), 1998, 375-390] and
[Ng, Plemmons and Qiao, Regularization of RIF Blind Image Deconvolution,
IEEE Transactions on Image Processing 9(6), 2000, 1130-1134].
The programming environment is that of Mathematica.
The effectiveness of this class of methods is tested on both simulated
and real data derived from various applications.
Comparisons with the behavior of the methods by Kundur-Hatzinakos
and by Ng-Plemmons-Qiao show the effectiveness of our variant.
^{[
Printed version available on request from the authors
]}
[Top]
 [SS05a]
M. Sofroniou, G. Spaletta (2005),
Complexity Analysis for Accurate Solution of Linear Systems
, to appear in
International Journal of Computer Mathematics.
Abstract.
The efficient solution of linear systems of equations A X = B
is a task that arises in various applied problems.
In this work we describe a simplified complexity analysis
that reliably shows when iterative solution procedures
are advantageous over direct solution methods.
The analysis involves an estimate of the condition number
of a matrix and can be used to automatically decide at run time
which method to use.
Accurate solutions are sought; it can therefore be necessary
to resort to high precision computations, but the cost of linear algebra
in high precision can be prohibitively expensive.
An established idea for the accurate solution of linear systems
is to use iterative refinement:
(i) compute an initial solution A X_{0} = B in precision p_{0} ;
(ii) compute the residual R_{i} = B - A X_{i} in precision p_{i} ;
(iii) compute a correction A Z_{i} = R_{i} in precision p_{0} ;
(iv) correct the solution X_{i+1} = X_{i} + Z_{i} in precision p_{i} .
It has recently been shown that a modification of this algorithm
can be advantageous for high precision computation
[Geddes and Zheng, Exploiting Fast Hardware Floating Point in High
Precision Computation, Proc. ISSAC, 2003, 111-118].
This involves carrying out the linear solution and refinement steps
(i) and (iii) using machine precision arithmetic p_{0}.
Steps (ii) and (iv) are then performed in monotonically increasing
precision p_{i} > p_{0}, using software arithmetic.
Here we also outline a scaling technique that is advantageous when
solving for the solution and correction steps using machine precision.
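Steps (i)-(iv) can be sketched as follows. This is only an illustration of the classical refinement loop, not the Geddes-Zheng variant: Python floats stand in for the machine precision p_0, exact rationals (fractions.Fraction) stand in for the increasing software precisions p_i, and all function names are ours.

```python
from fractions import Fraction

def lu_solve_float(A, b):
    # Steps (i)/(iii): solve A z = b in machine precision (Gaussian
    # elimination with partial pivoting on plain Python floats).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

def refine(A, b, steps=3):
    # Iterative refinement: residuals and corrections are accumulated
    # in higher precision (here: exact rational arithmetic).
    n = len(A)
    x = lu_solve_float(A, b)
    Aq = [[Fraction(a) for a in row] for row in A]
    bq = [Fraction(v) for v in b]
    xq = [Fraction(v) for v in x]
    for _ in range(steps):
        # Step (ii): residual R = B - A X in high precision.
        r = [bq[i] - sum(Aq[i][j] * xq[j] for j in range(n))
             for i in range(n)]
        # Step (iii): correction A Z = R back in machine precision.
        z = lu_solve_float(A, [float(ri) for ri in r])
        # Step (iv): X <- X + Z in high precision.
        xq = [xi + Fraction(zi) for xi, zi in zip(xq, z)]
    return [float(v) for v in xq]
```

In a production version the factorization from step (i) would be reused in step (iii); the sketch simply calls the machine-precision solver again.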
^{[
Printed version available on request from the authors
]}
[Top]
 [SS05b]
M. Sofroniou, G. Spaletta
(2005),
Precise Numerical Computation
, Journal of Logic and Algebraic Programming
64(1), 113-134.
Abstract.
Arithmetic systems such as those based on IEEE standards currently
make no attempt to track the propagation of errors. A formal error
analysis, however, can be complicated and is often confined to the
realm of experts in numerical analysis. In recent years there has
been a resurgence of interest in automated methods for accurately
monitoring the error flow. In this work a floating-point system based on
significance arithmetic, in Mathematica, is described, together with its
differences from conventional fixed-precision floating-point systems.
^{[
Printed version available on request from the authors
]}
[Top]
 [SS05c]
M. Sofroniou, G. Spaletta (2005),
Derivation
of symmetric composition constants
for symmetric integrators,
Optimization Methods and Software 20(4-5),
597-613.
Abstract.
This work focuses on the derivation of composition methods for the
numerical integration of ordinary differential equations,
which give rise to very challenging optimization problems.
Composition is a very useful technique for constructing high order
approximations whilst conserving certain geometric properties.
We survey existing composition methods and describe results
of an intensive numerical search for new methods.
Details of the search procedure are given along with numerical
examples which indicate that the new methods perform better than
previously known methods.
Some insights into the location of global minima for these problems
are obtained as a result.
^{[
Printed version available on request from the authors
]}
[Top]
 [SZZ05a]
T. Serafini, G. Zanghirati, L. Zanni
(2005),
Gradient Projection Methods for
Quadratic Programs and Applications
in Training Support Vector Machines,
Optimization Methods and Software 20,
347-372.
Abstract.
Gradient projection methods based on
the Barzilai-Borwein spectral
steplength choices are considered
for quadratic programming problems
with simple constraints.
Well-known nonmonotone spectral
projected gradient methods and
variable projection methods are
discussed. For both approaches the
behavior of different combinations
of the two spectral steplengths is
investigated. A new adaptive
steplength alternating rule is
proposed, which becomes the basis for
a generalized version of the
variable projection method (GVPM).
Convergence results are given for
the proposed approach and its
effectiveness is shown by means of
an extensive computational study on
several test problems, including the
special quadratic programs arising in
training support vector machines.
Finally, the GVPM behavior as inner
QP solver in decomposition
techniques for large-scale support
vector machines is also evaluated.
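The two spectral steplengths and the projection step can be sketched for a box-constrained QP as follows; the simple even/odd alternation below is only a stand-in for the GVPM's adaptive alternating rule, and all names are ours.

```python
def solve_box_qp(A, b, lo, hi, iters=200):
    # Gradient projection for min 0.5 x'Ax - b'x s.t. lo <= x <= hi,
    # alternating the two Barzilai-Borwein spectral steplengths.
    n = len(b)
    clip = lambda v: [min(max(vi, lo), hi) for vi in v]
    grad = lambda v: [sum(A[i][j] * v[j] for j in range(n)) - b[i]
                      for i in range(n)]
    x = clip([0.0] * n)
    g = grad(x)
    alpha = 1.0
    for k in range(iters):
        x_new = clip([x[i] - alpha * g[i] for i in range(n)])
        s = [x_new[i] - x[i] for i in range(n)]
        if max(abs(si) for si in s) < 1e-12:
            break                          # projected gradient step is null
        g_new = grad(x_new)
        y = [g_new[i] - g[i] for i in range(n)]
        ss = sum(si * si for si in s)
        sy = sum(si * yi for si, yi in zip(s, y))
        yy = sum(yi * yi for yi in y)
        # Alternate between the two BB formulas (stand-in for the
        # adaptive alternating rule).
        if sy > 0:
            alpha = ss / sy if k % 2 == 0 else sy / yy
        alpha = min(max(alpha, 1e-6), 1e6)  # safeguard
        x, g = x_new, g_new
    return x
```

For A = [[3,1],[1,2]], b = (1,1) over [0,1]^2, the iterates converge to the unconstrained minimizer (0.2, 0.4), which lies inside the box.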
^{[
Printed version available on request from the authors
]}
[Top]
 [SZZ05b]
T. Serafini, G. Zanghirati, L. Zanni (2005),
Some improvements to a parallel decomposition technique for
training support vector machines
,
Lecture Notes in Computer Science 3666,
9-17.
Abstract.
We consider a parallel decomposition technique for solving
the large quadratic programs arising in training the learning methodology
Support Vector Machine. At each iteration of the technique a subset
of the variables is optimized through the solution of a quadratic programming
subproblem. This inner subproblem is solved in parallel by a
special gradient projection method. In this paper we consider some improvements
to the inner solver: a new algorithm for the projection onto
the feasible region of the optimization subproblem and new line-search
and steplength selection strategies for the gradient projection scheme.
The effectiveness of the proposed improvements is evaluated, both in
terms of execution time and relative speedup, by solving large-scale benchmark
problems on a parallel architecture.
^{[
Printed version available on request from the authors
]}
[Top]
 [T05]
F. Tinti (2005),
Numerical Solution for Pseudomonotone
Variational Inequality Problems by
Extragradient Methods
, in "Variational Inequality and Application",
A. Maugeri, F. Giannessi, Eds., Kluwer Academic Publishers, 1101-1128.
Abstract.
In this work we analyze, from the
numerical viewpoint, the class of
projection methods for solving
pseudomonotone variational
inequality problems. We focus on
some specific extragradient-type
methods that do not require
differentiability of the operator
and we pay particular attention
to the steplength choice.
Subsequently, we analyze the
hyperplane projection methods, in
which an appropriate hyperplane is
constructed that strictly separates
the current iterate from the
solutions of the problem. Finally,
in order to illustrate the
effectiveness of the proposed
methods, we report the results of
numerical experiments.
^{[
Printed version available on request from the authors
]}
[Top]
 [ZL05]
F. Zama, E. Loli Piccolomini (2005),
A Descent Method for
Regularization of Ill-Posed Problems,
Optimization Methods
and Software 20, 615-625.
Abstract.
In this paper we describe an
iterative algorithm, called
DescentTCG, based on truncated
Conjugate Gradient iterations to
compute Tikhonov regularized
solutions of ill-posed problems. The
sequence of approximate solutions
and regularization parameters
computed by the algorithm is shown
to decrease the value of the
Tikhonov functional. Suitable
termination criteria are built up to
define an inner-outer iteration
scheme that computes a regularized
solution. Numerical experiments are
performed to compare this algorithm
with other well-established
regularization methods. We observe
that DescentTCG gives its best
results when the noise is high and
always computes fairly reliable
solutions, preventing the dangerous
error explosions observed with other
well-established regularization
methods. Finally, the DescentTCG
method is computationally
advantageous, especially for large
size problems.
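The inner building block, a few truncated CG iterations on the Tikhonov normal equations for a fixed regularization parameter, can be sketched as below; the actual DescentTCG parameter-update and inner-outer termination rules are not reproduced, and all names are our own.

```python
def tikhonov_tcg(A, b, lam, cg_iters=5):
    # Truncated CG on the regularized normal equations
    # (A^T A + lam I) x = A^T b, for a fixed lam > 0.
    m, n = len(A), len(A[0])
    def matvec(v):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        return [sum(A[i][j] * Av[i] for i in range(m)) + lam * v[j]
                for j in range(n)]
    rhs = [sum(A[i][j] * b[i] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    r = rhs[:]                      # residual of the normal equations
    p = r[:]
    rr = sum(ri * ri for ri in r)
    for _ in range(cg_iters):       # truncation: only a few CG steps
        Ap = matvec(p)
        alpha = rr / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new < 1e-20:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x
```

Truncating CG after a handful of steps keeps the per-outer-iteration cost low, which is why such schemes pay off on large problems.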
^{[
Printed version available on request from the authors
]}
[Top]
 [BGR04]
S. Bonettini, E. Galligani, V. Ruggiero
(2004),
Hestenes Method
for Symmetric Indefinite Systems in
Interior-Point Methods,
Rendiconti di Matematica e delle sue Applicazioni,
Università di Roma "La Sapienza", Serie VII,
Vol. 24, 185-199.
Abstract.
This paper deals with the analysis
and the solution of the
Karush-Kuhn-Tucker (KKT) system that
arises at each iteration of an
Interior-Point (IP) method for
minimizing a nonlinear function
subject to equality and inequality
constraints. This system is
generally large and sparse and it
can be reduced so that the
coefficient matrix is still sparse,
symmetric and indefinite, with size
equal to the number of the primal
variables and of the equality
constraints. Instead of transforming
this reduced system to a
quasi-definite form by the
regularization techniques used in
available codes on IP methods, under
standard assumptions on the
nonlinear problem it can be viewed
as the Lagrange optimality
conditions for a linear
equality-constrained quadratic
programming problem, so that
Hestenes multipliers' method can be
applied. Numerical experiments on
elliptic control problems with
boundary and distributed control
show the effectiveness of the
Hestenes scheme as inner solver for
IP methods.
^{[
Printed version available on request from the authors
]}
[Top]
 [DR04]
C. Durazzi, V. Ruggiero
(2004),
A
Note on the Global Convergence of
the Newton Interior-Point
Method for Nonlinear Programming,
JOTA 120(1), 188-208.
Abstract.
The aim of this paper is to show
that the theorem on the global
convergence of the Newton
Interior-Point (IP) method presented
in (El-Bakry, Tapia et al., JOTA,
1996) can be proved under weaker
assumptions. Indeed, we assume the
boundedness of the sequences of the
multipliers related to nontrivial
constraints, instead of the
hypothesis that the gradients of
those inequality constraints
corresponding to slack variables not
bounded away from zero are linearly
independent. Furthermore, by some
numerical examples we show that, in
the implementation of the Newton IP
method, the loss of boundedness of
the iteration sequence of the
multipliers detects when the
algorithm does not converge from the
chosen starting point.
^{[
Printed version available on request from the authors
]}
[Top]
 [FPS04]
D. Famularo, P. Pugliese, Ya.D. Sergeyev
(2004),
A global optimization technique for
fixed-order control design,
International Journal of Systems
Science 35(7), 425-434.
Abstract.
A global optimization algorithm is used to solve
the fixedorder controller synthesis problem for linear systems. The
algorithm, which belongs to the information methods family, is
described in some detail and numerical experiments are performed on
test problems to show its effectiveness.
^{[
Printed version available on request from the authors
]}
[Top]
 [LLZ04]
G. Landi, E. Loli Piccolomini, F. Zama
(2004),
A Total Variation based regularization
strategy in Magnetic Resonance
Imaging, Proceedings of SPIE,
P.J. Bones, M.A. Fiddy and
R.P. Millane, Eds., vol. 5562.
Abstract.
In this paper we present some variational functionals for the
regularization of Magnetic Resonance (MR) images, usually corrupted
by noise and artifacts. The mathematical problem has a
Tikhonov-like formulation, where the regularization functional is
a nonlinear variational functional. The problem is
numerically solved as an optimization problem with a quasi-Newton
algorithm. The algorithm has been applied to MR images corrupted by
noise and to dynamic MR images corrupted by truncation artifacts due
to limited resolution. The results on test problems obtained from
simulated and real data are presented. The functionals actually
reduce noise and artifacts, provided that a good regularization
parameter is used.
^{[
Printed version available on request from the authors
]}
[Top]
 [MCGST04]
A. Martínez, L.G. Casado, I. García,
Ya.D. Sergeyev, B. Toth (2004), On
an efficient use of gradient
information for accelerating
interval global optimization
algorithms, Numerical Algorithms
37, 61-69.
Abstract.
This paper analyzes and evaluates an
efficient n-dimensional
global optimization algorithm. It is
a natural n-dimensional
extension of the algorithm from
L.G. Casado, I. García, J.A. Martínez
and Ya.D. Sergeyev, New interval
analysis support functions using
gradient information in a global
minimization algorithm, J.
Global Optimization 25(4)
(2003), 345-362. This algorithm takes
advantage of all available
information to estimate better
bounds of the function. Numerical
comparisons made on a wide set of
multiextremal test functions have
shown that on average the new
algorithm works faster than a
traditional interval analysis global
optimization method.
^{[
Printed version available on request from the authors
]}
[Top]
 [SZZ04a]
T. Serafini, G. Zanghirati, L. Zanni
(2004),
Training Support Vector Machines on Parallel Architectures
,
Science and Supercomputing at CINECA, Report 2003,
M. Voli, P. Coluccia, Eds., Monograf, Bologna, Italy,
391-394.
Abstract.
The parallel solution of the large quadratic programming
problem arising in training support vector machines is analysed.
Some improvements to a recent decomposition technique are discussed.
The effectiveness of the proposed approach is evaluated by solving
large-scale benchmark problems on different parallel architectures.
^{[
Printed version available on request from the authors
]}
[Top]
 [SZZ04b]
T. Serafini, G. Zanghirati, L. Zanni
(2004),
Parallel Decomposition
Approaches for Training Support
Vector Machines, Proceedings of
the International Conference
"Parallel Computing 2003",
in "Parallel Computing: Software Technology, Algorithms, Architectures and Applications",
G.R. Joubert, W.E. Nagel, F.J. Peters and W.V. Walter, Eds.,
Advances in Parallel Computing 13, Amsterdam, The Netherlands, 2004,
259-266.
Abstract.
We consider parallel decomposition
techniques for solving the large
quadratic programming (QP) problems
arising in training support vector
machines. A recent technique is
improved by introducing an efficient
solver for the inner QP subproblems
and a "preprocessing" step
useful to hot start the
decomposition strategy. The
effectiveness of the proposed
improvements is evaluated by solving
large-scale benchmark problems on
different parallel architectures.
^{[
Printed version available on request from the authors
]}
[Top]
 [S04a]
Ya.D. Sergeyev (2004),
Generation of symmetric exponential sums,
Journal of Number Theory,
108, 60-75.
Abstract.
In this paper, a new method for the generation of infinite series of symmetric identities written
for exponential sums in real numbers is proposed. Such systems have numerous applications in number
theory, chaos theory, algorithmic complexity, dynamical systems, etc. Properties of the generated
identities are studied. Relations of the introduced method for the generation of symmetric
exponential sums to the Morse-Hedlund sequence and to the theory of magic squares are established.
^{[
Printed version available on request from the authors
]}
[Top]
 [S04b]
Ya.D. Sergeyev (2004),
A new
approach for executing calculations
with finite, infinite, and
infinitesimal quantities, Tech.
Rep. RT-ICAR-CS-04-14, ICAR-CNR,
Rende (CS), 46 pp.
Abstract.
All existing computers are able to execute arithmetical
operations only with finite numerals; operations with infinite and
infinitesimal quantities cannot be realized. The paper
introduces a new positional system with infinite radix, allowing us
to write down finite, infinite, and infinitesimal numbers as
particular cases of a unique framework. This new numeral system
makes it possible to introduce a new type of computer able to
operate not only with finite numbers but also with infinite and
infinitesimal ones. A possible organization of the memory and of the
arithmetic logic unit of such a computer is described briefly.
The new approach both makes it possible to execute calculations
of a new type and simplifies fields of mathematics where the usage of
infinity and/or infinitesimals is necessary (for example,
divergent series, limits, derivatives, and integrals).
^{[
Printed version available on request from the authors
]}
[Top]
 [SK04]
Ya.D. Sergeyev,
D.E. Kvasov (2004),
Diagonal
Lipschitz global
optimization algorithm working with
a set of Lipschitz constants, Technical
Report RT-ICAR-CS-04-15, ICAR-CNR,
Cosenza, Italy, 33 pages.
Abstract.
In this report, the global
optimization problem of a
multidimensional black-box function
satisfying the Lipschitz condition
over a hyperinterval with an unknown
Lipschitz constant is considered. A
new efficient algorithm for solving
this problem is presented. At each
iteration of the method a number of
possible Lipschitz constants are
chosen from a set of values varying
from zero to infinity. This idea is
unified with an efficient diagonal
partition strategy. A novel
technique balancing the usage of
local and global information during
partitioning is proposed. A new
procedure for finding lower bounds
of the objective function over
hyperintervals is also considered.
Extensive numerical experiments
performed on more than 1600
multidimensional test functions
demonstrate that the new algorithm
shows a very promising performance.
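The basic Lipschitz lower-bounding idea behind such methods can be illustrated with a single known constant L and simple midpoint sampling; the paper's diagonal scheme with a whole set of candidate constants is not reproduced here, and all names are ours.

```python
import heapq

def lipschitz_minimize(f, a, b, L, tol=1e-3):
    # Branch-and-bound with Lipschitz lower bounds: on [lo, hi] with
    # midpoint c, f(x) >= f(c) - L*(hi - lo)/2 for every x in [lo, hi].
    c = 0.5 * (a + b)
    best = f(c)
    heap = [(best - L * (b - a) / 2, a, b)]
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best - tol:
            break                 # no interval can improve the incumbent
        mid = 0.5 * (lo + hi)
        for s, t in ((lo, mid), (mid, hi)):
            cc = 0.5 * (s + t)
            fcc = f(cc)
            best = min(best, fcc)
            heapq.heappush(heap, (fcc - L * (t - s) / 2, s, t))
    return best
```

With a set of candidate Lipschitz constants, as in the report, each hyperinterval would carry one lower bound per constant; a single known L keeps the sketch short.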
^{[
Printed version available on request from the authors
]}
[Top]
 [SS04]
M. Sofroniou, G. Spaletta (2004),
Construction of explicit Runge-Kutta pairs with stiffness detection
,
Mathematical and Computer Modelling 40(11-12),
1157-1169.
Abstract.
Explicit Runge-Kutta schemes are the methods of choice for solving
nonstiff systems of ordinary differential equations at low to medium
tolerances.
The construction of optimal formulae has been the subject of much research.
In this article, it will be shown how to construct some low order
formula pairs using tools from computer algebra.
Our focus will be on methods that are equipped with local error
detection (for adaptivity in the step size) and with the ability
to detect stiffness.
It will be demonstrated how the criteria governing "optimal" tuning
of the free parameters and matching of the embedded method can be
met by forming a constrained optimization problem.
In contrast to standard numerical optimization processes our
approach finds an exact (infinite precision) global minimum.
Quantitative measures will be given comparing our new methods
with some established formula pairs.
^{[
Printed version available on request from the authors
]}
[Top]
 [Sp04]
G. Spaletta (2004),
Approximation of a 3D Volume from one 2D Image. A Medical Application
,
invited review,
Memorie della Accademia delle Scienze dell'Istituto di Bologna,
Serie I, Vol. II,
49-65.
Abstract.
We study the possibility of a three-dimensional reconstruction
of one lobe of the thyroid gland from a single two-dimensional
echographic or scintigraphic image.
From a numerical point of view, the aim is to provide an
approximation of its volume, together with a visualization of
the gland in space, that is more accurate than the current
way of estimating such a volume.
We focus on a particular pathology of the thyroid, namely the
Plummer nodule.
Through the three-dimensional information obtained, the aim is to
perform both a functional and a morphological analysis of the
thyroid lobe affected by the Plummer pathology, in order to obtain
therapeutic indications and to avoid the surgical removal of the gland.
^{[
Printed version available on request from the authors
]}
[Top]
 [ZLL04]
F. Zama, E. Loli Piccolomini, G. Landi
(2004),
A descent method
for computing the Tikhonov
regularized solution of linear
inverse problems, Proceedings of
SPIE, P.J. Bones, M.A. Fiddy and
R.P. Millane, Eds., vol. 5562.
Abstract.
In this paper we describe an iterative algorithm, called
DescentTCG, based on truncated Conjugate Gradient iterations to
compute Tikhonov regularized solutions of linear ill-posed problems.
Suitable termination criteria are built up to define an inner-outer
iteration scheme for the computation of a regularized solution.
Numerical experiments are performed to compare the algorithm with
other well-established regularization methods. We observe that the
best DescentTCG results occur for highly noisy data and we always
get fairly reliable solutions, preventing the dangerous error growth
often appearing with other well-established regularization methods.
Finally, the DescentTCG method is computationally advantageous,
especially for large size problems.
^{[
Printed version available on request from the authors
]}
[Top]
 [BA03]
A. Baricordi, I. Argentesi (2003), A
Reference Guide to "Hooking
your solver to AMPL",
Tech. Rep.
Abstract.
AMPL is one of the most widely used
modelling languages for Mathematical
Programming. Invented by D. Gay and
R. Fourer, it is able to model and
solve problems ranging from student
examples to very complex, highly
nonlinear industrial applications.
For the problem solution, AMPL needs
to call an external solver that
provides the capabilities required
to solve the problem at hand.
Despite the fairly wide set of
numerical solvers that today supply
an AMPL interface (thus being
callable from inside the AMPL
interactive environment), it would
be very interesting for both
researchers and practitioners to be
able to provide their own
custom-made solvers to AMPL.
Unfortunately, this is not as easy
as one would hope. David Gay
explains in his guide "Hooking
your solver to AMPL" how to
use some of the C-coded AMPL library
functions in order to make AMPL
work with a custom solver, but
the description is quite fragmented
and relies heavily on the C-coded
example routines coming with the
freely downloadable AMPL version. In
this report we try to build a more
"structured" approach to
the problem of hooking a custom
solver to AMPL, by providing a sort
of reference guide to the many
library routines that one needs to
call from one's own code. This rough
document is still just an
"add-on" to Gay's
guide and it is intended to help
programmers better understand how
to call a given routine and where
to find descriptions, examples and
suggestions on its use. These are
the easier tasks. The most difficult
tasks, that is to say which, why and
when a routine must or can be
called, are still under development
and the programmers are referred to
Gay's guide.
^{[
Printed version available on request from the authors
]}
[Top]
 [CGMS03]
L.G. Casado, I. García, J.A.
Martínez, Ya.D. Sergeyev (2003),
New interval analysis support functions
using gradient information in a
global minimization algorithm,
Journal of Global Optimization 25(4),
345-362.
Abstract.
The performance of interval analysis
branch-and-bound global
optimization algorithms
strongly depends on the efficiency
of selection, bounding, elimination,
division, and termination rules used
in their implementation. All the
information obtained during
the search process has to be taken
into account in order to increase
algorithm efficiency, mainly when
this information can be obtained
and elaborated without
additional cost (in comparison with
traditional approaches). In
this paper a new way to calculate
interval analysis support functions
for multiextremal univariate
functions is presented. The new
support functions are based on
obtaining the same kind of
information used in interval
analysis global optimization
algorithms. The new support
functions enable us to develop more
powerful bounding, selection,
and rejection criteria and, as
a consequence, to significantly
accelerate the search. Numerical
comparisons made on a wide set of
multiextremal test functions have
shown that on average the new
algorithm works almost two times
faster than a traditional interval
analysis global optimization method.
[Top]
 [DR03a]
C. Durazzi, V. Ruggiero (2003),
A Newton Inexact Interior-Point Method
for Large Scale Nonlinear
Optimization Problems,
Ann. Univ. Ferrara
49, 333-357.
Abstract.
In this paper, we describe a variant
of the Newton Interior-Point method
of El-Bakry, Tapia et al., JOTA
1996, for nonlinear programming
problems. In this scheme, the
perturbation parameter can be chosen
within a range of values and an
iterative method can be used for
solving the reduced linear system
arising at each step. We devise an
inner termination rule which
guarantees the global convergence of
this Newton Inexact Interior-Point
method. We remark that the required
assumptions are weaker than those
stated in El-Bakry, Tapia et al.,
JOTA 1996, as shown by some
numerical examples.
^{[
Printed version available on request from the authors
]}
[Top]
 [DR03b]
C. Durazzi, V. Ruggiero
(2003),
Numerical
Solution of Special Linear and
Quadratic Programs via a Parallel
Interior-Point Method, Parallel
Computing 29(4), 485-503.
Abstract.
This paper concerns a parallel
inexact InteriorPoint (IP) method
for solving linear and quadratic
programs with a special structure in
the constraint matrix and in the
objective function. In order to
exploit these features, a
Preconditioned Coniugate Gradient
(PCG) algorithm is used to
approximately solve the normal
equations or the reduced KKT system
obtained from the linear inner
system arising at each iteration of
the IP method. A suitable adaptive
termination rule for the PCG method
enables to save computing time at
the early steps of the outer scheme
and, at the same time, it assures
the global and the local superlinear
convergence of the whole method. We
analyse a parallel implementation of
the method, referring some kinds of
meaningful largescale problems. In
particular we discuss the data
allocation and the workload
distribution among the processors.
The results of a numerical
experimentation carried out on Cray
T3E and SGI Origin 3800 show a good
scalability of the parallel code and
confirm the effectiveness of the
method for problems with special
structure.
[ Printed version available on request from the authors ]
[Top]
 [DRZ03]
C. Durazzi, V. Ruggiero, G. Zanghirati
(2003),
Solving a
Special Class of Discrete Optimal Control
Problems via a Parallel
Interior-Point Method, in
"Equilibrium Problems and
Variational Models", P.
Daniele, F. Giannessi, A. Maugeri,
Eds., Kluwer Academic Publishers,
141-161.
Abstract.
This paper is concerned with a
parallel primal-dual Interior-Point
method combined with the
preconditioned conjugate gradient
algorithm. Recently, the global
convergence of this scheme has been
proved and this approach has been
used for solving linear stochastic
programming and robust optimization
problems on parallel computers.
In this paper, we analyse the method
for a more general formulation of a
constrained quadratic programming
problem and we provide the
hypotheses for the global
convergence of the obtained scheme.
A largescale application related to
a class of discrete quadratic
optimal control problems is
presented: the numerical results on
parallel architectures confirm the
effectiveness of the IPPCG method on
programs with special structure.
[ Printed version available on request from the authors ]
[Top]
 [GRZ03]
E. Galligani, V. Ruggiero, L. Zanni
(2003),
Variable Projection
Methods for Large-Scale Quadratic
Optimization in Data Analysis
Applications, in
"Equilibrium Problems and
Variational Models", P.
Daniele, F. Giannessi, A. Maugeri,
Eds., Kluwer Academic Publishers
(2003), 186-211.
Abstract.
The paper concerns the
numerical evaluation of the variable
projection method for quadratic
programming problems in three data
analysis applications. The three
applications give rise to
large-scale quadratic programs with
remarkable differences in the
Hessian definition and/or in the
structure of the constraints.
[Top]
 [GKLS03]
M. Gaviano, D.E. Kvasov, D. Lera, Ya.D. Sergeyev
(2003),
Algorithm 829: Software for Generation of Classes of
Test Functions with
Known Local and Global Minima for Global
Optimization,
ACM Transactions on Mathematical Software 29(4), 469-480.
Abstract.
A procedure for generation of
nondifferentiable, continuously
differentiable, and twice
continuously differentiable classes
of test functions for multiextremal
multidimensional box-constrained
global optimization and a
corresponding package of C
subroutines are presented. Each test
class consists of 100 functions.
Test functions are generated by
defining a convex quadratic function
systematically distorted by
polynomials in order to introduce
local minima. To determine a class,
the user defines the following
parameters: (i) problem dimension,
(ii) number of local minima, (iii)
value of the global minimum, (iv)
radius of the attraction region of
the global minimizer, (v) distance
from the global minimizer to the
vertex of the quadratic function.
Then, all the other necessary
parameters are generated randomly
for all 100 functions of the class.
Rich information about each test
function including locations and
values of all local minima is
supplied to the user. Partial
derivatives are also generated where
it is possible.
[ Printed version available on request from the authors ]
[Top]
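As a rough illustration of the kind of objects this package produces, here is a minimal Python sketch of a "paraboloid plus planted local minima" test function. It is not the Algorithm 829 construction (which uses polynomial distortions so that the locations and values of all minima are known and controlled, and honours the five user parameters listed above); the Gaussian "dents", constants, and names below are invented for illustration only.

```python
import math
import random

def make_toy_test_function(n=2, num_minima=5, seed=0):
    # Toy generator in the spirit of GKLS: a convex paraboloid
    # ||x - vertex||^2 minus random Gaussian "dents" that create
    # local minima. NOT the actual Algorithm 829 construction.
    rng = random.Random(seed)
    vertex = [rng.uniform(-1.0, 1.0) for _ in range(n)]        # paraboloid vertex
    centers = [[rng.uniform(-1.0, 1.0) for _ in range(n)]
               for _ in range(num_minima)]                     # dent locations
    depths = [rng.uniform(0.5, 2.0) for _ in range(num_minima)]

    def f(x):
        quad = sum((xi - vi) ** 2 for xi, vi in zip(x, vertex))
        dents = sum(d * math.exp(-8.0 * sum((xi - ci) ** 2
                                            for xi, ci in zip(x, c)))
                    for d, c in zip(depths, centers))
        return quad - dents

    return f, vertex
```

Unlike this sketch, the real generator also reports the attraction region of the global minimizer and supplies partial derivatives for the smooth classes.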
 [KPS03]
D.E. Kvasov, C. Pizzuti, Ya.D.
Sergeyev (2003), Local tuning and
partition strategies for diagonal GO
methods, Numerische Mathematik 94(1),
93-106.
Abstract.
In this paper, global optimization
(GO) Lipschitz problems are
considered where the
multidimensional multiextremal
objective function is determined
over a hyperinterval. An
efficient one-dimensional GO method
using local tuning on the behavior
of the objective function is
generalized to the multidimensional
case by the diagonal approach
using two partition strategies.
Global convergence conditions are
established for the obtained
diagonal geometric methods. Results
of a wide numerical comparison
show a strong acceleration
reached by the new methods working
with estimates of the local
Lipschitz constants over different
subregions of the search domain in
comparison with the
traditional approach.
[Top]
 [KS03]
D.E. Kvasov, Ya.D. Sergeyev (2003),
Multidimensional
global optimization algorithm based
on adaptive diagonal curves,
Comput. Maths. Math. Phys. 43(1),
40-56.
Abstract.
A classical global optimization
problem is considered: minimization
of a multidimensional multiextremal
function that is Lipschitzian on a
hyperinterval. A new information
statistical algorithm for solving
this problem is suggested. The new
method relies on adaptive diagonal
curves that combine the ideas of
diagonal algorithms and Peano
curves. Conditions for global
convergence of the suggested
algorithm are established. The
results of extensive numerical
experiments are presented to
demonstrate the advantage of the new
method as compared to
traditional diagonal global
optimization algorithms. The
experiments corroborate the
theoretical conclusion
that the advantage of the new method
increases with the problem
dimension.
[Top]
 [LLZ03]
G. Landi, E. Loli Piccolomini, F. Zama
(2003),
A Parallel Software for the Reconstruction
of Dynamic MRI Sequences,
Proceedings of the 10th European meeting on PVM/MPI, 107-119.
Abstract.
In this paper we present a parallel version of
an existing Matlab software for dynamic Magnetic
Resonance Imaging which implements a reconstruction
technique based on B-spline Reduced-encoding Imaging
by Generalized-series Reconstruction. The parallel
primitives used are provided by MatlabMPI.
The parallel Matlab application is tested
on a network of Linux workstations.
[ Printed version available on request from the authors ]
[Top]
 [LZ03]
E. Loli Piccolomini, F. Zama
(2003),
An Experiment of parallel SPECT Data
Reconstruction, Parallel
Algorithms and Applications 18(3), 107-119.
Abstract.
In this work we consider the problem
of reconstructing human brain
sections from experimental data
obtained from a Gamma camera
equipped with parallel hole
collimators. We compute least
squares regularized solutions
by means of weighted conjugate
gradient iterations coupled with a
suitable stopping rule. The
computations are distributed to the
CRAY T3E parallel processors. In
particular, two decomposition
strategies are examined and the
results are compared.
[ Printed version available on request from the authors ]
[Top]
 [SZZ03]
T. Serafini, G. Zanghirati, L. Zanni
(2003),
Large Quadratic Programs
in Training Gaussian Support Vector
Machines,
Rendiconti di Matematica e delle sue Applicazioni,
Università di Roma "La
Sapienza", Serie VII, Vol. 23,
257-275.
Abstract.
We consider the numerical solution
of the large convex quadratic
program arising in training the
learning machines named
\emph{support vector machines}.
Since the matrix of the quadratic
form is dense and generally large,
solution approaches based on
explicit storage of this matrix are
not practicable. Well known
strategies for this quadratic
program are based on decomposition
techniques that split the problem
into a sequence of smaller quadratic
programming subproblems. For the
solution of these subproblems we
present an iterative projection-type
method suited for the structure of
the constraints and very effective
in case of Gaussian support vector
machines. We develop an appropriate
decomposition technique designed to
exploit the high performance of the
proposed inner solver on medium or
large subproblems. Numerical
experiments on large-scale benchmark
problems allow us to compare this
approach with another widely used
decomposition technique. Finally, a
parallel extension of the proposed
strategy is described.
[ Printed version available on request from the authors ]
[Top]
 [SPF03]
Ya.D. Sergeyev, P. Pugliese, D. Famularo
(2003),
Index
information algorithm with local
tuning for solving multidimensional
global optimization problems with
multiextremal constraints,
Mathematical Programming 96(3),
489-512.
Abstract.
Multidimensional optimization
problems where the objective
function and the constraints
are multiextremal nondifferentiable
Lipschitz functions (with unknown
Lipschitz constants) and the
feasible region is a finite
collection of robust nonconvex
subregions are considered. Both the
objective function and the
constraints may be partially
defined. To solve such problems an
algorithm is proposed, that uses
Peano space-filling curves and the
index scheme to reduce the original
problem to a Hölder onedimensional
one. Local tuning on the behaviour
of the objective function and
constraints is used during the work
of the global optimization procedure
in order to accelerate the search.
The method neither uses penalty
coefficients nor additional
variables. Convergence conditions
are established. Numerical
experiments confirm the good
performance of the technique.
[ Printed version available on request from the authors ]
[Top]
 [SS03a]
M. Sofroniou, G. Spaletta (2003),
Increment formulations for rounding error reduction
in the numerical solution of structured differential systems
,
Future Generation Computer Systems 19(3),
375-383.
Abstract.
Strategies for reducing the effect of cumulative rounding errors
in geometric numerical integration are outlined.
The focus is, in particular, on the solution of separable Hamiltonian
systems using explicit symplectic integration methods and on
solving orthogonal matrix differential systems using projection.
Examples are given that demonstrate the advantages of an increment
formulation over the standard implementation of conventional integrators.
We describe how the aforementioned special purpose integration methods
have been set up in a uniform, modular and extensible framework
being developed in the problem solving environment Mathematica.
[ Printed version available on request from the authors ]
[Top]
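The papers' increment formulations live inside Mathematica's integration framework; the generic numerical idea, though, can be illustrated in a few lines. The sketch below (plain Python, invented names) contrasts a standard state update with an increment-style update using Kahan compensated summation, where a running compensation term recovers the low-order bits lost when many small step increments are added to a large state.

```python
def accumulate_naive(increments):
    # Standard formulation: add each step's increment straight into the state.
    s = 0.0
    for d in increments:
        s += d
    return s

def accumulate_compensated(increments):
    # Increment formulation with Kahan compensated summation: the running
    # compensation term c captures the round-off lost at each addition
    # and feeds it back into the next increment.
    s, c = 0.0, 0.0
    for d in increments:
        y = d - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

# One million steps of size 1e-6: the exact total is 1.0.
steps = [1e-6] * 10**6
naive = accumulate_naive(steps)
comp = accumulate_compensated(steps)
```

The compensated total stays within a few units of machine precision of 1.0, while the naive accumulation drifts; the same mechanism is what an increment formulation buys over a standard implementation in a long integration.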
 [SS03b]
M. Sofroniou, G. Spaletta (2003),
On the Construction of a new generalization of Runge-Kutta Methods
,
Electronic Notes in Theoretical Computer Science 74,
1-18.
Abstract.
We give an overview of the construction of algebraic conditions for
determining the order of Runge-Kutta methods and describe a novel
extension for numerically solving systems of differential equations.
The new schemes, called Elementary Differential Runge-Kutta methods,
include as a subset Runge-Kutta methods, Taylor series methods, and
multiderivative Runge-Kutta methods.
We outline how order conditions have been constructed for the new
schemes using B-series and their composition and give details relating
to a Mathematica implementation.
[ Printed version available on request from the authors ]
[Top]
 [SS03c]
M. Sofroniou, G. Spaletta (2003),
Symmetric
Composition of Symmetric Numerical
Integration Methods, submitted.
Abstract.
This work focuses on the derivation
of composition methods for the
numerical integration of ordinary
differential equations, which give
rise to very challenging
optimization problems. Composition
is a very useful technique for
constructing high order
approximations whilst conserving
certain geometric properties. We
survey existing composition methods
and describe results of an intensive
numerical search for new methods.
Details of the search procedure are
given along with numerical examples
which indicate that the new methods
perform better than previously known
methods. Some insight into the
location of global minima for these
problems is obtained as a result.
[ Printed version available on request from the authors ]
[Top]
 [StS03]
R.G. Strongin, Ya.D. Sergeyev
(2003),
Global optimization:
fractal approach and nonredundant
parallelism, Journal of Global
Optimization 27(1), 25-50.
Abstract.
More and more optimization problems
arising in practice cannot be
solved by traditional optimization
techniques making strong
suppositions about the problem
(differentiability, convexity,
etc.). This happens because very
often in reallife problems both the
objective function and constraints
can be multiextremal,
nondifferentiable, partially
defined, and hard to be evaluated.
In this paper, a modern
approach for solving such problems
(called global optimization
problems) is described. This
approach combines the following
innovative and powerful tools:
fractal approach for reduction of
the problem dimension, index scheme
for treating constraints,
nonredundant parallel computations
for accelerating the search. Throughout
the paper, rigorous theoretical
results are illustrated by figures
and numerical examples.
[Top]
 [ZZ03]
G. Zanghirati, L. Zanni
(2003),
A Parallel Solver for Large Quadratic
Programs in Training Support Vector
Machines,
Parallel Computing 29(4), 535-551.
Abstract.
This work is concerned with the
solution of the convex quadratic
programming problem arising in
training the learning machines named
support vector machines. The
problem is subject to box
constraints and to a single linear
equality constraint; it is dense
and, for many practical
applications, it becomes a
large-scale problem. Thus,
approaches based on explicit storage
of the matrix of the quadratic form
are not practicable. Here we present
an easily parallelizable approach
based on a decomposition technique
that splits the problem into a
sequence of smaller quadratic
programming subproblems. These
subproblems are solved by a variable
projection method that is well
suited to a parallel implementation
and is very effective in the case of
Gaussian support vector machines.
Performance results are presented on
well-known large-scale test
problems, in scalar and parallel
environments. The numerical results
show that the approach is comparable
on scalar machines with a widely
used technique and can achieve good
efficiency and scalability on a
multiprocessor system.
[ Printed version available on request from the authors ]
[Top]
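The feasible set of each SVM subproblem above is a box intersected with a single linear equality constraint, and a projection-type inner solver repeatedly projects onto that set. As a hedged illustration (not the paper's variable projection solver, whose steps are more elaborate), the projection itself reduces to a one-dimensional search on the multiplier of the equality constraint; the helper below, with invented names and a feasibility assumption, sketches it via bisection.

```python
def project_box_equality(z, l, u, a, b, tol=1e-10):
    # Projection of z onto {x : l <= x <= u, a^T x = b} by bisection on the
    # Lagrange multiplier lam of the equality constraint:
    #   x_i(lam) = clip(z_i + lam * a_i, l_i, u_i),
    # where phi(lam) = a^T x(lam) - b is monotone nondecreasing in lam.
    # Assumes the feasible set is nonempty and the bracket below is wide enough.
    def clip(v, lo, hi):
        return max(lo, min(hi, v))

    def phi(lam):
        return sum(ai * clip(zi + lam * ai, li, ui)
                   for zi, li, ui, ai in zip(z, l, u, a)) - b

    lo, hi = -1e6, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    lam = 0.5 * (lo + hi)
    return [clip(zi + lam * ai, li, ui)
            for zi, li, ui, ai in zip(z, l, u, a)]
```

For instance, projecting (2, 2) onto the set {0 <= x <= 1, x1 + x2 = 1} returns (0.5, 0.5); because each projection costs O(n log(1/tol)), such solvers avoid storing the dense Hessian entirely.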
 [CGS02]
L.G. Casado, I. García, Ya.D.
Sergeyev (2002), Interval
algorithms for finding the minimal
root in a set of multiextremal
nondifferentiable one-dimensional
functions, SIAM Journal on
Scientific Computing 24(2),
359-376.
Abstract.
Two problems very often arising in
applications are considered.
The first problem consists of
finding the minimal root of an
analytic one-dimensional
function over a given interval. It
is supposed that the objective
function can be multiextremal and
nondifferentiable. Three new
algorithms based on Interval
Analysis and Branch-and-Bound Global
Optimization approaches are proposed
for solving this problem. The
novelty of the new algorithms is
basically in improving the
elimination criteria and the order
in which interval and point
evaluations are realized. The
introduced techniques accelerate the
search in comparison with interval
analysis methods traditionally used
for finding roots of
equations. The second problem
considered in the paper is a
generalization of the first
one and deals with the search of the
minimal root in a set of
multiextremal and
nondifferentiable functions. The
methods proposed for solving the
first problem are generalized
for solving the second. The main
idea is in using the information
obtained from any of the functions
to reduce the search domain
associated to all the functions.
Numerical experiments carried out on
a wide set of test functions
demonstrate a quite satisfactory
performance of the new algorithms in
both cases.
[Top]
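The rejection idea behind such branch-and-bound root finders can be sketched compactly. The toy routine below is not the paper's interval algorithm: it substitutes a Lipschitz constant L (assumed valid) for a genuine interval-arithmetic enclosure, and uses the resulting bound to discard subintervals that cannot contain a root, scanning left to right so the first surviving tiny interval approximates the minimal root.

```python
def minimal_root(f, a, b, L, tol=1e-6):
    # Leftmost root of f on [a, b]. A subinterval [lo, hi] with midpoint m
    # can contain a root only if |f(m)| <= L * (hi - lo) / 2, since
    # |f(x)| >= |f(m)| - L * |x - m|; all other subintervals are rejected.
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop(0)              # leftmost candidate first
        m = 0.5 * (lo + hi)
        if abs(f(m)) <= L * (hi - lo) / 2.0:
            if hi - lo < tol:
                return m                   # first accepted tiny interval
            stack.insert(0, (m, hi))       # keep left-to-right order:
            stack.insert(0, (lo, m))       # [lo, m] is examined next
        # otherwise f cannot vanish on [lo, hi]: discard it
    return None
```

On f(x) = (x - 1)(x - 3) over [0, 4] with L = 6 this returns a point near x = 1, the minimal of the two roots; the paper's interval enclosures and evaluation ordering make the same pruning far sharper.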
 [DR02]
C. Durazzi, V. Ruggiero
(2002),
Indefinitely
Preconditioned Conjugate Gradient
Method for Large Sparse Equality and
Inequality Constrained Quadratic
Problems,
Numerical Linear Algebra with Applications 10,
673-688.
Abstract.
This paper is concerned with the
numerical solution of a symmetric
indefinite system which is a
generalization of the
Karush-Kuhn-Tucker system. Following
the recent approach of Luksan and
Vlcek, we propose to solve this
system by a Preconditioned Conjugate
Gradient (PCG) algorithm and we
devise two indefinite
preconditioners with good
theoretical properties. In
particular, for one of these
preconditioners, the finite
termination property of the PCG
method is stated. The PCG method
combined with a parallel version of
these preconditioners is used as
inner solver within an inexact
Interior-Point (IP) method for the
solution of large and sparse
quadratic programs. The numerical
results obtained by a parallel code
implementing the IP method on
distributed memory multiprocessor
systems enable us to confirm the
effectiveness of the proposed
approach for problems with special
structure in the constraint matrix
and in the objective function.
[ Printed version available on request from the authors ]
[Top]
 [FSP02]
D. Famularo, Ya.D. Sergeyev, P. Pugliese (2002),
Test Problems
for Lipschitz univariate global
optimization with
multiextremal constraints, in
Stochastic and Global Optimization,
G. Dzemyda, V. Saltenis, and A.
Zilinskas, eds., Kluwer Academic
Publishers, Dordrecht, 93-110.
Abstract.
In this paper, Lipschitz univariate
constrained global optimization
problems where both the objective
function and constraints can be
multiextremal are considered. Two
sets of test problems are
introduced, in the first one both
the objective function and
constraints are differentiable
functions and in the second one they
are nondifferentiable. Each series
of tests contains 3 problems with
one constraint, 4 problems with 2
constraints, 3 problems with 3
constraints, and one infeasible
problem with 2 constraints.
All the problems are shown in
Figures. Lipschitz constants and
global solutions are given. For each
problem it is indicated whether the
optimum is located on the boundary
or inside a feasible subregion and
the number of disjoint feasible
subregions is given. Results of
numerical experiments executed with
the introduced test problems using
Pijavskii's method combined
with a nondifferentiable penalty
function are presented.
[Top]
 [GL02]
M. Gaviano, D. Lera (2002), A
complexity analysis of local search
algorithms in global optimization,
Optimization Methods and Software 17(1),
113-127.
Abstract.
In this paper a complexity analysis
is performed for an algorithm which
solves the global optimization
problem (P): minimize a real
function f(x) in a subset S
of R^{n}. The
algorithm yields local minimizations
of (P) starting from points chosen
at random in S, and then the
global minimum is selected among the
local ones found. The local search
algorithm is based on the
steepest-descent method combined
either with an exact line search or
with the Armijo procedure. In the
first case it is found that the
number of line searches required to
solve (P) within a fixed accuracy
depends linearly on the problem
dimension; in the second case a
similar result involving the number
of function evaluations is
established.
[Top]
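The multistart scheme analysed above is easy to state in code. The sketch below is a minimal Python illustration, not the paper's analysed algorithm: it uses steepest descent with the Armijo backtracking rule (one of the two line searches the paper considers) and random starting points in a box; the parameter values are conventional defaults chosen for the example.

```python
import random

def armijo_descent(f, grad, x0, alpha0=1.0, beta=0.5, sigma=1e-4,
                   tol=1e-8, max_iter=500):
    # Steepest descent with the Armijo rule: shrink the step alpha until
    # f(x - alpha * g) <= f(x) - sigma * alpha * ||g||^2.
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 < tol:               # gradient small: near a stationary point
            break
        fx = f(x)
        alpha = alpha0
        while f([xi - alpha * gi for xi, gi in zip(x, g)]) \
                > fx - sigma * alpha * gnorm2:
            alpha *= beta
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

def multistart(f, grad, n, box, starts=20, seed=1):
    # Local minimizations from random points in S = box^n; the global
    # minimum is then selected among the local minima found.
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(starts):
        x0 = [rng.uniform(box[0], box[1]) for _ in range(n)]
        x = armijo_descent(f, grad, x0)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f
```

On f(x) = x^4 - x^2 over [-2, 2], for example, the scheme finds the global minimum value -1/4; the paper's contribution is bounding how the work of such a scheme grows with the problem dimension.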
 [LS02]
D. Lera, Ya.D. Sergeyev (2002),
Global
minimization algorithms for Hölder
functions, BIT 42(1),
119-133.
Abstract.
This paper deals with the
one-dimensional global optimization
problem where the objective function
satisfies Hölder condition over a
closed interval. A direct extension
of the popular Piyavskii method
proposed for Lipschitz functions to
Hölder optimization requires an a
priori estimate of the Hölder
constant and solution to an equation
of degree N at each iteration. In
this paper a new scheme is
introduced. Three algorithms are
proposed for solving one-dimensional
Hölder global optimization
problems. All of them work without
solving equations of degree N. The
case (very often arising in
applications) when the Hölder constant
is not given a priori is considered.
It is shown that local information
about the objective function used
inside the global procedure can
accelerate the search significantly.
Numerical experiments show quite
promising performance of the new
algorithms.
[ Printed version available on request from the authors ]
[Top]
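For context, the classical Lipschitz scheme that the paper extends can be sketched in a few lines. This is Piyavskii's method for a Lipschitz (not Hölder) function with known constant L, given here only as a baseline; the paper's algorithms generalize the idea to Hölder optimization while avoiding both the a priori constant and the degree-N equations a direct extension would require.

```python
import math

def piyavskii_min(f, a, b, L, iters=100):
    # Over each subinterval [x1, x2], the sawtooth lower bound
    #   max(f(x1) - L*(x - x1), f(x2) - L*(x2 - x))
    # attains its minimum at a point computable in closed form; sample f
    # there in the subinterval whose bound is currently lowest.
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(iters):
        best = None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            xm = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)
            low = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if best is None or low < best[0]:
                best = (low, xm)
        pts.append((best[1], f(best[1])))
        pts.sort()
    return min(v for _, v in pts)
```

Applied to sin on [0, 2*pi] with L = 1.1, a hundred iterations locate the minimum value -1 to within a few percent; the local-information variants studied in the paper accelerate exactly this kind of search.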
 [LZZ02]
E. Loli Piccolomini, F. Zama, G. Zanghirati
(2002),
Regularization
Methods in Dynamic MRI,
Applied Mathematics and Computation
132, 325-339.
Abstract.
In this work we consider an inverse
ill-posed problem coming from the
area of dynamic Magnetic Resonance
Imaging (MRI), where high resolution
images must be reconstructed from
incomplete data sets collected in
the Fourier domain. The RIGR
(Reduced-encoding Imaging by
Generalized-series Reconstruction)
method used leads to ill-conditioned
linear systems with Hermitian
Toeplitz matrix and noisy
right-hand side. We analyze the
behaviour of some regularization
methods, such as the Truncated
Singular Value Decomposition, the
Lavrent’yev regularization method
and Conjugate Gradients type
iterative methods. Regarding
the choice of the regularization
parameter, we use some known methods
and we propose new heuristic
criteria for iterative
regularization methods. The
simulations are carried out on test
problems obtained from real data
acquired on a 1.5 Tesla Philips
system.
Keywords: regularization methods, Magnetic Resonance images, TSVD, Lavrent’yev, Conjugate Gradients, regularization parameter.
[ Printed version available on request from the authors ]
[Top]
 [MPS02]
A. Molinaro, C. Pizzuti, Ya.D.
Sergeyev (2002), A diagonal
global optimization method, in
Combinatorial and Global
Optimization, P.M. Pardalos, A.
Migdalas, and R. Burkard, eds.,
World Scientific, London, 251-263.
Abstract.
In this paper we consider the
classical global optimization
problem of searching the global
minimum of a multiextremal
multidimensional Lipschitz function
over a hyperinterval. We present a
new multidimensional diagonal
algorithm belonging to the class of
information global optimization
methods. It is supposed that the
Lipschitz constant is unknown
and only the values of the objective
function can be evaluated. The new
method uses local tuning on the
behaviour of the objective function
to accelerate the search by adaptively
estimating local Lipschitz constants
during minimization. Sufficient
convergence conditions are
established for the proposed
technique. Numerical examples are
also reported.
[Top]
 [SS02a]
M. Sofroniou, G. Spaletta (2002),
Solving orthogonal matrix differential systems in Mathematica
,
Lecture Notes in Computer Science 2331,
496-505.
Abstract.
A component of a new environment for the numerical solution
of ordinary differential equations in Mathematica is outlined.
We briefly describe how special purpose integration methods can be
constructed to solve structured dynamical systems.
In particular we focus on the solution of orthogonal matrix differential
systems using projection.
Examples are given to illustrate the advantages of a projection scheme
over conventional integration methods.
[ Printed version available on request from the authors ]
[Top]
 [SS02b]
M. Sofroniou, G. Spaletta (2002),
Symplectic methods for separable Hamiltonian systems
,
Lecture Notes in Computer Science 2331,
506-515.
Abstract.
This paper focuses on the solution of separable Hamiltonian systems
using explicit symplectic integration methods.
Strategies for reducing the effect of cumulative rounding errors are
outlined and advantages over a standard formulation are demonstrated.
Procedures for automatically choosing appropriate methods are also
described.
[ Printed version available on request from the authors ]
[Top]

