Variational Analysis and Monotone Operator Theory
Org:
Heinz Bauschke (University of British Columbia),
Walaa Moursi (Stanford University) and
Xianfu Wang (University of British Columbia)
[PDF]
 MINH BUI, North Carolina State University
On sums and convex combinations of projectors onto convex sets [PDF]

The projector onto the Minkowski sum of closed convex sets is generally not equal to the sum of the individual projectors. In this talk, we provide a complete answer to the question of characterizing the instances where such an equality holds. Our results unify and extend the case of linear subspaces and Zarantonello's results for projectors onto cones. The question "When is a convex average of projectors a projector?" is also discussed. In addition, we present the partial sum property for projectors onto convex cones, as well as the univariate case. This talk is based on joint work with Heinz Bauschke and Xianfu Wang.
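As a small numerical illustration (not from the talk itself), consider projectors onto two lines through the origin in $\mathbb{R}^2$: when the lines are orthogonal, the sum of the projectors agrees with the projector onto their Minkowski sum, while an oblique pair already breaks the equality.

```python
import numpy as np

def proj_line(d):
    """Orthogonal projector onto the line spanned by direction d."""
    d = np.asarray(d, dtype=float)
    return np.outer(d, d) / d.dot(d)

# Two lines in R^2: A = x-axis, B = a second line through the origin.
# A + B = R^2 whenever the lines are distinct, so P_{A+B} is the identity.
PA = proj_line([1.0, 0.0])
x = np.array([0.0, 1.0])

# Orthogonal case: B = y-axis.  Here P_A + P_B = Id = P_{A+B}.
PB_orth = proj_line([0.0, 1.0])
assert np.allclose(PA @ x + PB_orth @ x, x)

# Oblique case: B = span{(1, 1)}.  The sum of projectors is NOT P_{A+B}.
PB_obl = proj_line([1.0, 1.0])
print(PA @ x + PB_obl @ x)   # [0.5 0.5], but P_{A+B} x = x = [0. 1.]
assert not np.allclose(PA @ x + PB_obl @ x, x)
```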
 MICHAEL P. FRIEDLANDER, University of British Columbia
 WEI LI, Chengdu University of Technology
Differential inverse variational inequalities in finite dimensional spaces [PDF]

A new differential inverse variational inequality is introduced and studied in finite-dimensional Euclidean spaces. Results on the linear growth of the solution set of the inverse variational inequality are obtained under various conditions. Existence theorems for Carath\'{e}odory weak solutions of the differential inverse variational inequality are also established under suitable conditions. An application to the time-dependent spatial price equilibrium control problem is also given.
 ZHAOSONG LU, Simon Fraser University
Exact penalty results on sparse recovery models based on partial regularization [PDF]

In the context of sparse recovery, it is known that most existing regularizers, such as L1, suffer from a bias incurred by the leading entries (in magnitude) of the associated vector. We propose a class of models with partial regularizers to neutralize this bias when recovering sparse vectors. We also study theoretical properties of these models, including sparsity inducement, local or global recovery, and stable recovery. Under suitable assumptions, we show that the penalty formulation based on a partial regularization is an exact reformulation of the original problem, in the sense that they share the same global minimizers. Furthermore, we show that any local minimizer of the original problem is a local minimizer of the penalty reformulation. In addition, a first-order feasible augmented Lagrangian (FAL) method is proposed for solving these models, in which each subproblem is solved by a nonmonotone proximal gradient (NPG) method. The global convergence of this method is also established. Numerical results on compressed sensing and sparse logistic regression demonstrate that the proposed models substantially outperform the widely used ones in the literature in terms of solution quality.
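To make the idea of a partial regularizer concrete, here is a minimal sketch (the function name and the toy vector are illustrative, not taken from the paper): an $\ell_1$-type penalty applied to all but the $r$ largest entries in magnitude, so the leading entries incur no bias.

```python
import numpy as np

def partial_l1(x, r):
    """Partial l1 regularizer: sum of |x_i| over all but the r largest
    entries in magnitude (r = 0 recovers the usual l1 norm)."""
    a = np.sort(np.abs(np.asarray(x, dtype=float)))[::-1]
    return a[r:].sum()

x = np.array([10.0, -5.0, 0.1, 0.0])
# Full l1 penalizes the large (leading) entries: value 15.1.
assert abs(partial_l1(x, 0) - 15.1) < 1e-9
# With r = 2, the two leading entries are exempt: only 0.1 remains.
assert abs(partial_l1(x, 2) - 0.1) < 1e-9
```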
 WALAA MOURSI, Stanford University
Reflected resolvents in the Douglas-Rachford algorithm: order of the operators and linear convergence [PDF]

The Douglas-Rachford algorithm is a popular method for finding zeros of sums of monotone operators. By its definition, the Douglas-Rachford operator is not symmetric with respect to the order of the two operators. In this talk we report a systematic study of the two possible Douglas-Rachford operators. We show that the reflectors of the underlying operators act as bijections between the fixed point sets of the two Douglas-Rachford operators. Some elegant formulae arise under additional assumptions. Results on linear rates of convergence are also presented. Various examples illustrate our results.
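A minimal numerical sketch (using projectors onto two lines as a special case of the monotone-operator setting; the setup is ours, not the talk's): both orderings of the Douglas-Rachford operator drive the shadow sequence to the intersection, here the origin.

```python
import numpy as np

def proj_line(d):
    """Projector onto the line spanned by d, as a function."""
    d = np.asarray(d, dtype=float)
    return lambda x: d * (d @ x) / (d @ d)

# Projectors onto two lines in R^2 that meet only at the origin.
PA = proj_line([1.0, 0.0])   # x-axis
PB = proj_line([1.0, 1.0])   # line y = x

def douglas_rachford(P1, P2, x, n=200):
    """Iterate T = Id - P2 + P1(2 P2 - Id); the shadow P2(x_n)
    converges to a point of the intersection."""
    for _ in range(n):
        x = x - P2(x) + P1(2.0 * P2(x) - x)
    return x

x0 = np.array([3.0, 1.0])
xAB = douglas_rachford(PA, PB, x0)   # order (A, B)
xBA = douglas_rachford(PB, PA, x0)   # order (B, A)

# Both orders solve the same problem: shadows converge to A ∩ B = {0}.
assert np.allclose(PB(xAB), 0.0, atol=1e-8)
assert np.allclose(PA(xBA), 0.0, atol=1e-8)
```

For two lines meeting at angle $\theta$, the convergence here is linear with rate $\cos\theta$, which is why 200 iterations suffice comfortably.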
 MAU NAM NGUYEN, Portland State University
Smoothing Techniques and DC Programming with Applications to Image Reconstructions [PDF]

In this talk we present some characterizations of differentiability for real-valued functions based on generalized differentiation. These characterizations provide the mathematical foundation for Nesterov's smoothing techniques in infinite dimensions. As an application, we provide a simple approach to image reconstruction based on Nesterov's smoothing techniques and the DCA (Difference of Convex functions Algorithm) that involves $\ell_1$-$\ell_2$ regularization.
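A rough sketch of how DCA can handle the $\ell_1$-$\ell_2$ regularizer (all data, parameters, and the inner proximal-gradient solver are illustrative assumptions, not the authors' implementation): the concave part $-\lambda\|x\|_2$ is linearized at each outer step, and the resulting convex subproblem is solved approximately by ISTA.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dca_l1_minus_l2(A, b, lam, outer=20, inner=100):
    """Minimize 0.5||Ax-b||^2 + lam*(||x||_1 - ||x||_2) by DCA:
    linearize -lam*||x||_2 at x_k, then take ISTA steps on the
    convex subproblem."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(outer):
        nx = np.linalg.norm(x)
        v = x / nx if nx > 0 else np.zeros_like(x)   # subgradient of ||.||_2
        for _ in range(inner):
            grad = A.T @ (A @ x - b) - lam * v
            x = soft(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8); x_true[[1, 5]] = [3.0, -2.0]
b = A @ x_true

x = dca_l1_minus_l2(A, b, lam=0.1)
obj = lambda z: 0.5 * np.sum((A @ z - b) ** 2) \
    + 0.1 * (np.abs(z).sum() - np.linalg.norm(z))
assert obj(x) <= obj(np.zeros(8))   # DCA monotonically decreases the objective
```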
 HUI OUYANG, UBC Okanagan
Circumcenters of finite sets with applications in Hilbert spaces [PDF]

First, we present basic results and properties of the circumcenter of sets containing finitely many points in Hilbert space. This is motivated by two papers of Behling, Bello Cruz, and Santos on accelerated versions of the Douglas-Rachford method. Then we introduce two new notions: the circumcenter mapping induced by operators, and the properness of the circumcenter mapping. Some properties of the circumcenter mapping will be given. Finally, we talk about applications of proper circumcenter mappings. Several examples are provided to illustrate the tightness of various assumptions. In particular, the performance profile is used to compare the convergence rate of some of our proper circumcenter mappings with that of the method of alternating projections and the Douglas-Rachford method.
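As an illustration of the basic object (a sketch assuming the circumcenter exists, i.e. the defining linear system is consistent; the helper name is ours): the circumcenter of finitely many points is the point of their affine hull equidistant from all of them, computable from a small linear system.

```python
import numpy as np

def circumcenter(points):
    """Circumcenter of finitely many points: the (unique, when it
    exists) point in their affine hull equidistant from all of them."""
    P = np.asarray(points, dtype=float)
    p0, V = P[0], P[1:] - P[0]              # rows v_j = p_j - p_0
    # Writing c = p_0 + V^T a, equidistance ||c - p_j|| = ||c - p_0||
    # reduces to the linear system (V V^T) a = ||v_j||^2 / 2.
    G = V @ V.T
    rhs = 0.5 * np.einsum('ij,ij->i', V, V)
    a, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return p0 + V.T @ a

c = circumcenter([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(c)                                    # [0.5 0.5]
d = [np.linalg.norm(c - np.array(p)) for p in ([0, 0], [1, 0], [0, 1])]
assert np.allclose(d, d[0])                 # equidistant from all three
```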
 HENRY WOLKOWICZ, University of Waterloo
Solving DNN Relaxations of the Quadratic Assignment Problem with ADMM and Facial Reduction [PDF]

The quadratic assignment problem, QAP, has many important applications, ranging from planning the locations of buildings on a university campus, to positioning modules on a computer chip (VLSI design), to the design of keyboards. This discrete optimization problem is arguably one of the hardest of the NP-hard problems, as instances of dimension 30 are still considered hard to solve to optimality. In the trace formulation, the QAP is modelled as the minimization of a quadratic function over the permutation matrices. The set of permutation matrices can be represented by quadratic constraints, and relaxations of these constraints are used in branch-and-bound solution methods. These relaxations include the eigenvalue and projected eigenvalue relaxations, as well as various semidefinite programming, SDP, and doubly nonnegative, DNN, relaxations. The latter relaxations are particularly strong and often solve the QAP to optimality; however, they can be extremely expensive to solve. We show that the alternating direction method of multipliers, ADMM, combined with facial reduction, FR, works extremely well in solving the very difficult DNN relaxation.
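A toy illustration of the trace formulation and of an eigenvalue-type lower bound (the instance data are random; this is not the DNN relaxation or the ADMM solver of the talk): for symmetric $A$ and $B$, the minimal scalar product of their spectra lower-bounds the QAP optimal value.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)); A = A + A.T   # symmetric "flow" matrix
B = rng.standard_normal((n, n)); B = B + B.T   # symmetric "distance" matrix

def perm_matrix(p):
    """Permutation matrix for the permutation p of {0, ..., n-1}."""
    X = np.zeros((n, n))
    X[np.arange(n), list(p)] = 1.0
    return X

# Trace formulation: minimize tr(A X B X^T) over permutation matrices X.
opt = min(np.trace(A @ X @ B @ X.T)
          for X in map(perm_matrix, permutations(range(n))))

# Eigenvalue relaxation bound: the minimal scalar product of spectra,
# pairing ascending eigenvalues of A with descending eigenvalues of B.
la = np.sort(np.linalg.eigvalsh(A))
mb = np.sort(np.linalg.eigvalsh(B))[::-1]
bound = la @ mb

assert bound <= opt + 1e-9   # the eigenvalue bound lower-bounds the QAP
print(bound, opt)
```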
 YURIY ZINCHENKO, University of Calgary, Gurobi LLC
Towards efficient approximation of p-cones [PDF]

The 2-norm cone, a.k.a. the second-order cone (SOC), has gained wide applicability in modern optimization. The SOC can be handled directly and effectively by interior-point methods, and can also be well approximated by polyhedra. Specifically, Ben-Tal and Nemirovski constructed an elegant 'optimal' efficient approximation scheme in which the number of required linear inequalities grows only logarithmically with respect to the desired approximation precision. In contrast, the situation for the SOC's extensions to p-norm cones has remained dramatically different: despite the presence of applications, our capacity to handle these cones is somewhat limited. Neither are dimension-invariant self-concordant barriers known for such cones, nor has anyone been able to approximate these cones efficiently. In this work, we describe a few novel approaches aimed at constructing good approximations to p-cones, and provide evidence that an efficient polyhedral approximation may indeed be within reach for such cones.
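To see why a naive polyhedral approximation is expensive (an illustrative computation, not the construction of the talk): approximating the second-order cone in $\mathbb{R}^3$ by $m$ tangent half-spaces gives worst-case error about $\pi^2/(2m^2)$, so precision $\varepsilon$ needs on the order of $1/\sqrt{\varepsilon}$ inequalities, against the $O(\log(1/\varepsilon))$ of the Ben-Tal and Nemirovski scheme.

```python
import numpy as np

# Naive outer approximation of {(x, y, t) : ||(x, y)|| <= t} by m
# half-spaces: cos(phi_k) x + sin(phi_k) y <= t, phi_k = 2*pi*k/m.
# Worst-case relative error is 1/cos(pi/m) - 1 ~ pi^2 / (2 m^2).

def worst_error(m, samples=20_000):
    phi = 2 * np.pi * np.arange(m) / m
    theta = 2 * np.pi * np.arange(samples) / samples
    # Minimal feasible t for a unit vector (cos th, sin th):
    tmin = np.max(np.cos(theta[:, None] - phi[None, :]), axis=1)
    return 1.0 / tmin.min() - 1.0       # how far ||.|| can exceed t

for m in (16, 64):
    print(m, worst_error(m), 1.0 / np.cos(np.pi / m) - 1.0)

# Sampled error matches the analytic value 1/cos(pi/m) - 1 ...
assert abs(worst_error(16) - (1.0 / np.cos(np.pi / 16) - 1.0)) < 1e-4
# ... and decays only quadratically in m (factor ~16 from m=16 to m=64).
assert worst_error(64) < worst_error(16) / 10
```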