CMS Winter Meeting 2020

Montréal, December 4 - 7, 2020

Variational Analysis: Theory and Applications
Org: Heinz Bauschke (UBC), Walaa Moursi (Waterloo), Shawn Wang (UBC) and Henry Wolkowicz (Waterloo)

SALIHAH ALWADANI, UBC, Okanagan
Resolvents and Yosida approximations of displacement mappings of isometries

Maximally monotone operators are fundamental objects in modern optimization. We study in detail a nice class of monotone operators: displacement mappings of isometries of finite order. We derive explicit formulae for resolvents, Yosida approximations, and (set-valued and Moore-Penrose) inverses. We illustrate our results by considering rational rotators and circular shift operators.
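
For orientation, here is a brief recap of the standard objects involved (these are the classical definitions, not the explicit formulae derived in the talk): for a maximally monotone operator $A$ on a Hilbert space, the resolvent and the Yosida approximation are
$$J_A = (\mathrm{Id} + A)^{-1}, \qquad {}^{\gamma}A = \tfrac{1}{\gamma}\bigl(\mathrm{Id} - J_{\gamma A}\bigr) \quad (\gamma > 0),$$
and the displacement mapping of an isometry $T$ is $\mathrm{Id} - T$, whose resolvent is $J_{\mathrm{Id}-T} = (2\,\mathrm{Id} - T)^{-1}$.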

SEDI BARTZ, University of Massachusetts Lowell
Open questions in multi-marginal monotonicity and convex analysis

In the two-marginal case, aspects of monotonicity and convex analysis underlie optimal transport theory. Similarly, multi-marginal monotonicity and convex analysis underlie multi-marginal optimal transport theory, yet they can be studied as an independent topic. We discuss basic extensions of the classical theory and point out several open questions regarding the construction of multi-marginal convex antiderivatives, a multi-marginal Minty-type characterization of maximal monotonicity, and a multi-marginal extension of the maximal monotonicity of the convex subdifferential. We illustrate our discussion with several examples.
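
For reference, the classical (two-marginal) notions whose multi-marginal analogues are at issue here: an operator $A$ on a Hilbert space $H$ is monotone when
$$\langle x - y,\, u - v \rangle \ge 0 \quad \text{for all } (x,u),\,(y,v) \in \operatorname{gra} A,$$
and Minty's theorem characterizes maximal monotonicity of a monotone $A$ by the surjectivity condition $\operatorname{ran}(\mathrm{Id} + A) = H$; the open questions above concern the appropriate multi-marginal counterparts of such statements.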

MINH BUI, North Carolina State University
Multivariate Monotone Inclusions in Saddle Form

We propose a novel approach to monotone operator splitting based on the notion of a saddle operator. Under investigation is a highly structured multivariate monotone inclusion problem involving a mix of set-valued, cocoercive, and Lipschitzian monotone operators, as well as various monotonicity-preserving operations among them. This model encompasses most formulations found in the literature. The analysis leads to an algorithm of unprecedented flexibility, which achieves full splitting, exploits the specific attributes of each operator, is asynchronous, and requires activating only blocks of operators at each iteration, as opposed to all of them. Several applications are discussed.

Joint work with P. L. Combettes.
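
As a point of comparison (a classical special case, not the algorithm of the talk), the simplest structured inclusion of this type is $0 \in Ax + Bx$ with $A$ maximally monotone and $B$ $\beta$-cocoercive, for which the forward-backward iteration already achieves full splitting:
$$x_{n+1} = J_{\gamma A}\bigl(x_n - \gamma B x_n\bigr), \qquad 0 < \gamma < 2\beta.$$
The saddle-operator framework above extends this idea to multivariate inclusions with many set-valued, cocoercive, and Lipschitzian components.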

PATRICK COMBETTES, North Carolina State University
Proximal Analysis of Deep Neural Networks

We show that proximal calculus and variational inequalities can be used to model and analyze neural networks and better understand the behavior of deep learning structures.

Joint work with J.-C. Pesquet.
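
One standard observation behind this viewpoint (recalled here as background rather than as the specific results of the talk) is that common activation functions are proximity operators; for instance, the componentwise ReLU is the projection onto the nonnegative orthant,
$$\operatorname{ReLU}(x) = \max(x, 0) = P_{[0,\infty)^n}(x) = \operatorname{prox}_{\iota_{[0,\infty)^n}}(x),$$
so a layer $x \mapsto \operatorname{ReLU}(Wx + b)$ can be studied with proximal calculus and the theory of averaged nonexpansive operators.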

KIMON FOUNTOULAKIS, University of Waterloo
p-Norm Flow Diffusion for Local Graph Clustering

Local graph clustering and the closely related seed set expansion problem are primitives on graphs that are central to a wide range of analytic and learning tasks such as local clustering, community detection, semi-supervised learning, node ranking, and feature inference. In this talk we will present a family of convex optimization formulations based on the idea of diffusion with p-norm network flow. In the context of local clustering, we will present a characterization of the optimal solutions of these optimization problems and show their usefulness in finding low-conductance cuts around an input seed set. In particular, we achieve a quadratic approximation of conductance in the case p = 2, similar to the Cheeger-type bounds of spectral methods, a constant-factor approximation as p goes to infinity, and a smooth transition for general values of p in between. Moreover, we will show that the proposed problem can be solved in strongly local running time for p greater than or equal to 2. Finally, we will present empirical evaluations on both synthetic and real-world graphs to illustrate that our approach compares favorably with existing methods.
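
For completeness, the conductance referred to above is the standard notion: for a set of nodes $S$ in a graph $G=(V,E)$,
$$\phi(S) = \frac{\operatorname{cut}(S)}{\min\{\operatorname{vol}(S),\, \operatorname{vol}(V \setminus S)\}},$$
where $\operatorname{cut}(S)$ is the number (or total weight) of edges leaving $S$ and $\operatorname{vol}(S)$ is the sum of the degrees in $S$; a quadratic approximation for $p = 2$ thus means, roughly, that the returned cut has conductance on the order of the square root of the best conductance achievable around the seed set.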

TIM HOHEISEL, McGill University
From perspective maps to epigraphical projections

The projection onto the epigraph or a level set of a closed, proper, convex function can be computed by finding a root of a scalar function that involves the proximal operator as a function of the proximal parameter. To study the variational-analytic properties of this function, we consider general optimization problems that are (partial) infimal projections of the function in question and the perspective map of a kernel. When the latter is the squared Euclidean norm, we recover the proximal map as the solution map and extract properties such as local Lipschitz continuity, directional differentiability, and semismoothness under suitable assumptions. Based on this, we establish an $SC^1$ optimization framework for computing epigraphical and level-set projections, which is competitive with methods tailored to specific instances such as the projection onto the $\ell_1$-unit ball.

This is joint work with Michael P. Friedlander (UBC) and Ariel Goodwin (McGill).
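
To make the scalar-root reformulation concrete (a standard derivation under simplifying assumptions, not the full generality of the talk): if $f$ is closed, proper, and convex and $(x,s) \notin \operatorname{epi} f$, then $P_{\operatorname{epi} f}(x,s) = \bigl(\operatorname{prox}_{\lambda^* f}(x),\, s + \lambda^*\bigr)$, where $\lambda^* > 0$ is the unique root of the scalar function
$$\varphi(\lambda) = f\bigl(\operatorname{prox}_{\lambda f}(x)\bigr) - s - \lambda,$$
which is precisely a root-finding problem in the proximal parameter involving the proximal operator.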

HAO HU, University of Waterloo
Computing the Nearest Doubly Stochastic Matrix by a Newton-type Method

In this talk, we present a Newton-type method for finding the nearest doubly stochastic matrix, in the Frobenius norm, to a given matrix. The optimality conditions of this problem can be stated as a system of strongly semismooth equations. We study a Newton-type method to solve this system and thus find the nearest doubly stochastic matrix. We provide a sufficient condition for the quadratic convergence of the semismooth Newton method, and we also propose a modified Newton method for the general case. This is joint work with Haesol Im, Xinxin Li and Henry Wolkowicz.
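
As a rough sketch of where such a semismooth system comes from (one standard dual formulation; the method of the talk may differ in its details): the nearest doubly stochastic matrix to $A$ solves $\min_X \tfrac{1}{2}\|X - A\|_F^2$ subject to $Xe = e$, $X^{\top}e = e$, $X \ge 0$, and its optimality conditions give $X = [\,A + y e^{\top} + e z^{\top}]_+$, where the multipliers $(y,z)$ solve the strongly semismooth system
$$[\,A + y e^{\top} + e z^{\top}]_+\, e = e, \qquad [\,A + y e^{\top} + e z^{\top}]_+^{\top}\, e = e,$$
with $[\cdot]_+$ the entrywise positive part and $e$ the all-ones vector; a semismooth Newton method can then be applied to this system.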

BORIS MORDUKHOVICH, Wayne State University
A Generalized Newton Method for Subgradient Systems

This talk presents a new Newton-type algorithm to solve subdifferential inclusions defined by subgradients of extended-real-valued prox-regular functions. The proposed algorithm is formulated in terms of the second-order subdifferential of such functions, which enjoys extensive calculus rules and can be computed efficiently for broad classes of extended-real-valued functions. Based on this and on metric regularity and subregularity properties of subgradient mappings, we establish verifiable conditions ensuring the well-posedness of the proposed algorithm and its local superlinear convergence. The obtained results are also new for the class of equations defined by continuously differentiable functions with Lipschitzian derivatives, which is the underlying case of our consideration. The developed algorithm for prox-regular functions is formulated in terms of proximal mappings related to Moreau envelopes. Besides numerous illustrative examples and comparisons with known algorithms, we present applications of the proposed algorithm to the practically important class of Lasso problems arising in statistics and machine learning. Based on joint work with P. D. Khanh (University of Chile) and V. T. Phat (Wayne State University).
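
For context, the second-order subdifferential used here is Mordukhovich's construction, the coderivative of the subgradient mapping:
$$\partial^2 \varphi(\bar{x}, \bar{v})(u) = \bigl(D^{*}\partial \varphi\bigr)(\bar{x}, \bar{v})(u), \qquad \bar{v} \in \partial \varphi(\bar{x}).$$
When $\varphi$ is twice continuously differentiable this reduces to $u \mapsto \{\nabla^2 \varphi(\bar{x})u\}$, so a Newton-type step based on it recovers, in the classical smooth case, the usual Newton equation $\nabla^2 \varphi(x^k) d^k = -\nabla \varphi(x^k)$.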

WALAA MOURSI, Waterloo

HRISTO SENDOV, The University of Western Ontario
A unified approach to operator monotone functions

The notion of operator monotonicity dates back to a work by L\"owner in 1934. A map $F :S^n \to S^m$ is called {\it operator monotone}, if $A \succeq B \mbox{ implies } F(A) \succeq F(B)$. (Here, $S^n$ is the space of symmetric matrices with the semidefinite partial order $\succeq$.) Often, the function $F$ is defined in terms of an underlying simpler function $f$. Of main interest is to find the properties of $f$ that characterize operator monotonicity of $F$. In that case, it is said that $f$ is also operator monotone. Classical examples are the L\"owner operators and the spectral (scalar-valued isotropic) functions. Operator monotonicity for these two classes of functions is characterized in seemingly very different ways.

This talk extends the notion of operator monotonicity to symmetric functions $f$ on $k$ arguments. The latter is used to define {\it (generated) $k$-isotropic maps} $F : S^n \to S^{n \choose k}$ for any $n \ge k$. Necessary and sufficient conditions are given for $f$ to generate an operator monotone $k$-isotropic map $F$. When $k=1$, the $k$-isotropic map becomes a L\"owner operator and when $k=n$ it becomes a spectral function. This allows us to reconcile and explain the differences between the conditions for monotonicity for the L\"owner operators and the spectral functions.
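
Classical examples from L\"owner's theory may help fix ideas (standard facts, independent of the talk's new results): with
$$A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \succeq B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad A^2 - B^2 = \begin{pmatrix} 4 & 3 \\ 3 & 2 \end{pmatrix} \not\succeq 0,$$
so the L\"owner operator generated by $f(t) = t^2$ is not operator monotone on $S^2$, whereas the ones generated by $f(t) = \sqrt{t}$ and $f(t) = -1/t$ are operator monotone on the positive definite matrices.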

MOHAMED TAWHID, Thompson Rivers University
Improved Salp Swarm Optimization Algorithm for Data Clustering

In this work, an improved Salp Swarm Optimization algorithm is proposed for data clustering. Our proposed algorithm utilizes the crossover operator to obtain an improved version of the existing Salp Swarm Optimization algorithm. The performance of our suggested algorithm is tested by comparing it with the standard Salp Swarm Optimization algorithm and other existing algorithms in the literature. Our algorithm outperforms these algorithms on the data clustering problem in terms of computational time and accuracy.

LEVENT TUNCEL, University of Waterloo
A journey from the theory of self-concordant functions and variable metrics to applications in convex optimization

Self-concordant functions (functions whose Hessian variation is bounded by a suitable function of the Hessian) and variable metric methods are central to many of the successful approaches for solving convex optimization problems. We will report on some of the recent progress on solving convex optimization problems by algorithms designed via self-concordant functions and variable metric methods. We will treat convex optimization problems that can be formulated by using, in some combination, second-order cones, cones of symmetric positive semidefinite matrices, power functions and $p$-norms, entropy function, relative vector entropy, quantum entropy, hyperbolic multivariate polynomial inequalities, nuclear norm and many others. Among the important features of our approach are that our algorithms deliver dual certificates and that they are easily extensible to cover new classes of convex sets.

This talk is based on joint work with Mehdi Karimi.
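
For reference, the standard definition being paraphrased in the abstract (with the usual normalization): a convex, three times differentiable function $f$ is self-concordant if
$$\bigl| D^3 f(x)[h,h,h] \bigr| \;\le\; 2\,\bigl( D^2 f(x)[h,h] \bigr)^{3/2} \quad \text{for all } x \in \operatorname{int}\operatorname{dom} f \text{ and all } h,$$
the canonical example being the logarithmic barrier $f(x) = -\sum_i \ln x_i$ for the nonnegative orthant.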

STEVE VAVASIS, University of Waterloo
Nonlinear Conjugate Gradient for Convex Functions

Nonlinear conjugate gradient is a classic and widely used method for unconstrained optimization. Nemirovskii and Yudin showed in their 1983 book that these methods can converge slowly even for strongly convex functions. We propose an inexpensive safeguard operation to ensure the optimal convergence rate for these methods when applied to smooth, convex functions. The safeguard step preserves the asymptotic properties of the method. We also present preliminary computational experiments. Joint work with S. Karimi.
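
For orientation, the generic nonlinear conjugate gradient recursion (the safeguard operation itself is the contribution of the talk and is not reproduced here) is
$$x_{k+1} = x_k + \alpha_k d_k, \qquad d_{k+1} = -\nabla f(x_{k+1}) + \beta_k d_k,$$
with $\alpha_k$ obtained from a line search and, for example, the Fletcher--Reeves choice $\beta_k = \|\nabla f(x_{k+1})\|^2 / \|\nabla f(x_k)\|^2$.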

JANE YE, University of Victoria
Second-order optimality conditions for non-convex set-constrained optimization problems

In this paper we study second-order optimality conditions for non-convex set-constrained optimization problems. For a convex set-constrained optimization problem, it is well known that second-order optimality conditions involve the support function of the second-order tangent set. In this paper we propose two approaches for establishing second-order optimality conditions in the non-convex case. In the first approach we extend the concept of the support function so that it is applicable to general non-convex set-constrained problems, whereas in the second approach we introduce the notion of the directional regular tangent cone and apply classical results of convex duality theory. Besides the second-order optimality conditions themselves, the novelty of our approach lies in the systematic introduction and use of directional versions of well-known concepts from variational analysis.
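
To fix terminology (standard definitions rather than the paper's new constructions): the support function of a set $S$ and one common form of the second-order tangent set are
$$\sigma_S(v) = \sup_{w \in S} \langle v, w \rangle, \qquad T_C^2(\bar{x}; d) = \bigl\{ w : \operatorname{dist}\bigl(\bar{x} + t d + \tfrac{1}{2}t^2 w,\, C\bigr) = o(t^2) \text{ as } t \downarrow 0 \bigr\},$$
so that, in the convex case, the second-order conditions involve terms of the form $\sigma_{T_C^2(\bar{x}; d)}(-\nabla f(\bar{x}))$ over critical directions $d$.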


© Société mathématique du Canada : http://www.smc.math.ca/