A mathematical programming problem with equilibrium constraints (MPEC)
is a constrained optimization problem with equality and inequality
constraints and, additionally, some equilibrium constraints:
    min  f(x)
    s.t. g(x) ≤ 0,  h(x) = 0,
         0 ≤ G(x) ⊥ H(x) ≥ 0   (equilibrium constraints).
Local starshapedness is a rich structure, and we first present existence results together with necessary and sufficient optimality conditions for the problem of minimizing a quasiconvex function over a locally starshaped set.
Corresponding results for quasiconvex MPECs will be obtained as particular cases.
In 1988, Simon Fitzpatrick defined a new convex function F_A, nowadays called the Fitzpatrick function, associated with a monotone operator A, and similarly a monotone operator G_f associated with a convex function f.
This talk surveys recent joint works with Alex McLaren (Guelph) and Hristo Sendov (Guelph), and with Jon Borwein (Dalhousie) and Shawn Wang (UBC Okanagan).
We consider the Fitzpatrick function of the subdifferential operator of a proper, lower semicontinuous, and convex function. This operator is cyclically monotone. A refinement of the classical Fenchel-Young inequality is derived and conditions for equality are investigated. The results are illustrated by several examples.
We also study the problem, originally posed by Fitzpatrick, of determining when A = G_{F_A}. Fitzpatrick proved that this identity is satisfied whenever A is maximal monotone; however, he also observed that it can hold even in the absence of maximal monotonicity. We propose a new sufficient condition for this identity, formulated in terms of the polarity notions recently introduced by Martinez-Legaz and Svaiter.
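As a small illustration (not taken from the talk itself): for the identity operator A = Id on the real line, whose graph is {(a, a)}, the Fitzpatrick function F_A(x, x*) = sup_a [x·a + a·x* − a·a] has the closed form (x + x*)²/4, and the refined Fenchel-Young chain f(x) + f*(x*) ≥ F_{∂f}(x, x*) ≥ x·x* can be checked numerically for f(x) = x²/2 (so ∂f = Id, f*(x*) = (x*)²/2). A brute-force sketch, with the supremum approximated over a finite grid:

```python
def fitzpatrick_id(x, xs, grid):
    # Fitzpatrick function of A = Id on R (graph {(a, a)}):
    # F_A(x, x*) = sup_a [x*a + a*x* - a*a].  The sup is taken over a
    # finite grid, so in general this is only a lower bound.
    return max(x * a + a * xs - a * a for a in grid)

grid = [i / 100.0 for i in range(-500, 501)]
F = fitzpatrick_id(1.0, 2.0, grid)
# Closed form: (1 + 2)^2 / 4 = 2.25, and the Fenchel-Young chain for
# f = x^2/2 reads: f(1) + f*(2) = 2.5  >=  F = 2.25  >=  <1, 2> = 2.0
```

The maximizer a = (x + x*)/2 = 1.5 lies on the grid, so the grid supremum here is exact.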
In a number of applications to stability and control one is interested in the behavior of certain functions of the roots of a polynomial as the polynomial varies. For example, how does the maximum modulus of the roots of a polynomial change as its coefficients vary?
In this talk we show how the convex geometry of the sets of roots of a polynomial and of its derivative can be used to give insight into the variational behavior of such functions.
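To make the object of study concrete, here is a minimal sketch (an illustration, not part of the talk) of one such root function, the maximum modulus, for a quadratic, showing how it responds to a coefficient change:

```python
import cmath

def quadratic_roots(a, b, c):
    # Roots of a*x^2 + b*x + c = 0 via the quadratic formula.
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def root_radius(a, b, c):
    # Maximum modulus of the roots -- one of the functions of the roots
    # whose variational behavior is of interest here.
    return max(abs(r) for r in quadratic_roots(a, b, c))

# x^2 - 1 has roots +/-1; shrinking the constant term shrinks the radius.
print(root_radius(1.0, 0.0, -1.0))   # 1.0
print(root_radius(1.0, 0.0, -0.25))  # 0.5
```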
We review a framework for the analysis of sets of constraints, with no explicit assumptions, and we show the connection between minimal representations, irreducible infeasible systems, minimal infeasibility sets, as well as other attributes of the preprocessing of mathematical programs.
We show how the framework facilitates the development of a probabilistic preprocessing algorithm for a variety of mathematical programs, and we apply it specifically to convex quadratic constraint sets. We show, by example, that the method is surprisingly effective in finding feasible solutions.
Co-author: Dr. Tim Traynor, University of Windsor.
The Mesh Adaptive Direct Search (MADS) class of derivative-free algorithms is effective for industrial strength nonlinear inequality constrained optimization. It has been incorporated into the MATLAB GADS package.
MADS is the direct result of applying nonsmooth analysis to GPS, an earlier class of similar algorithms. These powerful tools of nonsmooth analysis made clear deficiencies of GPS that had been glossed over by assuming smoothness. MADS has a satisfying convergence theory based on the Clarke calculus and Rockafellar's notion of a hypertangent cone.
MADS replaced GPS in our NOMAD software presently in use by our industrial collaborators. It is applicable to a wider class of problems than GPS, including yes/no constraints, and in all trials to date, MADS is more reliable and efficient.
This research was done in collaboration with Professor Charles Audet of Ecole Polytechnique de Montréal.
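To fix ideas, the poll-and-refine mechanism underlying this family of methods can be sketched as follows. This is a deliberately simplified GPS-style coordinate poll, not the authors' MADS algorithm or the NOMAD implementation; MADS differs precisely in letting the poll directions become asymptotically dense, which its Clarke-calculus convergence theory requires:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    # Toy GPS-style direct search: evaluate f at x +/- step * e_i, move to
    # any improving point, and halve the mesh size after an unsuccessful
    # poll.  No derivatives of f are ever used.
    x = list(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5
    return x

# Smooth test problem: converges to the minimizer [0, 0].
print(pattern_search(lambda v: v[0] ** 2 + v[1] ** 2, [3.0, -2.0]))  # [0.0, 0.0]
```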
Investment and risk management in finance often require determining a small number of assets to minimize some measure of tracking error. Determining such a solution requires solving a mixed-integer nonlinear programming problem. We propose a graduated non-convexity method to minimize portfolio tracking error with the total number of assets no greater than a specified integer K. The solution of this tracking-error minimization problem is the global minimizer of the sum of the tracking-error function and the discontinuous counting function. We attempt to track the globally minimal tracking-error portfolio by approximating the discontinuous counting function with a sequence of continuously differentiable non-convex functions, a graduated non-convexity process. We discuss the advantages of this approach and present numerical results for index tracking and for hedging under basis risk in finance.
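One way to picture such a smoothing sequence (the Gaussian-type family below is an illustrative assumption, not necessarily the authors' choice) is a surrogate for the counting function card{i : w_i ≠ 0} whose tightness is controlled by a parameter sigma, with the exact discontinuous count recovered as sigma → 0:

```python
import math

def smooth_count(w, sigma):
    # Smooth surrogate for the counting function card{i : w_i != 0}.
    # Each term 1 - exp(-(w_i/sigma)^2) is continuously differentiable and
    # non-convex, and tends to the indicator [w_i != 0] as sigma -> 0,
    # giving a graduated sequence of approximations.
    return sum(1.0 - math.exp(-(wi / sigma) ** 2) for wi in w)

w = [0.0, 0.0, 0.7, -1.3]        # a portfolio holding two assets
for sigma in (1.0, 0.1, 0.001):  # tighter surrogates approach the count 2
    print(round(smooth_count(w, sigma), 4))
```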
We present a new algorithm (named NEP, for NonExpansive Proximal mapping) to compute the discrete Moreau envelope M_{λ,X}(s) = min_{x ∈ X} [ ||s − x||²/(2λ) + f(x) ] of a function f, where X is a discrete grid and λ > 0. Numerical comparisons between the NEP and two existing algorithms, the Linear-time Legendre Transform (LLT) and the Parabolic Envelope (PE) algorithms, will be shown along with worst-case time complexities, convergence results, and examples.
The algorithms will be applied to compute numerical solutions to Hamilton-Jacobi equations, and the distance transform of image processing.
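For reference, the quantity being computed can be evaluated by direct enumeration over the grid, O(|X|) per query point. This brute-force baseline (a sketch for illustration, not the NEP, LLT, or PE algorithm) is what the faster methods above are designed to beat:

```python
def moreau_envelope(f_vals, grid, s, lam):
    # Discrete Moreau envelope
    #   M_{lam,X}(s) = min_{x in X} [ (s - x)^2 / (2*lam) + f(x) ]
    # computed by exhaustive search over the grid X (1-D case).
    return min((s - x) ** 2 / (2.0 * lam) + fx for x, fx in zip(grid, f_vals))

grid = [i / 10.0 for i in range(-20, 21)]       # X = {-2.0, -1.9, ..., 2.0}
f_vals = [abs(x) for x in grid]                 # f(x) = |x|
print(moreau_envelope(f_vals, grid, 0.0, 1.0))  # 0.0 -- the kink of |x|
```

On this example the envelope reproduces the Huber-type smoothing of |x|, e.g. the value 1.5 at s = 2 with λ = 1.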
In the past few years, spectral clustering has caught much attention in the machine learning community and numerous algorithms have been proposed and well studied. In this paper, we present a unified framework for these methods based on a new optimization model: 0-1 semidefinite programming (0-1 SDP). We also show that various constrained clustering problems can be embedded into the new 0-1 SDP model. Secondly, we consider the issue of how to solve the underlying 0-1 SDP problem. We consider two approximation methods, based on principal component analysis (PCA) and projected PCA respectively, and prove that both algorithms provide a 2-approximation to the original clustering problem. The complexity of these approximation algorithms is also discussed.
In this talk, we consider two applications of H-differentiability. In the first application, we derive a necessary optimality condition for a local minimum of an H-differentiable function. In the second application, we consider a non-linear complementarity problem corresponding to an H-differentiable function f and show how, under appropriate conditions on an H-differential of f, minimizing a merit function corresponding to f leads to a solution of the non-linear complementarity problem. These two applications were motivated by numerous studies carried out for C¹, convex, locally Lipschitzian, and semismooth functions by various researchers.
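As a hedged illustration of the merit-function idea (using the standard Fischer-Burmeister construction, which is one common choice and not necessarily the one analyzed in the talk): the NCP asks for x with x ≥ 0, f(x) ≥ 0, and x ⊥ f(x), and a residual-type merit function vanishes exactly at such solutions, so minimizing it to zero solves the NCP:

```python
import math

def fb(a, b):
    # Fischer-Burmeister function: fb(a, b) = 0  iff  a >= 0, b >= 0, a*b = 0.
    return math.sqrt(a * a + b * b) - a - b

def merit(x, f):
    # Residual-type merit function for the NCP  0 <= x  _|_  f(x) >= 0:
    # Psi(x) = 1/2 * sum_i fb(x_i, f_i(x))^2, with Psi(x) = 0 iff x solves it.
    fx = f(x)
    return 0.5 * sum(fb(a, b) ** 2 for a, b in zip(x, fx))

# Hypothetical example: f(x) = (x1 - 1, x2 + 1) has the NCP solution x = (1, 0).
f = lambda x: [x[0] - 1.0, x[1] + 1.0]
print(merit([1.0, 0.0], f))  # 0.0 -- the merit function vanishes at a solution
```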
We study a second-order cone programming (SOCP) relaxation of the NP-hard sensor network localization problem. We show that the SOCP relaxation, though weaker than the SDP relaxation, has nice properties that make it useful as a problem preprocessor. In particular, an error bound result shows that sensors that are uniquely positioned among interior solutions of the SOCP relaxation are accurate up to the square root of the distance error. Thus, these sensors, which can be easily identified, are accurately positioned. In our numerical simulation, the interior solution found can accurately position up to 80-90% of the sensors. We also propose a smoothing coordinate gradient descent method for finding an interior solution faster than using SeDuMi.
Assume that a Banach space has a Fréchet differentiable and locally uniformly convex norm. We show that reflexivity of the Banach space is not only a sufficient but also a necessary condition for the fulfillment of the proximal extremal principle in non-smooth analysis.
A somewhat overlooked 1970 paper by Edgar Asplund establishes a decomposition for a maximal monotone operator T : X → X* on a general Banach space X. The operator T is decomposed as the pointwise sum of the (well-behaved) subdifferential mapping ∂f of a convex function f and an 'acyclic' component A. I will reproduce a modern version of this result, and summarize current knowledge of the properties of this acyclic part.
A partial pre-distance matrix A is a matrix with zero diagonal and with certain elements fixed to given nonnegative values; the other elements are considered free. The EDM completion problem chooses nonnegative values for the free elements in order to obtain a Euclidean distance matrix, EDM. The nearest (or approximate) EDM problem is to find a Euclidean distance matrix that is nearest in the Frobenius norm to the matrix A, when the free variables are discounted.
Applications for EDM include: molecular conformation problems in chemistry; multidimensional scaling and multivariate analysis problems in statistics; sensor localization; genetics, geography, etc.
We present two algorithms: one for the exact completion problem and one for the approximate completion problem. Both use a reformulation of EDM into a semidefinite programming problem, SDP. The first algorithm is based on an implicit equation for the completion that for many instances provides an explicit solution. The other algorithm is based on primal-dual interior-point methods that exploit the structure and sparsity. Included are results on maps that arise that keep the EDM and SDP cones invariant.
We conclude with numerical tests.
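For orientation (an illustrative sketch, not one of the two algorithms above): in this literature a Euclidean distance matrix holds squared pairwise distances, D_ij = ||p_i − p_j||². Building one from explicit points shows the structure that a completion must reproduce, with the zero diagonal and symmetry visible directly:

```python
def edm_from_points(points):
    # D_ij = ||p_i - p_j||^2: every matrix built this way is a Euclidean
    # distance matrix (zero diagonal, symmetric).  The completion problem
    # asks, conversely, when the fixed entries of a partial pre-distance
    # matrix can be extended so that such generating points exist.
    n = len(points)
    return [[sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
             for j in range(n)] for i in range(n)]

# Three points in the plane; note the zero diagonal and symmetry.
D = edm_from_points([(0.0, 0.0), (3.0, 4.0), (3.0, 0.0)])
print(D)  # [[0.0, 25.0, 9.0], [25.0, 0.0, 16.0], [9.0, 16.0, 0.0]]
```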
In this talk we consider the first order optimality conditions for the class of generalized semi-infinite programming problems (GSIPs). We extend the well-known constraint qualifications for finite programming problems such as the calmness condition, the Abadie constraint qualification, the Kuhn-Tucker constraint qualification, the Zangwill constraint qualification, the Slater condition, the linear independence constraint qualification and the weak reverse convex constraint qualification to GSIPs and analyze the extent to which a corresponding Karush-Kuhn-Tucker condition depends on these extensions. Relationships among various constraint qualifications are also given.
This talk presents some hybrid methods for solving monotone variational inequalities with separable structures. In particular, descent directions are derived from the well-known alternating direction method and its variants, and hybrid methods are built from them. Optimal step sizes along these descent directions further improve the efficiency of the new methods. Numerical results demonstrate that the new methods are effective in practice.