Plenary Speakers
- JOHN BALDWIN, University of Illinois at Chicago
Categoricity Proofs
Over 40 years ago, Morley proved that a first order theory which is
categorical (has exactly one model up to isomorphism) in one
uncountable cardinality is categorical in all uncountable
cardinalities. The theory of the
complex field is the prototypical example of this situation. Morley's
theorem spawned modern model theory; applications have included
results in arithmetic algebraic geometry. Zilber now conjectures an
analogous, but no longer first order, categorical axiomatization for
the complex field with exponentiation; a first order approach is
impossible. There are now several approaches to proving categoricity
theorems. Some focus on the existence of a dimension governing the
models just as in the case of algebraically closed fields. Others
focus on the transfer of `saturation'. Some have an `upwards
transfer' and a `downwards transfer' component; others have only one
step. We will compare and contrast several of the proofs (for both
first order and analogous results in infinitary logic). Such an
analysis of proof techniques may shed light both on the nature of
connections with geometry/algebra and on the distinctions between
different logics.
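For reference, the categoricity theorem described above can be stated
in its standard form (a sketch from general knowledge, not taken from
the talk itself):

```latex
\textbf{Theorem (Morley, 1965).}
Let $T$ be a countable first order theory. If $T$ is categorical in
some uncountable cardinal $\kappa$ (i.e., any two models of $T$ of
cardinality $\kappa$ are isomorphic), then $T$ is categorical in
every uncountable cardinal.
```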
- KRISTIN BENNETT, Rensselaer Polytechnic Institute
Optimization Challenges in Machine Learning
In this talk, we examine model selection problems in machine learning
and the optimization challenges that they represent. Great progress
has been made in machine learning by cleverly reducing machine
learning problems to convex optimization problems with one or more
hyper-parameters. For example, in support vector machines, highly
non-linear classifiers are constructed by solving a convex quadratic
program with at least two hyper-parameters specified by the users.
While cross validation is a commonly employed and widely accepted
method for selecting these parameters, its implementation by a
grid-search procedure in the parameter space effectively limits the
desirable number of hyper-parameters in a model, due to the
combinatorial explosion of grid points in high-dimensions. Explicitly
tackling model selection by cross validation as a bilevel optimization
problem allows for efficient systematic search of large numbers of
hyper-parameters. We discuss recent progress in solution of the
resulting bilevel problems, and the many interesting optimization
challenges that remain. Finally, we investigate the intriguing
possibility of novel machine learning models enabled by multilevel
programming.
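The grid-search blow-up described above can be sketched in a few lines.
This is a minimal illustration, assuming hypothetical SVM
hyper-parameters (C and gamma); the names and values are illustrative,
not from the talk:

```python
from itertools import product

def grid(*value_lists):
    """All combinations an exhaustive grid search must evaluate,
    each one requiring a full cross-validation run."""
    return list(product(*value_lists))

# Hypothetical candidate values for two SVM hyper-parameters.
C_values = [0.1, 1.0, 10.0]       # regularization strengths
gamma_values = [0.01, 0.1, 1.0]   # kernel widths

settings = grid(C_values, gamma_values)  # 3 * 3 = 9 cross-validation runs

# With k candidate values per parameter and d parameters, the grid has
# k**d points: 10 values over 10 hyper-parameters already means 10**10
# cross-validation runs. This exponential growth is what motivates a
# bilevel formulation that searches the continuous parameter space
# directly instead of enumerating a grid.
```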
- RICHARD CLEVE, University of Waterloo and Perimeter Institute
Efficient quantum algorithms for simulating sparse
Hamiltonians
A "quantum computer" is a computing device that can exist in a
quantum mechanical superposition of several states simultaneously and
whose computation paths can interfere with each other. Such computers
can perform some remarkable feats, such as efficient integer
factorization, which appear to go beyond the capabilities of
"classical computers". Another natural and important application
area of quantum computers is the efficient simulation of quantum
mechanical systems. After some preliminary remarks about quantum
computing, we describe some general frameworks in which these
simulation problems can be cast and explain how some efficient
simulation algorithms work.
This is joint work with Graeme Ahokas, Dominic Berry, and Barry Sanders.
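A core ingredient of such simulations is the product ("Trotter")
formula: exp(-i(A+B)t) is approximated by alternating short evolutions
under A and B alone. The sketch below, a pure-Python toy assuming the
two-term Hamiltonian H = X + Z (Pauli matrices), illustrates the idea
only; the talk's algorithms for general sparse Hamiltonians are more
sophisticated:

```python
import math

def matmul(M, N):
    """Multiply two 2x2 matrices stored as nested tuples."""
    return tuple(
        tuple(sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def exp_pauli(P, theta):
    """exp(-i*theta*P) = cos(theta) I - i sin(theta) P for a Pauli matrix P."""
    c, s = math.cos(theta), math.sin(theta)
    I = ((1, 0), (0, 1))
    return tuple(
        tuple(c * I[i][j] - 1j * s * P[i][j] for j in range(2))
        for i in range(2)
    )

X = ((0, 1), (1, 0))
Z = ((1, 0), (0, -1))

def trotter(t, n):
    """Approximate exp(-i(X+Z)t) by (exp(-iXt/n) exp(-iZt/n))^n.
    X and Z do not commute, so each step incurs O((t/n)^2) error."""
    step = matmul(exp_pauli(X, t / n), exp_pauli(Z, t / n))
    U = ((1, 0), (0, 1))
    for _ in range(n):
        U = matmul(U, step)
    return U

def exact(t):
    """Since (X+Z)^2 = 2I, exp(-i(X+Z)t) = cos(rt) I - i sin(rt)(X+Z)/r
    with r = sqrt(2)."""
    r = math.sqrt(2)
    c, s = math.cos(r * t), math.sin(r * t)
    H = ((1, 1), (1, -1))  # X + Z
    I = ((1, 0), (0, 1))
    return tuple(
        tuple(c * I[i][j] - 1j * (s / r) * H[i][j] for j in range(2))
        for i in range(2)
    )
```

With n = 1000 steps the Trotterized evolution at t = 1 agrees with the
exact exponential to roughly 10^-3, and each approximant is exactly
unitary, being a product of unitaries.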
- RICHARD KENYON, UBC, Vancouver, BC
Dimers and Limit shapes
We study a natural family of smooth surfaces in R^3 arising as
limits of random discrete "stepped" surfaces. For fixed boundary
conditions, the law of large numbers for stepped surfaces leads to a
PDE for the limit surfaces (when the lattice spacing tends to zero).
This PDE is a variant of the complex Burgers equation and can be
solved analytically via holomorphic functions. This is surprising
since the surfaces generically have both smooth parts and facets. The
interplay between analytic (even algebraic) functions and facet
formation in the surfaces leads to some interesting questions in real
algebraic geometry.
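For context, the classical inviscid Burgers equation, of which the
limit-shape PDE mentioned above is a complex-valued variant, has the
form (stated from general knowledge; the precise dimer version depends
on the model's parametrization):

```latex
\partial_t u + u\,\partial_x u = 0,
```

with the dimer version satisfied by a complex-valued slope function of
the limit surface.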
- CHARLES READ, University of Leeds, UK
The hypercyclicity problem
We will discuss the recent solution to the hypercyclicity problem,
with some new developments.
An operator T on a Banach space X is hypercyclic if there
is a vector x whose translates T^n x are dense. For the translates
thus to take a walk through the whole space is regarded as chaotic
behaviour, so this definition is related to the general area of
mathematical chaos. There is a well known sufficient condition for
T to be hypercyclic, known as the hypercyclicity criterion.
The hypercyclicity problem is the long standing problem of whether the
hypercyclicity criterion is also necessary; in fact, it is not, not
even on Hilbert space.
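One common formulation of the criterion mentioned above (stated here
from general knowledge, not from the talk) is the following:

```latex
\textbf{Hypercyclicity Criterion.}
$T$ satisfies the criterion if there are dense subsets
$X_0, Y_0 \subseteq X$, an increasing sequence $(n_k)$ of positive
integers, and maps $S_{n_k} \colon Y_0 \to X$ such that, for all
$x \in X_0$ and $y \in Y_0$,
\[
T^{n_k} x \to 0, \qquad S_{n_k} y \to 0, \qquad T^{n_k} S_{n_k} y \to y.
\]
```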
- ARNOLD ROSENBERG, University of Massachusetts Amherst, Amherst, MA, 01003, USA
IC-Scheduling Theory: A New Scheduling Paradigm for
Internet-Based Computing
Technological and economic developments have made the Internet a
viable platform for a new, "collaborative" modality of
Internet-based computing (IC, for short). Within this
modality, the owner of a large, typically compute-intensive,
computation enlists remote clients to "collaborate" in performing
the computation. When the computation's tasks have interdependencies
that prioritize their execution, then the temporal unpredictability
of IC (communication is over the Internet; computing is by clients who
arrive at unpredictable times and who are typically not dedicated to
the computation) can confute attempts to benefit from "parallel"
execution of tasks and can cause a computation to stall for lack of
tasks that are eligible for allocation.
In recent papers, we have proposed a new scheduling paradigm that aims
to respond to the new challenges of IC. Faced with the impossibility
(due to temporal unpredictability) of scheduling to accommodate
"critical paths" in a computation, we seek to schedule in a way that
always renders as many tasks as possible eligible for
allocation (to remote clients). This has the dual goal of
maximizing the utilization of available clients and minimizing the
likelihood of a computation's stalling for lack of eligible tasks. We
have formalized this scheduling problem and, under idealized
assumptions, have developed the rudiments of an algorithmic theory of
how to schedule complex computations for IC.
We begin this talk with the concepts underlying the theory and the
algorithms that emerge from them. We then describe several familiar
computations and computational paradigms that the nascent theory can
schedule optimally. We finally describe simulation experiments whose
results suggest that the theory's schedules have a measurable benign
effect on "real" Internet-based computing.
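The eligibility-maximizing idea described above can be sketched with a
small greedy heuristic. This is an illustrative toy, not the authors'
actual algorithm: at each step it executes the eligible task whose
completion leaves the most tasks eligible next.

```python
def eligible(prereqs, done):
    """Tasks not yet executed whose prerequisites have all been executed."""
    return [t for t in prereqs if t not in done and prereqs[t] <= done]

def greedy_schedule(prereqs):
    """prereqs: dict mapping each task to its set of prerequisite tasks
    (a DAG). Returns an execution order, greedily maximizing the number
    of tasks eligible after each step."""
    done, order = set(), []
    while len(done) < len(prereqs):
        ready = eligible(prereqs, done)
        # pick the ready task that leaves the most tasks eligible afterwards
        best = max(ready, key=lambda t: len(eligible(prereqs, done | {t})))
        done.add(best)
        order.append(best)
    return order

# Tiny example DAG: a precedes b and c, which both precede d.
dag = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
```

Any order this heuristic produces respects the dependencies; the point
of the theory is to identify schedules for which the eligible-task
count is provably maximal at every step.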
- DROR VAROLIN, Stony Brook
Extension Theorems in Complex Analysis and Geometry
Since the early 1960s with the work of J. J. Kohn and L. Hörmander,
Hilbert space methods have played a central role in several complex
variables. In 1970 E. Bombieri introduced such methods to establish
important results in number theory. Shortly after that, H. Skoda used
L2 methods to establish, among many things, central results in
commutative algebra. Y.-T. Siu established many deep results in
complex geometry, and in the late 1980s T. Ohsawa and K. Takegoshi made
a breakthrough in L2 methods in proving their celebrated extension
theorem.
Two major developments came in the 1990s. Around 1992, K. Seip
initiated a movement, later joined by R. Wallsten, B. Berndtsson, and
J. Ortega Cerdà, to extend Beurling's results about interpolation
and sampling sequences from Hardy spaces to so-called Generalized
Bergman spaces. Some of these results were extended to higher
dimensions by Forgacs, Ortega Cerdà, Schuster and myself. On the
other side of the spectrum, Y.-T. Siu and J.-P. Demailly blew open
many sturdily fortified paths, and the area of analytic methods in
algebraic geometry has since grown at an amazing rate. The
crowning jewel was Siu's work on the Fujita conjecture and his proof
of the deformation invariance of plurigenera, and it looks as though
the Fujita and Plurigenera methods will be strong enough to complete
many deep and interesting programs in algebraic geometry.
In this talk we will try to give an idea of the nature of the methods
behind these results. We will start with the basic Hörmander
solution of the dbar equation with L2 estimates, and some of its
applications. We will then review the new twist (literally)
introduced by Ohsawa and Takegoshi, and how it relates to
interpolation and sampling problems, as well as some extension
problems in algebraic geometry.
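One standard form of the Hörmander estimate referred to above (stated
from general knowledge, not from the talk): on a pseudoconvex domain
Omega with a strictly plurisubharmonic weight phi, every dbar-closed
(0,1)-form f admits a solution of

```latex
\bar\partial u = f
\quad\text{with}\quad
\int_\Omega |u|^2 e^{-\varphi}\, dV
\;\le\;
\int_\Omega |f|^2_{i\partial\bar\partial\varphi}\, e^{-\varphi}\, dV,
```

where the norm on f is measured against the complex Hessian of the
weight.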