2025 CMS Summer Meeting

Quebec City, June 6–9, 2025

Abstracts

Mathematics of Machine Learning
Org: Ben Adcock (Simon Fraser University), Simone Brugiapaglia (Concordia) and Giuseppe Alessio D'Inverno (SISSA)

MARZIA CREMONA, Université Laval

GIUSEPPE ALESSIO D'INVERNO, International School for Advanced Studies (SISSA), Trieste, Italy
Surrogate models for diffusion on graphs via sparse polynomials  [PDF]

Diffusion kernels on graphs are widely used across applications because they accurately model the flow of information along nodes and edges. However, there is a notable gap in the literature regarding surrogate models for diffusion processes on graphs. In this work, we fill this gap by proposing sparse polynomial-based surrogate models for parametric diffusion equations on graphs with community structure. In addition, we provide convergence guarantees for both least-squares and compressed sensing-based approximations by establishing the holomorphic regularity of parametric solutions to these diffusion equations. Our theoretical findings are accompanied by a series of numerical experiments on both synthetic and real-world graphs that demonstrate the applicability of our methodology.
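
To make the setting concrete, here is a minimal sketch (ours, not the authors' code) of a least-squares polynomial surrogate for a one-parameter heat equation du/dt = -θLu on a toy two-community graph; the graph model, parameter range, and polynomial degree are illustrative assumptions, not choices from the talk.

    # Minimal sketch: Legendre least-squares surrogate for graph diffusion.
    import numpy as np
    from numpy.polynomial import legendre as Leg
    from scipy.linalg import expm

    rng = np.random.default_rng(0)

    # Toy two-community graph: dense within blocks, sparse across them.
    n = 20
    A = (rng.random((n, n)) < 0.5).astype(float)               # within blocks
    A[:10, 10:] = (rng.random((10, 10)) < 0.05).astype(float)  # across blocks
    A = np.triu(A, 1)
    A = A + A.T                                   # symmetric, no self-loops
    L = np.diag(A.sum(axis=1)) - A                # combinatorial Laplacian

    u0 = np.zeros(n)
    u0[0] = 1.0                                   # unit mass on one node
    t = 1.0

    def diffusion(theta):
        """Solution u(t) = exp(-t*theta*L) u0 of du/dt = -theta*L*u."""
        return expm(-t * theta * L) @ u0

    # Snapshots at random diffusivities, then a degree-8 Legendre fit to
    # each solution component by least squares.
    thetas = rng.uniform(0.1, 2.0, size=100)
    U = np.array([diffusion(th) for th in thetas])       # shape (100, n)
    xs = 2 * (thetas - 0.1) / 1.9 - 1                    # rescale to [-1, 1]
    coef = Leg.legfit(xs, U, deg=8)

    # Evaluate the surrogate at an unseen parameter and check the error.
    th_new = 0.7
    u_surr = Leg.legval(2 * (th_new - 0.1) / 1.9 - 1, coef)
    print(np.max(np.abs(u_surr - diffusion(th_new))))    # small for smooth u

The holomorphy of θ ↦ u(t; θ) established in the talk is what justifies the fast convergence of such polynomial fits; a compressed-sensing variant would replace the least-squares fit with a sparsity-promoting solver.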

MEHDI DAGDOUG, McGill University
Double Machine Learning for Nonresponse in Surveys  [PDF]

Predictive models are increasingly integrated into survey strategies, supporting tasks such as model-based estimation, model-assisted estimation, and the treatment of nonresponse through imputation and reweighting. In recent decades, the rise of statistical learning has provided survey statisticians with highly flexible new tools, alongside new theoretical and computational advances. However, incorporating statistical learning into survey estimation poses challenges for conducting valid inference. In this work, we propose an extension of the Double Machine Learning framework to survey sampling, focusing on the treatment of nonresponse through Augmented Inverse Probability Weighting (AIPW) estimators. We establish that the resulting AIPW estimators are root-n consistent and asymptotically normal under realistic rate conditions on the statistical learning algorithms. We further propose a consistent variance estimator, enabling the construction of asymptotically valid confidence intervals. Issues related to model selection and aggregation will also be discussed, and simulation studies demonstrating the strong performance of the proposed methods will be presented. This is joint work with David Haziza (uOttawa).
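
For orientation, the AIPW estimator of a mean under nonresponse takes the form μ̂ = (1/n) Σ_i [ m̂(x_i) + r_i (y_i − m̂(x_i)) / p̂(x_i) ], where r_i is the response indicator, m̂ an outcome regression fit on respondents, and p̂ a response-propensity model. The sketch below (ours, not the talk's implementation) computes a cross-fitted version with machine-learning nuisances; survey design weights, which the talk's setting requires, are omitted for brevity, and the data-generating model is made up.

    # Minimal sketch: cross-fitted AIPW estimate of a mean under nonresponse.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(1)
    n = 2000
    x = rng.normal(size=(n, 3))
    y = x @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=n)  # outcome
    p = 1 / (1 + np.exp(-(0.3 + x[:, 0])))                   # response prob.
    r = rng.random(n) < p                                    # response flag
    # In practice y is observed only when r is True; the AIPW score below
    # uses y only through r * y, so nonrespondents' y never enters.

    scores = np.empty(n)
    for train, test in KFold(5, shuffle=True, random_state=0).split(x):
        resp = train[r[train]]                 # respondents in training folds
        m = RandomForestRegressor().fit(x[resp], y[resp])     # outcome model
        e = RandomForestClassifier().fit(x[train], r[train])  # propensity model
        m_hat = m.predict(x[test])
        p_hat = np.clip(e.predict_proba(x[test])[:, 1], 0.05, 1.0)
        scores[test] = m_hat + r[test] * (y[test] - m_hat) / p_hat

    mu_hat = scores.mean()                     # AIPW point estimate
    se = scores.std(ddof=1) / np.sqrt(n)       # plug-in standard error
    print(f"mu_hat = {mu_hat:.3f} +/- {1.96 * se:.3f}")       # ~95% interval

Cross-fitting (nuisances trained only on the other folds) is what allows flexible learners like random forests to be plugged in while retaining root-n asymptotics.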

SALAH IDBELOUCH, Polytechnique Montréal

EMMANUEL LORIN, Carleton University

SINA MOHAMMAD-TAHERI, Concordia University
Deep greedy unfolding: sorting out the argsort operator in greedy sparse recovery algorithms  [PDF]

Recent years have seen growing interest in “unrolled neural networks” for various signal processing applications. These networks provide model-based architectures that mimic the iterations of an iterative algorithm and, when properly designed, admit recovery guarantees. However, there has been limited work on unrolling greedy and thresholding-based sparse recovery algorithms, such as Orthogonal Matching Pursuit (OMP) and Iterative Hard Thresholding (IHT), and existing efforts often lack full neural network compatibility. The primary challenge is the non-differentiable (indeed discontinuous) argsort operator within their iterations, which obstructs gradient-based optimization during training. To address this issue, we approximate the argsort operator with a continuous relaxation known as “softsort”. We then demonstrate, both theoretically and numerically, that the resulting differentiable versions of OMP and IHT, termed “Soft-OMP” and “Soft-IHT”, are reliable approximations of their original counterparts, with minimal error under suitable conditions on the softsort temperature parameter and the gap between elements of the sorted vector. Finally, implementing these algorithms as neural networks with trainable weights reveals that unrolled Soft-OMP and Soft-IHT effectively capture hidden structures in data, establishing a connection between our approach and weighted sparse recovery.
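
For readers unfamiliar with the relaxation, the sketch below (ours, not the Soft-OMP/Soft-IHT code) implements the softsort operator of Prillo and Eisenschlos (2020): a row-stochastic, differentiable approximation of the permutation matrix that sorts a vector.

    # Minimal sketch of the softsort relaxation of argsort.
    import numpy as np

    def softsort(s, tau=0.1):
        """Row-stochastic matrix approximating the permutation matrix that
        sorts s in decreasing order; differentiable in s for tau > 0."""
        s_sorted = np.sort(s)[::-1]                        # descending sort
        logits = -np.abs(s_sorted[:, None] - s[None, :]) / tau
        logits -= logits.max(axis=1, keepdims=True)        # stable softmax
        P = np.exp(logits)
        return P / P.sum(axis=1, keepdims=True)

    s = np.array([0.3, 2.0, -1.0, 0.9])
    P = softsort(s, tau=0.05)
    print(P.argmax(axis=1))   # [1 3 0 2]: the hard argsort (descending)
    print(P @ s)              # close to sorted s, but differentiable in s

When the temperature τ is small relative to the gaps between the entries of s, each row of the matrix concentrates on a single index and the relaxation approaches the hard argsort, which is precisely the gap condition the abstract alludes to.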

ELLIOTT PAQUETTE, McGill University

WODEGEBRIEL ASSEFA WOLDEGERIMA, York University

JUNXI ZHANG, Concordia University


© Canadian Mathematical Society : http://www.cms.math.ca/