Student Research Session
Org: Kate Tretiakova (McMaster University) and Daniel Zackon (McGill University) [PDF]
- RUCHITA AMIN, Western University
Qualitative Dynamics and Bifurcation Analysis of Immunotherapy in a Tumor Model with Treatment. [PDF]
Among the recent advancements in cancer treatment, immunotherapy has emerged as a promising approach for managing and potentially curing malignant tumors. This work presents mathematical models to investigate the interactions between tumor cells, CD4+ T cells, and cytokines, focusing on their role in tumor regression. The effectiveness of treatments involving CD4+ T cells, cytokines, or polytherapy (a combination of both) is analyzed within this framework. The study identifies equilibrium points, examines solution stability, and conducts bifurcation analysis. Furthermore, the application of normal form theory provides insights into the amplitude, phase, and stability of the limit cycles that arise from bifurcations. The research also examines the occurrence of multiple limit cycles driven by generalized Hopf bifurcations, leading to intricate dynamic behaviors. These findings indicate that Hopf bifurcations are a primary driver of oscillatory patterns, introducing a bistable configuration in which a stable limit cycle coexists with a stable equilibrium. The implications of these results are discussed in the context of biological systems.
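As a toy illustration of the tumor–immune dynamics described above (a minimal sketch, not the speaker's model), a Kuznetsov-style two-compartment ODE for tumor cells T and effector CD4+ T cells E can be integrated with a fixed-step RK4 scheme; every parameter value below is an assumption chosen for demonstration only:

```python
# Hypothetical two-compartment tumor-effector model (illustrative
# parameters, not taken from the talk):
#   dT/dt = r T (1 - T/K) - a T E        (logistic growth, immune kill)
#   dE/dt = s + p T E / (g + T) - m E    (influx, recruitment, death)

def rhs(T, E):
    r, K = 0.18, 500.0       # tumor growth rate / carrying capacity (assumed)
    a = 1.0e-3               # kill rate by effector cells (assumed)
    s, p, g = 0.1, 0.12, 20.0  # effector influx / recruitment (assumed)
    m = 0.03                 # effector death rate (assumed)
    dT = r * T * (1.0 - T / K) - a * T * E
    dE = s + p * T * E / (g + T) - m * E
    return dT, dE

def rk4_step(T, E, h):
    k1 = rhs(T, E)
    k2 = rhs(T + h / 2 * k1[0], E + h / 2 * k1[1])
    k3 = rhs(T + h / 2 * k2[0], E + h / 2 * k2[1])
    k4 = rhs(T + h * k3[0], E + h * k3[1])
    T += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    E += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return T, E

def simulate(T0=50.0, E0=5.0, h=0.05, steps=20000):
    """Return the trajectory [(T, E), ...] over steps*h time units."""
    T, E = T0, E0
    traj = [(T, E)]
    for _ in range(steps):
        T, E = rk4_step(T, E, h)
        traj.append((T, E))
    return traj
```

Tracking where such a trajectory settles as a parameter (e.g. the recruitment rate) varies is the numerical counterpart of the bifurcation analysis in the abstract: near a Hopf point the trajectory spirals onto either the equilibrium or a limit cycle, depending on the initial state.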
- JÉRÉMY CHAMPAGNE, University of Waterloo
- CHRISTINE EAGLES, University of Waterloo
- LIAM GAUVREAU, University of Toronto
- DONGLIN HAN, University of Alberta
- DANDAN HU, Memorial University
- YUCEN JIN, Western University
- NGUYEN DAC KHOI NGUYEN, Memorial University
- RAHUL PADMANABHAN, Concordia University
Deep Learning Approximation of Matrix Functions: From Feedforward Neural Networks to Transformers [PDF]
Deep Neural Networks (DNNs) have been at the forefront of Artificial Intelligence (AI)
over the last decade. Transformers, a type of DNN, have revolutionized Natural Language
Processing (NLP) through models like ChatGPT, Llama and, more recently, DeepSeek. While
transformers are used mostly in NLP tasks, their potential for advanced numerical
computations remains largely unexplored. This presents opportunities in areas like
surrogate modeling and raises fundamental questions about AI's mathematical capabilities.
We investigate the use of transformers for approximating matrix functions, which are
mappings that extend scalar functions to matrices. These functions are ubiquitous in
scientific applications, from continuous-time Markov chains (matrix exponential) to
stability analysis of dynamical systems (matrix sign function). Our work makes two main
contributions. First, we prove theoretical bounds on the depth and width requirements
for ReLU DNNs to approximate the matrix exponential. Second, we use transformers with
encoded matrix data to approximate general matrix functions and compare their performance
to feedforward DNNs. Through extensive numerical experiments, we demonstrate that the
choice of matrix encoding scheme significantly impacts transformer performance. Our
results show strong accuracy in approximating the matrix sign function, suggesting
transformers' potential for advanced mathematical computations.
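For context, one classical numerical baseline for the matrix sign function mentioned above is the Newton iteration X_{k+1} = (X_k + X_k^{-1})/2; the abstract does not say this is the method the learned models are compared against, so the sketch below is only an orienting example, shown on an assumed 2x2 test matrix in plain Python:

```python
# Newton iteration for the matrix sign function on 2x2 matrices.
# sign(A) has the same invariant subspaces as A, with eigenvalues
# mapped to +1 / -1 according to the sign of their real parts.

def mat_inv2(M):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sign_newton(A, iters=50):
    """X_{k+1} = (X_k + X_k^{-1}) / 2, started from A itself."""
    X = [row[:] for row in A]
    for _ in range(iters):
        Xi = mat_inv2(X)
        X = [[(X[i][j] + Xi[i][j]) / 2 for j in range(2)] for i in range(2)]
    return X

# Illustrative input: eigenvalues 3 and -2, so sign(A) = [[1, 0.4], [0, -1]]
A = [[3.0, 1.0], [0.0, -2.0]]
S = sign_newton(A)
```

The iteration converges quadratically for any matrix with no purely imaginary eigenvalues, which makes it a natural accuracy yardstick when judging a learned approximation of the same function.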
- ANSH SHAN, Brock University
- IVAN SHEVCHENKO, University of Toronto
- ZHEN SHUANG, Memorial University of Newfoundland
Hyperbolic Riesz potentials-capacities [PDF]
We explore various energy estimates and optimal strong-weak embeddings for the hyperbolic Riesz potential, which is equivalent to the fractional wave operator $\square^\alpha$. Additionally, we examine the hyperbolic Riesz capacity associated with this potential, focusing on its fundamental properties, its relationship to Lebesgue and Hausdorff measures, and its dual capacity.
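For orientation, one common Fourier-multiplier convention for the fractional wave operator (an assumption here; the speaker's normalization may differ) reads:

```latex
% d'Alembertian on \mathbb{R}^{1+n}
\square = \partial_t^2 - \Delta_x,
\qquad
\widehat{\square u}(\tau,\xi) = -\bigl(\tau^2 - |\xi|^2\bigr)\,\widehat{u}(\tau,\xi),
% fractional power defined through the symbol
\square^{\alpha} u
  = \mathcal{F}^{-1}\!\Bigl[\,\bigl|\tau^{2}-|\xi|^{2}\bigr|^{\alpha}\,\mathcal{F}u\Bigr],
\qquad 0 < \alpha < 1,
```

with the hyperbolic Riesz potential then corresponding to the negative power $\square^{-\alpha}$, in analogy with the classical Riesz potential $(-\Delta)^{-\alpha}$.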
- TIANXU WANG, University of Alberta
Derivations of Animal Movement Models with Explicit Memory [PDF]
Highly evolved animals continuously update their knowledge of social factors, refining movement decisions based on both historical and real-time observations. Despite its significance, research on the underlying mechanisms remains limited. In this study, we explore how the use of collective memory shapes different mathematical models across various ecological dispersal scenarios. Specifically, we investigate three memory-based dispersal scenarios: gradient-based movement, where individuals respond to environmental gradients; environment matching, which promotes uniform distribution within a population; and location-based movement, where decisions rely solely on local suitability. These scenarios correspond to diffusion-advection, Fickian diffusion, and Fokker-Planck diffusion models, respectively. We focus on the derivation of these memory-based movement models using three approaches: spatial and temporal discretization, patch models in continuous time, and discrete-velocity jump processes.
These derivations highlight how different ways of using memory lead to distinct mathematical models. Numerical simulations reveal that the three dispersal scenarios exhibit distinct behaviors under memory-induced repulsive and attractive conditions. The diffusion-advection and Fokker-Planck models display wiggle patterns and aggregation phenomena, while simulations of the Fickian diffusion model consistently stabilize to uniform constant states.
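The contrast between the Fickian and Fokker-Planck outcomes can be seen in a minimal 1-D finite-volume sketch (assumed setup, not the speaker's simulations): with a spatially varying motility D(x) and zero-flux boundaries, Fickian diffusion u_t = (D u_x)_x relaxes to a uniform constant, while Fokker-Planck diffusion u_t = (D u)_xx aggregates where D is small, so that D(x) u(x) becomes constant:

```python
import math

# Illustrative motility on [0,1]; its sinusoidal form is an assumption.
N = 32
dx = 1.0 / N
dt = 2e-4            # satisfies the explicit stability bound dt < dx^2/(2 max D)
steps = 20000
x = [(i + 0.5) * dx for i in range(N)]                      # cell centres
D = [1.0 + 0.5 * math.sin(2 * math.pi * xi) for xi in x]

def run(fokker_planck):
    # Non-uniform initial density with unit mass.
    u = [1.0 + 0.5 * math.cos(math.pi * xi) for xi in x]
    for _ in range(steps):
        F = [0.0] * (N + 1)        # interface fluxes; ends stay 0 (no-flux)
        for i in range(N - 1):
            if fokker_planck:      # u_t = (D u)_xx  =>  flux = d(D u)/dx
                F[i + 1] = (D[i + 1] * u[i + 1] - D[i] * u[i]) / dx
            else:                  # u_t = (D u_x)_x =>  flux = D du/dx
                F[i + 1] = 0.5 * (D[i] + D[i + 1]) * (u[i + 1] - u[i]) / dx
        # conservative flux-difference update
        u = [u[i] + dt / dx * (F[i + 1] - F[i]) for i in range(N)]
    return u

u_fick = run(False)   # flattens to a uniform constant state
u_fp = run(True)      # aggregates where D is small: D(x)*u(x) -> constant
```

The discrete steady states mirror the continuum ones exactly: zero Fickian flux forces u constant, while zero Fokker-Planck flux forces D u constant, i.e. u proportional to 1/D, which is the aggregation behavior described in the abstract.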
- XUYUAN WANG, University of Alberta
© Société mathématique du Canada