A Lévy process is a process with stationary, independent increments (a continuous-time generalization of a random walk). Owing to their rich structure and computational tractability, these processes have been successfully applied to model various phenomena in Mathematical Finance and other areas of Science. In this talk we will discuss the problem of numerically computing the function p(x; t, y), the joint density of the first passage time and the overshoot of a Lévy process. This function can be used to price barrier and lookback options, and it is also the main building block in various structural models in Credit Risk. It is known that p(x; t, y) satisfies a linear partial integro-differential equation (PIDE); moreover, using Wiener-Hopf theory, it can be expressed as a five-dimensional integral transform of known quantities. We will discuss the drawbacks of each of these approaches and then introduce a new method based on a combination of the two: the PIDE method provides local information about p(x; t, y) at t = 0, while the Wiener-Hopf method provides global information in the form of the moments of this function. As a numerical example we will discuss the Normal Inverse Gaussian process.
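The naive alternative to both approaches above is brute-force Monte Carlo: simulate the Lévy process on a fine time grid and record when and by how much it first crosses a barrier. The sketch below (not the method of the talk, and with illustrative parameter values) does this for a Normal Inverse Gaussian (NIG) process, sampling NIG increments through their inverse-Gaussian mixture representation. Note that the grid discretization biases both the passage time and the overshoot, which is one motivation for the PIDE/Wiener-Hopf machinery.

```python
import math, random

random.seed(2)

def sample_ig(mean, shape):
    """Sample an inverse-Gaussian variate (Michael-Schucany-Haas method)."""
    nu = random.gauss(0.0, 1.0)
    y = nu * nu
    x = (mean + mean * mean * y / (2.0 * shape)
         - mean / (2.0 * shape) * math.sqrt(4.0 * mean * shape * y
                                            + mean * mean * y * y))
    if random.random() <= mean / (mean + x):
        return x
    return mean * mean / x

def nig_increment(dt, alpha=5.0, beta=0.0, delta=1.0, mu=0.5):
    """NIG increment over time dt, via the normal variance-mean mixture
    X = mu*dt + beta*Z + sqrt(Z)*N(0,1) with Z inverse-Gaussian."""
    gamma = math.sqrt(alpha * alpha - beta * beta)
    z = sample_ig(delta * dt / gamma, (delta * dt) ** 2)
    return mu * dt + beta * z + math.sqrt(z) * random.gauss(0.0, 1.0)

def first_passage(barrier, dt=0.01, t_max=50.0):
    """Return (first passage time, overshoot) over the barrier, or None.
    Both quantities carry an O(dt) discretization bias."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += nig_increment(dt)
        t += dt
        if x > barrier:
            return t, x - barrier
    return None

samples = [s for s in (first_passage(1.0) for _ in range(2000)) if s]
```

With the positive drift chosen here almost every path crosses the barrier; the resulting sample approximates the joint law whose density p(x; t, y) the talk computes directly.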
Pharmacokinetic/pharmacodynamic (PK/PD) indices are increasingly used in microbiology to assess the efficacy of a dosing regimen. Unlike MIC-based methods, PK/PD-based methods reflect in vivo conditions and are more predictive of efficacy. Unfortunately, these methods rely on a single static pharmacokinetic value, such as AUC or Cmax, and may thus yield biased efficacy estimates when inter- or intra-individual variability exists.
In this work I will discuss how to evaluate the efficacy of a treatment by adapting classical breakpoint estimation methods to the situation of a variable PK profile. We propose a natural generalisation of the classical AUC methods by introducing a weighted efficacy function. We formulate these methods for both antibiotic classes, concentration-dependent and time-dependent, and, using two drug models, we illustrate how the newly introduced method can be applied to estimate breakpoints accurately.
This is a joint work with D. Gogore Bi and F. Nekka.
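The abstract does not specify the form of the weighted efficacy function, so the sketch below is only one plausible reading: the classical PK/PD indices (AUC/MIC for concentration-dependent drugs, fraction of time above MIC for time-dependent ones) computed on a simulated one-compartment profile and then averaged over simulated inter-individual variability. All model and parameter choices (dose, ka, ke, Vd, the lognormal variability) are hypothetical.

```python
import math, random

random.seed(1)

def concentration(t, dose=500.0, ka=1.2, ke=0.25, vd=30.0):
    """One-compartment oral-absorption model: C(t) in mg/L."""
    return dose * ka / (vd * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def auc_over_mic(params, mic, t_end=24.0, n=1200):
    """AUC(0-t_end)/MIC by the trapezoidal rule (concentration-dependent index)."""
    h = t_end / n
    cs = [concentration(i * h, **params) for i in range(n + 1)]
    auc = h * (sum(cs) - 0.5 * (cs[0] + cs[-1]))
    return auc / mic

def time_above_mic(params, mic, t_end=24.0, n=1200):
    """Fraction of the dosing interval with C(t) > MIC (time-dependent index)."""
    h = t_end / n
    return sum(1 for i in range(n + 1)
               if concentration(i * h, **params) > mic) / (n + 1)

def weighted_efficacy(index, mic, n_subjects=100):
    """Average a PK/PD index over simulated inter-individual variability,
    instead of evaluating it on a single static PK profile."""
    total = 0.0
    for _ in range(n_subjects):
        params = {"dose": 500.0,
                  "ka": 1.2 * math.exp(random.gauss(0.0, 0.2)),
                  "ke": 0.25 * math.exp(random.gauss(0.0, 0.2)),
                  "vd": 30.0 * math.exp(random.gauss(0.0, 0.15))}
        total += index(params, mic)
    return total / n_subjects
```

The point of the averaging step is exactly the abstract's concern: a breakpoint derived from a single AUC or Cmax value can differ noticeably from one derived from the distribution of profiles.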
For a given symmetric positive-definite matrix A ∈ R^(n×n), we develop a fast and backward-stable algorithm to approximate A by a symmetric positive-definite semiseparable matrix, accurate to any prescribed tolerance. In addition, the algorithm preserves the product AZ for a given matrix Z ∈ R^(n×d), where d ≪ n. Our algorithm guarantees the positive-definiteness of the semiseparable matrix by embedding an approximation strategy inside a Cholesky factorization procedure, ensuring that all Schur complements arising during the factorization remain positive definite after approximation. It uses a robust direction-preserving approximation scheme to ensure the preservation of AZ. We present experimental numerical results and discuss potential implications of our work.
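The core idea, that approximating inside the factorization rather than on A itself cannot destroy definiteness, can be seen in a toy setting: truncate small entries of the Cholesky factor L and form the product of the truncated factor with its transpose, which is positive semidefinite by construction (and positive definite as long as the truncated factor keeps a positive diagonal). This is far simpler than the semiseparable algorithm of the talk, but it illustrates why embedding the approximation in the Cholesky procedure is safe for any tolerance.

```python
import math

def cholesky(a):
    """Plain Cholesky factorization; raises ValueError if A is not SPD."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0.0:
                    raise ValueError("matrix is not positive definite")
                l[i][i] = math.sqrt(d)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

# SPD test matrix with off-diagonal decay: A[i][j] = exp(-|i-j|).
n = 6
A = [[math.exp(-abs(i - j)) for j in range(n)] for i in range(n)]

L = cholesky(A)

# Truncate small entries of the factor. The reassembled matrix
# Lt * Lt^T is positive semidefinite whatever the tolerance, and here
# the diagonal of Lt stays positive, so it is positive definite.
tol = 0.05
Lt = [[x if abs(x) >= tol else 0.0 for x in row] for row in L]
At = [[sum(Lt[i][k] * Lt[j][k] for k in range(n)) for j in range(n)]
      for i in range(n)]

err = max(abs(A[i][j] - At[i][j]) for i in range(n) for j in range(n))
```

Truncating A directly by the same rule gives no such guarantee; for an ill-conditioned A the truncated matrix can become indefinite even for small tolerances.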
This talk is devoted to the convergence analysis of the reservoir technique coupled with finite volume flux schemes approximating nonlinear hyperbolic conservation laws (J. Sci. Comput. 31 (2007), 419-458; Eur. J. Mech. B 27 (2008), 643-664). After presenting the method, we prove its long-time convergence, its accuracy, and its TVD property for some general 1D configurations. The proofs are based on a precise study of how the reservoir technique treats shock and rarefaction waves. Numerical simulations will be provided to illustrate the analytical results.
This is a joint work with Prof. S. Labbé (Université Joseph Fourier, Grenoble).
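For readers unfamiliar with the underlying flux schemes: the reservoir technique is coupled to standard finite volume methods, of which the first-order Lax-Friedrichs scheme below (for Burgers' equation, not the reservoir technique itself) is the simplest example. It exhibits the conservation and TVD properties that the convergence analysis above refers to.

```python
# Lax-Friedrichs finite volume scheme for Burgers' equation
# u_t + (u^2/2)_x = 0 on a periodic 1D grid.
import math

def flux(u):
    return 0.5 * u * u

def lax_friedrichs_step(u, dx, dt):
    n = len(u)
    unew = []
    for i in range(n):
        ul, ur = u[i - 1], u[(i + 1) % n]
        f_right = 0.5 * (flux(u[i]) + flux(ur)) - 0.5 * dx / dt * (ur - u[i])
        f_left = 0.5 * (flux(ul) + flux(u[i])) - 0.5 * dx / dt * (u[i] - ul)
        unew.append(u[i] - dt / dx * (f_right - f_left))
    return unew

def total_variation(u):
    n = len(u)
    return sum(abs(u[(i + 1) % n] - u[i]) for i in range(n))

n = 200
dx = 1.0 / n
u = [math.sin(2.0 * math.pi * i * dx) for i in range(n)]  # steepens into a shock
dt = 0.4 * dx / max(abs(v) for v in u)                    # CFL condition
tv0, mass0 = total_variation(u), sum(u) * dx
for _ in range(200):
    u = lax_friedrichs_step(u, dx, dt)
```

Because the update is written in flux form, the cell average of u is conserved exactly, and under the CFL condition the total variation never increases, even once the shock has formed.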
Numerical simulation of path integrals by stochastic differential equations (SDEs) sometimes exhibits stability problems. An example of an ill-posed problem is one-mode Bose-Einstein condensation in a coherent-state representation. In this case, the numerical solution of the resulting SDE becomes unstable after a relatively short time for most numerical methods and can blow up in finite time if left uncontrolled.
To improve the results, new numerical methods are being developed, but it appears that the choice of the SDE integrator alone is insufficient to guarantee stability. One reason for this is that the drift depends on a conformal martingale which can have arbitrary phase and amplitude. Furthermore, the related Fokker-Planck equation of the problem turns out to be of mixed type with hyperbolic regions. It appears that regularization techniques combined with implicit SDE solvers and extrapolation methods can yield significant stability improvements.
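The stability gap between explicit and drift-implicit integrators is easiest to see on the scalar linear test SDE rather than on the coherent-state equations themselves; the parameter values below are illustrative only.

```python
# Explicit vs. drift-implicit Euler-Maruyama on the stiff linear test SDE
#   dX = lam * X dt + mu * X dW.
# With lam = -50 and step h = 0.1 the explicit scheme is unstable,
# since its per-step multiplier has magnitude |1 + lam*h| = 4.
import math, random

random.seed(0)
lam, mu, h, steps = -50.0, 0.5, 0.1, 60

x_exp = x_imp = 1.0
for _ in range(steps):
    dw = random.gauss(0.0, math.sqrt(h))
    x_exp = x_exp * (1.0 + lam * h + mu * dw)          # explicit Euler-Maruyama
    x_imp = x_imp * (1.0 + mu * dw) / (1.0 - lam * h)  # drift-implicit Euler
```

The implicit scheme's multiplier has magnitude roughly 1/6 per step for these parameters, so it decays as the exact solution does, while the explicit iterate grows geometrically. Treating only the drift implicitly keeps each step a cheap linear solve.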
Increasing demands on the complexity of scientific models, coupled with increasing demands for their scalability, are placing programming models on an equal footing, in terms of significance, with the numerical methods they implement. A recurring theme across several major scientific software development projects involves defining abstract data types (ADTs) that closely mimic mathematical abstractions such as scalar, vector, and tensor fields. In languages that support user-defined operators and/or overloading of intrinsic operators, coupling ADTs with a set of algebraic and/or integro-differential operators results in an ADT calculus. This talk will analyze ADT calculus using three tool sets: object-oriented design metrics, computational complexity theory, and information theory. It will be demonstrated that ADT calculus leads to highly cohesive, loosely coupled abstractions with code-size-invariant data dependencies and minimal information entropy. The talk will also discuss how these results relate to software flexibility and robustness.
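As a concrete, hypothetical illustration of ADT calculus: in a language with operator overloading, a scalar field type with overloaded arithmetic and a derivative method lets solver code mirror the mathematics. Projects of the kind described often use Fortran 2003; the Python analogue below shows the same idea on a periodic 1D grid.

```python
# Sketch of "ADT calculus": a scalar Field ADT whose overloaded operators
# let solver code read like the mathematics, e.g.
#   dudt = nu * u.d_dx().d_dx() - u * u.d_dx()
import math

class Field:
    def __init__(self, data, dx):
        self.data, self.dx = list(data), dx

    def _zip(self, other, op):
        if isinstance(other, Field):
            return Field([op(a, b) for a, b in zip(self.data, other.data)], self.dx)
        return Field([op(a, other) for a in self.data], self.dx)  # scalar case

    def __add__(self, other):  return self._zip(other, lambda a, b: a + b)
    def __sub__(self, other):  return self._zip(other, lambda a, b: a - b)
    def __mul__(self, other):  return self._zip(other, lambda a, b: a * b)
    __rmul__ = __mul__

    def d_dx(self):
        """Central-difference first derivative on a periodic grid."""
        n, d = len(self.data), self.data
        return Field([(d[(i + 1) % n] - d[i - 1]) / (2 * self.dx)
                      for i in range(n)], self.dx)

n = 128
dx = 2 * math.pi / n
u = Field([math.sin(i * dx) for i in range(n)], dx)
du = u.d_dx()  # approximates cos(x)
```

The client code never touches grid indices, which is the source of the high cohesion and loose coupling the talk quantifies: expressions depend only on the operator set, not on the field's storage or size.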
Due to an incomplete picture of the underlying physics, the simulation of dense granular flow remains a difficult computational challenge. Currently, modeling in practical and industrial settings is typically carried out using the Discrete-Element Method (DEM), which simulates individual particles according to Newton's laws. The contact models in these simulations are stiff and require very small timesteps to integrate accurately, meaning that even relatively small problems require days or weeks on a parallel computer. These brute-force approaches often provide little insight into the relevant collective physics, and they are infeasible for applications in real-time process control, or in optimization, where many different configurations must be run much more rapidly.
Based upon a number of recent theoretical advances, a general multiscale simulation technique for dense granular flow will be presented that couples a macroscopic continuum theory to a discrete microscopic mechanism for particle motion. The technique can be applied to arbitrary slow, dense granular flows and reproduces flow fields and microscopic packing-structure estimates similar to those obtained with DEM. Because forces and stresses are coarse-grained, the technique runs two to three orders of magnitude faster than conventional DEM. A particular strength is its ability to capture particle diffusion, which allows granular mixing to be optimized by running an ensemble of different possible configurations.
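To make the timestep restriction mentioned above concrete, here is a minimal 1D DEM sketch with hypothetical parameters: two equal particles interacting through a linear spring-dashpot contact. The contact lasts roughly pi*sqrt(m/(2k)), so stiff contacts force timesteps orders of magnitude below the timescale of the bulk flow, which is the cost the multiscale technique avoids by coarse-graining forces and stresses.

```python
# Minimal 1D DEM: two equal particles colliding through a linear
# spring-dashpot contact, integrated with symplectic Euler.
m, r, k, c = 1.0, 0.5, 1.0e5, 20.0   # mass, radius, contact stiffness, damping
x1, v1 = 0.0, 1.0                    # left particle: position, velocity
x2, v2 = 2.0, -1.0                   # right particle, approaching

dt = 1.0e-4                          # must resolve the ~7 ms contact time

p0 = m * (v1 + v2)                   # initial momentum
e_rel0 = abs(v2 - v1)                # initial approach speed

for _ in range(20000):               # 2 s of physical time
    overlap = 2 * r - (x2 - x1)
    if overlap > 0.0:
        # Linear spring-dashpot contact force, clamped so the
        # contact cannot pull (no cohesion).
        f = max(k * overlap + c * (v1 - v2), 0.0)
    else:
        f = 0.0
    v1 -= f / m * dt                 # equal and opposite forces,
    v2 += f / m * dt                 # so momentum is conserved
    x1 += v1 * dt
    x2 += v2 * dt
```

After the collision the particles separate with a reduced relative speed (the dashpot dissipates energy) while total momentum is unchanged; resolving that single brief contact already took tens of thousands of steps, which scales badly to millions of particles.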