2023 CMS Summer Meeting

Ottawa, June 2 - 5, 2023


Biostatistics
Org: Caitlin Daly (Cytel Inc.), Jemila Hamid (University of Ottawa) and Bouchra Nasri (Université de Montréal)

AUDREY BELIVEAU AND AUGUSTINE WIGLE, University of Waterloo
Bayesian Unanchored Additive Models for Component Network Meta-Analysis

Component Network Meta-Analysis (CNMA) models are an extension of standard Network Meta-Analysis models which account for the use of complex treatments in the network. This paper contributes to several statistical aspects of CNMA. First, by introducing a unified notation, we establish that currently available methods differ in the way additivity is assumed, an important distinction that has been overlooked so far. In particular, one model uses a more restrictive form of additivity than the other; we term these anchored and unanchored additivity, respectively. We show that anchored additivity can easily be misspecified. Second, given that Bayesian models are often preferred by practitioners, we develop two unanchored Bayesian CNMA models. An extensive simulation study confirms the favorable performance of the novel models; this is the first simulation study in the literature to compare the statistical properties of CNMA models. Finally, the use of our novel models is demonstrated on a real dataset, and the results of CNMA models on the dataset are compared.
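
As a point of orientation (notation mine, not necessarily the authors'): an additive CNMA model expresses the effect of a complex treatment as the sum of its component effects, e.g., for components A and B relative to a reference treatment,

\[
d_{\mathrm{ref},\,A+B} \;=\; d_{\mathrm{ref},\,A} \,+\, d_{\mathrm{ref},\,B}.
\]

The anchored/unanchored distinction drawn in the talk concerns which contrasts this decomposition is imposed on; the display above is only the generic additivity assumption.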

ANDREA BENEDETTI, McGill University
Individual participant data meta-analyses

In this presentation, I will describe the DEPRESSD project, an individual participant data meta-analysis (IPDMA) that aims to investigate the diagnostic accuracy of the most common depression screening tools. I will discuss selective cutoff reporting, how IPDMA allowed us to overcome this problem, and other approaches to addressing it.
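
As a minimal sketch of why IPD resolves selective cutoff reporting (hypothetical code and data, not the DEPRESSD pipeline): with participant-level scores in hand, accuracy can be computed at every cutoff, not only at the cutoffs a primary study happened to publish.

    import numpy as np

    def accuracy_at_all_cutoffs(scores, depressed):
        # Sensitivity and specificity at every observed cutoff,
        # computable only because participant-level data are available.
        scores = np.asarray(scores, dtype=float)
        depressed = np.asarray(depressed, dtype=bool)
        for c in np.unique(scores):
            positive = scores >= c
            sens = positive[depressed].mean()
            spec = (~positive[~depressed]).mean()
            print(f"cutoff >= {c:g}: sens = {sens:.2f}, spec = {spec:.2f}")

    # Toy data: screening scores and reference-standard diagnoses.
    accuracy_at_all_cutoffs([4, 7, 9, 10, 12, 15], [0, 0, 0, 1, 1, 1])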

RICHARD COOK, University of Waterloo
Mitigating bias from marker-dependent observation times for internal covariates in Cox regression

Studies of chronic disease often involve modelling the relationship between marker processes and disease onset or progression. The Cox regression model is perhaps the most common and convenient approach to analysis in this setting. In most cohort studies, however, biomarker values are only measured intermittently (e.g., at clinic visits), so Cox models often treat biomarker values as fixed at their most recently observed value until they are updated at the next visit. We consider the implications of this convention on the limiting bias of estimators when the marker values themselves impact the intensity for clinic visits. A joint multistate model is described for the marker-failure-visit process, which can be fitted to mitigate this bias, and an expectation-maximization algorithm is developed. An application to data from a registry of patients with psoriatic arthritis is given for illustration.
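
To make the convention under study concrete, here is a sketch using the lifelines package (this is the naive analysis whose bias the talk characterizes, not the corrective multistate approach): marker values from the last clinic visit are carried forward over (start, stop] intervals of a time-varying Cox fit.

    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    # Toy long-format data: each row carries the most recently observed
    # marker value (last observation carried forward) over (start, stop].
    df = pd.DataFrame({
        "id":     [1, 1, 1, 2, 2, 3],
        "start":  [0, 6, 12, 0, 8, 0],
        "stop":   [6, 12, 20, 8, 15, 10],
        "marker": [1.2, 1.9, 2.4, 0.8, 1.1, 2.0],
        "event":  [0, 0, 1, 0, 0, 1],
    })

    ctv = CoxTimeVaryingFitter()
    ctv.fit(df, id_col="id", event_col="event",
            start_col="start", stop_col="stop")
    ctv.print_summary()

If visit times depend on the marker itself, the hazard ratio for "marker" from a fit like this is precisely the quantity whose limiting bias is being studied.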

CAITLIN DALY, Cytel
Comparative effectiveness research in pharma: A statistician’s role in demonstrating the value of a new product

As part of the application process for the reimbursement of a new product, a manufacturer must demonstrate the clinical benefit of the product against standard of care (SoC) in a local market. SoC may include several treatment options or “comparators”, most of which have not been directly compared to the new product in a head-to-head randomized controlled trial. There are several statistical approaches available to indirectly compare a new product to relevant comparators, including network meta-analysis. In a consultant role, it is important for a statistician to ensure valid indirect treatment comparisons (ITCs) are conducted in the manufacturer’s target patient population. As such, a statistician must think outside the modelling box and develop a good understanding of the disease space and comparator evidence; this will help the statistician assess potential violations of the assumptions underlying the models. This talk will introduce ITC methods and will discuss how a statistician plays a role in all stages leading up to a valid ITC, including the collection of evidence, ITC feasibility assessment, ITC methods selection, and dealing with uncertainty.
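
One of the simplest ITC methods is the anchored (Bucher) indirect comparison: when trials compare B vs A and C vs A, the C-vs-B effect is recovered through the common comparator A. A minimal sketch with illustrative numbers:

    import math

    def bucher_itc(d_ab, se_ab, d_ac, se_ac):
        # Anchored indirect comparison of C vs B through common comparator A.
        # Inputs are relative effects (e.g., log hazard ratios) and their SEs.
        d_bc = d_ac - d_ab
        se_bc = math.sqrt(se_ab**2 + se_ac**2)
        return d_bc, se_bc

    # Illustrative log hazard ratios: B vs A = 0.30, C vs A = 0.10.
    print(bucher_itc(0.30, 0.12, 0.10, 0.15))  # (-0.20, ~0.19)

Even this simple calculation is only valid if the trial populations are comparable, which is exactly what the feasibility assessment described in the talk interrogates.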

OFIR HARARI, Core Clinical Sciences
Network Meta-Interpolation: fast and accurate NMA with effect modification

Effect modification (EM) may cause bias in network meta-analysis (NMA). Existing population-adjustment NMA methods use individual patient data to adjust for EM but disregard available subgroup information from aggregated data in the evidence network. Worse yet, these methods often rely on the shared effect modification (SEM) assumption. In this talk, we present Network Meta-Interpolation (NMI): a method that uses subgroup analyses to adjust for EM without assuming SEM. The method balances effect modifiers across studies by turning treatment effect (TE) estimates at the subgroup and study levels into TE and standard error estimates at EM values common to all studies. Simulation results comparing NMI with standard NMA, network meta-regression (NMR) and Multilevel NMR (ML-NMR) will be presented, to demonstrate NMI’s dominance in terms of estimation accuracy and credible interval (CrI) coverage, consistently across various scenarios.
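
A heavily simplified sketch of the interpolation idea as I read the abstract (hypothetical numbers; the actual NMI procedure also propagates standard errors): a study's subgroup-level treatment effects, reported at its own effect-modifier values, are re-expressed at EM values common to the whole network.

    import numpy as np

    # One study's subgroup analysis: treatment effect in the EM = 0
    # and EM = 1 subgroups (hypothetical values).
    te_by_subgroup = {0.0: 0.45, 1.0: 0.10}

    def te_at_em(target_em, te_by_subgroup):
        # Linearly interpolate the treatment effect at a shared EM value.
        x = sorted(te_by_subgroup)
        y = [te_by_subgroup[k] for k in x]
        return float(np.interp(target_em, x, y))

    # Re-express this study's effect at an EM prevalence of 0.35,
    # a value chosen to be common to all studies in the network.
    print(te_at_em(0.35, te_by_subgroup))  # 0.3275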

SAYANTEE JANA, Indian Institute of Technology, Hyderabad
Robust Inference for Generalized Multivariate Analysis of Variance (GMANOVA) Models

Existing methods for estimating the parameters of the Growth Curve Model (GCM), which is a special case of Generalized Multivariate Analysis of Variance (GMANOVA) models, assume that the underlying distribution for the error terms is multivariate normal. In practical situations, however, we often come across skewed longitudinal data. Simulation studies show that existing normal-based estimators are sensitive to the presence of skewness in the data, exhibiting increased bias and mean squared error (MSE) when the normality assumption is violated. In this presentation, we will consider the GCM under the multivariate skew normal (MSN) distribution, where the estimators are derived using the expectation-maximization (EM) algorithm. We will also present an extension in which the extended growth curve model (EGCM) is used for clustered longitudinal data. We will discuss an extension of the Newton-Raphson algorithm, which was used in developing the Restricted Expectation Maximization (REM) algorithm to derive estimators for the EGCM under the MSN distribution. We will provide results from a simulation study and illustrate an application using real data sets.
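
For readers less familiar with the GMANOVA setup, the classical (Potthoff-Roy) growth curve model has, in one common notation, the form

\[
Y \;=\; X B Z \;+\; E,
\]

where $Y$ ($n \times p$) collects $p$ repeated measurements on $n$ individuals, $X$ ($n \times m$) is the between-individual design matrix, $Z$ ($q \times p$) is the within-individual (time) design matrix, $B$ ($m \times q$) holds the regression parameters, and the rows of $E$ are classically taken to be multivariate normal; the talk replaces that last assumption with a multivariate skew normal law.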

ZELALEM NEGERI, University of Waterloo
Identifying and accommodating outlying studies in diagnostic test meta-analyses: a mixture modelling approach

Outlying studies are prevalent in meta-analyses of diagnostic test accuracy studies. Statistical methods for detecting and downweighting the effect of such studies have recently gained the attention of many researchers. These methods dichotomize each study in the meta-analysis as outlying or non-outlying and focus on examining the effect of outlying studies on the summary sensitivity and specificity only. In this work, we develop a random-effects bivariate mixture model for meta-analyzing diagnostic test accuracy studies by accounting for both the within- and across-study heterogeneity in diagnostic test results. Instead of dichotomizing the studies in the meta-analysis, the proposed model generates the probability that each study is outlying and allows assessing the impact of outlying studies on the pooled sensitivity, specificity, and between-study heterogeneity. We illustrate the performance of the developed method on real-life and simulated meta-analytic data.
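
To make the idea concrete (my notation, a sketch rather than the authors' exact specification): writing $\mu_i$ for study $i$'s pair of logit-transformed sensitivity and specificity, a bivariate random-effects mixture might take the form

\[
\mu_i \;\sim\; (1-\pi)\, N_2(\mu, \Sigma) \;+\; \pi\, N_2(\mu, \kappa\Sigma), \qquad \kappa > 1,
\]

so the posterior probability that study $i$ belongs to the inflated-variance component plays the role of a continuous outlyingness measure, in place of a hard outlying/non-outlying label.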

DEREK OUYANG, Ottawa Hospital Research Institute
Maintaining the validity of inference in stepped-wedge cluster randomized trials under random effects misspecification

Mixed-effects regression is commonly used in stepped-wedge cluster randomized trials (SW-CRTs). A key requirement is to account for the complex correlation structure of the data. Common structures are exchangeable (random intercept), nested exchangeable (random cluster-by-period), and exponential decay (discrete-time decay). In recent years, more complex models (e.g., random intervention models) have been proposed. In practice, it is challenging to specify appropriate random effects and obtain valid statistical inference. Robust variance estimators (RVEs), which have been widely discussed under the generalized estimating equations framework, may also be applied in mixed-effects regression to deal with random-effects misspecification. However, relevant discussion in the SW-CRT setting has been limited. In this study, we first review five RVEs that are available for linear mixed models in R. Then, we describe the results of a simulation study investigating the performance of RVEs under mixed-effects regression. We focused on SW-CRTs with continuous outcomes, assuming the data were generated from models with (1) exponential decay and random intervention effects, or (2) random cluster-by-period and random intervention effects. For each data generator, we found that the use of an RVE with either the random intercept or the random cluster-by-period model was sufficient to provide valid statistical inference. With the Satterthwaite degrees-of-freedom approximation, among the five RVEs we investigated, CR3 (a small-sample corrected RVE that approximates the leave-one-cluster-out jackknife variance estimator) consistently gave the best coverage results, even though it might be slightly anti-conservative when the number of clusters was below sixteen.
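
For orientation, CR-type robust variance estimators share a sandwich form, sketched here for a linear model with clusters $g$ (the mixed-model versions discussed in the talk differ in detail):

\[
\widehat{V} \;=\; (X^\top X)^{-1} \Big( \textstyle\sum_g X_g^\top A_g \hat{e}_g \hat{e}_g^\top A_g X_g \Big) (X^\top X)^{-1},
\]

with $A_g = I$ for the uncorrected estimator and $A_g = (I - H_{gg})^{-1}$, where $H_{gg} = X_g (X^\top X)^{-1} X_g^\top$, for CR3; the latter choice is what makes CR3 approximate the leave-one-cluster-out jackknife.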

ELEANOR PULLENAYEGUM, The Hospital for Sick Children
A proposed workflow for handling longitudinal data with irregular assessment times

Studies with longitudinal data often feature irregular observation times; a common cause of this is that data are collected as part of usual societal operations rather than for the purposes of research. For example, electronic health records (EHRs) are often used to study disease processes over time, or the impact of treatment on disease trajectory. When the assessment times are not independent of the outcome process, failure to account for them will result in biased inferences; for example, if sicker patients visit more often, we will overestimate the burden of disease.

Although irregular observation is a problem very similar to missing data, it is typically ignored. When handling missing data, researchers know they need to report how much data are missing, consider the missingness mechanism, use an analytic approach suited to their hypothesized missingness mechanism, and conduct sensitivity analyses. In this talk I will describe the irregular-observation counterparts to these steps, outlining both the methods and procedures for implementing them in standard statistical software.
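
One concrete method in this family (a sketch of inverse intensity of visit weighting, offered as an example rather than as the workflow's prescription): model the visit process given covariates, then weight each observed outcome by the inverse of its estimated visit probability, so that over-observed, sicker patients do not dominate the analysis.

    import numpy as np
    import statsmodels.api as sm

    # Toy data, one row per subject-day: did an assessment occur,
    # and what was the last-known disease severity? (hypothetical)
    rng = np.random.default_rng(1)
    severity = rng.normal(size=2000)
    visit = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * severity))))

    # Step 1: model the visit process given covariates.
    X = sm.add_constant(severity)
    visit_model = sm.GLM(visit, X, family=sm.families.Binomial()).fit()

    # Step 2: weight observed outcomes by 1 / P(visit | covariates);
    # these weights would then enter a weighted outcome model (e.g., GEE).
    p_visit = visit_model.predict(X)
    weights = np.where(visit == 1, 1.0 / p_visit, 0.0)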

GRACE YI, University of Western Ontario
Graphical proportional hazards measurement error models

In survival data analysis, the Cox proportional hazards (PH) model is perhaps the most widely used model for describing the dependence of survival times on covariates. While many inference methods have been developed under this model or its variants, those models are not adequate for handling data with complex structured covariates. High-dimensional survival data often entail several features: (1) many covariates are inactive in explaining the survival information, (2) active covariates are associated in a network structure, and (3) some covariates are error-contaminated. To handle such survival data, we propose graphical PH measurement error models and develop inferential procedures for the parameters of interest. Our proposed models significantly enlarge the scope of the usual Cox PH model and have great flexibility in characterizing survival data. Theoretical results are established to justify the proposed methods. Numerical studies are conducted to assess the performance of the proposed methods.
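
Schematically (my notation, not the authors' full specification), the two ingredients being combined are the Cox PH model and a classical additive measurement error model,

\[
\lambda(t \mid X) \;=\; \lambda_0(t)\, \exp(\beta^\top X), \qquad W \;=\; X + e,
\]

where only the surrogate $W$ is observed for the error-contaminated covariates, and a graphical model on $X$ encodes the network structure among the active covariates.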


© Canadian Mathematical Society : http://www.cms.math.ca/