Biostatistics
Org: Caitlin Daly (Cytel Inc.), Jemila Hamid (University of Ottawa) and Bouchra Nasri (Université de Montréal)
[PDF]
 AUDREY BELIVEAU AND AUGUSTINE WIGLE, University of Waterloo
Bayesian Unanchored Additive Models for Component Network Meta-Analysis [PDF]

Component Network Meta-Analysis (CNMA) models are an extension of standard Network Meta-Analysis models which account for the use of complex treatments in the network. This paper contributes to several statistical aspects of CNMA. First, by introducing a unified notation, we establish that currently available methods differ in the way additivity is assumed, an important distinction that has been overlooked so far. In particular, one model uses a more restrictive form of additivity than the other; we term these anchored and unanchored additivity, respectively. We show that anchored additivity can easily be misspecified. Second, given that Bayesian models are often preferred by practitioners, we develop two unanchored Bayesian CNMA models. An extensive simulation study confirms the favorable performance of the novel models; this is the first simulation study to compare the statistical properties of CNMA models in the literature. Finally, the use of our novel models is demonstrated on a real dataset, and the results of CNMA models on the dataset are compared.
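As background, the additivity idea at the heart of CNMA can be sketched in generic notation (ours, not the paper's): the relative effect of a complex treatment combining components is assumed to decompose into a sum of component effects, e.g.

```latex
\[
d_{\mathrm{ref},\,A+B} \;=\; d_{\mathrm{ref},\,A} \;+\; d_{\mathrm{ref},\,B},
\]
```

where $d_{\mathrm{ref},\,T}$ denotes the relative effect of treatment $T$ versus a reference. Whether such a relation is tied to one particular reference treatment or holds more generally is, roughly, the anchored/unanchored distinction the paper formalizes.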
 ANDREA BENEDETTI, McGill University
Individual participant data meta-analyses [PDF]

In this presentation, I will describe the DEPRESSD project, an individual participant data meta-analysis (IPDMA) that aims to investigate the diagnostic accuracy of the most common depression screening tools. I will discuss selective cutoff reporting and how IPDMA allowed us to overcome this problem, as well as other approaches to the problem.
 RICHARD COOK, University of Waterloo
Mitigating bias from marker-dependent observation times for internal covariates in Cox regression [PDF]

Studies of chronic disease often involve modelling the relationship between marker processes and disease onset or progression. The Cox regression model is perhaps the most common and convenient approach to analysis in this setting. In most cohort studies, however, biomarker values are only measured intermittently (e.g. at clinic visits), so Cox models often treat biomarker values as fixed at their most recently observed value until they are updated at the next visit. We consider the implications of this convention on the limiting bias of estimators when the marker values themselves impact the intensity for clinic visits. A joint multistate model is described for the marker-failure-visit process which can be fitted to mitigate this bias, and an expectation-maximization algorithm is developed. An application to data from a registry of patients with psoriatic arthritis is given for illustration.
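The last-value-carried-forward convention described above can be written schematically (our notation, not the talk's): with visit times $t_{i1} < t_{i2} < \cdots$ for subject $i$, the Cox model is fitted as

```latex
\[
\lambda_i\{t \mid \bar{X}_i(t)\} \;=\; \lambda_0(t)\,\exp\{\beta\, X_i(t_{ij})\},
\qquad t_{ij} \le t < t_{i,j+1},
\]
```

so that the marker is frozen at its most recent observed value between visits; the bias of interest arises when the intensity generating the $t_{ij}$ itself depends on $X_i$.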
 CAITLIN DALY, Cytel
Comparative effectiveness research in pharma: A statistician’s role in demonstrating the value of a new product [PDF]

As part of the application process for the reimbursement of a new product, a manufacturer must demonstrate the clinical benefit of the product against standard of care (SoC) in a local market. SoC may include several treatment options or "comparators", most of which have not been directly compared to the new product in a head-to-head randomized controlled trial. There are several statistical approaches available to indirectly compare a new product to relevant comparators, including network meta-analysis. In a consultant role, it is important for a statistician to ensure valid indirect treatment comparisons (ITCs) are conducted in the manufacturer's target patient population. As such, a statistician must think outside the modelling box and develop a good understanding of the disease space and comparator evidence; this will help the statistician assess potential violations of the assumptions underlying the models. This talk will introduce ITC methods and will discuss how a statistician plays a role in all stages leading up to a valid ITC, including the collection of evidence, ITC feasibility assessment, ITC methods selection, and dealing with uncertainty.
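One of the simplest anchored ITC methods, the Bucher comparison, illustrates the idea behind indirect comparison: if separate trials report effects of A vs C and B vs C on a common linear scale (e.g. log hazard ratios), the indirect A-vs-B effect and its standard error follow directly. A minimal sketch; all numbers are hypothetical:

```python
import math

def bucher_itc(d_AC, se_AC, d_BC, se_BC):
    """Anchored indirect comparison of A vs B through common comparator C.

    d_AC, d_BC: estimated treatment effects on a linear scale (e.g. log HRs)
    se_AC, se_BC: their standard errors (trials assumed independent)
    Returns the indirect estimate of A vs B and its standard error.
    """
    d_AB = d_AC - d_BC                      # effects subtract on the linear scale
    se_AB = math.sqrt(se_AC**2 + se_BC**2)  # independent variances add
    return d_AB, se_AB

# Hypothetical log hazard ratios versus standard of care C
d_AB, se_AB = bucher_itc(d_AC=-0.40, se_AC=0.15, d_BC=-0.10, se_BC=0.20)
print(round(d_AB, 3), round(se_AB, 3))  # -0.3 0.25
```

Note the price of indirectness: the variance of the indirect estimate is the sum of the two trial variances, so it is always less precise than either direct comparison.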
 OFIR HARARI, Core Clinical Sciences
Network Meta-Interpolation: fast and accurate NMA with effect modification [PDF]

Effect modification (EM) may cause bias in network meta-analysis (NMA). Existing population-adjustment NMA methods use individual patient data to adjust for EM but disregard available subgroup information from aggregated data in the evidence network. Worse yet, these methods often rely on the shared effect modification (SEM) assumption. In this talk, we present Network Meta-Interpolation (NMI): a method using subgroup analyses to adjust for EM that does not assume SEM. The method balances effect modifiers across studies by turning treatment effect (TE) estimates at the subgroup and study level into TE and standard error estimates at EM values common to all studies. Simulation results comparing NMI with standard NMA, network meta-regression (NMR) and multilevel NMR (ML-NMR) will be presented, to demonstrate NMI's dominance in terms of estimation accuracy and CrI coverage, consistently across various scenarios.
 SAYANTEE JANA, Indian Institute of Technology, Hyderabad
Robust Inference for Generalized Multivariate Analysis of Variance (GMANOVA) Models [PDF]

Existing methods for estimating the parameters of the Growth Curve Model (GCM), which is a special case of Generalized Multivariate Analysis of Variance (GMANOVA) models, assume that the underlying distribution for the error terms is multivariate normal. In practical situations, however, we often come across skewed longitudinal data. Simulation studies show that existing normal-based estimators are sensitive to the presence of skewness in the data, with estimators exhibiting increased bias and mean square error (MSE) when the normality assumption is violated. In this presentation, we will consider the GCM under a multivariate skew normal (MSN) distribution, where the estimators are derived using the expectation-maximization (EM) algorithm. We will also present an extension, where the extended growth curve model (EGCM) is used for clustered longitudinal data. We will discuss an extension of the Newton-Raphson algorithm, which was used in developing the Restricted Expectation Maximization (REM) algorithm to derive estimators for the EGCM under the MSN distribution. We will provide results from a simulation study and illustrate an application using real data sets.
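For reference, the classical GCM has the well-known Potthoff–Roy bilinear form (a standard presentation, included here as context):

```latex
\[
Y \;=\; X\,B\,Z \;+\; E,
\]
```

where $Y$ is the $n \times p$ matrix of longitudinal responses, $X$ an $n \times m$ between-individual design matrix, $Z$ a $q \times p$ within-individual (time) design matrix, and $B$ the $m \times q$ parameter matrix; classically the rows of $E$ are taken i.i.d. $N_p(0, \Sigma)$, and it is this normality assumption that the talk replaces with a multivariate skew normal law.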
 ZELALEM NEGERI, University of Waterloo
Identifying and accommodating outlying studies in diagnostic test meta-analyses: a mixture modelling approach [PDF]

Outlying studies are prevalent in meta-analyses of diagnostic test accuracy studies. Statistical methods for detecting and downweighting the effect of such studies have recently gained the attention of many researchers. These methods dichotomize each study in the meta-analysis as outlying or non-outlying and focus on examining the effect of outlying studies on the summary sensitivity and specificity only. In this work, we develop a random-effects bivariate mixture model for meta-analyzing diagnostic test accuracy studies by accounting for both the within- and across-study heterogeneity in diagnostic test results. Instead of dichotomizing the studies in the meta-analysis, the proposed model generates the probability that each study is outlying and allows assessing the impact of outlying studies on the pooled sensitivity, specificity, and between-study heterogeneity. We illustrate the performance of the developed method on real-life and simulated meta-analytic data.
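The "probability that each study is outlying" can be illustrated with a deliberately simplified univariate analogue of such a mixture (normal components with known parameters; all numbers hypothetical): the posterior outlier probability is just a Bayes ratio of the two component densities.

```python
import math

def outlier_prob(y, mu, sd, sd_out, pi_out):
    """Posterior probability that observation y comes from the outlying
    component of a two-component normal mixture sharing mean mu:
        y ~ (1 - pi_out) * N(mu, sd^2) + pi_out * N(mu, sd_out^2).
    """
    def npdf(x, m, s):  # normal density
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    num = pi_out * npdf(y, mu, sd_out)
    den = num + (1 - pi_out) * npdf(y, mu, sd)
    return num / den

# A study far from the pooled estimate gets a high outlier probability,
# one close to it a low probability (inputs are made up for illustration).
p_far = outlier_prob(y=3.0, mu=0.0, sd=0.5, sd_out=2.0, pi_out=0.1)
p_near = outlier_prob(y=0.2, mu=0.0, sd=0.5, sd_out=2.0, pi_out=0.1)
```

In the actual model the densities are bivariate (sensitivity and specificity jointly) and the parameters are estimated rather than known, but the soft, probabilistic classification works the same way.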
 DEREK OUYANG, Ottawa Hospital Research Institute
Maintaining the validity of inference in stepped-wedge cluster randomized trials under random effects misspecification [PDF]

Mixed-effects regression is commonly used in stepped-wedge cluster randomized trials (SWCRTs). A key requirement is to account for the complex correlation structures. Common structures are exchangeable (random intercept), nested exchangeable (random cluster-by-period), and exponential decay (discrete-time decay). In recent years, more complex models (e.g., random intervention models) have been proposed. In practice, it is challenging to specify appropriate random effects and obtain valid statistical inferences. Robust variance estimators (RVE) that have been widely discussed under the generalized estimating equations framework may also be applied in mixed-effects regression to deal with random-effects misspecifications. However, relevant discussion in SWCRTs has been limited. In this study, we first review five RVEs that are available for linear mixed models via R. Then, we describe the results of a simulation study to investigate the performance of RVE under mixed-effects regression. We focused on SWCRTs with continuous outcomes assuming the data were generated from models with 1) exponential decay and random intervention effects, or 2) random cluster-by-period and random intervention effects. For each data generator, we found that the use of RVE with either the random intercept or the random cluster-by-period model was sufficient to provide valid statistical inference. With the Satterthwaite degrees of freedom approximation, among the five RVEs we investigated, CR3 (a small-sample corrected RVE that approximates the leave-one-cluster-out jackknife variance estimator) consistently gave the best coverage results, even though it might be slightly anticonservative when the number of clusters was below sixteen.
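The basic sandwich construction behind such RVEs can be sketched for ordinary least squares with clustered residuals. The sketch below implements CR0, the uncorrected version; small-sample variants such as CR3 additionally rescale each cluster's residuals (roughly by the inverse of $I - H_{cc}$, the cluster's block of the hat matrix). Data and seed are arbitrary illustration choices:

```python
import numpy as np

def cr0_sandwich(X, y, cluster):
    """CR0 cluster-robust covariance for OLS coefficients.

    X: (n, p) design matrix; y: (n,) response; cluster: (n,) cluster labels.
    Returns (beta_hat, V_cr0).
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)          # "bread" of the sandwich
    meat = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(cluster):
        Xc, rc = X[cluster == c], resid[cluster == c]
        s = Xc.T @ rc                       # cluster-level score contribution
        meat += np.outer(s, s)              # sum of outer products ("meat")
    return beta, bread @ meat @ bread

# Toy data: 4 clusters of 5 observations each (purely illustrative)
rng = np.random.default_rng(0)
n, clusters = 20, np.repeat(np.arange(4), 5)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
beta, V = cr0_sandwich(X, y, clusters)
```

The point of the sandwich form is that the "meat" is built from empirical cluster-level scores, so the estimator remains valid even when the assumed within-cluster correlation structure is wrong — the property the talk exploits under random-effects misspecification.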
 ELEANOR PULLENAYEGUM, The Hospital for Sick Children
A proposed workflow for handling longitudinal data with irregular assessment times [PDF]

Studies with longitudinal data often feature irregular observation times; a common cause of this is that data are collected as part of usual societal operations rather than for the purposes of research. For example, electronic health records (EHRs) are often used to study disease processes over time, or the impact of treatment on disease trajectory. When the assessment times are not independent of the outcome process, failure to account for the assessment times will result in biased inferences; for example, if sicker patients visit more often, we will overestimate the burden of disease.
Although irregular observation is a problem very similar to missing data, it is typically ignored. When handling missing data, researchers know they need to report how much data is missing, consider the missingness mechanism, use an analytic approach suitable for their hypothesized missingness mechanism, and conduct sensitivity analyses. In this talk I will describe the irregular observation counterparts to these steps, outlining both the methods and procedures for implementing them in standard statistical software.
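One analytic approach from this literature, inverse-intensity weighting, makes the parallel to missing-data methods concrete (noted here as background, in our notation; the talk covers a broader workflow). Each observed assessment is weighted by the inverse of the visit intensity given the observed history,

```latex
\[
w_i(t) \;=\; \frac{1}{\lambda_i\{t \mid \bar{Z}_i(t^-)\}},
\]
```

so that, under suitable conditions on the visit process, weighted estimating equations recover the marginal outcome model — directly analogous to inverse-probability weighting for data missing at random.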
 GRACE YI, University of Western Ontario
Graphical proportional hazards measurement error models [PDF]

In survival data analysis, the Cox proportional hazards (PH) model is perhaps the most widely used model to feature the dependence of survival times on covariates. While many inference methods have been developed under such a model or its variants, those models are not adequate for handling data with complex structured covariates. High-dimensional survival data often entail several features: (1) many covariates are inactive in explaining the survival information, (2) active covariates are associated in a network structure, and (3) some covariates are error-contaminated. To handle such kinds of survival data, we propose graphical PH measurement error models and develop inferential procedures for the parameters of interest. Our proposed models significantly enlarge the scope of the usual Cox PH model and have great flexibility in characterizing survival data. Theoretical results are established to justify the proposed methods. Numerical studies are conducted to assess the performance of the proposed methods.
© Canadian Mathematical Society