Randomized trials are considered the gold standard for estimating causal effects. Trial findings are often used to inform policy and programming efforts, yet their results may not generalize well to a relevant target population if the trial sample is not representative of the population of interest. More specifically, generalization will be hindered if the trial sample differs from the population with respect to characteristics that moderate the treatment effect. Statistical methods have been developed to assess representativeness and improve generalizability by combining trials with data from non-experimental studies.
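One simple version of the idea is post-stratification: if a binary covariate moderates the treatment effect and the trial over-represents one stratum, the trial's overall effect estimate will be tilted toward that stratum, and reweighting stratum-specific effects by the target population's composition recovers the population-average effect. The sketch below uses entirely hypothetical numbers (stratum effects and proportions are illustrative, not from any study cited here):

```python
# Hypothetical binary effect moderator (e.g., an age-group indicator).
# Stratum-specific treatment effects (illustrative values only).
effects = {0: 2.0, 1: 0.5}

# The trial over-represents stratum 0 (70%); the target population
# is only 30% stratum 0.
trial_props = {0: 0.7, 1: 0.3}
pop_props = {0: 0.3, 1: 0.7}

# Naive trial estimate: stratum effects weighted by trial composition.
naive = sum(trial_props[s] * effects[s] for s in effects)

# Post-stratified (generalized) estimate: the same stratum effects
# reweighted by the target population's composition.
generalized = sum(pop_props[s] * effects[s] for s in effects)

print(naive)        # 1.55 -- tilted toward the over-sampled stratum
print(generalized)  # 0.95 -- target-population average effect
```

In practice the reweighting is usually done with estimated probabilities of trial participation (inverse-odds weights) rather than a handful of discrete strata, but the bias mechanism is the same.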
Background: Recent studies have demonstrated a decline in cancer screening and diagnosis during the COVID-19 pandemic. This study explored trends in the diagnosis and management of early breast cancer (eBC) at a sample of cancer clinics across the US early in the pandemic. Methods: Patients were selected from the Flatiron Health Research Database (FHRD), an electronic health record-derived de-identified database comprising approximately 280 US cancer clinics (~800 sites of care). Eligible patients had an ICD code for breast cancer, at least two clinical encounters, and a confirmed eBC (Stage I-III) diagnosis from unstructured documents.
Real-world data sources, like electronic health records (EHRs), may produce meaningful insights into the impact of COVID-19 infection on patients (pts) with cancer. Newly developed ICD codes are useful for identifying COVID-19 diagnoses in EHRs; however, there is concern over lagged clinical uptake and uncaptured testing outside of the EHR system. These issues may lead to underestimation of COVID-19 diagnoses in EHRs, thereby mischaracterizing the burden of COVID-19 infection on pts with cancer.
Randomized controlled trials (RCTs) are considered the gold standard for estimating causal effects, and evidence from trials is highly regarded in decision-making processes that impact entire populations. While rigorous in design, RCTs can still be flawed; leveraging data and information from additional non-experimental or “real world” studies can be advantageous for addressing statistical issues and improving inferences. This dissertation addresses two complications that arise in trials and can be addressed in this way: poor external validity and measurement error.
Many lifestyle intervention trials depend on collecting self-reported outcomes, like dietary intake, to assess the intervention’s effectiveness. Self-reported outcome measures are subject to measurement error, which could impact treatment effect estimation. External validation studies measure both self-reported outcomes and an accompanying biomarker, and can therefore be used for measurement error correction. Most validation data, though, are only relevant for outcomes under control conditions. Statistical methods have been developed to use external validation data to correct for outcome measurement error under control, and then conduct sensitivity analyses around the error under treatment to obtain estimates of the corrected average treatment effect.
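A minimal sketch of this correction strategy, with entirely simulated numbers (the error model, sample sizes, and the kcal/day scale are illustrative assumptions, not from any study described here): fit the measurement-error model relating self-report to the biomarker in the validation data, invert it to correct the trial's self-reported outcomes, and treat the assumption that the same error model holds under treatment as the sensitivity parameter.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- External validation study (control conditions) ---
# Biomarker-measured truth Y and a biased, noisy self-report Y*.
n_val = 5000
y_true = rng.normal(2000, 300, n_val)                    # e.g., kcal/day
y_self = 0.8 * y_true + 150 + rng.normal(0, 100, n_val)  # systematic + random error

# Fit the measurement-error model Y* = a + b*Y + noise.
X = np.column_stack([np.ones(n_val), y_true])
(a, b), *_ = np.linalg.lstsq(X, y_self, rcond=None)

# --- Trial: only self-reports are observed ---
n = 5000
y0 = rng.normal(2000, 300, n)                # control-arm true outcomes
y1 = rng.normal(1900, 300, n)                # treated arm: true effect = -100
s0 = 0.8 * y0 + 150 + rng.normal(0, 100, n)
# Sensitivity assumption: error under treatment matches control.
s1 = 0.8 * y1 + 150 + rng.normal(0, 100, n)

# Naive self-report contrast is attenuated (around -80 here).
naive_ate = s1.mean() - s0.mean()

# Invert the validation-study error model to correct each arm.
corrected_ate = ((s1 - a) / b).mean() - ((s0 - a) / b).mean()  # near -100
```

A sensitivity analysis of the kind the abstract describes would re-run the correction while varying the treatment-arm error model (e.g., a different slope or shift for `s1`) away from the control-condition model estimated in the validation data.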