Background: Pre-eclampsia is a leading cause of maternal and perinatal mortality and morbidity. Early identification of women at risk during pregnancy is required to plan management. Although many prediction models for pre-eclampsia have been published, few have been validated in external data. Our objective was to externally validate published prediction models for pre-eclampsia using individual participant data (IPD) from UK studies, to evaluate whether any of the models can accurately predict the condition when used within the UK healthcare setting.

Methods: IPD from 11 UK cohort studies (217,415 pregnant women) within the International Prediction of Pregnancy Complications (IPPIC) pre-eclampsia network contributed to the external validation of published prediction models identified by systematic review. Cohorts that measured all predictor variables in at least one of the identified models and reported pre-eclampsia as an outcome were included for validation. We reported model predictive performance as discrimination (C-statistic), calibration (calibration plots, calibration slope, calibration-in-the-large), and net benefit. Performance measures were estimated separately in each available study and then, where possible, combined across studies in a random-effects meta-analysis.

Results: Of 131 published models, 67 provided the full model equation and 24 could be validated in the 11 UK cohorts. Most models showed modest discrimination, with summary C-statistics between 0.6 and 0.7. Calibration of predicted against observed risk was generally poor for most models, with observed calibration slopes less than 1 indicating that predictions were too extreme, although confidence intervals were wide. There was large between-study heterogeneity in each model's calibration-in-the-large, suggesting poor calibration of the predicted overall risk across populations.
In a subset of models, the net benefit of using the models to inform clinical decisions appeared small and was limited to probability thresholds between 5% and 7%.

Conclusions: The evaluated models had modest predictive performance, with key limitations such as poor calibration (likely due to overfitting in the original development datasets), substantial heterogeneity, and small net benefit across settings. The evidence to support the use of these prediction models for pre-eclampsia in clinical decision-making is limited. Any models that we could not validate should be examined in terms of their predictive performance, net benefit, and heterogeneity across multiple UK settings before being considered for use in practice.

Trial registration: PROSPERO ID: CRD42015029349.
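The performance measures named in the abstract (C-statistic, calibration slope, calibration-in-the-large, and net benefit) can be illustrated with a short script. This is a minimal sketch on simulated data, not the IPPIC analysis code: the function names, the simulation parameters, and the choice to make the "published model" twice as extreme as the true risks (mimicking overfitting, so the calibration slope falls below 1) are all assumptions for illustration.

```python
import math
import random

def logit(p):
    return math.log(p / (1 - p))

def expit(x):
    return 1 / (1 + math.exp(-x))

def c_statistic(y, p):
    """Discrimination: probability that a randomly chosen event has a
    higher predicted risk than a randomly chosen non-event (ties = 0.5)."""
    events = [pi for yi, pi in zip(y, p) if yi == 1]
    nonevents = [pi for yi, pi in zip(y, p) if yi == 0]
    concordant = sum(1.0 if e > ne else 0.5 if e == ne else 0.0
                     for e in events for ne in nonevents)
    return concordant / (len(events) * len(nonevents))

def calibration_slope(y, p, iterations=20):
    """Fit y ~ a + b * logit(p) by Newton-Raphson logistic regression.
    b is the calibration slope (1 is ideal; < 1 means predictions are
    too extreme, as reported in the abstract).  Refitting the intercept
    with b fixed at 1 would give calibration-in-the-large."""
    x = [logit(pi) for pi in p]
    a, b = 0.0, 1.0
    for _ in range(iterations):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = expit(a + b * xi)
            w = mu * (1 - mu)
            g0 += yi - mu
            g1 += (yi - mu) * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        a += (h11 * g0 - h01 * g1) / det
        b += (h00 * g1 - h01 * g0) / det
    return b

def net_benefit(y, p, threshold):
    """Net benefit of treating everyone with predicted risk >= threshold,
    relative to treating no one (decision-curve analysis)."""
    n = len(y)
    tp = sum(1 for yi, pi in zip(y, p) if pi >= threshold and yi == 1)
    fp = sum(1 for yi, pi in zip(y, p) if pi >= threshold and yi == 0)
    return tp / n - (fp / n) * threshold / (1 - threshold)

# Simulated external validation: the hypothetical model's predictions are
# twice as extreme (on the log-odds scale) as the true risks, so the
# calibration slope should come out near 0.5.
random.seed(1)
true_lp = [random.gauss(-2.5, 0.8) for _ in range(5000)]
outcome = [1 if random.random() < expit(lp) else 0 for lp in true_lp]
predicted = [expit(-2.5 + 2.0 * (lp + 2.5)) for lp in true_lp]

c = c_statistic(outcome, predicted)
slope = calibration_slope(outcome, predicted)
nb = net_benefit(outcome, predicted, 0.06)  # 6% threshold, within the 5-7% range above
```

In the full analysis these measures would be computed per cohort and then pooled with a random-effects meta-analysis; the sketch above only shows the within-study calculations.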
State: Published - Dec 2020
Bibliographical note

Funding Information:
This project was funded by the National Institute for Health Research Health Technology Assessment Programme (ref no: 14/158/02). Kym Snell is funded by the National Institute for Health Research School for Primary Care Research (NIHR SPCR). The UK Medical Research Council and Wellcome (grant ref.: 102215/2/13/2) and the University of Bristol provide core support for ALSPAC. This publication is the work of the authors, and ST, RR, KS, and JA will serve as guarantors for the contents of this paper. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care.
The authors disclose support from NIHR HTA for the submitted work. LCC reports being Chair of the HTA CET Committee from January 2019. AK reports being a member of the NIHR HTA board. BWM reports grants from Merck; personal fees from OvsEva, Merck, and Guerbet; and other from NHMRC, Guerbet, and Merck, outside the submitted work. GS reports grants and personal fees from GlaxoSmithKline Research and Development Limited, grants from Sera Prognostics Inc., non-financial support from Illumina Inc., and personal fees and non-financial support from Roche Diagnostics Ltd., outside the submitted work. JK reports personal fees from Roche Canada, outside the submitted work. JM reports grants from the National Health Research and Development Program, Health and Welfare Canada, during the conduct of the study. JEN reports grants from the Chief Scientist Office Scotland, and other from GlaxoSmithKline and Dilafor, outside the submitted work. AG reports personal fees from Roche Diagnostics, outside the submitted work. IH reports personal fees from Roche Diagnostics and Thermo Fisher, outside the submitted work. RR reports personal fees from the BMJ, Roche, and the Universities of Leeds, Edinburgh, and Exeter, outside the submitted work.
© 2020, The Author(s).
- External validation
- Individual participant data
- Prediction model