Post by Admin on Feb 21, 2024 22:47:06 GMT
Recommendations for accurate reporting in medical research statistics
Mohammad Ali Mansournia
Maryam Nazemipour
Published: February 17, 2024. DOI: https://doi.org/10.1016/S0140-6736(24)00139-9
www.thelancet.com/journals/lancet/article/PIIS0140-6736(24)00139-9/fulltext
An important requirement for the validity of medical research is sound methodology and statistics, yet these are still often overlooked by medical researchers.1, 2 Based on the experience of reviewing statistics in more than 1000 manuscripts submitted to The Lancet Group of journals over the past 3 years, this Correspondence provides guidance on commonly encountered statistical deficiencies in reports and how to avoid them (panel).
Basic recommendations for accurate reporting of statistics
• Depending on the distribution, report either mean and SD or median and IQR for the description of quantitative variables. Provide supplemental material showing histograms or tables of the variables used in analyses.
• Check all model assumptions, preferably with graphs where feasible.
• Do not dichotomise p values ≥0·0001; instead, show the precise p value (eg, a p value of 0·032 should be shown as p=0·032, not p<0·05). However, the inequality p<0·0001 can be used to report very small p values.
• Do not report results as showing no effect unless all effects inside the interval estimate are clinically unimportant.
• Interpret results on the basis of clinical importance, with appropriate estimates of association and 95% CIs.
• Identify confounders on the basis of background information, as depicted in causal directed acyclic graphs, not significance tests.
• If the proportion of missing data is high enough to potentially affect results, use methods beyond simply discarding incomplete records—eg, inverse-probability-of-missingness weighting or multiple imputation.
• Assess and handle sparse-data bias in ratio estimates with methods developed for that purpose.
• If the outcome frequency is high, report risk ratios or risk differences instead of odds ratios.
• Assess additive interactions even if your model is multiplicative.
Data description is crucial to making sense of data. The mean and SD are often used for the description of quantitative variables. Nonetheless, for highly skewed variables (eg, typical environmental exposures) the median and IQR should be used instead; for variables that take only positive values, mean/SD < 2 indicates serious skewness.3 Full data descriptions also require histograms of continuous variables and tabulation of counts for categorical variables, along with percentages of missing data. Given the volume of such descriptions, they can be given as supplementary material.
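The mean/SD < 2 rule for positive-valued variables can be applied mechanically when choosing summaries. A minimal sketch, using simulated data and assuming NumPy is available:

```python
import numpy as np

def describe_quantitative(x):
    """Summarise a positive-valued variable, flagging likely skewness.

    For variables that take only positive values, mean/SD < 2 suggests
    serious skewness (Altman & Bland, BMJ 1996), in which case the
    median and IQR are the better summary.
    """
    x = np.asarray(x, dtype=float)
    mean, sd = x.mean(), x.std(ddof=1)
    if sd > 0 and mean / sd < 2:
        q1, med, q3 = np.percentile(x, [25, 50, 75])
        return {"summary": "median (IQR)", "median": med, "iqr": (q1, q3)}
    return {"summary": "mean (SD)", "mean": mean, "sd": sd}

# A lognormal sample (eg, a typical environmental exposure) is right-skewed.
rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=0.0, sigma=1.5, size=1000)
print(describe_quantitative(skewed)["summary"])  # median (IQR)
```

The rule is a screening heuristic, not a substitute for the histograms recommended above.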
All statistical analyses are based on fundamental assumptions, such as randomness of selection or treatment assignment. The validity of statistical modelling depends on further assumptions that should be assessed and, for this purpose, statistical tests are inadequate—graphical methods are needed. An important assumption underlying most regression models is linearity (on some scale) for quantitative predictors, which should be assessed with methods such as fractional polynomials or regression splines. In particular, categorisation of quantitative variables assumes an unrealistic step function, which can result in power loss or uncontrolled confounding.4, 5
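The graphical linearity check described above can be approximated in code. This sketch (simulated data, NumPy only) fits a straight line to a deliberately curved relationship and summarises the residual pattern that a residual-versus-predictor plot would reveal:

```python
import numpy as np

# Simulated predictor with a curved (logarithmic) effect on the outcome.
rng = np.random.default_rng(1)
x = rng.uniform(1, 100, size=500)
y = 2 * np.log(x) + rng.normal(0, 0.5, size=500)

# Fit a straight line; under linearity the residuals show no trend in x.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)

# Crude numerical stand-in for the residual plot: mean residual within
# thirds of x. For a concave relationship fitted by a line the pattern
# is negative-positive-negative, signalling non-linearity that a
# fractional polynomial or regression spline would capture.
bins = np.digitize(x, np.quantile(x, [1 / 3, 2 / 3]))
means = [resid[bins == b].mean() for b in range(3)]
print([round(m, 2) for m in means])
```

In practice the residuals would be plotted rather than binned; the binning here only makes the systematic pattern visible in text output.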
Statistical inference remains heavily based on hypothesis testing and estimation. However, p values can provide useful information about the compatibility of data with statistical hypotheses or models and so should be reported precisely, not replaced by qualitative comments about being significant or not. Compatibility can be gauged through transformations of p values, called s values, based on coin-tossing experiments.6, 7 Over-reliance on statistical testing should be avoided and p values should not be dichotomised at levels such as 0·05 or 0·01. In particular, large p values should not be interpreted as showing no association or no effect: absence of evidence is not evidence of absence.8 Only a very narrow interval estimate near the null value (0 for differences, 1 for ratios) warrants inferring that the study found no important association or effect. More generally, the clinical importance of results should be judged on the basis of interval estimates of appropriate measures, such as the difference of means or of risks.
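The s-value transformation mentioned above is simply the Shannon surprisal of the p value, s = -log2(p), and can be computed in one line:

```python
import math

def s_value(p):
    """s value (surprisal) of a p value in bits: s = -log2(p).

    s is the number of consecutive heads in fair coin tosses that would
    be about as surprising as the observed data under the test model.
    """
    return -math.log2(p)

print(round(s_value(0.05), 1))  # 4.3: about as surprising as 4 heads in a row
print(round(s_value(0.25), 1))  # 2.0: as surprising as 2 heads in a row
```

This makes clear how little evidence separates, say, p=0·05 from p=0·10: roughly one coin toss.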
The research question for many studies is causality, for which confounding adjustment is crucial. Confounders should be selected on the basis of background causal information—eg, as depicted in a directed acyclic graph.9, 10 Significance-based methodologies, such as stepwise selection algorithms, can be highly misleading because they could omit important confounders.11, 12, 13
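The contrast between DAG-based and significance-based confounder selection can be sketched in a few lines. The variable names and edges below are hypothetical, and selecting common causes is a simplification of the full backdoor criterion, which dedicated software (eg, dagitty) would check properly:

```python
# Hypothetical causal DAG, stored as parent -> children.
edges = {
    "age":     ["statin", "mi"],
    "smoking": ["statin", "mi"],
    "statin":  ["mi"],
    "mi":      [],
}

def ancestors(node):
    """All causal ancestors of `node` in the DAG."""
    parents = {p for p, kids in edges.items() if node in kids}
    result = set(parents)
    for p in parents:
        result |= ancestors(p)
    return result

# Confounders of the statin -> mi effect: shared causes of exposure and
# outcome, chosen from background knowledge encoded in the graph, not
# from the p values of a stepwise algorithm.
confounders = (ancestors("statin") & ancestors("mi")) - {"statin"}
print(sorted(confounders))  # ['age', 'smoking']
```

The point is that the adjustment set comes from the assumed causal structure, which must be defended substantively.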
Missing data are common. Simple methods of handling missing data, such as complete-case analysis (ie, listwise deletion), missingness indicators, or last-observation-carried-forward, can be subject to considerable bias and should be avoided if the proportion of missing data is high (eg, >5%). Better methods include inverse probability weighting and multiple imputation, although these still depend on missingness being conditionally random.14, 15
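A toy sketch of inverse-probability-of-missingness weighting, with simulated data, hypothetical variable names, and assuming scikit-learn is available:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical cohort in which the outcome y is missing more often at
# older ages, so missingness is random conditional on age (MAR).
rng = np.random.default_rng(2)
n = 2000
age = rng.normal(50, 10, n)
y = 0.1 * age + rng.normal(0, 2, n)
p_obs = 1 / (1 + np.exp(-(5 - 0.09 * age)))  # observation less likely at older ages
observed = rng.random(n) < p_obs

# Complete-case analysis drops older (higher-y) records and is biased low.
cc_mean = y[observed].mean()

# Inverse-probability-of-missingness weighting: model P(observed | age)
# and weight each complete case by the inverse of its fitted probability.
fit = LogisticRegression().fit(age.reshape(-1, 1), observed.astype(int))
w = 1 / fit.predict_proba(age[observed].reshape(-1, 1))[:, 1]
ipw_mean = np.average(y[observed], weights=w)

print(round(y.mean(), 2), round(cc_mean, 2), round(ipw_mean, 2))
```

The weighted estimate recovers the full-sample mean far better than the complete-case one, but only because the missingness model conditions on the right variable; that conditional-randomness assumption is untestable from the observed data alone.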
An important source of bias in logistic or Cox regression is sparse data—ie, a low number of events in some combinations of levels of variables. Unrealistically large ratio measures with wide interval estimates (eg, an odds ratio >10 with limits of 2 and 50) indicate sparse-data bias, which can be reduced with penalised or Bayesian methods.16, 17 When the dependent variable is an indicator of a common outcome, adjusted risk ratios are preferable to odds ratios for assessing clinical relevance, due to their ease of proper interpretation and resistance to sparse-data bias. Risk ratios and differences can be estimated in cohort studies and randomised trials with modified Poisson regression or regression standardisation.18, 19
Many studies try to examine interactions between two treatments on the outcome, or to estimate how much the effect of a treatment is modified by another variable (ie, effect-measure modification). Modellers often add product terms to regression models such as logistic or Cox models, which correspond to multiplicative interactions on the odds or rate scale. However, additive interaction on risks is more relevant for both clinical decisions and public health and so should be assessed as well.20 In either case, studies will usually have little power to establish even the direction of an interaction, and risk producing misleading estimates if they screen for interactions with statistical tests.
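Additive interaction can be assessed from fitted risks without special software, for example via the relative excess risk due to interaction (RERI). The risks below are hypothetical, chosen so the multiplicative and additive scales disagree:

```python
# Hypothetical risks for two exposures A and B (reference: both absent).
risk = {"00": 0.05, "10": 0.10, "01": 0.15, "11": 0.30}

rr10 = risk["10"] / risk["00"]  # risk ratio for A alone: 2.0
rr01 = risk["01"] / risk["00"]  # risk ratio for B alone: 3.0
rr11 = risk["11"] / risk["00"]  # joint risk ratio: 6.0

# RERI = RR11 - RR10 - RR01 + 1 (Knol & VanderWeele 2012).
# RERI > 0 means the joint effect exceeds the sum of the separate
# effects on the risk scale: positive additive interaction.
reri = rr11 - rr10 - rr01 + 1
print(round(reri, 2))  # 2.0
```

Note that the multiplicative scale sees no interaction here (6·0 = 2·0 × 3·0), yet the additive interaction is strongly positive, which is exactly why additive measures should be reported even when the fitted model is multiplicative.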
MAM is a statistical reviewer for The Lancet Group. We declare no other competing interests. We thank Sander Greenland and Jay Kaufman for their helpful comments on an earlier draft of this Correspondence.
References
1. Mansournia MA, Collins GS, Nielsen RO, et al. A checklist for statistical assessment of medical papers (the CHAMP statement): explanation and elaboration. Br J Sports Med. 2021; 55: 1009-1017.
2. Mansournia MA, Collins GS, Nielsen RO, et al. Checklist for statistical assessment of medical papers: the CHAMP statement. Br J Sports Med. 2021; 55: 1002-1003.
3. Altman DG, Bland JM. Detecting skewness from summary information. BMJ. 1996; 313: 1200.
4. Altman DG, Royston P. The cost of dichotomising continuous variables. BMJ. 2006; 332: 1080.
5. Binney ZO, Mansournia MA. Methods matter: (mostly) avoid categorising continuous data—a practical guide. Br J Sports Med. 2023; (published online Nov 28). doi.org/10.1136/bjsports-2023-107599
6. Greenland S, Mansournia MA, Joffe M. To curb research misreporting, replace significance and confidence by compatibility: a Preventive Medicine Golden Jubilee article. Prev Med. 2022; 164: 107127.
7. Mansournia MA, Nazemipour M, Etminan M. p-value, compatibility, and s-value. Glob Epidemiol. 2022; 4: 100085.
8. Altman DG, Bland JM. Statistics notes: absence of evidence is not evidence of absence. BMJ. 1995; 311: 485.
9. Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research. Epidemiology. 1999; 10: 37-48.
10. Lipsky AM, Greenland S. Causal directed acyclic graphs. JAMA. 2022; 327: 1083-1084.
11. Etminan M, Collins GS, Mansournia MA. Using causal diagrams to improve the design and interpretation of medical research. Chest. 2020; 158: S21-S28.
12. Etminan M, Brophy JM, Collins G, Nazemipour M, Mansournia MA. To adjust or not to adjust: the role of different covariates in cardiovascular observational studies. Am Heart J. 2021; 237: 62-67.
13. Kyriacou DN, Greenland P, Mansournia MA. Using causal diagrams for biomedical research. Ann Emerg Med. 2023; 81: 606-613.
14. Altman DG, Bland JM. Missing data. BMJ. 2007; 334: 424.
15. Mansournia MA, Altman DG. Inverse probability weighting. BMJ. 2016; 352: i189.
16. Greenland S, Mansournia MA, Altman DG. Sparse data bias: a problem hiding in plain sight. BMJ. 2016; 352: i1981.
17. Mansournia MA, Geroldinger A, Greenland S, Heinze G. Separation in logistic regression: causes, consequences, and control. Am J Epidemiol. 2018; 187: 864-870.
18. Zou G. A modified Poisson regression approach to prospective studies with binary data. Am J Epidemiol. 2004; 159: 702-706.
19. Greenland S. Model-based estimation of relative risks and other epidemiologic measures in studies of common outcomes and in case-control studies. Am J Epidemiol. 2004; 160: 301-305.
20. Knol MJ, VanderWeele TJ. Recommendations for presenting analyses of effect modification and interaction. Int J Epidemiol. 2012; 41: 514-520.
Article info
Publication history
Published: 17 February 2024
Identification
DOI: doi.org/10.1016/S0140-6736(24)00139-9