Original Articles
Capture-recapture is a potentially useful method for assessing publication bias

https://doi.org/10.1016/j.jclinepi.2003.09.015

Abstract

Objectives

Publication bias is a problem in systematic reviews of randomized controlled trials. The aim of this study was to assess the influence of publication bias in a systematic review of the effectiveness of Progressive Resistance Training (PRT) in older people.

Study Design and Setting

The relevant studies were ascertained from three sources: electronic databases, experts, and handsearching. Capture-recapture, visual inspection of funnel plots, two statistical tests, and two methods that make adjustments for publication bias were employed to check the robustness of the conclusions of the systematic review.

Results

The methods employed gave broadly consistent results. Capture-recapture estimated that 3 (95% CI 1–15) relevant studies were missed, while Trim and Fill suggested 16 studies had been missed. Both Egger's test for bias and a funnel plot regression approach suggested that publication bias was present. A selection model approach suggested that the funnel plot asymmetry observed may not be entirely due to publication bias.

Conclusion

Capture-recapture is a potentially useful method for assessing publication bias. Further research in the form of simulation studies is required, using a variety of scenarios to investigate the extent to which each method approximates the truth.

Introduction

Meta-analysis aims to compare and combine estimates of effect across related studies. Although a meta-analysis is often employed, a systematic review need not include a quantitative synthesis of the data extracted from the included studies. One of the major problems that may influence the results of a meta-analysis is publication bias. Systematic reviews usually include studies found using electronic database search strategies, but the published studies indexed in these databases are more likely to report significant results. Publication bias in a systematic review can lead to an overestimation of effect sizes, which can mislead health professionals who rely on evidence for decision making [1].

To reduce publication bias, it is recommended that a comprehensive search be undertaken to locate all potential studies including those unpublished [2]. Effective strategies for searching electronic databases, such as Medline and PubMed, have been outlined by Dickersin et al. [3] and Robinson and Dickersin [4], while Hopewell et al. [5] have recently demonstrated that a combination of MEDLINE and handsearching is required to adequately identify all reports of randomized controlled trials. Moreover, whether or not studies from the “gray literature” (i.e., trials that are unpublished or are awaiting publication) are included in a review can have a significant influence on the reported effectiveness of a particular intervention [6].

Several methods of assessing publication bias have been proposed, the most common of which is the “funnel plot” [7]. This is a simple graphical assessment of the effect size against the sample size of each study. The plot should have a “funnel” shape, with the most precise effect sizes at the top and the least precise at the base of the funnel. In the event of publication bias, there is a missing portion at the base of the funnel, signifying that small imprecise studies have not been included. Recently, Sutton et al. [8] have undertaken a review of statistical methods for assessing publication bias in meta-analysis. These methods range from computing a “fail-safe number” of negative unpublished studies that would need to be found to overturn the results of the meta-analysis on the retrieved studies [9]; through using a diagnostic plot, such as a funnel plot, and filling in an equal number of extreme negative studies to see how the results change [10], [11]; rank correlation methods [12]; and regression methods [13]; to selection models that treat publication bias as a missing data problem [14], [15], [16]. All these methods are based on the assumption that publication bias leads to funnel plot asymmetry. Although the review of Sutton et al. and that of Song et al. [17] are extremely thorough in their appraisal of this issue, neither has discussed the possibility of employing capture-recapture techniques for assessing the completeness of study ascertainment.
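To make the regression approach [13] concrete, the following is a minimal sketch (not taken from the paper) of Egger's test: the standardized effect (effect divided by its standard error) is regressed on precision (the reciprocal of the standard error), and an intercept far from zero is taken as evidence of small-study asymmetry. The function name and the illustrative data are ours.

```python
import math

def eggers_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry (simple OLS sketch).

    Regresses the standardized effect (effect / SE) on precision (1 / SE).
    Returns (intercept, intercept standard error, t statistic); an intercept
    whose t statistic is large in absolute value suggests asymmetry.
    """
    y = [e / s for e, s in zip(effects, ses)]  # standardized effects
    x = [1.0 / s for s in ses]                 # precisions
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    # residual variance on n - 2 degrees of freedom
    sse = sum((yi - intercept - slope * xi) ** 2 for xi, yi in zip(x, y))
    s2 = sse / (n - 2)
    se_int = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return intercept, se_int, intercept / se_int
```

The t statistic would be compared against a t distribution with n − 2 degrees of freedom; that lookup is omitted here for brevity.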

Capture-recapture methods have traditionally been used in ecological studies but are now increasingly utilized in epidemiology, most recently for checking the completeness of case ascertainment in population-based studies [18]. These methods have been explored by Spoor et al. [19] as a useful way of evaluating the completeness of a systematic literature search, albeit using the simplest scenario of two sources of study ascertainment. As Spoor et al. point out, though, capture-recapture is an inappropriate name when applied to literature searching, as there is actually no “recapture” being performed. In fact, the approach assesses multiple methods of study ascertainment within a specific time frame on a single occasion. Capture-recapture may therefore be more suitable for systematic reviews, as it is employed prospectively, while most of the statistical methods described in other reviews of publication bias are post hoc approaches.
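In the two-source scenario described by Spoor et al. [19], the total number of eligible studies can be estimated from the counts retrieved by each source and their overlap. The sketch below uses Chapman's nearly unbiased version of the Lincoln-Petersen estimator; the function name and the example counts are illustrative, not figures from the paper.

```python
def chapman_estimate(n1, n2, m):
    """Two-source capture-recapture (Chapman's estimator).

    n1: number of studies found by source 1 (e.g. database searching)
    n2: number of studies found by source 2 (e.g. handsearching)
    m:  number of studies found by BOTH sources
    Returns (estimated total eligible studies, estimated number missed).
    """
    # Chapman's correction avoids division by zero and reduces small-sample bias
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    distinct = n1 + n2 - m  # unique studies actually retrieved
    return n_hat, n_hat - distinct
```

For example, if one source finds 20 studies, a second finds 15, and 10 are found by both, the estimator suggests roughly 30 eligible studies in total, of which about 5 were missed by both sources. The estimator assumes the two sources are independent; violations of that assumption (a key caveat with literature sources) bias the estimate.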

Although validation is difficult, many of the statistical methods referred to above provide a useful framework for performing a sensitivity analysis regarding the likely impact of publication bias on a systematic review. There is, at present, no consensus on what form such a sensitivity analysis should take. In part, this is due to uncertainty regarding the appropriateness of the adjustment methods, and in part to a lack of data comparing the performance of the different approaches, although some do exist [20], [21]. The aim of this study is to employ capture-recapture techniques, along with some other statistical methods of handling publication bias, as a form of sensitivity analysis for a systematic review, to check the robustness of our conclusions. We will first outline the systematic review; we will then describe the capture-recapture methodology as well as the other methods employed in the sensitivity analysis, before finally discussing the advantages and disadvantages of each approach in this particular case study.

Section snippets

Case study: progressive resistance training (PRT) in older people

One of the inevitable consequences of increasing age is a decline in muscle strength. Muscle weakness in older people, particularly of the lower limbs, is associated with reduced physical performance [22], increased risk of disability [23], and falls [24], while a number of studies have shown that older people who participate in strength-training programs where the stimulus increases as strength improves can experience large improvements in their muscle strength [25],

Impact of publication bias on the systematic review

As there are several approaches for investigating the presence of publication bias, as well as for correcting for it when it is present, it would seem useful to undertake a sensitivity analysis to estimate the robustness of the overall conclusions of our meta-analysis. The methods we used are based on the funnel plot, as this is the most common method employed in systematic reviews. Each method is first described and then applied to our case study of PRT.

Discussion

Due to the subjectivity of visual inspections of funnel plots, we employed a sensitivity analysis approach to investigate publication bias in a systematic review. This sensitivity analysis used some recently developed methods from the meta-analytic literature as well as capture-recapture to assess the completeness of study retrieval in our systematic review based on published and unpublished sources. The results of the capture-recapture analysis suggest that our systematic search strategy

Acknowledgements

We would like to thank Professor J. Copas and Dr. J.Q. Shi, who provided the S-Plus functions to implement the selection model analysis. We would also like to thank the members of the Cochrane Musculoskeletal Injuries Group for their assistance.

References (48)

  • K. Dickersin et al.

    Systematic reviews: identifying relevant studies for systematic reviews

    BMJ

    (1994)
  • K.A. Robinson et al.

    Development of a highly sensitive search strategy for the retrieval of reports of controlled trials using PubMed

    Int J Epidemiol

    (2002)
  • S. Hopewell et al.

    A comparison of handsearching versus MEDLINE searching to identify reports of randomized controlled trials

    Stat Med

    (2002)
  • R.J. Light et al.

    Summing up: the science of reviewing research

    (1984)
  • A.J. Sutton et al.

    Modelling publication bias in meta-analysis

    Stat Methods Med Res

    (2000)
  • S. Duval et al.

    Practical estimates of the effect of publication bias in meta-analysis

    Aust Epidemiol

    (1998)
  • S. Duval et al.

    Trim and Fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis

    Biometrics

    (2000)
  • C. Begg et al.

    Operating characteristics of a rank correlation test for publication bias

    Biometrics

    (1994)
  • M. Egger et al.

    Bias in meta-analysis detected by a simple, graphical test

    BMJ

    (1997)
  • S. Iyengar et al.

    Selection models and the file drawer problem

    Stat Sci

    (1988)
  • L.V. Hedges

    Modeling publication selection effects in meta-analysis

    Stat Sci

    (1992)
  • L.J. Gleser et al.

    Models for estimating the number of unpublished studies

    Stat Med

    (1996)
  • F. Song et al.

    Publication and related biases

    Health Technol Assess

    (2000)
  • E.B. Hook et al.

    Capture-recapture methods in epidemiology: methods and limitations

    Epidemiol Rev

    (1995)