8.13.1  Rationale for concern about bias

Missing outcome data, whether due to attrition (drop-out) during the study or to exclusions from the analysis, raise the possibility that the observed effect estimate is biased. We shall use the term incomplete outcome data to refer to both attrition and exclusions, and shall describe an individual participant’s outcome as missing when it is not available.

Attrition may occur for a variety of reasons: for example, participants may withdraw (or be withdrawn) from the study, fail to attend assessments at which outcomes should have been measured, or be lost to follow-up.

In addition, some participants may be excluded from the analysis: for example, because they were found not to meet the eligibility criteria, did not receive the allocated intervention, or deviated from the study protocol.

Some exclusions of participants may be justifiable, in which case they need not be considered as leading to missing outcome data (Fergusson 2002). For example, participants who are randomized but are subsequently found not to have been eligible for the trial may be excluded, provided that the discovery of ineligibility could not have been affected by the randomized intervention, and preferably that the decision to exclude was made blind to the assigned intervention. The intention to exclude such participants should be specified before the outcome data are seen.

An intention-to-treat (ITT) analysis is often recommended as the least biased way to estimate intervention effects in randomized trials (Newell 1992): see Chapter 16 (Section 16.2). The principles of ITT analyses are:

  1. keep participants in the intervention groups to which they were randomized, regardless of the intervention they actually received;

  2. measure outcome data on all participants; and

  3. include all randomized participants in the analysis.

The first principle can always be applied. However, the second is often impossible due to attrition beyond the control of the trialists. Consequently, the third principle of conducting an analysis that includes all participants can only be followed by making assumptions about the missing values (see below). Thus very few trials can perform a true ITT analysis without making imputations, especially when there is extended follow-up. In practice, study authors may describe an analysis as ITT even when some outcome data are missing. The term ‘ITT’ does not have a clear and consistent definition, and it is used inconsistently in study reports (Hollis 1999). Review authors should use the term only to imply all three of the principles above, and should interpret with care any studies that use the term without clarification.
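The dependence of an all-participants analysis on assumptions about missing values can be made concrete with a minimal sketch. The trial below is entirely hypothetical (all counts invented): it compares an available-case analysis with two extreme imputations of the missing outcomes, and the direction of the estimated risk difference changes with the assumption.

```python
# Hypothetical two-arm trial (all numbers invented for illustration):
# how the estimated risk difference shifts under different assumptions
# about participants whose outcomes are missing.

arms = {
    # randomized participants (n), observed events, missing outcomes
    "intervention": {"n": 100, "events": 20, "missing": 15},
    "control":      {"n": 100, "events": 30, "missing": 5},
}

def available_case_difference():
    """Risk difference ignoring participants with missing outcomes."""
    risks = {g: a["events"] / (a["n"] - a["missing"]) for g, a in arms.items()}
    return risks["intervention"] - risks["control"]

def imputed_difference(missing_had_event):
    """Risk difference over all randomized participants, assuming every
    missing participant in a group did (True) or did not (False) have
    the event, according to missing_had_event[group]."""
    risks = {}
    for g, a in arms.items():
        extra = a["missing"] if missing_had_event[g] else 0
        risks[g] = (a["events"] + extra) / a["n"]
    return risks["intervention"] - risks["control"]

print(f"available case: {available_case_difference():+.3f}")  # -0.080
# Best case for the intervention: every missing control had the event,
# no missing intervention participant did.
print(f"best case:  {imputed_difference({'intervention': False, 'control': True}):+.3f}")   # -0.150
# Worst case for the intervention: the reverse assumption.
print(f"worst case: {imputed_difference({'intervention': True, 'control': False}):+.3f}")   # +0.050
```

The apparent effect ranges from a clear benefit to a harm depending solely on what is assumed about the missing outcomes; sensitivity analyses of this kind are one way to judge how fragile a trial's conclusion is.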

Review authors may also encounter analyses described as “modified intention-to-treat”, which usually means that participants were excluded if they did not receive a specified minimum amount of the intended intervention. This term is also used in a variety of ways, so review authors should always seek information about precisely who was included.

Note that it might be possible to conduct analyses that include participants who were excluded by the study authors (re-inclusions), if the reasons for exclusions are considered inappropriate and the data are available to the review author. Review authors are encouraged to do this when possible and appropriate.

Concerns over bias resulting from incomplete outcome data are driven mainly by theoretical considerations. Several empirical studies have examined whether various aspects of missing data are associated with the magnitude of effect estimates. Most found no clear evidence of bias (Schulz 1995b, Kjaergard 2001, Balk 2002, Siersma 2007). Tierney et al. observed a tendency for analyses conducted after trial authors excluded participants to favour the experimental intervention compared with analyses including all participants (Tierney 2005). There are, however, notable examples of biased ‘per-protocol’ analyses (Melander 2003), and a review found more exaggerated effect estimates from ‘per-protocol’ analyses than from ‘ITT’ analyses of the same trials (Porta 2007). Interpretation of empirical studies is difficult because exclusions are poorly reported, particularly in the pre-CONSORT era before 1996 (Moher 2001b). For example, Schulz observed that an apparent lack of exclusions was associated with more ‘beneficial’ effect sizes as well as with a lower likelihood of adequate allocation concealment (Schulz 1996). Hence, failure to report exclusions in the trials in Schulz’s study may have been a marker of poor trial conduct rather than a true absence of exclusions.
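One mechanism by which ‘per-protocol’ exclusions can exaggerate effects is that participants who deviate from protocol often differ prognostically from those who adhere. The hypothetical simulation below (all probabilities invented) illustrates this: the intervention has no true effect, but frail participants, who have a higher event risk, tend to deviate from protocol in the intervention arm only, so the per-protocol analysis suggests benefit while the ITT analysis does not.

```python
import random

random.seed(0)

def simulate_trial(n_per_arm=20000):
    """Simulate one trial of an intervention with NO true effect.

    'Frail' participants have a higher event risk and, in the
    intervention arm only, often fail to complete the protocol.
    All probabilities are invented for illustration.
    Returns {"itt"|"pp": {"int"|"ctl": [events, n]}}.
    """
    results = {"itt": {"int": [0, 0], "ctl": [0, 0]},
               "pp":  {"int": [0, 0], "ctl": [0, 0]}}
    for group in ("int", "ctl"):
        for _ in range(n_per_arm):
            frail = random.random() < 0.30
            event = random.random() < (0.40 if frail else 0.10)  # same risk in both arms
            # Frail intervention participants usually deviate from protocol.
            on_protocol = not (group == "int" and frail and random.random() < 0.8)
            results["itt"][group][0] += event
            results["itt"][group][1] += 1
            if on_protocol:
                results["pp"][group][0] += event
                results["pp"][group][1] += 1
    return results

res = simulate_trial()
for analysis in ("itt", "pp"):
    (ei, ni), (ec, nc) = res[analysis]["int"], res[analysis]["ctl"]
    # ITT difference is near zero (no true effect); the per-protocol
    # difference is clearly negative, i.e. spurious apparent benefit.
    print(f"{analysis}: risk difference = {ei/ni - ec/nc:+.3f}")
```

The per-protocol estimate is biased because excluding frail deviators selectively removes high-risk participants from one arm; an ITT analysis, by retaining everyone as randomized, preserves the comparability created by randomization.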

Empirical research has also investigated the adequacy with which incomplete outcome data are addressed in reports of trials. One study of 71 trial reports from four general medical journals concluded that missing data are common and often inadequately handled in the statistical analysis (Wood 2004).