Author information
- Sonia Bouri, MBBS, BSc∗
- Zachary I. Whinnett, PhD,
- Graham D. Cole, MA, MB, BChir,
- Charlotte H. Manisty, MBBS, PhD,
- John G. Cleland, MD, PhD and
- Darrel P. Francis, MA
- ∗Dr. Sonia Bouri, International Centre for Circulatory Health, National Heart and Lung Institute, Imperial College London, 59-61 North Wharf Road, London W2 1LA, United Kingdom.
“Outcome,” “response,” and “effect” are not the same. Unfortunately, these terms are commonly used interchangeably in imaging research, which can lead to problems with study design and misinterpretation of results. We propose and illustrate simple definitions to allow these fundamentally different concepts to be distinguished for clear communication between authors and readers of scientific papers.
“Outcome” is a value measured after an intervention; it can be a state of health or an event. Outcome data are easy to obtain, requiring only 1 measurement and no knowledge of measurements made before the intervention. Outcome can be a useful and valid metric for assessing whether health care needs have been met.
“Response” is the change in a measured variable from before to after an intervention. For example, it could be the change in ejection fraction after biventricular pacemaker insertion. Avoiding bias is challenging, because clinical staff know patient characteristics, whether the patient has had an intervention, and what the previous measurements have been. Response is an unreliable and often deceptive metric because, without a control group, it cannot distinguish among background variation, the natural history of disease, and the effects of an intervention.
The “effect” is the difference between the response of patients who have undergone an intervention and the response of a control group; it therefore requires 4 measurements to be compared. This is more complex but is the key metric that should dominate clinical decisions about therapeutic interventions, because it distinguishes the effect of an intervention from the natural history of disease. “Efficacy,” “effectiveness,” “advantage,” “net benefit,” or “net harm” are other terms that can be used instead of “effect.”
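The arithmetic behind these definitions can be made explicit with a short sketch. The LVEF values below are invented purely for illustration; they are not drawn from any trial.

```python
# Hypothetical LVEF measurements (%); all values are invented for illustration.
treated_before = [25, 28, 30, 27]   # intervention group, pre-intervention
treated_after  = [33, 35, 36, 32]   # intervention group, post-intervention
control_before = [26, 29, 31, 28]   # control group, first measurement
control_after  = [28, 30, 33, 29]   # control group, second measurement

def mean(xs):
    return sum(xs) / len(xs)

# "Response" uses only the intervention group's before/after change
# (2 sets of measurements).
response = mean(treated_after) - mean(treated_before)

# "Effect" subtracts the control group's change from that response,
# so all 4 sets of measurements are needed.
effect = response - (mean(control_after) - mean(control_before))

print(f"Response: {response:.1f} percentage points")  # 6.5
print(f"Effect:   {effect:.1f} percentage points")    # 5.0
```

The gap between the two numbers is whatever would have happened without the intervention: here, the control group improved by 1.5 percentage points on its own, so only 5.0 of the 6.5-point "response" is attributable to the intervention.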
Outcome, response, and effect are not interchangeable
If 3 patient groups are undergoing an identical intervention, it is possible that the group that has the best outcome, the group that has the best response, and the group that receives the greatest effect are all different (Fig. 1).
Consider a hypothetical example of Agent X, which is known to dramatically increase left ventricular ejection fraction (LVEF), but only in patients with dilated cardiomyopathy and low LVEF. Imagine it being examined in 3 groups:
• Group 1: testing in healthy subjects to establish safety;
• Group 2: a first-in-man open-label study at the center that invented Agent X, in the early minutes after successful primary angioplasty for myocardial infarction; and
• Group 3: a double-blind placebo-controlled trial in patients with dilated cardiomyopathy and low LVEF.
Group 1 will have the best “outcome,” because these subjects have the highest baseline LVEF. Agent X does not change a normal LVEF. There is no response and no effect. Group 2 may have the best “response” because with prompt revascularization myocardial function will recover substantially in many patients, independent of Agent X. Group 3 may have the greatest increase in LVEF attributable to the intervention so it will have received the greatest “effect.”
If our community judged clinical usefulness on outcome or response, Agent X might be considered inappropriate for Group 3, even though they are the only ones who truly benefited at all.
To illustrate this concept, look at studies of cardiac resynchronization therapy (CRT) by searching for “echocardiography and CRT” in Europe PubMed Central. Frequently, trials use the terms “outcome,” “response,” and “effect” (or “benefit”) without a clear separation in meaning (Fig. 2) and in some cases interchangeably.
One pitfall of reporting response is the phenomenon of “regression to the mean.” If enrollment requires a measurement below a certain threshold, using a measurement with inherent variability, such as LVEF, the value recorded on 1 particular day may be lower than the patient's true average value. When the measurement is repeated (after the intervention), it is likely to have risen closer to the patient's true average. This may give the false impression of a therapeutic improvement. Unless there is a control group for comparison, a reader may be misled into thinking that an intervention is effective. Describing an intervention as “effective” should be reserved for the findings of randomized controlled trials in which there is a significant difference between the intervention and control groups.
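Regression to the mean can be demonstrated with a simple simulation. In this hypothetical sketch (all parameters invented), each patient has a stable true LVEF, every echo reading adds random noise, and no intervention is given at all, yet the enrolled cohort's mean reading still rises at follow-up:

```python
import random

random.seed(0)

# Hypothetical cohort: each patient has a stable "true" LVEF (%), but any
# single reading varies around it because of measurement noise.
N = 10_000
true_lvef = [random.gauss(40, 8) for _ in range(N)]

def measure(true_value):
    # One noisy echo reading (noise SD of 5 points, chosen arbitrarily)
    return true_value + random.gauss(0, 5)

# Enroll only patients whose FIRST reading falls below a threshold,
# mimicking a trial entry criterion such as LVEF < 35%.
THRESHOLD = 35
enrolled = [(t, m) for t in true_lvef if (m := measure(t)) < THRESHOLD]

baseline = [m for _, m in enrolled]
# Repeat the measurement with NO intervention whatsoever.
followup = [measure(t) for t, _ in enrolled]

mean = lambda xs: sum(xs) / len(xs)
print(f"Mean LVEF at enrollment: {mean(baseline):.1f}%")
print(f"Mean LVEF at follow-up:  {mean(followup):.1f}%")
# The follow-up mean is higher even though nothing was done: patients were
# selected partly for unluckily low first readings, and the second reading
# regresses toward each patient's true average.
```

Without a control group, this spontaneous rise is indistinguishable from a genuine treatment response, which is exactly why “effective” should be reserved for controlled comparisons.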
The terms “outcome,” “response,” and “effect” are sometimes used interchangeably in imaging research. We suggest simple definitions to facilitate clear communication and avoid misinterpretation of findings and even of study design.
Please note: Drs. Whinnett (FS/13/44/30291) and Cole (FS/12/12/29294) and Professor Francis (FS/10/038) are supported by grants from the British Heart Foundation. Professor Cleland has received speakers' honoraria from Medtronic, Biotronik, and Sorin; and a research grant from Sorin. All other authors have reported that they have no relationships relevant to the contents of this paper to disclose.