Friday, December 1, 2017

Assessing associations in observational studies

In the medical literature, it is very common to find variables associated with a specific outcome. For example, increased body mass index (the variable) might be associated with an increased risk of cancer (the outcome). However, an association does not always imply that one thing caused the other. It’s important to consider other possible interpretations.

Here are the five interpretations that you should consider when you read or hear about a reported association in an observational study:

1: The results were obtained by chance (random error)

The relationship between the variable and the outcome occurred by chance.

What’s really happening? There is no true association; the two events simply appear to be related by coincidence.

Clues that this might be the case: The results cannot be replicated when the study is repeated. We should also look at the precision of the reported association (e.g. how wide its confidence interval is).

For example, you find a reported association between watching TV and myopia in an observational study. However, when you conduct a similar study, you find no association, so the original result could have occurred by chance.
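To make this concrete, here is a minimal Python sketch (all of the numbers are invented for illustration, and it assumes numpy and scipy are available) that simulates many studies of two variables with no true association. Roughly 5% of the simulated studies will still report a “significant” association at p < 0.05 purely by chance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_participants = 1000, 200
false_positives = 0

for _ in range(n_studies):
    # Two binary variables with NO true association, e.g.
    # "watches a lot of TV" and "has myopia".
    exposure = rng.random(n_participants) < 0.5
    outcome = rng.random(n_participants) < 0.3   # independent of exposure

    # Build the 2x2 table and test it, as a typical study might.
    table = [[np.sum(exposure & outcome), np.sum(exposure & ~outcome)],
             [np.sum(~exposure & outcome), np.sum(~exposure & ~outcome)]]
    _, p_value, _, _ = stats.chi2_contingency(table)
    if p_value < 0.05:
        false_positives += 1

# Around 5% of studies "find" an association that does not exist.
print(f"Spurious associations: {false_positives} / {n_studies}")
```

This is also why precision matters: a single study with a wide confidence interval leaves plenty of room for chance to explain its result.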

2: Bias (systematic error)

There is no true cause-and-effect relationship; there only appears to be one. In this case, the results are not due to chance but to bias.

What’s really happening? There are flaws in the design or conduct of the study that give a false impression of a relationship between the variable and the outcome of interest.

Clues that this might be the case: The results are inconsistent with those of other similar studies, and there may be sources of bias in the study’s design or conduct (e.g. in how participants were selected or assessed), or in the analysis of the results. The greater the systematic error, the less accurate the measurement of the variable.

For example, you find an association between vaginal breech delivery and developmental dysplasia of the hip. However, the paediatrician’s examination was more detailed in newborns with breech presentation than in those with cephalic presentation. This could be diagnostic (detection) bias, in which clinicians’ expectations alter the probability of diagnosing a disease in one group compared with another.
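As an illustration, here is a Python sketch of detection bias (the data are simulated, not from any real study; the prevalence and detection rates are assumptions): both groups have exactly the same true prevalence of hip dysplasia, but breech newborns are examined more thoroughly and are therefore diagnosed more often.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_prevalence = 0.02          # identical in both groups: no true association

breech = rng.random(n) < 0.04   # ~4% of deliveries are breech (assumption)
has_dysplasia = rng.random(n) < true_prevalence

# Detection bias: breech newborns are examined more thoroughly, so the
# condition is found more often in that group (assumed detection rates).
p_detect = np.where(breech, 0.95, 0.50)
diagnosed = has_dysplasia & (rng.random(n) < p_detect)

for label, group in [("breech", breech), ("cephalic", ~breech)]:
    print(f"{label}: diagnosed rate = {diagnosed[group].mean():.3%}")
# The diagnosed rate looks nearly twice as high after breech delivery,
# even though the true prevalence is identical by construction.
```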

3: The cause-effect relationship is reversed (effect-cause)

There really is a causal relationship, but in the opposite direction from that reported, meaning that the interpretation of the relationship is incorrect.

What’s really happening? The supposed outcome is the real cause, and the supposed cause is the real outcome.

This can be a problem in study designs that cannot establish temporality (i.e. which came first), e.g. cross-sectional and case-control studies.

For example, if you were to identify a relationship between taking non-steroidal anti-inflammatory drugs (NSAIDs) and a greater risk of spontaneous abortion, you might think that the NSAIDs caused the spontaneous abortion. However, it is also possible that the NSAIDs were taken to relieve pain caused by early symptoms of the spontaneous abortion itself. This misunderstanding is known as protopathic bias (when a drug is initiated in response to the first symptoms of a disease that is, at that point, undiagnosed).
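A small Python sketch of protopathic bias (entirely simulated; the risks and exposure rates below are invented) shows how a drug with no effect on the outcome can still look harmful when it is taken in response to the outcome’s early symptoms:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# True state of the world: NSAID use has NO effect on abortion risk.
will_abort = rng.random(n) < 0.05

# Protopathic bias: early pain from an impending abortion prompts NSAID
# use, so the "exposure" actually follows from the (undiagnosed) outcome.
early_pain = will_abort & (rng.random(n) < 0.6)
nsaid = early_pain | (rng.random(n) < 0.10)   # pain-driven plus background use

risk_exposed = will_abort[nsaid].mean()
risk_unexposed = will_abort[~nsaid].mean()
print(f"apparent risk ratio = {risk_exposed / risk_unexposed:.1f}")
# A naive analysis sees a much higher abortion risk among NSAID users,
# even though here the drug is taken because of the outcome, not vice versa.
```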

4: There is another (unmeasured) variable which explains the relationship (confounding)

There is an unmeasured variable which explains the association.

What’s really happening? There is an unmeasured third variable that influences both the supposed “cause” and the “effect”, creating an association between them even though neither directly causes the other.

For example, if you were to identify a relationship between having a higher BMI and a greater risk of cancer, you might think that having a high BMI causes cancer. However, it would be important to consider whether other factors associated with having a higher BMI (e.g. poorer diet, less physical activity) could explain the increased cancer risk.
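A short Python sketch (simulated numbers, assuming a single binary “lifestyle” confounder for simplicity) shows how such a factor can manufacture a crude association that disappears once you stratify on it:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Confounder: a lifestyle factor (e.g. poor diet, low physical activity)
# that raises BOTH the chance of high BMI and the chance of cancer.
lifestyle = rng.random(n) < 0.4
high_bmi = rng.random(n) < np.where(lifestyle, 0.6, 0.2)
cancer = rng.random(n) < np.where(lifestyle, 0.10, 0.02)  # independent of BMI

def risk_ratio(exposure, outcome):
    return outcome[exposure].mean() / outcome[~exposure].mean()

print(f"crude risk ratio: {risk_ratio(high_bmi, cancer):.2f}")
for level in (True, False):
    stratum = lifestyle == level
    rr = risk_ratio(high_bmi[stratum], cancer[stratum])
    print(f"stratum lifestyle={level}: risk ratio = {rr:.2f}")
# The crude risk ratio is well above 1, but within each lifestyle stratum
# it is close to 1.0: the confounder explains the whole crude association.
```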

5: The variable truly causes the outcome (cause-effect)

There really is a cause-effect relationship.

You could also go further and evaluate the strength of the causal claim (e.g. using the Bradford Hill criteria, a set of principles for assessing the likelihood that a relationship between a presumed cause and an observed effect is causal):

  • Strength of association
  • Consistency
  • Specificity
  • Temporality
  • Biological gradient (dose-response)
  • Plausibility
  • Coherence
  • Experimental evidence
  • Analogy

