Thursday, November 29, 2018

Heterogeneity: what is it and why does it matter?

Heterogeneity is not something to be afraid of; it simply means that there is variability in your data. So, if you bring together different studies for analysis or a meta-analysis, it is only to be expected that differences will be found. The opposite of heterogeneity is homogeneity, meaning that all studies show the same effect.

It is important to note that there are different types of heterogeneity:

  • Clinical: Differences in participants, interventions or outcomes
  • Methodological: Differences in study design, risk of bias
  • Statistical: Variation in intervention effects or results

We are interested in these differences because they can indicate that our intervention may not be working in the same way every time it’s used. By investigating these differences, you can reach a much greater understanding of what factors influence the intervention, and what result you can expect next time the intervention is implemented.

Although clinical and methodological heterogeneity are important, this blog will be focusing on statistical heterogeneity.

How to identify and measure heterogeneity

Eyeball test

In your forest plot, look at how much the confidence intervals overlap, rather than at which side of the line of no effect the estimates fall. Whether the results are on either side of the line of no effect may not affect your assessment of whether heterogeneity is present, but it may influence your assessment of whether the heterogeneity matters.

With this in mind, take a look at the graph below and decide which plot is more homogeneous.

[Figure: two forest plots, labelled 1 and 2, for comparison]

Of course, the more homogeneous one is plot number 1. The confidence intervals are all overlapping and, in addition, all of the studies favour the control intervention.
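If you would like to practise the eyeball test on your own data, a quick forest-style plot is easy to draw. The sketch below uses made-up study names, effect estimates and confidence intervals purely for illustration; it is not the plot shown above.

```python
# Minimal forest-style plot for eyeballing overlapping confidence intervals.
# All numbers are hypothetical and chosen only for illustration.
import matplotlib.pyplot as plt
import numpy as np

studies = ["Study A", "Study B", "Study C", "Study D"]
estimates = np.array([0.80, 0.72, 0.85, 0.78])   # e.g. risk ratios
lower = np.array([0.65, 0.55, 0.70, 0.60])       # lower 95% CI limits
upper = np.array([0.98, 0.94, 1.03, 1.01])       # upper 95% CI limits

y = np.arange(len(studies))[::-1]                # plot the first study at the top
xerr = np.vstack([estimates - lower, upper - estimates])

fig, ax = plt.subplots()
ax.errorbar(estimates, y, xerr=xerr, fmt="s", color="black", capsize=3)
ax.axvline(1.0, linestyle="--", color="grey")    # line of no effect for ratio measures
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Risk ratio (95% CI)")
plt.tight_layout()
plt.show()
```

The more the horizontal confidence-interval lines overlap, the more homogeneous the studies will look to the eye.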

For those who prefer to measure things rather than just eyeballing them, don’t worry: there are statistical methods to help you grasp the concept of heterogeneity.

Chi-squared (χ²) test

This test assumes the null hypothesis that all the studies are homogeneous, i.e. that each study is measuring an identical effect, and gives us a p-value for this hypothesis. If the p-value of the test is low, we can reject the null hypothesis and conclude that heterogeneity is present.

Because the test is often not sensitive enough, heterogeneity can easily be wrongly excluded, so many researchers use a p-value of ≤ 0.1 instead of ≤ 0.05 as the cut-off.

I² statistic

The I² statistic, developed by Professor Julian Higgins, measures the extent of heterogeneity rather than simply stating whether or not it is present.

Thresholds for the interpretation of I² can be misleading, since the importance of inconsistency depends on several factors. A rough guide to interpretation is as follows:

  •  0% to 40%: might not be important
  • 30% to 60%: moderate heterogeneity
  • 50% to 90%: substantial heterogeneity
  • 75% to 100%: considerable heterogeneity

To understand the theory above, have a look at the following example.

[Figure: example meta-analysis output illustrating the I² statistic (Chi² p = 0.11, I² = 51%)]

We can see that the p-value of the chi-squared test is 0.11, which means we cannot reject the null hypothesis, thus suggesting homogeneity. However, by looking at the interventions we can already see some heterogeneity in the results. Furthermore, the I² value is 51%, suggesting moderate to substantial heterogeneity.

This is a good example of how the χ² test can be misleading when there are only a few studies in the meta-analysis.
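If you want to see the arithmetic behind these two statistics, the sketch below computes Cochran's Q (the χ² heterogeneity statistic), its p-value and I² = (Q − df) / Q × 100% from hypothetical effect estimates and standard errors. The numbers are invented for illustration and are not taken from the example above.

```python
# Cochran's Q test and the I-squared statistic from summary data.
# The effect estimates and standard errors below are hypothetical.
import numpy as np
from scipy import stats

effects = np.array([0.10, 0.35, -0.05, 0.42, 0.20])   # e.g. log odds ratios
se = np.array([0.12, 0.15, 0.10, 0.20, 0.18])         # their standard errors

weights = 1.0 / se**2                                  # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)   # fixed-effect pooled estimate

Q = np.sum(weights * (effects - pooled) ** 2)          # Cochran's Q
df = len(effects) - 1
p_value = stats.chi2.sf(Q, df)                         # chi-squared test for heterogeneity

I2 = max(0.0, (Q - df) / Q) * 100                      # % of variability beyond chance

print(f"Q = {Q:.2f} on {df} df, p = {p_value:.3f}, I² = {I2:.0f}%")
```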

How to deal with heterogeneity?

Once you have detected variability in your results, you need to deal with it. Here are some steps you can take to address it:

  • Check your data for mistakes – Go back and see if you maybe typed in something wrong
  • Don’t do a meta-analysis if heterogeneity is too high – Not every systematic review needs a meta-analysis
  • Explore heterogeneity – This can be done by subgroup analysis or meta-regression
  • Perform a random effects meta-analysis – Bear in mind that this approach is for heterogeneity that cannot be explained because it’s due to chance (a minimal sketch follows this list)
  • Change the effect measure – Let’s say you use the risk difference and find high heterogeneity; then try the risk ratio or odds ratio instead
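As a rough illustration of the random-effects option mentioned in the list above, the sketch below applies the DerSimonian-Laird estimator to the same kind of hypothetical summary data used earlier; in practice you would normally rely on a dedicated meta-analysis package (for example RevMan, or the R metafor package).

```python
# DerSimonian-Laird random-effects meta-analysis from summary data.
# The effect estimates and standard errors are hypothetical.
import numpy as np

effects = np.array([0.10, 0.35, -0.05, 0.42, 0.20])   # e.g. log odds ratios
se = np.array([0.12, 0.15, 0.10, 0.20, 0.18])

w = 1.0 / se**2                                        # fixed-effect weights
pooled_fe = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled_fe) ** 2)
df = len(effects) - 1

# Moment estimator of the between-study variance tau^2
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights include tau^2, so small studies gain relative influence
w_re = 1.0 / (se**2 + tau2)
pooled_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci_low, ci_high = pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re

print(f"tau² = {tau2:.3f}, pooled effect = {pooled_re:.3f} "
      f"(95% CI {ci_low:.3f} to {ci_high:.3f})")
```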

References

(1) Fletcher J. What is heterogeneity and is it important? BMJ 2007;334:94.

(2) Deeks JJ, Higgins JPT, Altman DG (editors). Chapter 9: Analysing data and undertaking meta-analyses. In: Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available from www.cochrane-handbook.org.

(3) https://www.mathsisfun.com/data/chi-square-test.html


Monday, November 12, 2018

5 Ways to Get Through Black Friday with Your Budget Intact

If you're managing debt, trying to save for the future, or dealing with other expenses, don't bow to the pressure to spend more than you can.


Thursday, November 8, 2018

Implementation research: What is it, what do we know and how can we use it?

Implementation science and research is a growing field that focuses on the implementation of programs, treatments and policy, and on the factors and variables relevant to it. This represents a growing recognition that successful dissemination and implementation are an essential part of evidence-based practice and can have a significant impact on outcomes in health care.

What is implementation research?

As mentioned, implementation research focuses on the actual implementation of policies, programmes and treatments.  It has been defined as:

the scientific inquiry into questions concerning implementation – the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices

(Peters, Adam, Alonge, Agyepong, & Tran, 2013, p. 1).

Implementation studies span from evaluations of evidence-based care and programs within mental health and health care, to social services, education, and a range of other fields.

The field of implementation science reflects increasing attention and energy spent on factors that affect the successful implementation of practice and care. As noted by Proctor et al. (2009), there has been (and still is) a gap between what we know is effective (e.g. a specific therapy) and the care that is in fact delivered (e.g. in the hospital). Most research focuses on the effectiveness of different treatments in health care settings; fewer studies have focused on how these treatments are implemented in practice and how they are experienced by consumers. A heavily cited American study from 2003 found that only about half of the participants received the care recommended for their medical conditions (McGlynn et al., 2003). Thus, this study and similar papers illustrate the often vast gap between research findings and knowledge about effective procedures and care, and what is actually implemented and delivered to patients and clients in the real world.

How does one do implementation research?

Implementation research spans from evaluations of cost-effectiveness to the efficacy of leadership strategies and other implementation variables. A range of different implementation outcomes can be assessed. Proctor et al. (2011), for instance, list eight different outcome variables that can be used to evaluate the implementation of a treatment, program or service: acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability. These are all measures that can say something about the degree to which a service or program has been successfully implemented in a practice or clinical context. The Proctor et al. paper represents an attempt to develop a taxonomy of implementation outcomes in order to clarify and advance research in this area.

A range of different research methods can be used in implementation research, from hybrid trials, which assess the effectiveness of both an intervention and an implementation strategy, to pragmatic trials and quality improvement studies (Peters et al., 2013). Many studies also use mixed-methods designs, combining qualitative and quantitative techniques, to investigate implementation efforts. These research methods can be applied to evaluate the previously mentioned outcome variables, but also to investigate other relevant questions, such as the cultural appropriateness of a program or therapy in different cultures and populations (e.g. Self-Brown et al., 2011).

How can we apply findings from implementation research in real-world settings?

Findings and knowledge produced by implementation research can have large implications for the fields studied and are of great importance in the contexts of health care and mental health. This area of research focuses on understanding implementation in real-world settings and under real-world conditions. While much of basic science can be said to sit at quite a distance from clinical realities, implementation research is often much more practically oriented. Thus, findings from this area of research can be of great utility, in particular for clinicians and those implementing evidence-based programs, policies and practice. Additionally, successful implementation benefits the recipients of these programs and care.



Tuesday, November 6, 2018

Cohort studies: prospective, retrospective and ambidirectional designs

In epidemiology, the term “cohort” is used to define a set of people followed for a certain period of time. W. H. Frost, a 20th century epidemiologist, was the first to adopt the term in a 1935 publication, when he assessed age-specific and tuberculosis-specific mortality rates. The epidemiological definition of the word currently means “a group of people with certain characteristics, followed up in order to determine incidence or mortality by any specific disease, all causes of death or some other outcome.” [1]

Cohort studies are observational in design; unlike clinical trials, there is no intervention. [2] Because exposure is identified before the outcome, cohort studies have a temporal framework for assessing causality and thus have the potential to provide strong scientific evidence. [1] A fundamental characteristic of the design is that, at the starting point, subjects are identified and their exposure to a risk factor is assessed. Subsequently, the frequency of the outcome, usually the incidence of disease or death over a period of time, is measured and related to exposure status. [3]

Advantages of cohort studies include the ability to assess causality, to examine multiple outcomes from a given exposure, to investigate rare exposures, and to determine disease rates in exposed and unexposed individuals over time. Disadvantages are the need for large samples, susceptibility to selection bias and long follow-up times. Cohort studies may be prospective, retrospective [1] or ambidirectional. [4]
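To make the link between exposure status and outcome frequency concrete, here is a minimal sketch, using entirely made-up counts, of how a risk ratio and an approximate 95% confidence interval could be calculated from a simple cohort 2×2 table.

```python
# Risk ratio from a hypothetical cohort 2x2 table (all counts are invented).
import math

exposed_cases, exposed_total = 30, 500       # cases / people followed, exposed group
unexposed_cases, unexposed_total = 15, 500   # cases / people followed, unexposed group

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
risk_ratio = risk_exposed / risk_unexposed

# Approximate 95% CI on the log scale (standard large-sample formula)
se_log_rr = math.sqrt(
    1 / exposed_cases - 1 / exposed_total
    + 1 / unexposed_cases - 1 / unexposed_total
)
ci_low = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)

print(f"Risk ratio = {risk_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```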

 

Prospective cohort studies

Prospective cohort studies are characterized by the selection of the cohort and the measurement of risk factors or exposures before the outcome occurs, thus establishing temporality, an important factor in determining causality. This design provides a distinct advantage over case-control studies, in which exposure and disease are assessed at the same time. [5]

The main disadvantage of prospective cohort studies is cost. They require a large number of individuals to be followed up for long periods of time, [5] which can be difficult because of loss to follow-up or withdrawal by the individuals studied. [1] Bias may occur, especially if there is significant loss during follow-up. [5]

It is important to minimize loss to follow-up, a situation in which the researcher loses contact with an individual, resulting in missing data. When many individuals are lost to follow-up, the internal validity of the study is reduced. A general rule of thumb is that the loss rate should not exceed 20% of the sample. Where possible, systematic differences related to the outcome or to exposure to risk factors should be examined by comparing individuals who remain in the study with those who were lost to follow-up or dropped out. It is therefore important to select individuals who can be followed for the entire duration of the cohort study. Strategies to limit loss to follow-up include excluding individuals who are likely to be lost, such as those who plan to move, obtaining information that enables future tracking, and maintaining periodic contact. [1]

A prospective design is inefficient and inappropriate for the study of rare diseases, but it becomes more efficient as the frequency of the disease in the population increases. [5]

 

Retrospective cohort studies

Cohort studies may also have a retrospective design. Retrospective cohorts are also called historical cohorts. [1,6] A retrospective cohort study considers events that have already occurred. Health records of a certain group of patients will already have been collected and stored in a database, so it is possible to identify a group of patients – the cohort – and reconstruct their experience as if they had been prospectively followed up. Although patient information was probably collected prospectively, the cohort would not initially have been assembled with the goal of following individuals and investigating the association between a risk factor and an outcome. In a retrospective study, it is likely that not all relevant risk factors have been recorded. This may affect the validity of a reported association between risk factor and outcome when adjusting for confounding. In addition, the measurement of risk factors and outcomes may not have been as accurate as in a prospective cohort study. [2]

Many of the advantages and disadvantages of retrospective cohort studies are similar to those of prospective studies. As previously described, retrospective cohort studies are typically constructed from previously collected records, in contrast to the prospective design, which involves identifying a group to be followed prospectively with the objective of investigating the association between one or more risk factors and an outcome. However, an advantage of both study designs is that exposure to risk factors is recorded before the outcome occurs. This is important because it allows the temporal sequence of risk factor and outcome to be evaluated. [6]

Because retrospective cohort studies use previously collected records stored in a database, they are relatively inexpensive, quick and easy to perform. However, a consequence of the retrospective design is that not all relevant risk factors may have been identified and recorded. Another disadvantage is that many health professionals will have been involved in patient care, making the measurement of risk factors and outcomes less consistent than that achieved with a prospective study design. [6]

 

Ambidirectional cohort studies

 A cohort study may also be ambidirectional or ambispective, which means that there are prospective and retrospective phases in the study. Ambidirectional studies are less common than either prospective or retrospective studies, but are conceptually consistent and share the advantages and disadvantages of both types of studies. [4]

Ambidirectional cohorts have retrospective and prospective components. An outcome occurring shortly after exposure can be examined retrospectively. In the same study, other outcomes that may not appear until some time after exposure can be followed prospectively to see whether they have a higher incidence. [4]

As with prospective and retrospective designs, none of the followed subjects has the outcome of interest at the beginning of the follow-up period, the compared groups differ in the exposure status, and incidence of the outcome is measured and compared to determine whether there was an association between exposure and outcome. [4]

 

Conclusions

  • Cohort studies are appropriate studies to evaluate associations between multiple exposures and multiple outcomes.
  • Main advantages of prospective and retrospective designs are, respectively, higher accuracy and higher efficiency.

 

References

  1. Song JW, Chung KC. Observational Studies: Cohort and Case-Control Studies. Plast Reconstr Surg. 2010;126(6):2234–2242 DOI: 10.1097/PRS.0b013e3181f44abc.
  2. Sedgwick P. Prospective cohort studies: advantages and disadvantages. BMJ 2013;347:f6726-f6727 DOI: 10.1136/bmj.f6726
  3. Euser AM, Zoccali C, Jager KJ, Dekker F. Cohort Studies: Prospective versus Retrospective. Nephron Clin Pract. 2009;113:c214–c217 DOI: 10.1159/000235241
  4. Boston University School of Public Health. Prospective and Retrospective Cohort Studies. Available from: <http://sphweb.bumc.bu.edu/otlt/MPH-Modules/QuantCore/PH717_CohortStudies/PH717_CohortStudies3.html>. Accessed: 6 August 2018.
  5. Firestein GS, Budd R, Gabriel SE, McInnes IB, O’Dell JR. Kelley and Firestein’s Textbook of Rheumatology. 10th ed. Philadelphia: Elsevier, 2017. E-book. ISBN: 978-0-323-31696-5. Available from: <https://books.google.com.br/books?isbn=032341494X>. Accessed: 6 August 2018.
  6. Sedgwick P. Retrospective cohort studies: advantages and disadvantages. BMJ 2014;348:g1072-g1072 DOI: 10.1136/bmj.g1072


Monday, November 5, 2018

How to Manage Your Money Around the Holidays

This year, you should commit to a better money management strategy for the holidays. Here’s how to do it.


Friday, November 2, 2018

Electric vs Manual Toothbrushes: what’s the evidence?

A 2014 Cochrane Review published in the Cochrane Database of Systematic Reviews compared the effects of using a manual toothbrush with an electric toothbrush for maintaining oral health. Why is this important for dental, nursing and medical students to be aware of?

In our day to day oral care, most of the time we may make decisions based on our background, culture and education. But are these choices right?

To answer a question like: ‘which is better, brushing your teeth with an electric or manual toothbrush?’ we need to consider the evidence. This is particularly important because electric toothbrushes are widely advertised, recommended by professionals and can be expensive compared to manual toothbrushes.

What is an electric (or ‘powered’) toothbrush?

Electric (or ‘powered’) toothbrushes can be classified into two categories based on their action: vibration or rotation-oscillation. Vibration supports a technique similar to manual brushing, whereas the rotating-oscillating version moves the brush slowly from tooth to tooth. A further classification can be made based on the speed of their movements: standard power toothbrushes, sonic toothbrushes (20 Hz to 20,000 Hz) or ultrasonic toothbrushes.

Oral health

The Cochrane Review evaluated the effects of brushing with a manual vs. powered toothbrush on two main outcomes: plaque (a sticky film containing bacteria) and gingivitis (gum inflammation). The review also explored whether there were any adverse effects of brushing with an electric vs. manual toothbrush.

Dental plaque is the primary cause of gingivitis and can lead to periodontitis, a more serious form of gum disease, which affects 11% of the global population. The build-up of plaque can also lead to caries (decay) in permanent teeth. Tooth decay is the most prevalent disease worldwide, with a global prevalence of 35% for all ages combined. Whilst in high‐income countries the prevalence of caries has decreased over the past decade, in lower‐ and middle‐income countries (LMICs) the incidence is increasing due to population growth, an ageing population, changing diets and inadequate exposure to fluorides.

So, removing plaque and reducing gingivitis have important roles in preventing gum disease and tooth decay and are of major public health importance.

Evidence

This Cochrane Review included 56 trials with 5068 participants. Fifty-one of these trials, including 4624 participants, provided data for the meta-analysis. Participants were randomized to receive either a powered toothbrush or a manual toothbrush. Only five trials were at low risk of bias, five were at high risk of bias and 46 were at unclear risk of bias.

There is moderate quality evidence that powered toothbrushes provide a statistically significant benefit compared with manual toothbrushes in the reduction of plaque. There was an 11% reduction in plaque at one to three months of use, and a 21% reduction when assessed after three months of use.

With regard to the reduction of gingivitis, there is moderate quality evidence that powered toothbrushes again provide a statistically significant benefit compared with manual toothbrushes. There was a 6% reduction in gingivitis at one to three months of use and an 11% reduction when assessed after three months of use.

There did not appear to be a difference in the number of adverse effects between using an electric vs. a manual toothbrush. This may be because very few adverse effects were reported in the included studies.

The number of trials for each type of powered toothbrush varied: side to side (10 trials), counter oscillation (five trials), rotation oscillation (27 trials), circular (two trials), ultrasonic (seven trials), ionic (four trials) and unknown (five trials). The greatest body of evidence was for rotation oscillation brushes which demonstrated a statistically significant reduction in plaque and gingivitis at both time points.

Implications

What has this systematic review taught us?

It is worth highlighting that the differences found between manual and powered toothbrushes were statistically significant. We can say that there is moderate-quality evidence that powered toothbrushes are statistically significantly more effective than manual toothbrushes at reducing plaque and gingivitis in the short term. However, the clinical importance of these findings remains unclear.

Statistical significance tells us how likely it is that an observed effect is a chance finding, judged against the researcher’s predetermined significance level. Many factors affect statistical power. For example, very small differences between the groups being compared can be found to be statistically significant if you have a very large sample, as in the sketch below. Research findings may not be important enough to fundamentally change prescribing practice or treatment choice, even if they are statistically significant.
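As a rough illustration of this point, the following sketch simulates two very large groups whose true means differ by a practically trivial amount; the group sizes and the tiny difference are arbitrary, but the p-value will typically fall well below 0.05 anyway.

```python
# Large samples can make a trivially small difference statistically significant.
# The group sizes and the tiny true difference below are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000                                            # very large groups
group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.02, scale=1.0, size=n)      # tiny true difference

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"mean difference = {group_b.mean() - group_a.mean():.3f}, p = {p_value:.2e}")
```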

Clinical significance tells us how effective or meaningful the research finding might be to patients. It is important to remember that the determination of clinical significance can be a more subjective decision. It will depend on, among other things, which disease process or condition is being studied and how many people are affected by the condition.

One key reason why we can’t be sure of the clinical importance of the findings of this review is because the long-term benefits for dental health are unclear. Few of the included studies reported data over more than three months. So, these findings appear promising for electric vs. manual toothbrushes. Nonetheless, further longer-term trials are needed to assess whether these benefits lead to a reduction in important, longer-term outcomes such as caries and gum disease.

References

Yaacob  M, Worthington  HV, Deacon  SA, Deery  C, Walmsley  AD, Robinson  PG, Glenny  AM. Powered versus manual toothbrushing for oral health. Cochrane Database of Systematic Reviews 2014, Issue 6. Art. No.: CD002281. DOI: 10.1002/14651858.CD002281.pub3.

Farina R, Tomasi C, Trombelli L. The bleeding site: a multi‐level analysis of associated factors. Journal of Clinical Periodontology 2013;40(8):735‐42. DOI: 10.1111/jcpe.12118.

Marcenes W, Kassebaum NJ, Bernabé E, Flaxman A, Naghavi M, Lopez A, et al. Global burden of oral conditions in 1990‐2010: a systematic analysis. Journal of Dental Research 2013;92(7):592-7. doi:10.1177/0022034513490168

 
