Friday, November 24, 2017

Wading Through Conflicting Literature on G6PD Deficiency

What is G6PD deficiency?

Glucose-6-phosphate dehydrogenase (G6PD) is an enzyme that helps protect red blood cells from oxidative damage [1]. When G6PD activity is reduced, the red cells break down in the presence of oxidative stress. G6PD deficiency is inherited in an X-linked recessive pattern and is therefore more common in boys, who have only one copy of the X chromosome. Around 400 million people worldwide are affected by the enzyme deficiency [2]. In our part of the world, G6PD deficiency is a relatively common condition, affecting 3 to 7% of the population [3]. We come across this condition quite often as medical students in Malaysia. What I did not know was that all babies in Malaysia are screened for it at birth, and their parents are counselled if the babies are found to be G6PD deficient – that is, until I came across a little baby boy who screened positive for the deficiency.

What is the issue?

The mother of the baby boy was a sweet, young lady who was kind enough to share her experience with me. When she was told her newborn baby had G6PD deficiency, she was at a loss and did not know what to expect. She was then counselled and given a pamphlet explaining what G6PD deficiency is and what she could do for her baby. This included a list of chemicals and medications that her baby should avoid in the future.

The encounter made me think about what advice could be given to the parents of G6PD deficient children. Our knowledge of G6PD deficiency is limited, even though it is a fairly common inherited disease in Malaysia. While reading around the topic, I found that the lists of medications to avoid in G6PD deficiency vary from one source to another. Some lists are so extensive that they include over 100 medications; an example is available on the g6pddeficiency.org website [4]. Other lists include only a few of the more ‘notorious’ ones, such as antimalarial drugs and non-steroidal anti-inflammatory drugs (NSAIDs). One such list from the Malaysian Ministry of Health is available at www.myhealth.gov.my [2]. Another similar list is available in the Paediatric Protocols for Malaysian Hospitals [5].

It is plain to see that the above lists differ from one another. For example, paracetamol and NSAIDs such as aspirin were listed as harmful on g6pddeficiency.org and myhealth.gov.my, but both are considered safe at therapeutic dosages in the Paediatric Protocols for Malaysian Hospitals. It must be confusing for G6PD deficient individuals and their healthcare providers to decide which lists to use. Assuming those who wish to ‘play safe’ opt for the more extensive lists, they would miss the benefits of medications that, with more extensive research, may actually prove harmless. On the other hand, those who go for the simpler lists may unintentionally use medications that are not included but could cause harm.

How the literature search began

I decided to look up reviews and research that could shed light on the issue. That was when I came across a review article by Youngster et al, published in 2010, which included studies and case reports dating back to 1950 and reviewed the haemolytic potential of various medications in G6PD deficient individuals. Surprisingly, only seven medications had solid evidence of drug-induced haemolysis: dapsone, methylthioninium chloride (methylene blue), nitrofurantoin, phenazopyridine, primaquine, rasburicase and tolonium chloride (toluidine blue) [6]. Some medications, such as paracetamol, aspirin, ciprofloxacin, co-trimoxazole, nalidixic acid and a few other sulfa drugs, which various sources had considered unsafe, did not have sufficient evidence of harm [6].

Forming the PICO

Given the widespread use of analgesics like paracetamol, I narrowed down my search to this class of drugs. There had been conflicting evidence on the safety of analgesic use in G6PD deficiency. For instance, some earlier studies reported haemolysis following paracetamol use in G6PD deficient patients [7,8,9]. However, these cases were often attributed to overdose or confounded by an underlying infection or concurrent medication use. Thinking of the baby I met, and his possible need for analgesics in the future, I formed my PICO question:

Appraising and applying

The search on PubMed generated two results and one of the articles was selected for use. It was a study titled “Potential Risks of Hemolysis after Short-Term Administration of Analgesics in Children with Glucose-6-Phosphate Dehydrogenase Deficiency” by Najafi et al [10].


Are analgesics safe in G6PD deficient children?

It was a prospective cohort study evaluating the risk of haemolysis after short-term use of analgesics in ten male children with G6PD deficiency. Laboratory and clinical findings of haemolysis were evaluated throughout the 7-day study. The results suggested that it may be safe to administer analgesics at therapeutic doses to G6PD deficient patients, which is consistent with the review by Youngster et al [10].

However, because of the small sample size and the lack of a comparison group, larger clinical trials are needed before the results of this study can have a larger impact on clinical practice. In the meantime, I agree that short-term analgesics at therapeutic dosages could be administered to G6PD deficient children, with close monitoring for the development of haemolysis. If I were to advise the concerned mother I met recently, I would highlight that we have some evidence, albeit still inconclusive, that analgesics given at therapeutic dosages can be safely administered to her child should he ever need them.

What is the moral of the story?

Is evidence-based medicine a fairy tale? Yes, and what a shocking fairy tale it is, without a happily ever after. To find that 400 million G6PD deficient individuals worldwide depend largely on isolated case reports, case series and a few reviews was a shocking truth indeed. There is an urgent need for further clinical trials with larger sample sizes, so that these 400 million individuals do not avoid medications that could in fact benefit them.

On the whole, this literature search highlights the importance of better evidence for safer medicine. As the first rule of medicine goes, “Primum non nocere” (First, do no harm), it is imperative that all healthcare professionals regularly search for evidence before formulating a management plan. Practising medicine without good evidence is like sailing an uncharted sea: we never know where we are going or when we might cause harm. So, the next time we meet a patient, before noting down the management plan, let’s pause, search and apply!

References

The post Wading Through Conflicting Literature on G6PD Deficiency appeared first on Students 4 Best Evidence.

Friday, November 17, 2017

Coronary heart disease: what is the evidence?

Nobody wants to experience the pain of a heart attack. It can be a crippling event in a patient’s life. As a cardiology intern, I encountered a 62-year-old gentleman who was a chronic smoker with a deranged lipid profile. He was admitted with acute typical angina which occurred at rest and continued for several hours. The ECG changes, echocardiogram and biochemistry results suggested myocardial infarction, commonly known as a heart attack.

On his ninth day of admission, he was clinically stable and his laboratory results were normal. He was going home that day, and I was assigned to write his discharge note. As I wrote the note, I explained the nature of his medical problem to him.

What is Coronary heart disease (CHD)?

CHD is caused by plaque buildup in the walls of the arteries that supply blood to the heart. Plaque is made up of deposits of cholesterol and other substances. Plaque buildup causes the arteries to narrow, which can block blood flow. CHD is the single most common cause of death globally. While the rate of death from CHD is decreasing, the number of people living with CHD is increasing day by day, and these people need support to manage their symptoms and reduce the chances of future complications, such as another heart attack.

I recalled what I was taught in medical school. My professors taught us that treating people in hospital with certain medications can alleviate their acute symptoms, however, the long term sequelae of the disease cannot simply be treated with the same medications. In my cardiology training, I have learned about the role that education, exercise training and psychological support can play in improving the quality of life in people with CHD.

What does the evidence show?

Patient education is the process by which health professionals impart information to patients and their caregivers in order to alter their health behaviours or improve their health status. A recent Cochrane review tells us that, currently, there is little evidence that patient education, as part of a cardiac rehabilitation programme, reduces heart-related events or improves health-related quality of life [1]. Indeed, there is insufficient information at present to fully understand the benefits or harms of patient education for people with heart disease. Nonetheless, the findings tentatively suggest that people with heart disease should receive comprehensive rehabilitation that includes education [1].

A second component of rehabilitation is exercise. I explained to my patient the importance of exercise for his short-term and long-term outcomes. Another recent Cochrane review has shown that exercise-based cardiac rehabilitation reduces the risk of death from cardiovascular causes, decreases the duration of hospital stay and improves health-related quality of life, compared with not undertaking exercise [2].

Heart attacks and cardiac surgery can be frightening and traumatic, potentially causing increased psychological distress. A third recent Cochrane review assessed the effectiveness of psychological interventions for CHD. Although the evidence shows that psychological interventions, as part of cardiac rehabilitation, do not reduce the total mortality of people with CHD, there is some evidence that they can alleviate patient-reported symptoms of depression, anxiety and stress [3]. To prevent and control these symptoms in future, I recommended that my patient undergo regular psychological therapy. Even though it may have no role in reducing his risk of mortality from CHD, it could improve these other important psychological symptoms.

Summary

In summary, cardiac rehabilitation may consist of three things: patient education, exercise and psychological support. Exercise has been shown to reduce future cardiac-related events, including death, in people with existing CHD. Regarding patient education, there are insufficient data to understand its benefits and harms for patients with CHD. Psychological interventions may alleviate patient-reported symptoms of anxiety or depression, thereby improving quality of life in people with CHD, and thus remain an important part of cardiac rehabilitation.

References

  1. Anderson L, Brown JPR, Clark AM, Dalal H, Rossau HK, Bridges C, et al. Patient education in the management of coronary heart disease. Cochrane Database of Systematic Reviews 2017, Issue 6. Art. No.: CD008895. DOI: 10.1002/14651858.CD008895.pub3
  2. Anderson L, Thompson DR, Oldridge N, Zwisler AD, Rees K, Martin N, et al. Exercise-based cardiac rehabilitation for coronary heart disease. Cochrane Database of Systematic Reviews 2016, Issue 1. Art. No.: CD001800. DOI: 10.1002/14651858.CD001800.pub3
  3. Richards SH, Anderson L, Jenkinson CE, Whalley B, Rees K, Davies P, et al. Psychological interventions for coronary heart disease. Cochrane Database of Systematic Reviews 2017, Issue 4. Art. No.: CD002902. DOI: 10.1002/14651858.CD002902.pub4


Wednesday, November 15, 2017

The Delphi Technique

What is it?

The Delphi technique (also referred to as the Delphi procedure or process) is a method of gathering expert opinion through a series of iterative questionnaires, with the goal of reaching a group consensus. There is, however, no universally agreed working definition: a review of 150 studies that used the Delphi technique found none. Many variants are in use, some of which have departed widely from the original technique.

Since its development in the 1950s by the RAND Corporation, several refinements and modifications have been made, including specific strategies for different fields such as business, government, and healthcare.

There are four characteristic features of the Delphi technique that distinguish it from other group decision making processes. These are: anonymity, iteration with controlled feedback, statistical group response, and expert input.

When is it used?

The Delphi technique can be an especially useful research methodology when there is no true or knowable answer, as in decision-making, policy development, or long-range forecasting. A wide range of opinions can be included, which is useful in cases where relying on a single expert would lead to bias.

The Delphi technique was recommended as the method of choice when:

  • subjective statements made on a collective basis are desired;
  • face-to-face interaction is difficult due to large sample sizes;
  • anonymity is preferred, such as when issues are intractable or political; and
  • there is a chance of domination of a group discussion by one person.

How is it done?

  1. Survey Development
    • Define the research problem/questions and develop the first-round survey
    • Pilot the survey with a small group to ensure the responses will elicit appropriate answers to the research question
    • Round one is like a ‘brainstorming’ round, and allows participants to provide their own responses to the question. The responses are categorized by the researchers to provide the response options in future rounds. In the first round, participants may be asked to limit themselves to one response, or answer as many times as they would like depending on the research question and number of participants. Alternatively, pre-existing options could be provided for ranking or response, however, this approach could bias the responses or limit the available options
  2. Participant Recruitment
    • Research has indicated that participant subject matter knowledge (i.e. being an “expert”) may not have a substantial impact on study results, so it might be best to choose participants who have some understanding of the topic and an interest in the outcome of the study to limit attrition and encourage thoughtful responses to the surveys
    • Often, participants are selected via non-probability sampling techniques (either purposive sampling or criterion sampling), to save resources and ensure appropriate participants are selected
  3. Data Analysis
    • The first round of the Delphi technique involves participants providing answers to the research question, which will then be ranked in future rounds. In a classic Delphi, no items should be added or removed, and the wording used by participants should be kept for round two. However, this may be difficult or not feasible depending on the number and types of responses provided, and content analysis can be used to group similar themes prior to the second round. An informal literature review can also be used to identify further items depending on the research question
    • In subsequent rounds, participants are asked to rank/respond to the analysed options from round one. Between rounds the group’s responses are analysed, summarised, and communicated back to the participants, a process called controlled feedback. This is repeated until consensus is reached, or for a planned number of rounds depending on the research question
    • Subsequent rounds are analysed to identify convergence of participant responses, and to provide controlled feedback. Central tendencies (mean, median, and mode) and levels of dispersion (standard deviation and the inter-quartile range) are often used. These results are fed back to participants in the next round, although no consistent method for reporting exists
  4. Ending the Delphi Process
    • The process typically ends once an acceptable level of consensus has been reached; however, there is no universally agreed cut-off. The level of agreement reached depends on sample numbers (i.e. high attrition or participant burnout), the aim of the research (i.e. whether complete consensus is required), and available resources
    • Two to four rounds are typically conducted to ensure study goals are met while avoiding sample fatigue and unnecessary use of resources
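The controlled-feedback step described above can be sketched in a few lines of code. The items, ratings and the 75% agreement cut-off below are all invented for illustration (no universal consensus threshold exists, as noted above); the sketch simply computes the median, inter-quartile range and percentage agreement that would be fed back to participants between rounds.

```python
from statistics import quantiles

# Hypothetical round-two ratings (1 = not important ... 5 = very important)
# for three candidate items from eight panellists.
ratings = {
    "item A": [5, 4, 5, 5, 4, 5, 4, 5],
    "item B": [2, 5, 1, 4, 3, 2, 5, 1],
    "item C": [4, 4, 5, 4, 4, 3, 4, 4],
}

CONSENSUS_CUTOFF = 0.75  # assumed: proportion rating an item 4 or 5

def round_feedback(scores):
    """Summarise one item's ratings for controlled feedback."""
    q1, med, q3 = quantiles(scores, n=4)            # quartiles
    agreement = sum(s >= 4 for s in scores) / len(scores)
    return {
        "median": med,
        "iqr": q3 - q1,
        "agreement": agreement,
        "consensus": agreement >= CONSENSUS_CUTOFF,
    }

for item, scores in ratings.items():
    fb = round_feedback(scores)
    print(f"{item}: median={fb['median']}, IQR={fb['iqr']:.2f}, "
          f"agreement={fb['agreement']:.0%}, consensus={fb['consensus']}")
```

In a real study the summaries (not the raw data) would be returned to participants anonymously for re-rating in the next round.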

Pros and Cons of the Delphi Technique:

Pros:

  • Allows use of a “committee” with fewer drawbacks (scheduling, travel/space requirements, lengthy discussions)
  • Anonymity reduces impact of dominant individuals and helps reduce peer pressure to conform, and allows opinions to be considered in a non-adversarial manner
  • Responses are weighted equally so no one person can shift the opinions of the group
  • Providing controlled feedback on the group opinion reduces noise and allows participants to reconsider based on others’ rankings

Cons:

  • There is a lack of clear methodological guidelines
  • Continued commitment is required from participants who are being asked a similar question multiple times
  • There is no evidence of reliability (i.e. if two panels received the same question they may not come to the same consensus)
  • Does not allow participant discussion and there is no opportunity for participants to elaborate on their views
  • The existence of a consensus does not necessarily mean that the correct answer, opinion, or judgement has been found; it merely helps to identify the areas that one group of participants or experts considers important in relation to that topic

Similar/Alternative Methodologies:

Brainstorming and nominal group technique are similar techniques that allow incorporation of many individual perspectives.

References

Goodman CM. The Delphi technique: a critique. Journal of Advanced Nursing. 1987 Nov;12(6):729–734. doi:10.1111/j.1365-2648.1987.tb01376.x. Available from: http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2648.1987.tb01376.x/full

Sackman H. Delphi critique: expert opinion, forecasting, and group process. Lexington, Mass: Lexington Books; 1974.

Hasson F, Keeney S, and McKenna H. Research guidelines for the Delphi survey technique. Journal of Advanced Nursing. 2000 Oct;32(4):1008–1015. doi:10.1046/j.1365-2648.2000.t01-1-01567.x. Available from: http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2648.2000.t01-1-01567.x/full

Hsu C & Sandford BA. The Delphi Technique: Making Sense Of Consensus. Practical Assessment, Research & Evaluation. 2007 Aug;12(10). Available from: http://pareonline.net/getvn.asp?v=12&n=10

McKenna HP. The Delphi technique: a worthwhile research approach for nursing? Journal of Advanced Nursing. 1994 June;19(6):1221–1225. doi:10.1111/j.1365-2648.1994.tb01207.x. Available from: http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2648.1994.tb01207.x/full

Linstone HA, Turoff M. The Delphi Method: Techniques and Applications. Reading, Mass: Addison-Wesley, Advanced Book Program; 1975.


Wednesday, November 8, 2017

Should you intercalate?

Introduction

An intercalated degree is an opportunity to learn more about a particular topic, to develop transferable skills and/or to participate in a more in-depth research project than is otherwise available as part of a medical degree. The number of students who choose to intercalate varies from university to university; however, with the increasing academic focus of most training programmes, a greater variety of intercalated degrees is becoming available.

How it works

Most UK universities will offer the opportunity to do an intercalated degree towards the end of your medical degree. This is usually after the 3rd or 4th year, but there are exceptions. Some places, such as Oxford, Cambridge and Edinburgh, include an intercalated year in their standard medical programmes, making them 6-year degrees. An intercalation typically lasts one year.

What’s on offer?

In the UK, it is possible to intercalate in a BSc, MSc, MRes or MPhil programme. Other degrees are available; however, these are the most common.

First, it is important to decide whether or not you would like to intercalate. This is an important decision and requires careful thought. Next, consider whether you would like to intercalate at your own university or to go elsewhere. Things to think about include exploring different areas of the UK, the varying opportunities in different specialities and the selection of degree types on offer. It is useful to explore university websites and contact programme co-ordinators early to find out as much as possible about how the course is assessed and what projects are available. Depending on the course, it may be possible to contact potential supervisors ahead of time, to express your interest and plan some research in advance – all of which can maximise the likelihood of success during the intercalation year.

You can search a wide variety of available intercalated degrees through the following link >> http://intercalate.hyms.ac.uk/

Advantages

  • Gain an additional degree for only one year of study
  • Gain new transferable skills
  • Show commitment to a speciality you are interested in pursuing
  • Opportunity to gain presentations and publications
  • Opportunity to network with new people
  • Opportunity to live and explore a new area of the UK
  • Additional points for Foundation Programme applications

Disadvantages

  • Additional costs (living expenses, tuition fees etc)
  • Additional time, meaning you finish later and may not graduate with all your peers
  • Potential to forget a lot of medical knowledge that is crucial for finals as well as your job!

Other key points

It is true that an intercalated degree can give you more points towards your Foundation Programme application. Nevertheless, intercalating for this reason alone is not advised. The points that can be attained for additional degrees in the Foundation Programme application can be found below:

Table 1. Additional qualification points table for the Foundation Programme. Adapted from the UKFPO Handbook 2018.

Although an intercalation year can incur additional debt, the NHS will cover the cost of tuition fees for the 5th/6th year of medical school study. Moreover, the NHS provides a £1000 non-means-tested grant, with further means-tested funds available. Scholarships and funding from research bodies can also support an intercalation year; however, these require applications of their own. (This information was based on the 2017 application year and may change in the future.)

Contacting people who have intercalated in the past is useful to help you decide and get further advice, especially if you can find people who are intercalating at the university/degree you wish to pursue.

Summary

In short, an intercalated degree is an excellent opportunity to develop skills and can be beneficial to future career applications. However, the decision as to whether it is right for you should be a personal one.

It is recommended that students research the prospect of an intercalation and the opportunities available, consider whether they could handle an additional year of study, and weigh whether it will benefit their intended career path.



Friday, November 3, 2017

Comparing the validity and robustness of different statistical methods for meta-analysis of rare event data

Meta-analyses of rare events or sparse data require different methods from those used for regular data.

The assumptions behind standard meta-analysis methods do not hold in these scenarios, as there can be no events in one or both of the comparison arms. This is often the case with serious but uncommon adverse events, making it essential to choose the analysis method carefully: studies have shown that using a different meta-analysis method can change the final effect estimate considerably (1).

The Cochrane Handbook has a separate chapter on special statistics (chapter 16), which contains a section devoted to reviews dealing with rare events (section 16.9). Though it discusses the validity of the different methods available in the literature, the accompanying RevMan software only permits the Mantel-Haenszel odds ratio (OR) method with a 0.5 zero-cell correction or Peto’s OR, as acknowledged in the manual (2).

Mantel-Haenszel vs Peto odds ratio

The Mantel-Haenszel OR, using the 0.5 zero-cell correction, has been repeatedly shown to give biased results (2,3), while Peto’s OR has generally been considered robust provided there is no large imbalance in total group size between the comparison arms and the expected effects are not large (1-3). This has made the Peto OR the method of choice for most.

However, if the latter assumptions are not met, which is often the case, the results are more likely to be biased. Moreover, the Peto OR model cannot handle studies in which both arms have no events, and effectively removes them from the analysis. This further limits its applicability; ethically, patients who have been recruited to these double-zero studies have a right to have their data included in meta-analyses (4-6).
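To make the exclusion of double-zero studies concrete, here is a minimal sketch of the one-step Peto pooled OR (observed minus expected events over the hypergeometric variance, summed across studies). The trial data are invented for illustration. Because a double-zero study has O − E = 0 and zero variance, it contributes nothing to either sum and drops out of the analysis, which is exactly the behaviour criticised above.

```python
from math import exp, sqrt

def peto_pooled_or(studies):
    """One-step Peto pooled odds ratio with a 95% CI.

    studies: list of (events_treatment, n_treatment, events_control, n_control).
    Assumes at least one study has events in at least one arm.
    """
    sum_o_minus_e = 0.0
    sum_v = 0.0
    for a, n1, c, n2 in studies:
        n = n1 + n2
        m1 = a + c                       # total events across both arms
        if m1 == 0 or m1 == n:           # uninformative (e.g. double-zero)
            continue                     # -> silently excluded from the sums
        e = n1 * m1 / n                  # expected events in the treatment arm
        # hypergeometric variance of the treatment-arm event count
        v = n1 * n2 * m1 * (n - m1) / (n ** 2 * (n - 1))
        sum_o_minus_e += a - e
        sum_v += v
    ln_or = sum_o_minus_e / sum_v
    se = 1 / sqrt(sum_v)
    return exp(ln_or), exp(ln_or - 1.96 * se), exp(ln_or + 1.96 * se)

# Invented data: two informative trials plus one double-zero trial
trials = [(2, 100, 0, 100), (3, 150, 1, 150), (0, 80, 0, 80)]
or_, lo, hi = peto_pooled_or(trials)
print(f"Peto OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Running the sketch with and without the double-zero trial gives identical results, illustrating how such studies are effectively removed.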

We therefore undertook a study to identify the validity and robustness of effect estimates across different meta-analysis methods for rare binary events data, using the example of serious rare adverse events from antidepressants trials. We compared the four rare adverse outcomes of all-cause mortality, suicidality, aggressive behaviour and akathisia across the Peto method, the generalized linear mixed model (GLMM), conditional logistic regression, a Bayesian approach using Markov Chain Monte Carlo (MCMC) method and finally the beta-binomial method.

Our results, recently published online (7) in the Journal of Clinical Epidemiology, showed that although the estimates for the four outcomes did not change substantially across the different analysis methods, the Peto method underestimated the treatment harm and overestimated its precision, especially when the estimated OR deviated greatly from 1. For example, the OR for suicidality in children and adolescents was 2.39 (95% CI 1.32 to 4.33) using the Peto method, but increased to 2.64 (1.33 to 5.26) using conditional logistic regression, 2.69 (1.19 to 6.09) using beta-binomial, 2.73 (1.37 to 5.42) using the GLMM and 2.87 (1.42 to 5.98) using the MCMC approach. Moreover, when we consider absolute numbers, and that these are very serious harms, even minor changes in the effect estimates make a difference.

Conclusion

The method used for meta-analysis of rare events data therefore influences the estimates obtained, and the exclusion of double-zero studies can give misleading results. To reduce bias and erroneous inferences, sensitivity analyses should be performed using different methods. Other methods, in particular the beta-binomial method, which simulation studies have shown to be superior (4,7), should be considered as appropriate alternatives.

References

  1. Sweeting M, Sutton A, Lambert P. What to add to nothing? Use and avoidance of continuity corrections in meta-analysis of sparse data. Statistics in Medicine 2004; 23(9): 1351-75.
  2. Higgins JG, S (editors). Cochrane Handbook for Systematic Reviews of Interventions [updated March 2011]. Available from www.cochrane-handbook.org.: The Cochrane Collaboration; 2011, (accessed May 2016).
  3. Bradburn M, Deeks J, Berlin J, Russell Localio A. Much ado about nothing: a comparison of the performance of meta-analytical methods with rare events. Statistics in Medicine 2007; 26(1): 53-77.
  4. Kuss O. Statistical methods for meta-analyses including information from studies without any events—add nothing to nothing and succeed nevertheless. Statistics in Medicine 2015; 34(7): 1097-116.
  5. Friedrich JO, Adhikari NK, Beyene J. Inclusion of zero total event trials in meta-analyses maintains analytic consistency and incorporates all available data. BMC Medical Research Methodology 2007; 7(1): 1-6.
  6. Keus F, Wetterslev J, Gluud C, Gooszen HG, van Laarhoven CJHM. Robustness Assessments Are Needed to Reduce Bias in Meta-Analyses That Include Zero-Event Randomized Trials. Am J Gastroenterol 2009; 104(3): 546-51.
  7. Sharma T, Gøtzsche PC, Kuss O. The Yusuf-Peto method was not a robust method for meta-analyses of rare events data from antidepressant trials. Journal of Clinical Epidemiology. 2017; Aug 9. pii: S0895-4356(17)30785-0.  doi: 10.1016/j.clinepi.2017.07.006. 


Wednesday, November 1, 2017

Evidence-based health practice: a fairytale or reality?

This blog, written by Leonard Goh, was the winner of Cochrane Malaysia and Penang Medical College’s recent evidence-based medicine blog writing competition. Leonard has written an insightful and informative piece to answer the question: ‘Evidence-based health practice: a fairytale or reality’.

Details of the other winners are here – many congratulations to you all.

The philosophy of EBM has its origins in the 19th century, though it was only relatively recently, in 1991, that Gordon Guyatt first coined the term in his editorial, espousing the importance of a clinician’s ability to critically review the literature and synthesize new findings into practice [1]. His comments sparked the EBM movement in a medical fraternity that was increasingly dissatisfied with basing clinical practice on anecdotal testimonials, leading to the worldwide incorporation of EBM classes into both undergraduate and postgraduate programmes, as well as workshops for already-practising clinicians.

It has since percolated into the public arena, to the point where it is not uncommon to hear patients ask, “So what is the evidence for this treatment you are proposing?”

EBM was formally defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” in a 1996 article [2], and subsequently revised in 2000 to mean “a systematic approach to clinical problem solving which allows the integration of the best available research evidence with clinical expertise and patient values” [3]. The revision attempted to reflect equal emphases on the clinician’s individual clinical competency and cultural competency, addressing the common criticism of its prior definition that it was “cookbook medicine” that negated individual clinical expertise and the choice of patients.

There is a constant struggle to balance clinical decisions across these three ideals, especially when the current best evidence contradicts individual experience or patient choice. But the main issue with EBM is, in my opinion, the validity and applicability of research results in the real world. Research conclusions are drawn entirely from statistical analysis, yet the vast majority of clinicians lack the in-depth understanding of statistics needed to draw correct conclusions.

Take the p-value, for instance. It is commonly understood that results are statistically significant if the corresponding p-value is below 0.05, and not statistically significant if the p-value is above 0.05. However, this dichotomy is simply not true.

The p-value threshold of 0.05 is an arbitrarily defined convention, first proposed by RA Fisher in 1925 and used ever since for convenience’s sake. Furthermore, p-value significance should in fact be seen as a continuum; a result with a p-value of 0.049 is not markedly more significant than one with a p-value of 0.051, yet we proclaim the former a significant finding and disregard the latter. This invariably leads to faulty reporting of study conclusions [4].
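A quick calculation illustrates how little separates the two sides of the threshold. The sketch below (plain Python, standard library only; the function name is my own) computes two-sided p-values for two nearly identical z statistics that happen to straddle 0.05:

```python
from math import erfc, sqrt

def two_sided_p(z):
    """Two-sided p-value for a z statistic under a standard normal null."""
    return erfc(abs(z) / sqrt(2))

# Two almost indistinguishable test statistics straddling the 0.05 line.
for z in (1.955, 1.965):
    print(f"z = {z:.3f}  ->  p = {two_sided_p(z):.4f}")
```

The two p-values differ only in the third decimal place, yet the conventional dichotomy would label one study a positive finding and the other a null result.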

Digging deeper into the principles of the p-value, it becomes apparent that there is a subtle difference between our commonly conceived notion of it and its exact meaning, one that can elude even the most astute of clinicians: the p-value does not comment on the truth of the null hypothesis, but rather on the compatibility of the research data with that hypothesis. To remedy this, the American Statistical Association published a statement on statistical significance and p-values [5], but we can be sure it will take a significant (pun unintended) amount of time before these deeply entrenched misunderstandings are rectified.
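This distinction can be made concrete with a small simulation, again a sketch in plain Python with my own names and parameters. When the null hypothesis is actually true, roughly 5% of studies will still return p &lt; 0.05, because under the null the p-value is uniformly distributed; a small p-value measures how surprising the data would be if the null were true, not the probability that the null is true.

```python
import random
from math import erfc, sqrt

random.seed(0)

def two_sided_p(z):
    """Two-sided p-value for a z statistic under a standard normal null."""
    return erfc(abs(z) / sqrt(2))

# Simulate many studies in which the null hypothesis is TRUE
# (the population mean really is 0). Each study draws 50 observations
# and tests whether the sample mean differs from 0.
n_studies, n_obs = 10_000, 50
false_positives = 0
for _ in range(n_studies):
    sample = [random.gauss(0, 1) for _ in range(n_obs)]
    z = sqrt(n_obs) * (sum(sample) / n_obs)  # z statistic for the sample mean
    if two_sided_p(z) < 0.05:
        false_positives += 1

rate = false_positives / n_studies
print(rate)  # roughly 0.05: ~5% "significant" findings despite a true null
```

Even with a completely true null hypothesis, one study in twenty crosses the significance threshold by chance alone, which is exactly why a single p &lt; 0.05 result cannot be read as proof that an effect exists.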

This insufficient understanding of statistics means that research studies are vulnerable to statistical manipulation, whether unintentional or otherwise. Indeed, Stephen Ziliak and Deirdre McCloskey, authors of the book The Cult of Statistical Significance, estimate that between 80% and 90% of journal articles have serious flaws in their use of statistics; even papers published in the New England Journal of Medicine were not spared.

On top of that, huge numbers of papers are published every day, making it a Herculean task to determine what constitutes “current best evidence”; it takes time to do a literature search, identify papers, and evaluate them, time which our clinicians simply do not have. Even when correct methodologies are in place, there is concern that claimed research findings may be merely accurate measurements of the prevailing bias [6].

Adding insult to injury is the presence of predatory open access journals in the publishing industry. Neuroskeptic, an anonymous neuroscientist-blogger, illustrated this by submitting a Star Wars-themed spoof manuscript, absolutely devoid of scientific rigour, to nine journals. The American Journal of Medical and Biological Research accepted it, while the International Journal of Molecular Biology: Open Access, the Austin Journal of Pharmacology and Therapeutics, and the American Research Journal of Biosciences published it [7].

While Neuroskeptic’s intent was not to make a statement about the brokenness of scientific publishing, but rather to remind us that some journals falsely claim to be peer-reviewed, the episode also highlights the very real possibility of better-concealed, intellectually dishonest papers masquerading as legitimate science, impeding efforts to uphold evidence-based health practice.

Is evidence-based medicine then an unrealisable fairytale?

To conclude as such would perhaps be exceedingly harsh. Yes, our practice of evidence-based medicine is flawed, as the preceding paragraphs point out. That, however, is not to suggest that we should cease striving to improve upon it. Evidence-based medicine represents our best hope of ensuring that we provide our patients with the best available treatment options, and we should persevere in our endeavours to turn this fairytale into reality.

But should this be our utmost priority?

In today’s world, where our medical profession is increasingly governed by statistics and algorithms, it is easy to mistake evidence-based medicine for a panacea. We would however do well to remember that as much as it is a science, medicine is also an art. It is crucial that we do not lose sight of our raison d’être: to cure sometimes, relieve often, and comfort always.


References

The post Evidence-based health practice: a fairytale or reality? appeared first on Students 4 Best Evidence.