Tuesday, December 19, 2017

How I Paid Off $54,000 in Debt

Three years after I started my journey, I successfully paid off approximately $54,000 of debt. Here are the strategies that helped me achieve debt freedom.

The post How I Paid Off $54,000 in Debt appeared first on Earnest Blog | Money Advice for Young Professionals.

Friday, December 15, 2017

Prescription opioids and Canada’s opioid crisis: A call for broadened research

Canada is in the midst of an opioid crisis and prescriptions have something to do with it. The question is, what?

In the October 2017 Canadian Agency for Drugs and Technologies in Health (CADTH) webinar lecture series, Dr. Hakique Virani presented “Canada’s Opioid Crisis: The Changing Reality Between Exam Rooms and Ivory Towers”. Here, Virani discussed the history, complexities, and current state of the Canadian opioid crisis, outlining a striking metaphor for the way in which researchers have struggled to explain its causes and outcomes.

Midway through the lecture, Virani plays a video of two teams clad in black and white jerseys, each passing a basketball between them. “You are responsible for keeping an eye on the ball carried by the white team and counting how many passes that white team makes”. At the end of the video he asks the audience for the number – “Did everyone get 13?” Following a muffled yes from the crowd, he continues, “Okay. Did you see the moon-walking bear?” At first there is a quiet laugh at the absurdity of the question. But lo and behold, when the video is played again, a man dressed as a bear walks into the centre of the frame and begins to moonwalk. It had been there the whole time; we just missed it. And why? “It’s easy to miss something you’re not looking for” (1).

With this metaphor, Virani describes what he perceives as an overemphasis on opioid prescribing in research addressing the epidemic. The passing of the ball symbolizes prescription opioid data, the audience symbolizes Canada’s researchers investigating the crisis, and the dancing bear symbolizes the truth underlying the rise in addiction and overdose. According to Virani, researchers have been so preoccupied with establishing links between the crisis and prescribing data that they have missed the real-time changes in opioid-related deaths.

However, following this metaphor, the question remains: what exactly is the moon-walking bear?

That is, what is the information that we’re missing to aid us in understanding the stark rise in overdose in the last 5 years? Virani seems to suggest that the answer lies in moving away from the investigation of prescription opioids. But perhaps it doesn’t (at least not entirely). Researchers may simply need to shift exactly what questions about prescription opioids they’re asking.

It is no secret that with the rise of the opioid crisis, there has been a rise in opioid prescriptions, and Virani acknowledges this. It has been found that physicians who prescribe more opioids are more likely to have prescribed the final opioid before an individual’s overdose death (2), that deaths from opioid overdose are more common in areas where opioids are more often prescribed (2-4), and that higher doses and longer durations are correlated with increased drug-related mortality (5,6). Moreover, recent data suggest that prescriptions in Canada are continuing to increase (7). There is no question that opioid prescribing is tied to the Canadian opioid crisis (8). The question that might be missed, however, is how. Although many links have been established between opioid prescribing and addiction, it is still uncertain exactly how prescriptions are having this impact.

This question is particularly confusing in light of the reported low rates of addiction amongst patients actually prescribed opioids (9). A 2012 systematic review found that a mere 0.5% of all opioid-prescribed patients developed an addiction (10). Other reviews have found incidences ranging from 0.8% to 26% (11).

So how are prescription opioids really influencing rates of overdose and opioid use disorder (OUD)?

There are a number of plausible answers to this question: diversion (8, 12), inadequate pain care (13), premature discontinuation of prescription opioids (14), and doctor shopping (12). However, research has not adequately examined which of these avenues plays the greatest role in the observed rise in addiction and overdose.

We need reviews aimed at investigating the primary ways in which prescription opioids enter and influence the lives not only of those who are prescribed opioids, but also of those who are not. These investigations are particularly important if we hope to introduce policy, programs, and healthcare training that effectively balance the need for improved pain care and safe opioid prescribing. Researchers need to refocus their attention onto this moon-walking bear.

Click here for References

 

The post Prescription opioids and Canada’s opioid crisis: A call for broadened research appeared first on Students 4 Best Evidence.

Wednesday, December 6, 2017

Case-control and Cohort studies: A brief overview

Introduction

Case-control and cohort studies are observational studies that lie near the middle of the hierarchy of evidence. These types of studies, along with randomised controlled trials, constitute analytical studies, whereas case reports and case series are descriptive studies (1). Although these studies are not ranked as highly as randomised controlled trials, they can provide strong evidence if designed appropriately.

Case-control studies

Case-control studies are retrospective. They clearly define two groups at the start: one with the outcome/disease and one without the outcome/disease. They look back to assess whether there is a statistically significant difference in the rates of exposure to a defined risk factor between the groups. See Figure 1 for a pictorial representation of a case-control study design. This can suggest associations between the risk factor and development of the disease in question, although no definitive causality can be drawn. The main outcome measure in case-control studies is the odds ratio (OR).
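
As a rough illustration (not part of the original post), the odds ratio can be worked out directly from the 2x2 table of a case-control study. The counts below are invented purely for demonstration:

```python
# Minimal sketch: odds ratio from a case-control 2x2 table (invented counts).

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """OR = (odds of exposure in cases) / (odds of exposure in controls)
         = (a * d) / (b * c), where a = exposed cases, b = exposed controls,
           c = unexposed cases, d = unexposed controls."""
    a, b = exposed_cases, exposed_controls
    c, d = unexposed_cases, unexposed_controls
    return (a * d) / (b * c)

# Example: 40 of 100 cases were exposed to the risk factor vs 20 of 100 controls.
print(odds_ratio(40, 60, 20, 80))  # approx. 2.67: odds of exposure are ~2.7x higher in cases
```

An OR above 1 suggests the exposure is associated with the outcome, but, as noted above, this alone does not establish causation.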

Figure 1. Case-control study design.

Cases should be selected based on objective inclusion and exclusion criteria from a reliable source such as a disease registry. An inherent issue with selecting cases is that a certain proportion of those with the disease would not have a formal diagnosis, may not present for medical care, may be misdiagnosed or may have died before getting a diagnosis. Regardless of how the cases are selected, they should be representative of the broader disease population that you are investigating to ensure generalisability.

Case-control studies should include two groups that are identical EXCEPT for their outcome / disease status.

As such, controls should also be selected carefully. It is possible to match controls to the cases on the basis of various factors (e.g. age, sex) to ensure these do not confound the study results. Choosing up to three or four controls per case may even increase statistical power and study precision (2).

Case-control studies can provide fast results and they are cheaper to perform than most other studies. Because the analysis is retrospective, rare diseases or diseases with long latency periods can be investigated. Furthermore, you can assess multiple exposures to get a better understanding of possible risk factors for the defined outcome/disease.

Nevertheless, as case-control studies are retrospective, they are more prone to bias. One of the main examples is recall bias. Case-control studies often require the participants to self-report their exposure to a certain factor, and recall bias is the systematic difference in how the two groups recall past events. For example, in a study investigating stillbirth, a mother who experienced this may recall the possible contributing factors far more vividly than a mother who had a healthy birth.

A summary of the pros and cons of case-control studies is provided in Table 1.

Table 1. Advantages and disadvantages of case-control studies.

Cohort studies

Cohort studies can be retrospective or prospective. Retrospective cohort studies are NOT the same as case-control studies.

In retrospective cohort studies, the exposure and outcomes have already happened. They are usually conducted on data that already exists (from prospective studies) and the exposures are defined before looking at the existing outcome data to see whether exposure to a risk factor is associated with a statistically significant difference in the outcome development rate.

Prospective cohort studies are more common. These studies define an exposure and recruit participants into two groups – those that have been subjected to it and those that have not. The study then follows these participants for a defined period to assess the proportion that develop the outcome/disease of interest. See Figure 2 for a pictorial representation of a cohort study design. Therefore, cohort studies are good for assessing prognosis, risk factors and harm. The outcome measure in cohort studies is usually a risk ratio / relative risk (RR).
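
As a companion to the odds ratio sketch above (again with invented numbers, not taken from any study), the risk ratio in a cohort study simply compares the proportion of exposed and unexposed participants who develop the outcome:

```python
# Minimal sketch: risk ratio (relative risk) from a cohort study (invented counts).

def risk_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """RR = risk of the outcome in the exposed group / risk in the unexposed group."""
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# Example: 30 of 200 exposed vs 10 of 200 unexposed participants developed the outcome.
print(risk_ratio(30, 200, 10, 200))  # 3.0: the exposed group has three times the risk
```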

Figure 2. Cohort study design.

Cohort studies should include two groups that are identical EXCEPT for their exposure status.

As a result, both exposed and unexposed groups should be recruited from the same source population. Another important consideration is attrition. If a significant number of participants are not followed up (lost to follow-up, died, or dropped out), then this may affect the validity of the study. Not only does it decrease the study’s power, but there may be attrition bias – a systematic difference between the groups in those who did not complete the study.

Cohort studies can assess a range of outcomes, allowing an exposure to be rigorously assessed for its impact on the development of disease. Additionally, they are good for rare exposures, e.g. contact with a chemical or radiation blast.

Whilst cohort studies are useful, they can be expensive and time-consuming, especially if a long follow-up period is chosen or the disease itself is rare or has a long latency.

A summary of the pros and cons of cohort studies is provided in Table 2.


Table 2. Advantages and disadvantages of cohort studies.

The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement

STROBE provides a checklist of important steps for conducting these types of studies, as well as acting as best-practice reporting guidance (3). Both case-control and cohort studies are observational, with varying advantages and disadvantages. However, the most important determinant of the quality of evidence these studies provide is their methodological quality.

 

References

  1. Song J, Chung K. Observational Studies: Cohort and Case-Control Studies. Plastic and Reconstructive Surgery. 2010 Dec;126(6):2234-2242.
  2. Ury HK. Efficiency of case-control studies with multiple controls per case: continuous or dichotomous data. Biometrics. 1975 Sep;31(3):643–649.
  3. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet. 2007 Oct;370(9596):1453-1457. PMID: 18064739.

The post Case-control and Cohort studies: A brief overview appeared first on Students 4 Best Evidence.

Tuesday, December 5, 2017

Misdiagnosis – Impact On Gifted Kids

Misdiagnosis – a medical error that leads to no treatment or to incorrect treatment. How does this affect kids, especially gifted ones? In today’s fast-paced world, parents have little patience or time to read, understand and educate themselves on facts that could change a kid’s life forever. Everyone looks for quick fixes, whether at home, at the workplace, or with their own child. If a kid has a restless personality, the usual response is to seek a quick fix by dropping into the physician’s office, taking the diagnosis at face value, and starting the kid on medication. Few people today take the time to dwell in depth on the causes of certain behaviors, let alone give much thought to the side effects of medication given to kids.

ADD/ADHD

Attention Deficit Disorder (ADD) and Attention Deficit/Hyperactivity Disorder (ADHD) are medical terms that are often used interchangeably, although the current correct medical terminology is ADHD, or Attention Deficit/Hyperactivity Disorder. ADHD, a highly genetic, brain-based syndrome, has to do with the regulation of a particular set of brain functions and related behaviors. “These brain operations are collectively referred to as “executive functioning skills” and include important functions such as attention, concentration, memory, motivation, and effort, learning from mistakes, impulsivity, hyperactivity, organization, and social skills” (Attention Deficit Disorder Association [ADDA], n.d.).

Characteristics of gifted kids are frightfully similar. High degrees of intensity, sensitivity, and overexcitability are characteristics that most gifted children have in common. “They may love movement for its own sake and show a surplus of energy exhibited by rapid speech, wild enthusiasm, intense physical activity, and a need for action. This behavior can be misdiagnosed as ADD/ADHD” (Carlstrom, 2011, para. 5).

More information on possible problems that may be associated with characteristic strengths of gifted children can be found in the article “Misdiagnosis and Dual Diagnosis of Gifted Children” (J. Webb, Amend, N. Webb, Goerss, Beljan & Olenchak, 2011, para. 10).

How can parents as adults help avoid misdiagnosis?

In order to give your child the life they deserve, you need to become their first advocate. Follow the tips below to avoid misdiagnosis:

  • Always get a comprehensive diagnosis done by a psychologist who is trained in giftedness and learning disorders.
  • Never rely completely on a school checklist. Question your child’s school about the curriculum they implement and how instruction is differentiated in your child’s class.
  • Don’t be satisfied with a 15-minute interview with a general practitioner or pediatrician. Ask for more information on comprehensive diagnosis.
  • Understand your child’s underlying problem before starting them on medication for life.

Misdiagnosis can be devastating, altering a child’s life forever. Being exceptional is a “gift” from nature, which demands adequate nurturing. If left unattended and misdiagnosed, it becomes self-destructive, and a “talent” is wasted.

Watch the SENG video: The Misdiagnosis of Gifted Children, for more information.

The post Misdiagnosis – Impact On Gifted Kids appeared first on .

Monday, December 4, 2017

Does Evidence-Based Medicine Imply Utilitarianism?

In this blog I want to explore the question of what moral values underpin or justify the practice of evidence-based medicine (EBM). For example, we might be interested in patient outcomes, patient choice, economic factors, public health, or a combination of these. It matters because this provides the standard for evaluating the success of EBM, and informs us about how we can make EBM better. In particular, I want to respond to a recent paper by Anjum and Mumford, ‘A philosophical argument against evidence-based policy’ [1], which argues that the values underpinning EBM inevitably collapse into a more general utilitarianism.

According to Anjum and Mumford, “the policy side of evidence-based medicine is basically a form of rule utilitarianism” (p1045)

Utilitarianism is the view that, when faced with a moral dilemma, we ought to act according to which of our options causes the greatest amount of overall wellbeing or happiness, and the least amount of suffering. Rule utilitarianism specifically looks at which rules, heuristics or policies are able to do this, rather than looking at each action individually [2]. In the context of medicine, this means we should aim to create healthcare policies which promote the best standard of health for the greatest number of patients.

An important aspect to this approach is that these policies do not always create the best possible benefit for the patient. In some cases, the guidelines will be ineffective. For example, a given treatment may be recommended in general cases of patients with an illness, but in the case of a particular patient we know it would be harmful. It’s just that having this policy in place for all patients is worthwhile overall. How we respond to such scenarios poses a problem for the rule utilitarian.

Here, we might still say that the treatment recommendation is a good guideline (because it maximises patient health) but in this case, it would seem unethical to prescribe the treatment in the knowledge that it will cause harm. We therefore probably want to say that even good evidence-based guidelines have exceptions. However, this risks compromising the whole point of rule utilitarianism – if we have a set of rules which determine how we should act, but we can contradict or find exception to these rules whenever we need to, what’s the point of having those rules at all? It seems we haven’t said anything that won’t dissolve back down into the more general utilitarian principle of maximising health, regardless of what rules/policies we create [3].

One response that Anjum and Mumford suggest is to look at EBM policies not so much as ‘rules’ for how to act, but rather ‘codes’ for how we can act [4]

This way, policies which are based on EBM can offer us guidance for how a practitioner should act, but nonetheless require a practitioner to use their own judgement and common sense in applying them.

I want to respond to this paper by contesting the authors’ initial premise that EBM implies a kind of rule utilitarianism. I would suggest that, if we seriously look at our medical policies, conventions and laws, the picture is in reality far more complicated than this.

To see why this is the case we need to bear in mind that utilitarianism is not the view that ‘consequences matter’. Everyone cares about what the outcomes of their actions are, and pretty much everyone agrees that it’s generally better to cause happiness rather than suffering. What makes utilitarianism unique is the view that only these consequences matter, meaning there are no values that should influence our actions other than the impact that the action will have on other peoples’ lives. For example, this suggests that there is nothing wrong with lying, coercion, torture or manipulation, except for the fact that they can have bad consequences.

Whether these non-utilitarian values should have any significance from a moral perspective is beyond the scope of this article.

What I do want to demonstrate, is that the practice and justification of many medical policies (including EBM ones) implies non-utilitarian values. Consider the following scenario…

An adult patient requires medication for a fatal illness that they are at significant risk of contracting. However, due to their religious beliefs they refuse to take this medication because it contains an ingredient derived from animals. This refusal is clearly bad for them – they have a high chance of dying if they don’t take the medication. A week after the patient saw her doctor and refused to take this medication, she has a small accident and is taken to hospital unconscious.

In a stroke of luck, the same doctor who saw her a week before is passing her ward. The doctor knows the patient’s medical history and knows there is no chance of the patient having an adverse reaction to the medication which she refused. The doctor (a utilitarian) decides to take the opportunity, while the patient is unconscious and while there are no other patients around, to administer the medication to her, without her consent. The doctor has done something good for the patient – she has potentially saved her life, and there is no chance of being found out.

I hope we would agree that in this case the doctor has done something unethical. She has clearly ignored the patient’s own wishes and values, violated her right to consent and openly deceived her. Of course, a rule utilitarian could always avoid the uncomfortable conclusion that the doctor acted ethically by deferring to policies – it’s better for everyone if we have policies and regulations against doctors deceiving patients, for example. However, this conclusion seems pretty unsatisfactory. It suggests that the only reason this doctor’s actions are unethical is that she has violated hospital regulations. There would be nothing wrong, on this view, with creating a law which allowed doctors to deceive patients if only it had desirable consequences for the overall health of patients.

The values of honesty and consent seem to run far deeper than merely pragmatic rules or regulations

What’s ultimately at issue here is the patient’s right to decide how to live her own life – according to her own values, judgements and preferences, which may not always align with a medical model of what a healthy patient looks like. The role of the doctor is not to decide on a set of desirable outcomes for the patient and enforce them on her; rather, it should be to help the patient to determine her own ends, insofar as her health affects this.

These non-utilitarian values also play a role in the literature on EBM specifically. For example, an article from the Evidence-Based Medicine Working Group in 1992 defends EBM on the grounds that it gives patients a clearer understanding of their prognosis, diagnosis and treatment(s) [5]. According to their argument, deferring to clinical intuition or expertise risks leaving patients “in a state of vague trepidation” about their health prospects and choices. By contrast, the openness about evidence which EBM encourages offers the patient a more transparent picture of their expected outcomes and options. EBM in this way doesn’t just aim at increasing positive utilitarian outcomes; it can also have benefits from the perspective of the patient’s rights, autonomy, and choice.

References

[1] Anjum RL, Mumford SD. A philosophical argument against evidence-based policy. Journal of Evaluation in Clinical Practice. 2017 Oct;23(5):1045–1050. doi:10.1111/jep.12578

[2] For an explanation of the difference between act and rule utilitarianism see: Utilitarianism, Act and Rule | Internet Encyclopedia of Philosophy

[3] This is a version of an argument by Smart. Smart JCC (1973) An outline of a system of utilitarian ethics. In Utilitarianism: For and Against (eds J. C. C. Smart & B. Williams), pp. 1–74. Cambridge: Cambridge University Press.

[4] Hooker B. (1995) Rule-consequentialism, incoherence and fairness.  Proceedings of the Aristotelian Society;95:19–35.

[5] Guyatt G, Cairns J, Churchill D, et al. Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine. JAMA. 1992 November;268(17):2420–2425. doi:10.1001/jama.1992.03490170092032

The post Does Evidence-Based Medicine Imply Utilitarianism? appeared first on Students 4 Best Evidence.

Friday, December 1, 2017

Assessing associations in observational studies

In the medical literature, it is very common to find variables associated with a specific outcome. For example, increased body mass index (the variable) might be associated with an increased risk of cancer (the outcome). However, an association does not always imply that one thing caused the other. It’s important to consider other possible interpretations.

Here are the 5 interpretations that you should consider when you read or hear about a reported association in observational studies:

1: The results were obtained by chance (random error)

The relationship between the variable and the outcome occurred by chance.

What’s really happening? There is no true association. It may be that two events appear to be related, just by coincidence.

Clues that this might be the case: The results can’t be replicated by repeating the study. We should also look at the precision of the reported association (e.g. the width of its confidence interval).

For example, you find a reported association between watching TV and myopia in an observational study. However, when you conduct a similar study you find no association, so the original association could have occurred by chance.
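
One simple way to look at the precision of a reported association is the width of its confidence interval: a very wide interval, or one that spans 1, is compatible with chance. The sketch below (invented counts, not from any real TV/myopia study) computes an odds ratio with an approximate 95% confidence interval using the standard log (Woolf) method:

```python
import math

def or_with_ci(a, b, c, d, z=1.96):
    """Odds ratio with an approximate 95% CI (log / Woolf method) for a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Invented counts: the OR is 2.0, but the interval runs from below 1 to well above it,
# so the data are still compatible with no true association (i.e. chance).
print(or_with_ci(18, 12, 30, 40))
```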

2: Bias (systematic error)

There is no true cause-and-effect relationship, there just appears to be. However, the results are not due to chance, but due to bias.

What’s really happening? There are issues in the design and application of the study which give a false impression that there is a relationship between the variable and outcome of interest.

Clues that this might be the case: The results are inconsistent with other similar studies; there may be issues of bias in the study design or conduct (e.g. confounders), or in the analysis of the results. The greater the systematic error, the less accurate the measured variable.

For example, you found an association between vaginal breech delivery and developmental dysplasia of the hip. However, the paediatrician’s examination was more detailed in newborns with breech presentation than in those with cephalic presentation. This could be a diagnosis bias, in which certain perceptions alter the probability of diagnosing a certain disease between groups.

3: The cause-effect relationship is upside down (effect-cause)

There really is a causal relationship, but in the opposite direction from that reported, meaning that the interpretation of the relationship is incorrect.

What’s really happening? The supposed outcome is the real cause, and the supposed cause is the real outcome.

This can be a problem in study designs that don’t address temporality, e.g. cross-sectional and case-control studies.

For example, if you were to identify a relationship between taking non-steroidal anti-inflammatory drugs (NSAIDs) and a greater risk of spontaneous abortion, you may think that the NSAIDs caused the spontaneous abortion. However, it is also possible that the NSAIDs could be taken to relieve the pain due to early symptoms of the spontaneous abortion itself. This misunderstanding is known as protopathic bias (when a drug is initiated in response to the first symptoms of a disease which is, at this point, undiagnosed).

4: There is another (unmeasured) variable which explains the relationship (confounding)

There is an unmeasured variable which explains the association.

What’s really happening? There is an unmeasured variable that intervenes in the relationship: it may sit between the “cause” and the “effect”, or a single variable may cause both the “cause” and the “effect”.

For example, if you were to identify a relationship between having a higher BMI and a greater risk of cancer, you might think that having a high BMI causes cancer. However, it would be important to consider whether other factors associated with having a higher BMI (e.g. poorer diet, less physical activity) could explain the increased cancer risk.
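
A standard way to probe for confounding is to stratify by the suspected confounder and compare the crude association with a stratum-adjusted estimate, such as a Mantel-Haenszel odds ratio. The sketch below uses invented numbers purely to show the mechanics:

```python
def crude_or(strata):
    """Collapse all strata and compute the unadjusted (crude) odds ratio.
    Each stratum is (a, b, c, d): a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    a = sum(s[0] for s in strata)
    b = sum(s[1] for s in strata)
    c = sum(s[2] for s in strata)
    d = sum(s[3] for s in strata)
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    """Mantel-Haenszel odds ratio pooled across strata of the confounder."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Two invented strata, e.g. low vs high physical activity.
strata = [(10, 40, 20, 200), (30, 60, 15, 80)]
print(crude_or(strata))            # unadjusted OR
print(mantel_haenszel_or(strata))  # OR adjusted for the stratifying variable
# If the adjusted OR differs noticeably from the crude OR, confounding is likely at play.
```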

5: The variable truly causes the outcome (cause-effect)

There really is a cause-effect relationship.

You could also go further and evaluate causality (e.g. using the Bradford Hill criteria: a set of principles for assessing the likelihood that there is a causal relationship between a presumed cause and effect):

  • Strength of association
  • Consistency
  • Specificity
  • Temporality
  • Biological gradient (dose-response)
  • Plausibility
  • Coherence
  • Experimental evidence
  • Analogy

References

Hulley SB, Cummings SR, Browner WS, Grady DG and Newman TB (2013). Designing clinical research. Lippincott Williams & Wilkins.

Swaen G and van Amelsvoort L (2009). A weight of evidence approach to causal inference. Journal of clinical epidemiology; 62(3):270-277.

The post Assessing associations in observational studies appeared first on Students 4 Best Evidence.

Wednesday, November 29, 2017

Why You’re an Adult Long Before You Buy Your First House

You’re most likely adulting even if you have not hit some of the big life milestones, like buying a home. Here's why.

The post Why You’re an Adult Long Before You Buy Your First House appeared first on Earnest Blog | Money Advice for Young Professionals.

Friday, November 24, 2017

Wading Through Conflicting Literature on G6PD Deficiency

What is G6PD deficiency?

Glucose-6-phosphate dehydrogenase (G6PD) is an enzyme which plays a part in protecting red blood cells from oxidative damage [1]. When there is a reduction in G6PD activity, the red cells break down in the presence of oxidative stress. G6PD deficiency is inherited in an X-linked recessive pattern, so it is more commonly found in boys, who have only one copy of the X chromosome. Around 400 million people worldwide are affected by the enzyme deficiency [2]. In our part of the world, G6PD deficiency is a relatively common condition, affecting 3 to 7% of the population [3]. We come across this condition quite often as medical students in Malaysia. What I did not know was that all babies in Malaysia are screened for this at birth and their parents are counselled if the babies are found to be G6PD deficient – that is, until I came across a little baby boy whose screening showed he was deficient.

What is the issue?

The mother of the baby boy was a sweet, young lady who was kind enough to share her experience with me. When she was told her newborn baby suffers from G6PD deficiency, she was at a loss and did not know what to expect. She was then counselled and given a pamphlet with details of what G6PD deficiency is and what she could do for her baby. This included a list of chemicals and medications that her baby should avoid in the future.

The encounter made me think about what advice could be given to the parents of G6PD deficient children. Our knowledge of G6PD deficiency is limited, although it is a fairly common inherited disease in Malaysia. While reading around the topic, I found that the lists of medications to avoid in G6PD deficiency vary from one source to another. Some lists are so extensive that they include over 100 medications. An example is available on the g6pddeficiency.org website [4]. Other lists include only a few of the more ‘notorious’ ones, such as antimalarial drugs and non-steroidal anti-inflammatory drugs (NSAIDs). One such list from the Malaysia Ministry of Health is available at the website www.myhealth.gov.my [2]. Another similar list is available in the Paediatric Protocols for Malaysian Hospitals [5].

It is plain to see that the above lists differ from one another. For example, common analgesics like paracetamol and aspirin were listed as harmful on g6pddeficiency.org and myhealth.gov.my, but they are considered safe to give in therapeutic dosages in the Paediatric Protocols for Malaysian Hospitals. I thought it must be confusing for G6PD deficient individuals and their healthcare providers to decide which lists to use. Those who wish to ‘play safe’ and follow the more extensive lists would miss the benefits of some medications that, with more extensive research, may actually prove to be harmless. On the other hand, those who follow the simpler lists may unintentionally use medications that have not been included but could cause harm.

How the literature search began

I decided to look up reviews and research that could shed light on the issue. That was when I came across a review article by Youngster et al, published in 2010, which included studies and case reports from as far back as 1950. The haemolytic potential of various medications in G6PD deficient individuals was reviewed. Surprisingly, there were only seven medications with solid evidence of drug-induced haemolysis: dapsone, methylthioninium chloride (methylene blue), nitrofurantoin, phenazopyridine, primaquine, rasburicase and tolonium chloride (toluidine blue) [6]. Some medications, like paracetamol, aspirin, ciprofloxacin, co-trimoxazole, nalidixic acid and a few other sulfa drugs, that have been considered unsafe by various sources did not have sufficient evidence of harm [6].

Forming the PICO

Given the widespread use of analgesics like paracetamol, I narrowed down my search to this class of drugs. There had been conflicting evidence on the safety of analgesic use in G6PD deficiency. For instance, some earlier studies had reported haemolysis following paracetamol use in G6PD deficient patients [7,8,9].  However, they were often attributed to overdose or confounded by an underlying infection or concurrent medication use. Thinking of the baby I met, and his possibility of using analgesics in the future, I formed my PICO question:

Appraising and applying

The search on PubMed generated two results and one of the articles was selected for use. It was a study titled “Potential Risks of Hemolysis after Short-Term Administration of Analgesics in Children with Glucose-6-Phosphate Dehydrogenase Deficiency” by Najafi et al [10].


Are analgesics safe in G6PD deficient children?

It was a prospective cohort study which sought to evaluate the risks of haemolysis after short-term use of analgesics in ten male children with G6PD deficiency. Laboratory and clinical findings of haemolysis were evaluated throughout the 7-day study. The results of this study showed that it may be safe to administer analgesics within therapeutic range to G6PD deficient patients, which is consistent with the review by Youngster et al [10].

However, because of the small sample size and the lack of a comparison group, larger clinical trials would be needed before the results of this study can have a larger impact on clinical practice. In the meantime, I agree that short-term analgesics within the therapeutic dosage range could be administered to G6PD deficient children, but close monitoring for the development of haemolysis may be necessary. If I were to give advice to the concerned mother I met recently, I would highlight the fact that we have some evidence, although still inconclusive, to suggest that analgesics given in therapeutic dosages can be safely administered to her child if he ever needs them.

What is the moral of the story?

Is evidence-based medicine a fairy tale? Yes, and what a shocking fairy tale this is, without a happy ever after. To find that 400 million G6PD deficient individuals worldwide are depending largely on isolated case reports, case series and a few reviews was a shocking truth indeed. There is an urgent need of further clinical trials with larger sample size so that these 400 million individuals do not avoid medications that could in fact benefit them.

On the whole, this literature search highlights the importance of better evidence for safer medicine. As the first rule of medicine goes, “Primum non nocere (First, do no harm)”, it is imperative that all healthcare professionals regularly search for evidence before formulating a management plan. Practising medicine without good evidence is like sailing an uncharted sea: we never know where we are going or when we could cause harm. Hence, the next time we meet a patient, before noting down the management plan, let’s pause, search and apply!

References

The post Wading Through Conflicting Literature on G6PD Deficiency appeared first on Students 4 Best Evidence.

Friday, November 17, 2017

Coronary heart disease: what is the evidence?

Nobody wants to experience the pain of a heart attack. It can be a crippling event in a patient’s life. As a cardiology intern, I encountered a 62-year-old gentleman who was a chronic smoker with a deranged lipid profile. He was admitted with acute typical angina which occurred at rest and continued for several hours. The ECG changes, echocardiogram and biochemistry results suggested myocardial infarction, commonly known as a heart attack.

It was his ninth day of admission; he was clinically stable and his laboratory results were normal. He was going home that day, and I was assigned to write a discharge note for him. As I wrote the note, I explained to him the nature of his medical problem.

What is Coronary heart disease (CHD)?

CHD is caused by plaque buildup in the walls of the arteries that supply blood to the heart. Plaque is made up of deposits of cholesterol and other substances. Plaque buildup causes arteries to narrow, which can block blood flow. CHD is the single most common cause of death globally. Although the rate of death due to CHD is decreasing, the number of people living with CHD is increasing, and these people need support to manage their symptoms and reduce the chances of future complications such as another heart attack.

I recalled what I was taught in medical school. My professors taught us that treating people in hospital with certain medications can alleviate their acute symptoms, however, the long term sequelae of the disease cannot simply be treated with the same medications. In my cardiology training, I have learned about the role that education, exercise training and psychological support can play in improving the quality of life in people with CHD.

What does the evidence show?

Patient education is the process by which health professionals impart information to patients and their caregivers that will alter their health behaviours or improve their health status. A recent Cochrane review tells us that, currently, there is little evidence that patient education, as part of a cardiac rehabilitation programme, reduces heart-related events or improves health-related quality of life [1]. Indeed, there is insufficient information at present to fully understand the benefits or harms of patient education for people with heart disease. Nonetheless, the findings tentatively suggest that people with heart disease should receive comprehensive rehabilitation that includes education [1].

A second component of rehabilitation is exercise. I explained to my patient the importance of exercise for his short-term and long-term outcomes. Another recent Cochrane review has shown that exercise-based cardiac rehabilitation reduces the risk of death due to cardiovascular cause, decreases the duration of hospital stay and improves health-related quality of life when compared with those not undertaking exercise [2].

Heart attacks and cardiac surgery can be frightening and traumatic, potentially causing increased psychological distress. A third recent Cochrane review assessed the effectiveness of psychological interventions for CHD. Although the evidence shows that psychological interventions, as part of cardiac rehabilitation, do not reduce the total mortality of people with CHD, there is some evidence that they can alleviate patient-reported symptoms of depression, anxiety and stress [3]. To prevent and control these symptoms in future, I recommended that my patient undergo regular psychological therapy. Even though it may have no role in reducing his risk of mortality due to CHD, it could improve these other important psychological symptoms.

Summary

In summary, cardiac rehabilitation may consist of three things: patient education, exercise and psychological support. Exercise has been shown to reduce future cardiac-related events, including death, in people with existing CHD. Regarding patient education, there is insufficient data to understand its benefits and harms for patients with CHD. Psychological interventions may alleviate patient-reported symptoms of anxiety or depression, thereby improving quality of life in people with CHD, and they thus remain an important part of cardiac rehabilitation.

References

  1. Anderson L, Brown JPR, Clark AM, Dalal H, Rossau HK, Bridges C, et al. Patient education in the management of coronary heart disease. Cochrane Database of Systematic Reviews 2017, Issue 6. Art. No.: CD008895. DOI: 10.1002/14651858.CD008895.pub3
  2. Anderson L, Thompson DR, Oldridge N, Zwisler AD, Rees K, Martin N, et al. Exercise-based cardiac rehabilitation for coronary heart disease. Cochrane Database of Systematic Reviews 2016, Issue 1. Art. No.: CD001800. DOI: 10.1002/14651858.CD001800.pub3
  3. Richards SH, Anderson L, Jenkinson CE, Whalley B, Rees K, Davies P, et al. Psychological interventions for coronary heart disease. Cochrane Database of Systematic Reviews 2017, Issue 4. Art. No.: CD002902. DOI: 10.1002/14651858.CD002902.pub4

The post Coronary heart disease: what is the evidence? appeared first on Students 4 Best Evidence.

Thursday, November 16, 2017

Automatic Bill Payments: Helping or Hurting Your Budget?

It's not even the dollar amount that is frustrating, but rather the irritation at forgetting you are being charged.

The post Automatic Bill Payments: Helping or Hurting Your Budget? appeared first on Earnest Blog | Money Advice for Young Professionals.

Wednesday, November 15, 2017

The Delphi Technique

 

What is it?

The Delphi technique (also referred to as the Delphi procedure or process) is a method of gathering expert opinion through a series of iterative questionnaires, with the goal of coming to a group consensus. In fact, across 150 studies that used the Delphi technique, there was no universally agreed-upon working definition of the technique. Many variants are used, some of which have departed widely from the original Delphi technique.

Since its development in the 1950s by the RAND Corporation, several refinements and modifications have been made, including specific strategies for different fields such as business, government, and healthcare.

There are four characteristic features of the Delphi technique that distinguish it from other group decision making processes. These are: anonymity, iteration with controlled feedback, statistical group response, and expert input.

When is it used?

The Delphi technique can be an especially useful research methodology when there is no true or knowable answer, such as in decision-making, policy-making, or long-range forecasting. A wide range of opinions can be included, which is useful in cases where relying on a single expert would lead to bias.

The Delphi technique was recommended as the method of choice when:

  • subjective statements made on a collective basis are desired;
  • face-to-face interaction is difficult due to large sample sizes;
  • anonymity is preferred, such as when issues are intractable or political; and
  • there is a chance of domination of a group discussion by one person.

How is it done?

  1. Survey Development
    • Define the research problem/questions and develop the first-round survey
    • Pilot the survey with a small group to ensure the responses will elicit appropriate answers to the research question
    • Round one is like a ‘brainstorming’ round, and allows participants to provide their own responses to the question. The responses are categorized by the researchers to provide the response options in future rounds. In the first round, participants may be asked to limit themselves to one response, or to answer as many times as they would like, depending on the research question and number of participants. Alternatively, pre-existing options could be provided for ranking or response; however, this approach could bias the responses or limit the available options
  2. Participant Recruitment
    • Research has indicated that participant subject matter knowledge (i.e. being an “expert”) may not have a substantial impact on study results, so it might be best to choose participants who have some understanding of the topic and an interest in the outcome of the study to limit attrition and encourage thoughtful responses to the surveys
    • Often, participants are selected via non-probability sampling techniques (either purposive sampling or criterion sampling), to save resources and ensure appropriate participants are selected
  3. Data Analysis
    • The first round of the Delphi technique involves participants providing answers to the research question, which will then be ranked in future rounds. In a classic Delphi, no items should be added or removed, and the wording used by participants should be kept for round two. However, this may be difficult or not feasible depending on the number and types of responses provided, and content analysis can be used to group similar themes prior to the second round. An informal literature review can also be used to identify further items depending on the research question
    • In subsequent rounds, participants are asked to rank/respond to the analysed options from round one. Between rounds the group’s responses are analysed, summarised, and communicated back to the participants, a process called controlled feedback. This is repeated until consensus is reached, or for a planned number of rounds depending on the research question
    • Subsequent rounds are analysed to identify convergence of participant responses, and to provide controlled feedback. Central tendencies (mean, median, and mode) and levels of dispersion (standard deviation and the inter-quartile range) are often used. These results are fed back to participants in the next round, although no consistent method for reporting exists (a simple illustration of such a summary is sketched after this list)
  4. Ending the Delphi Process
    • The process typically ends once an acceptable level of consensus has been reached; however, there is no universally agreed cut-off. The level of agreement required depends on sample numbers (i.e. high attrition or participant burnout), the aim of the research (i.e. whether complete consensus is required), and available resources
    • Two to four rounds are typically conducted to ensure study goals are met while avoiding sample fatigue and unnecessary use of resources
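
As referenced in the data-analysis step above, here is one minimal way the controlled feedback for a single survey item might be summarised between rounds. The Likert ratings and the consensus rule are invented for illustration and are not prescribed by the Delphi literature:

```python
from statistics import mean, median, quantiles

def summarise_item(ratings, agree_threshold=4):
    """Round summary for one survey item rated on a 1-5 Likert scale
    (the ratings and agreement threshold here are illustrative only)."""
    q1, _, q3 = quantiles(ratings, n=4)   # quartiles of the panel's ratings
    agreement = sum(r >= agree_threshold for r in ratings) / len(ratings)
    return {
        "mean": round(mean(ratings), 2),
        "median": median(ratings),
        "iqr": round(q3 - q1, 2),
        "percent_agreement": round(100 * agreement, 1),
    }

# One panel's second-round ratings for a single candidate item.
round_two_ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4]
print(summarise_item(round_two_ratings))
# A pre-specified rule (e.g. at least 75% of the panel rating 4 or 5)
# could then be used to decide whether consensus has been reached.
```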

Pros and Cons of the Delphi Technique:

Pros:

  • Allows use of a “committee” with fewer drawbacks (scheduling, travel/space requirements, lengthy discussions)
  • Anonymity reduces impact of dominant individuals and helps reduce peer pressure to conform, and allows opinions to be considered in a non-adversarial manner
  • Responses are weighted equally so no one person can shift the opinions of the group
  • Providing controlled feedback on the group opinion reduces noise and allows participants to reconsider based on others’ rankings

Cons:

  • There is a lack of clear methodological guidelines
  • Continued commitment is required from participants who are being asked a similar question multiple times
  • There is no evidence of reliability (i.e. if two panels received the same question they may not come to the same consensus)
  • Does not allow participant discussion and there is no opportunity for participants to elaborate on their views
  • The existence of a consensus does not necessarily mean that the correct answer, opinion, or judgement has been found; it merely helps to identify areas that one group of participants or experts consider important in relation to that topic

Similar/Alternative Methodologies:

Brainstorming and nominal group technique are similar techniques that allow incorporation of many individual perspectives.

 

References

Goodman CM. The Delphi technique: a critique. Journal of Advanced Nursing. 1987 Nov;12(6):729–734. doi:10.1111/j.1365-2648.1987.tb01376.x. Available from: http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2648.1987.tb01376.x/full

Sackman H. (1974). Delphi critique; expert opinion, forecasting, and group process. Lexington, Mass: Lexington Books.

Hasson F, Keeney S, and McKenna H. Research guidelines for the Delphi survey technique. Journal of Advanced Nursing. 2000 Oct;32(4):1008–1015. doi:10.1046/j.1365-2648.2000.t01-1-01567.x. Available from: http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2648.2000.t01-1-01567.x/full

Hsu C & Sandford BA. The Delphi Technique: Making Sense Of Consensus. Practical Assessment, Research & Evaluation. 2007 Aug;12(10). Available from: http://pareonline.net/getvn.asp?v=12&n=10

McKenna HP. The Delphi technique: a worthwhile research approach for nursing? Journal of Advanced Nursing. 1994 June;19(6):1221–1225. doi:10.1111/j.1365-2648.1994.tb01207.x. Available from: http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2648.1994.tb01207.x/full

Linstone HA, Turoff M. (1975). The Delphi Method: Techniques and Applications. Addison-Wesley Pub. Co., Advanced Book Program.

 

The post The Delphi Technique appeared first on Students 4 Best Evidence.

Wednesday, November 8, 2017

Should you intercalate?

Introduction

An intercalated degree is an opportunity to learn more about a particular topic, to develop transferable skills and/or to participate in a more in-depth research project than is otherwise available as part of a medical degree. The proportion of students that choose to intercalate varies from university to university; however, with the increasing academic focus of most training programmes, a greater variety of intercalated degrees is becoming available.

How it works

Most UK universities offer the opportunity to do an intercalated degree towards the end of your medical degree. This is usually after the 3rd or 4th year, but there are exceptions. Some places, such as Oxford, Cambridge and Edinburgh, include an intercalated year in their standard medical programmes, making them 6-year degrees. An intercalation typically lasts one year.

What’s on offer?

In the UK, it is possible to intercalate in a BSc, MSc, MRes or MPhil programme. Other degrees are available; however these are the most common.

First, it is important to decide whether or not you would like to intercalate. This is an important decision and requires careful thought. Next, consider whether you would like to intercalate at your own university or go elsewhere. Things to think about include exploring different areas of the UK, the varying opportunities in different specialities and the selection of degree types on offer. It is useful to explore university websites and contact programme co-ordinators early to find out as much as possible about how the course is assessed and what projects are available. Depending on the course, it may be possible to contact potential supervisors ahead of time, to express your interest and plan some research in advance – all of which can maximise the likelihood of success during the intercalation year.

You can search a wide variety of available intercalated degrees through the following link >> http://intercalate.hyms.ac.uk/

Advantages

  • Gain an additional degree for only one year of study
  • Gain new transferable skills
  • Show commitment to a speciality you are interested in pursuing
  • Opportunity to gain presentations and publications
  • Opportunity to network with new people
  • Opportunity to live and explore a new area of the UK
  • Additional points for Foundation Programme applications

Disadvantages

  • Additional costs (living expenses, tuition fees etc)
  • Additional time, meaning you finish later and may not graduate with all your peers
  • Potential to lose a lot of medical information that is crucial for finals as well as your job!

Other key points

It is true that an intercalated degree can give you more points towards your Foundation Programme application. Nevertheless, intercalating for this sole reason is not advised. The points that can be attained for the Foundation Programme application for additional degrees can be found below:

Table 1. Additional qualification points table for the Foundation Programme. Adapted from the UKFPO Handbook 2018.

Although an intercalation year can incur additional debt, the NHS will cover the cost of tuition fees for the 5th/6th year of medical school study. Moreover, the NHS provides a £1000 non-means-tested grant, with further means-tested funds available. Scholarships and funding from research bodies can also support an intercalation year; however, these require separate applications. (This information was based on the 2017 application year and may change in the future.)

Contacting people who have intercalated in the past is useful to help you decide and get further advice, especially if you can find people who are intercalating at the university/degree you wish to pursue.

Summary

Overall, an intercalated degree is an excellent opportunity to develop skills and can be beneficial to future career applications. However, the decision as to whether this is right for you should be a personal choice.

It is recommended that students research the prospect of an intercalation and the opportunities available, consider whether they would be able to handle an additional year of study, and think about whether it will benefit their intended career path.

The post Should you intercalate? appeared first on Students 4 Best Evidence.

Tuesday, November 7, 2017

A State-by-State Guide to the 2018 Health Insurance Open Enrollment Period

The Affordable Care Act, aka Obamacare, has a single period – open enrollment – when most people are eligible to buy health insurance for the upcoming year.

The post A State-by-State Guide to the 2018 Health Insurance Open Enrollment Period appeared first on Earnest Blog | Money Advice for Young Professionals.

Friday, November 3, 2017

Comparing the validity and robustness of different statistical methods for meta-analysis of rare event data

Meta-analyses of rare events or sparse data should employ different methods from those used for more common outcomes.

The assumptions behind standard meta-analysis methods do not hold in these scenarios, as there can be no events in one or both of the comparison arms. This is often the case with serious but uncommon adverse events, making it essential to get the analysis method right: studies have shown that using a different meta-analysis method can change the final effect estimate considerably (1).

The Cochrane Handbook has a separate chapter on special statistics (chapter 16), which contains a section devoted to reviews dealing with rare events (section 16.9). Though it discusses the validity of the different methods available within the literature, the accompanying RevMan software only permits the use of the Mantel-Haenszel odds ratio (OR) method with a 0.5 zero-cell correction or Peto’s OR, as acknowledged within the manual (2).

Mantel-Haenszel vs Peto odds ratio

The Mantel-Haenszel OR, using the 0.5 zero-cell correction, has been repeatedly shown to give biased results (2,3), while Peto’s OR has generally been considered robust when there is not a large imbalance in total group size between the comparison arms and the expected effects are not large (1-3). This has meant the Peto OR has become the method of choice for most.

However, if the latter assumptions are not met, which is often the case, then the results have a higher potential to be biased. Moreover, the Peto OR model is unable to handle studies where both arms have no events, and effectively removes them from the analysis. This further limits its applicability; ethically, patients who have been recruited to these double-zero studies have a right to have their data included in meta-analyses (4-6).
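
For readers unfamiliar with how the Peto approach works, the sketch below (not the code used in our study; the trial counts are invented) implements the fixed-effect Peto odds ratio for a set of two-arm studies. It also makes the limitation discussed above concrete: a study with no events in either arm contributes nothing to the pooled estimate.

```python
import math

def peto_pooled_or(studies):
    """Fixed-effect Peto odds ratio across studies.
    Each study is (events_trt, n_trt, events_ctl, n_ctl); counts are invented."""
    sum_o_minus_e, sum_v = 0.0, 0.0
    for a, n1, c, n2 in studies:
        n = n1 + n2
        m = a + c                                   # total events in this study
        if m == 0:
            continue                                # double-zero study: drops out entirely
        e = n1 * m / n                              # expected events in the treatment arm
        v = n1 * n2 * m * (n - m) / (n ** 2 * (n - 1))  # hypergeometric variance
        sum_o_minus_e += a - e
        sum_v += v
    log_or = sum_o_minus_e / sum_v
    se = 1 / math.sqrt(sum_v)
    return math.exp(log_or), (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))

# Three invented trials; the third has no events in either arm and is silently excluded.
trials = [(3, 150, 1, 148), (2, 200, 0, 201), (0, 90, 0, 92)]
print(peto_pooled_or(trials))  # pooled OR and its approximate 95% CI
```

Methods such as the beta-binomial model keep those double-zero studies in the analysis, which is part of the rationale for the sensitivity analyses recommended in the conclusion below.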

We therefore undertook a study to identify the validity and robustness of effect estimates across different meta-analysis methods for rare binary events data, using the example of serious rare adverse events from antidepressants trials. We compared the four rare adverse outcomes of all-cause mortality, suicidality, aggressive behaviour and akathisia across the Peto method, the generalized linear mixed model (GLMM), conditional logistic regression, a Bayesian approach using Markov Chain Monte Carlo (MCMC) method and finally the beta-binomial method.

Our results, recently published online (7) in the Journal of Clinical Epidemiology, showed that although the estimates for the four outcomes did not change substantially across the different analysis methods, the Peto method underestimated the treatment harm and overestimated its precision, especially when the estimated OR deviated greatly from 1. For example, the OR for suicidality in children and adolescents was 2.39 (95% CI 1.32 to 4.33) using the Peto method, but increased to 2.64 (1.33 to 5.26) using conditional logistic regression, to 2.69 (1.19 to 6.09) using the beta-binomial method, to 2.73 (1.37 to 5.42) using the GLMM and finally to 2.87 (1.42 to 5.98) using the MCMC approach. Moreover, when we consider absolute numbers and that these are very serious harms, even minor changes in the effect estimates make a difference.

Conclusion

The method used for meta-analysis of rare events data therefore influences the estimates obtained, and the exclusion of double zero-event studies can give misleading results. To reduce bias and erroneous inferences, sensitivity analyses should be performed using different methods. Other methods, in particular the beta-binomial method that was shown to be superior in simulation studies (4,7), should be considered as appropriate alternatives.

References

  1. Sweeting M, Sutton A, Lambert P. What to add to nothing? Use and avoidance of continuity corrections in meta-analysis of sparse data. Statistics in Medicine 2004; 23(9): 1351-75.
  2. Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions [updated March 2011]. The Cochrane Collaboration; 2011. Available from www.cochrane-handbook.org (accessed May 2016).
  3. Bradburn M, Deeks J, Berlin J, Russell Localio A. Much ado about nothing: a comparison of the performance of meta-analytical methods with rare events. Statistics in Medicine 2007; 26(1): 53-77.
  4. Kuss O. Statistical methods for meta-analyses including information from studies without any events—add nothing to nothing and succeed nevertheless. Statistics in Medicine 2015; 34(7): 1097-116.
  5. Friedrich JO, Adhikari NK, Beyene J. Inclusion of zero total event trials in meta-analyses maintains analytic consistency and incorporates all available data. BMC Medical Research Methodology 2007; 7(1): 1-6.
  6. Keus F, Wetterslev J, Gluud C, Gooszen HG, van Laarhoven CJHM. Robustness Assessments Are Needed to Reduce Bias in Meta-Analyses That Include Zero-Event Randomized Trials. Am J Gastroenterol 2009; 104(3): 546-51.
  7. Sharma T, Gøtzsche PC, Kuss O. The Yusuf-Peto method was not a robust method for meta-analyses of rare events data from antidepressant trials. Journal of Clinical Epidemiology 2017; Aug 9. pii: S0895-4356(17)30785-0. doi: 10.1016/j.jclinepi.2017.07.006.

The post Comparing the validity and robustness of different statistical methods for meta-analysis of rare event data appeared first on Students 4 Best Evidence.

Wednesday, November 1, 2017

Evidence-based health practice: a fairytale or reality?

This blog, written by Leonard Goh, was the winner of Cochrane Malaysia and Penang Medical College’s recent evidence-based medicine blog writing competition. Leonard has written an insightful and informative piece to answer the question: ‘Evidence-based health practice: a fairytale or reality’.

Details of the other winners are here – many congratulations to you all.

The philosophy of EBM has its origins in the 19th century, though it was only relatively recently, in 1991, that Gordon Guyatt first coined the term in his editorial, espousing the importance of a clinician's ability to critically review the literature and synthesize new findings into practice [1]. His comments sparked the EBM movement in a medical fraternity increasingly dissatisfied with basing clinical practice on anecdotal testimonials, leading to the worldwide incorporation of EBM classes into undergraduate and postgraduate programmes, as well as workshops for already-practising clinicians.

It has to date percolated into the public arena, to the point where it is not uncommon to hear patients asking, “so what is the evidence for this treatment you are proposing?”

EBM was formally defined as "the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients" in a 1996 article [2], and the definition was revised in 2000 to "a systematic approach to clinical problem solving which allows the integration of the best available research evidence with clinical expertise and patient values" [3]. The revision attempted to place equal emphasis on the clinician's individual clinical competency and cultural competency, addressing the common criticism of the earlier definition that it was "cookbook medicine" which negated individual clinical expertise and patient choice.

There is a constant struggle to balance clinical decisions against these three ideals, especially when the current best evidence contradicts individual experience or patient choice. But the main issue with EBM is, in my opinion, the validity and applicability of research results to the real world. Research conclusions are drawn entirely from statistical analysis, yet the vast majority of clinicians lack the in-depth understanding of statistics needed to draw the correct conclusions.

Take the p-value, for instance. It is commonly understood that results are statistically significant if the corresponding p-value is below 0.05 and non-significant if it is above 0.05. However, this dichotomy is far too simplistic.

The 0.05 threshold is an arbitrary convention, first proposed by R.A. Fisher in 1925 and used ever since for convenience's sake. Furthermore, statistical significance should in fact be seen as a continuum: a result with a p-value of 0.049 is not markedly more significant than one with a p-value of 0.051, yet we proclaim the former a significant finding and disregard the latter. This invariably leads to faulty reporting of study conclusions [4].
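
To see just how arbitrary the cut-off is, here is a small illustration (a sketch using SciPy, not part of the original entry): two nearly identical test statistics land on opposite sides of the 0.05 line.

    # Nearly identical z-statistics straddle the conventional 0.05 threshold.
    from scipy.stats import norm

    for z in (1.95, 1.96, 1.97):
        p = 2 * norm.sf(z)          # two-sided p-value for a z-test
        verdict = "significant" if p < 0.05 else "not significant"
        print(f"z = {z:.2f}  ->  p = {p:.4f}  ({verdict})")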

Digging deeper into the principles of the p-value, it becomes apparent that there is a subtle difference between the commonly held notion of it and its exact meaning, one that can elude even the most astute clinicians: the p-value says nothing about the truth of the null hypothesis; rather, it describes how compatible the observed data are with that hypothesis. To remedy this, the American Statistical Association published a statement on statistical significance and p-values [5], but we can be sure it will take a significant (pun unintended) amount of time before these deeply entrenched misunderstandings are rectified.

This insufficient understanding of statistics leaves research studies vulnerable to statistical manipulation, whether unintentional or otherwise. Indeed, Stephen Ziliak and Deirdre McCloskey, authors of the book The Cult of Statistical Significance, estimate that between 80% and 90% of journal articles have serious flaws in their use of statistics; even papers published in the New England Journal of Medicine were not spared.

On top of that, huge numbers of papers are published every day, making it a Herculean task to determine what constitutes "current best evidence": it takes time to do a literature search, identify papers, and appraise them, time which clinicians simply do not have. Even when correct methodologies are in place, there is concern that claimed research findings may simply be accurate measures of the prevailing bias [6].

Adding insult to injury is the presence of predatory open-access journals in the publishing industry. Neuroskeptic, an anonymous neuroscientist-blogger, illustrated this by submitting a Star Wars-themed spoof manuscript, absolutely devoid of scientific rigor, to nine journals; the American Journal of Medical and Biological Research accepted it, and the International Journal of Molecular Biology: Open Access, the Austin Journal of Pharmacology and Therapeutics, and the American Research Journal of Biosciences published it [7].

While Neuroskeptic's intent was not to make a statement about the brokenness of scientific publishing but rather to remind us that some journals falsely claim to be peer reviewed, the episode also highlights the very real possibility of better-concealed, intellectually dishonest papers masquerading as legitimate science and impeding efforts to uphold evidence-based health practice.

Is evidence-based medicine then an unrealisable fairytale?

To conclude as much would perhaps be excessively harsh. Yes, our practice of evidence-based medicine is flawed, as the preceding paragraphs point out. That, however, is not to suggest that we should stop striving to improve it. Evidence-based medicine remains our best hope of ensuring that we provide our patients with the best available treatment options, and we should persevere in our efforts to turn this fairytale into reality.

But should this be our utmost priority?

In today's world, where the medical profession is increasingly governed by statistics and algorithms, it is easy to mistake evidence-based medicine for a panacea. We would do well to remember that, as much as it is a science, medicine is also an art. It is crucial that we do not lose sight of our raison d'être: to cure sometimes, relieve often, and comfort always.

 

References

The post Evidence-based health practice: a fairytale or reality? appeared first on Students 4 Best Evidence.