From the Executive Editor
Evidence-based practice

“Evidence-based practice”: what a terrific theme for the 42nd AASV conference in March 2011! When I graduated in the dark ages (1979), our curriculum did not include critical evaluation of the literature. I must admit that I took the printed literature as fact. As I continued my education while in practice, I avidly read journals and then accepted what I read. Today’s veterinary students have the advantage of a richer curriculum. At the Ontario Veterinary College, students in their second year take a full-year course entitled “Health Management,” taught by Dr Dave Kelton. This course includes lectures and seminars on clinical decision making for the diagnosis, therapy, and prevention of disease; diagnostic decision making; the appropriate use of diagnostic tests; and causal reasoning.

The students complete an assignment on critical evaluation of the literature. They identify a clinical question regarding the care of an animal or a herd and then search the veterinary literature to find up to three current articles, published in refereed veterinary journals, that will help to answer the question. Using these articles, the students evaluate the strengths and weaknesses of each according to the guidelines for critical appraisal. On the basis of the quality of the evidence and the arguments presented in these papers, the students decide how to proceed with the management of the clinical case. This sounds like an excellent exercise to prepare veterinarians for evidence-based practice.

Causal reasoning provides an excellent set of parameters with which to evaluate clinical decisions. I like those described by Dohoo et al,1 which are as follows.

Study design and statistical analyses. Certain study designs, such as cohort studies and randomized field trials, provide stronger evidence than cross-sectional or case-control studies. Laboratory studies provide evidence, but not necessarily what we need to determine applicability to the field. Your job is to determine whether bias or confounding factors may explain the results. Certainly, without a concurrent control group, causation cannot be determined, and studies that draw conclusions from historic controls are not valid. By convention, a P value of < .05 is considered statistically significant. Even so, at .05, the Type 1 error is 5%: if there were truly no association, there would still be a 5% probability of observing one this strong by chance alone.
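
In formal terms (this is the standard textbook definition, not something specific to any study cited here), the Type 1 error rate α is the probability of rejecting a null hypothesis H0 that is in fact true:

\[
\alpha = P(\text{reject } H_0 \mid H_0 \text{ is true}) = .05
\]

Note that this is the probability of a false-positive finding when no true association exists; it is not the probability that a particular significant result is wrong.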

Time sequence implies that the cause or the factor of interest occurred before the disease. Stated another way, the treatment or vaccination occurred before the reduction in mortality or morbidity.

Strength of association is measured as an odds ratio or relative risk, or may be the size of a coefficient in a linear regression model. If the odds of recovery are four times higher in treated animals than in nontreated animals, you are more likely to accept that the treatment worked than if the odds of recovery are only 1.2 times higher. Similarly, if the outcome is average daily gain (ADG), we are more certain of an effect when the difference in ADG is large rather than very small.
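
As a purely hypothetical worked example (these numbers are illustrative only, not taken from any study), suppose 40 of 50 treated pigs recover and 25 of 50 nontreated pigs recover. The odds of recovery are then 40/10 = 4.0 in the treated group and 25/25 = 1.0 in the nontreated group, giving

\[
\mathrm{OR} = \frac{40/10}{25/25} = \frac{4.0}{1.0} = 4.0,
\]

that is, the odds of recovery are four times higher in treated animals.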

Dose-response is not always applicable. However, where it does apply, we expect that if exposure to a small number of virus particles results in mild disease, then exposure to a large number of virus particles will result in more severe disease, a more rapid onset of disease, or a higher number of pigs infected. Similarly, if an antibiotic is an effective treatment, then a sufficient dose for 1 day may reduce clinical signs, while the same dose for several days results in a cure. There may be a threshold above which an increase in dose will not change the result.

Coherence or biological plausibility refers to whether or not the association can be explained by our current scientific knowledge. If we find that a treatment results in a reduction in clinical signs, does it make sense to us as scientists, or was it just a chance occurrence? Sometimes science has not yet caught up with what we observe in the field, and new work needs to be done to explain what we find.

Consistency is a very important criterion. It refers to the expectation that a new scientific finding will be repeated by different scientists in different settings. For example, if we find an association in the laboratory, we then wish to determine whether the finding can be replicated in the field. If there is an association in one herd in Iowa, we may wish to determine whether we find the same in commercial herds in Minnesota, Canada, and Denmark.

Specificity is an indication of how well defined the clinical syndrome is. If Mycoplasma hyopneumoniae causes illness in pigs, is the clinical syndrome always the same? If an agent causes exactly the same clinical problems each time, then we are more certain that the agent causes disease in that species. This is likely one reason there was debate about whether post-weaning multisystemic wasting syndrome was caused by porcine circovirus type 2: the clinical presentation was not consistent.

Analogy refers to whether or not similar results are found in other species. Influenza virus causes upper respiratory tract infections and fever in multiple species.

Experimental evidence refers to whether similar results are found in laboratory studies and other controlled experiments. Associations found in the field are often verified by controlled laboratory studies.

Not all nine criteria must be fulfilled for us to accept a causal relationship, but the more that are fulfilled, the more certain we can be.

The AASV meeting in Phoenix will be a good opportunity to critique our own approach to evidence-based practice. In the meantime, perhaps we can follow the approach of our student veterinary colleagues.

Reference

1. Dohoo I, Martin SW, Stryhn H. Introduction and causal concepts. In: Veterinary Epidemiologic Research. 2nd ed. Charlottetown, Prince Edward Island: VER Inc; 2009:1–31.