From the Editor

Read this one cover to cover

This issue of the Journal of Swine Health and Production likely has something of interest for everyone, but from a study-design perspective, the scientific articles cover particularly wide territory. When I began as editor, I was determined to write editorials describing how one might critically evaluate the literature. If we understand study design, we can judge whether the research findings are likely to apply to the pigs in our care. This editorial describes some of the strengths and weaknesses of the various study designs illustrated in this issue.

The first scientific article is a retrospective observational study by Amass et al. The authors had a question about something that happened in the past, and so had to rely on the pig owners' recall of events. As is typical of retrospective observational studies, some data are missing; a complete data set is often difficult to obtain in observational studies, especially retrospective ones. As the reader, you must decide whether the missing data have biased the authors' results and conclusions. Before you dismiss the article, however, remember that the strength of observational studies is that they measure what happens in the real world, so the reader does not have to worry about extrapolating results to the field. I cannot imagine designing anything but an observational study to answer the question posed by these authors.

The second scientific article, by Cassar et al, is a field trial. A treatment regimen is randomly assigned to sows, and the sows are then observed through pregnancy to the following parturition. The strength of a field trial is that the medication is administered to sows on a commercial farm. In contrast to the situation in a laboratory study, the sows are affected by disease and management flaws as well as by the new treatment regimen. The reader can be confident that if the product works on this farm, it will likely work on other commercial farms. The limitation is that the study sows may be affected by other problems that mask the effect of the product. The reader must understand which sows were included in the study and which were excluded to judge how well the study subjects represent the whole population of sows in a commercial herd.
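For readers who want to picture the allocation step, here is a minimal sketch, in Python, of how a treatment might be randomly assigned to half of a group of sows. The sow identifiers, group sizes, and two-group design are hypothetical and are not taken from Cassar et al.

```python
import random

# Hypothetical sow identifiers; in a real field trial these would come from herd records.
sow_ids = [f"sow_{i:03d}" for i in range(1, 21)]

random.seed(42)  # fixed seed so the allocation can be reproduced and audited
shuffled = sow_ids[:]
random.shuffle(shuffled)

# First half of the shuffled list receives the new regimen;
# the second half serves as the comparison (control) group.
half = len(shuffled) // 2
allocation = {sow: ("treatment" if i < half else "control")
              for i, sow in enumerate(shuffled)}

for sow, group in sorted(allocation.items()):
    print(sow, group)
```

Because every sow has the same chance of receiving the new regimen, differences between groups at the end of the trial are less likely to reflect how the sows were chosen and more likely to reflect the treatment itself.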

Laitat et al did their study in a laboratory setting. Using pigs that are nearly identical and randomly assigning them to treatment pens is a terrific way to ask a specific question. With this experimental design, the reader can be confident that differences in the outcome are due to the treatments applied; in this article, for example, the differences in time spent at the feeder were due to the feed form. The reader must determine whether the laboratory setting mimics a commercial barn closely enough to extrapolate the results to the real world. Laboratory studies give us answers to specific questions, which can then be retested in field settings.

Meta-analyses are rare in the applied swine literature, but they serve a distinct purpose. Miguel et al reviewed all of the published and unpublished literature examining the growth effect of an in-feed oligosaccharide. Rather than conducting one more study on this product, these authors carried out an exhaustive review of the previous studies and provided the reader with a summary of the findings. This approach works well when a topic has been extensively evaluated, and particularly when a product produces variable results. Miguel et al carefully describe how they used statistics to test subsets of the literature and what inclusion criteria they applied in each analysis. The reader can use this article to answer the question, "What is the likelihood that my pigs weaned at 21 days will benefit from having this product included in the ration for 2 weeks?"
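To illustrate the general idea of pooling, the sketch below applies one common approach, fixed-effect inverse-variance weighting, to hypothetical study results. The study names, mean differences, and standard errors are invented and are not those analyzed by Miguel et al.

```python
# Hypothetical per-study results: mean difference in average daily gain (g/day)
# between supplemented and control pigs, with its standard error.
studies = [
    {"name": "Study A", "mean_diff": 12.0, "se": 6.0},
    {"name": "Study B", "mean_diff": -3.0, "se": 8.0},
    {"name": "Study C", "mean_diff": 7.5,  "se": 4.0},
]

# Fixed-effect inverse-variance pooling: studies with smaller standard errors
# (more precise estimates) receive proportionally more weight.
weights = [1.0 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["mean_diff"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"Pooled mean difference: {pooled:.1f} g/day (SE {pooled_se:.1f})")
```

The pooled estimate is more precise than any single study, which is exactly why a meta-analysis is valuable when individual trials of a product give variable results.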

The cross-sectional observational study described by Karriker in "What's your interpretation?" has a conclusion that makes my heart sing, though it would likely have this effect only on an epidemiologist. The author's point is that differences observed in large production units must be tested with statistical analyses to be considered valid. If two production parameters differ numerically but not significantly, the difference may be due to chance alone, and the veterinarian cannot be confident that the same difference will occur again. For this reason, economic analyses cannot be based on nonsignificant differences in production parameters.
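As a purely hypothetical illustration of that point, the sketch below compares average daily gain from two imaginary barns with a t-test. The data are invented; the barn means differ numerically, yet the test may well fail to reach the conventional significance threshold.

```python
from scipy import stats

# Hypothetical weekly average daily gain (kg/day) for pigs in two barns.
# Barn B's mean is numerically higher; the question is whether the difference
# is larger than what chance variation alone could produce.
barn_a = [0.78, 0.81, 0.75, 0.80, 0.77, 0.79, 0.82, 0.76]
barn_b = [0.80, 0.83, 0.77, 0.79, 0.84, 0.78, 0.81, 0.85]

t_stat, p_value = stats.ttest_ind(barn_a, barn_b)
print(f"Mean A = {sum(barn_a)/len(barn_a):.3f}, "
      f"Mean B = {sum(barn_b)/len(barn_b):.3f}, p = {p_value:.2f}")

# If p exceeds 0.05, the numerical difference could plausibly be due to chance,
# and an economic projection built on it would rest on an effect that may not repeat.
```

This is the heart of Karriker's argument: a difference that is not statistically significant is not a dependable basis for an economic decision.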

I hope each of you has the opportunity to read the articles in this issue. I appreciate the hard work of the authors, the volunteer hours donated by our reviewers and editorial board, and the diligence of the staff of the journal, whose efforts together bring you this issue with such a diverse selection of study designs.

-- Cate Dewey