From the Executive Editor: Reading with a critical eye – random allocation
Do you read a manuscript from the front to the back or do you jump from the front to the middle and then to the back? Well, my reading style is definitely the latter. As editor, I read all manuscripts in this journal in their entirety – often more than once. However, when I pick up another journal, I start with title, authors, affiliation, and abstract. I may stop reading after I answer any of the following questions: Does the title reflect a topic that interests me? Do I believe these authors have a good reputation for publishing critically designed studies? From the affiliation, I decide whether or not the pigs in the study are similar to those I deal with and whether I am concerned about bias or conflict of interest. Next, I read the abstract. It is like a movie trailer: it piques my interest, but it likely reflects the very best the authors have to offer. It does not give me a chance to critically evaluate the study. I need more information to know whether or not I agree with the conclusions of the authors. If, after reading the abstract, I am willing to proceed, I jump to the methods section.
Critically evaluating the methods is likely the most important thing you do as a reader. You need a good description of the pigs, their management, and the treatments applied to them. Next, you need to know how pigs were assigned to treatment. True random assignment of pigs to treatment is a laborious and time-consuming process. However, the statistical tests we apply to our data assume that we have assigned pigs randomly. Let me describe one random allocation process to illustrate my point. First, all pigs are given individually numbered ear tags. Next, each number is randomly assigned to a treatment group. Finally, each pig is put through a chute or onto a weigh scale, its ear tag is read, and then, after checking the random assignment chart, the pig is given the appropriate treatment.
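For readers who like to see the bookkeeping spelled out, here is a minimal sketch of such a random assignment chart in Python. The tag numbers, treatment labels, and function name are hypothetical; the point is that the chart is generated before anyone handles a pig, and the person at the chute simply reads the tag and follows the chart.

```python
import random

def make_allocation_chart(tag_numbers, treatments=("A", "B"), seed=None):
    """Assign each ear-tag number to a treatment at random (hypothetical sketch).

    Each tag gets an independent random draw, so the group sizes will
    usually differ a little; that is expected with simple random assignment.
    """
    rng = random.Random(seed)
    return {tag: rng.choice(treatments) for tag in tag_numbers}

# Print the chart before the farm visit; at the chute, read the ear tag
# and give whatever treatment the chart says.
chart = make_allocation_chart(range(1, 21), seed=42)
for tag, treatment in chart.items():
    print(f"Tag {tag:2d} -> treatment {treatment}")
```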
Another method, called systematic random sampling, occurs as follows. All pigs are put through the chute one at a time. A coin flip assigns the first pig to treatment A or B. Assuming the first pig gets treatment B, the odd-numbered pigs get treatment B and the even-numbered pigs get treatment A. I recently conducted a study in that manner. It sounds simple – doesn’t it? Each pig went onto the weigh scale, one researcher called out the tag number, and it was my job to select the correct vaccine and inject the pig. The process ran smoothly for 2 hours, and then I started to make mistakes. How did that happen? Repetition made the job boring, and I wasn’t concentrating. If a pig gets the wrong treatment, it is important to admit the mistake and record the data correctly. I added some notes of explanation on the data-recording chart. I will make sure the pigs are placed in the correct groups when I analyze the data.
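A minimal sketch of that systematic scheme, again with hypothetical pig numbers and treatment labels, shows how it works: one coin flip for the first pig, after which every assignment simply alternates.

```python
import random

def systematic_assignment(n_pigs, treatments=("A", "B"), seed=None):
    """Hypothetical sketch: a single coin flip decides the first pig's
    treatment, then treatments alternate down the line."""
    rng = random.Random(seed)
    first = rng.choice(treatments)  # the coin flip for pig 1
    other = treatments[1] if first == treatments[0] else treatments[0]
    # Odd-numbered pigs get the first pig's treatment; even-numbered pigs get the other.
    return {pig: (first if pig % 2 == 1 else other) for pig in range(1, n_pigs + 1)}

# One possible result for six pigs: {1: 'B', 2: 'A', 3: 'B', 4: 'A', 5: 'B', 6: 'A'}
print(systematic_assignment(6))
```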
Recently, I read a paper in which the authors stated that each sow was randomly assigned to one of two treatments. However, I did not believe what I read. The tables showed that, within each parity, there were almost equal numbers of sows in each treatment group. That is very unlikely to happen with simple random assignment. I suspect that the authors randomly assigned sows to treatment within parity group. Or did they? If the authors do not describe how they did the random assignment, I always wonder whether RANDOM accurately describes what happened.
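Randomly assigning sows to treatment within parity group (stratified randomization) would explain those balanced tables. The sketch below is hypothetical and not a reconstruction of that paper's methods: within each parity, a balanced list of treatment labels is shuffled, so the two groups come out nearly equal in size within every parity.

```python
import random
from collections import defaultdict

def randomize_within_parity(sows, treatments=("A", "B"), seed=None):
    """Hypothetical sketch of stratified randomization: shuffle a balanced
    list of treatment labels separately within each parity group."""
    rng = random.Random(seed)
    by_parity = defaultdict(list)
    for sow_id, parity in sows:
        by_parity[parity].append(sow_id)

    allocation = {}
    for parity, ids in by_parity.items():
        labels = (list(treatments) * ((len(ids) + 1) // len(treatments)))[:len(ids)]
        rng.shuffle(labels)
        allocation.update(zip(ids, labels))
    return allocation

# Hypothetical (sow_id, parity) pairs
sows = [(1, 1), (2, 1), (3, 1), (4, 2), (5, 2), (6, 3), (7, 3), (8, 3)]
print(randomize_within_parity(sows, seed=7))
```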
A few years ago, we conducted a study to determine whether using levamisole as an immunostimulator would increase growth rate and/or decrease mortality in the pre-weaning period. We put all pigs from a litter into a bucket and picked them up one at a time. The odd-numbered pigs received levamisole and the even-numbered pigs received saline. Superficially, it seemed like a good process. In retrospect, we should have taken the time to use formal random allocation. At the end of the study, the pigs receiving levamisole had lower birth weights than the pigs receiving saline. How did that happen? I think it was “humane” error! If there was a particularly small pig in the litter, the researchers unconsciously (or consciously) wanted to give that pig the levamisole to give it the best chance to survive and thrive. If there were four pigs left in the bucket and the next pig was to receive levamisole, we put the small pig into the study next. If we had randomly assigned our pig identification numbers to treatment before the farm visit, we would have avoided this bias. Each pig would have received an ear tag and the random allocation sheet would have told us what treatment the pig should receive. This would have avoided the “human” – or should that be “humane” – factor. Further, with both even- and odd-numbered tags receiving both treatments, we would have been blinded to the treatment when we followed the pigs through the nursing phase.
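A minimal sketch of that pre-visit approach, with hypothetical tag numbers and coded treatment labels, is shown below. The allocation sheet is built before the farm visit, and because the codes are shuffled across all tag numbers, neither the tag number nor the order the pigs come out of the bucket reveals the treatment group to anyone following the pigs later.

```python
import random
from collections import Counter

def blinded_chart(tag_numbers, codes=("X", "Y"), seed=None):
    """Hypothetical sketch: shuffle coded treatment labels (e.g. levamisole
    vs saline relabelled X/Y) across ear-tag numbers before the farm visit."""
    rng = random.Random(seed)
    labels = (list(codes) * ((len(tag_numbers) + 1) // len(codes)))[:len(tag_numbers)]
    rng.shuffle(labels)
    return dict(zip(tag_numbers, labels))

chart = blinded_chart(list(range(1, 13)), seed=3)
# Sanity check: both codes should turn up among odd and among even tag numbers,
# so the tag number alone reveals nothing about the treatment group.
print(Counter((tag % 2, code) for tag, code in chart.items()))
```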
Assignment to treatment is often done in a haphazard or convenience manner, meaning that pigs get one or the other treatment as they come through an alleyway. Sometimes, pigs are assigned to treatment on the basis of pen or barn. In that case, the analysis needs to be conducted at the pen or barn level, not the pig level, and the power of the study is reduced substantially because the effective sample size is the number of pens or barns, not the number of pigs. The authors proceed to apply statistical tests to the data, but the fundamental assumption of random allocation is missing. Therefore, the statistical tests are not valid. In the next issue, I will discuss critically evaluating the statistical tests.
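Before then, to make the unit-of-analysis point concrete, here is a hypothetical sketch that collapses pig-level outcomes to one value per pen before any comparison is made; the pen names and average-daily-gain figures are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

def pen_level_means(records):
    """Hypothetical sketch: when treatment was allocated by pen, the pen is
    the unit of analysis, so collapse pig-level outcomes to pen means."""
    by_pen = defaultdict(list)
    for pen, treatment, outcome in records:
        by_pen[(pen, treatment)].append(outcome)
    return {key: mean(values) for key, values in by_pen.items()}

# Hypothetical pig-level records: (pen, treatment, average daily gain in kg/day)
records = [
    ("pen 1", "A", 0.71), ("pen 1", "A", 0.68), ("pen 1", "A", 0.74),
    ("pen 2", "B", 0.66), ("pen 2", "B", 0.70), ("pen 2", "B", 0.64),
]
# Two pen means (roughly 0.71 and 0.67), not six pig values, enter the comparison.
print(pen_level_means(records))
```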
— Cate Dewey, DVM, MSc, PhD