Evidence-based decision making includes the use of evidence from research studies. When evaluating the efficacy of an intervention, where it is ethical and feasible to assign animals to intervention groups, clinical trials have the highest evidentiary value of the primary research study designs.1 Clinical trials are controlled trials or experiments conducted to evaluate products or procedures outside of a laboratory setting. When treatment allocation includes a formal random process for assigning animals (or pens) to intervention groups, clinical trials are referred to as randomized controlled trials (RCTs). When designing a clinical trial, it is important to include design features intended to reduce the potential for bias. Bias is defined as a difference between the study results and the truth (ie, the true effect of the interventions).2 The effect of most interventions is relatively small.3 Therefore, it can be difficult to distinguish between true intervention effects and bias. This can lead to invalid interpretations of intervention effects and, therefore, inappropriate use of interventions by individuals using the results of the trial for decision making. If trials are thought to be biased, it is increasingly common to recommend that they be excluded from the evidence base. Exclusion from the evidence base means that the study results are not used, the resources devoted to the trial are wasted, and the trial needs to be conducted again to get informative results.
Several trial design features have been associated with risk of bias. Meta-epidemiological studies evaluating large numbers of human clinical trials show that inadequate randomization, lack of allocation concealment, and nonblinding of patients and outcome assessors are associated with exaggerated intervention effects.4,5 Statistically significant outcomes are more likely to be reported in a publication than outcomes that were not significant, leading to bias due to selective outcome reporting.6
The objective of this commentary is to review these features in the context of clinical trials conducted in swine and to discuss ways in which biases associated with these features can be minimized to avoid research waste and maximize research utility.
Randomization
When conducting a clinical trial, it is important that the intervention groups are similar in terms of the distribution of prognostic factors (characteristics that are associated with the outcome) at the start of the experiment. For example, in a trial evaluating the efficacy of an intervention to prevent mortality, it is possible that age or animal weight at the time of application of the intervention is a prognostic factor. If this is true, and the age or weight distribution of the animals differs between the intervention groups, then the results of the trial will be biased (ie, will not reflect the true intervention efficacy). To address this type of bias, it is important that the eligibility criteria related to these prognostic factors are clearly described. For example, the authors might limit eligibility to weaned pigs between 5 and 7 kg. Then, random allocation (ie, randomization) should be used to assign animals to intervention groups. The term “random” has a precise meaning, wherein each “study unit” has a known probability of receiving a given intervention at the time of allocation. The actual intervention allocated to each study unit is determined by a chance process and cannot be predicted. Depending on the type of intervention, the study unit may be an animal or a grouping of animals. For instance, if the intervention of interest would normally be given to an individual animal (eg, individual treatments to reduce disease severity or duration), then the intervention should be allocated at the animal level. If the intervention would normally be given to groups (eg, evaluating floor surfaces to improve welfare), then the intervention would be allocated at the group, pen, or barn level. The unit of allocation in some trials may differ from the unit of analysis; in a trial where the intervention is allocated at a group level (eg, floor design), the outcome may be measured at the individual level (eg, presence or absence of lameness in individual pigs) or at the group level (eg, total feed consumption). In this example, if the lameness outcome were measured at the individual level, pigs within a pen would not be statistically independent (ie, clustering of responses within pen occurs). This could be addressed by using a group level outcome for lameness (eg, percentage of lame pigs within a pen), thereby making the unit of analysis the same as the unit of allocation. However, this approach may have low statistical power because the unit of analysis, and therefore the sample size, is at the pen level. Alternatively, the unit of analysis could be the individual pig, with a binary outcome of lame or not lame for each pig and clustering within pen controlled for in the analysis.
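To illustrate the last point, a minimal Python sketch (not drawn from any of the cited trials) is shown below; it fits a generalized estimating equation to a hypothetical dataset in which lameness is recorded for each pig but the intervention was allocated by pen, so responses are clustered within pen. The column names (pen, treatment, lame), the data values, and the choice of an exchangeable correlation structure are illustrative assumptions only.

```python
# Minimal sketch (hypothetical data): analyzing an individual-level binary outcome
# (lame or not lame) while accounting for clustering of pigs within pens, using
# generalized estimating equations from the statsmodels package.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per pig; the intervention (A or B) was allocated at the pen level
df = pd.DataFrame({
    "pen":       [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "treatment": ["A", "A", "A", "B", "B", "B", "A", "A", "A", "B", "B", "B"],
    "lame":      [0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0],
})

# GEE with pigs clustered by pen; a real trial would require many more pens
model = smf.gee(
    "lame ~ treatment",
    groups="pen",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```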
Random allocation to intervention groups is not difficult to achieve and should be encouraged. However, random allocation is not reported in a substantial number of swine trials and, even when trials are described as random, reporting of the methods of randomization often is suboptimal. Reporting the method used to generate the random sequence is recommended in the REFLECT statement guidelines for reporting clinical trials in livestock.7,8 In an evaluation of reporting quality in RCTs, the method of random sequence generation was not reported in 79.8% (91 of 114) of RCTs published in veterinary journals, compared to 6.7% (4 of 60) of RCTs published in human medical journals.9 In a systematic review of 44 trials evaluating the efficacy of antibiotics to prevent respiratory disease in swine,10 random allocation was described in 23 trials (52%; although the method used to generate the random sequence was not described in 17 of these trials), 4 trials (9%) did not use random allocation, and there was no information provided on the method of allocation in 17 trials (39%). Failure to randomize has been associated with exaggerated intervention effects, as shown in evaluations of trials conducted in livestock.11-13 For the trials included in the systematic review of antibiotics to prevent respiratory disease,10 and assuming that randomization was correctly implemented in the trials where information on the random sequence generation was not reported, it could be argued that approximately 50% of the trials presented results that are not credible. If the results are not credible, then they should be excluded from consideration in decision making due to concerns over bias in the results. Conducting research that is not included in decision making decreases the value of the original research investment and thereby contributes to research waste. However, it is encouraging that reporting of the method for generating the random sequence increased in vaccine trials conducted in swine from 8% prior to publication of the REFLECT statement to 67% after publication.14
It might be argued that randomization is not needed in swine trials where the population of pigs within a production stage are homogeneous in terms of breed, weight, and diet. However, in mouse models of stroke, where the animals arguably are even more homogeneous, reported efficacy was significantly lower in studies that were randomized compared to those where randomization was not reported.15
Random allocation of study units to intervention groups should be possible in all swine trials. One option is to use a random number generator. This can be done in Excel (Microsoft Corporation) using the RAND function under the formulas tab, which generates a random number between 0 and 1 for each study unit. If the trial has 2 intervention arms (eg, the intervention of interest and a single control group), then study units with a random number below 0.5 could be assigned to one group and those with a random number of 0.5 or greater to the other. Other random methods include a coin toss, dice roll, or drawing numbers from a container, which can be done in a barn or as pigs are unloaded or moved to a new barn or pen. Deterministic allocation methods, such as alternate animal identification numbers, days of the week, or birth order, are not random16 and may lead to the allocation sequence being predictable, which then could lead to biased results.
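The same approach can be scripted. The following minimal Python sketch (with hypothetical ear-tag numbers) assigns each study unit a random number between 0 and 1 and allocates units below 0.5 to group A and the remainder to group B; the fixed seed is included only so the allocation list can be reproduced and audited.

```python
# Minimal sketch of simple random allocation to 2 intervention arms
import random

random.seed(20220309)  # fixed seed so the allocation list can be reproduced for auditing

ear_tags = [101, 102, 103, 104, 105, 106, 107, 108, 109, 110]  # hypothetical identifiers

for tag in ear_tags:
    u = random.random()                      # random number between 0 and 1, as with RAND()
    group = "A" if u < 0.5 else "B"
    print(f"Pig {tag}: group {group}")
```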
Although the purpose of randomization is to minimize important differences between intervention groups, simple randomization may not be sufficient in studies with small sample sizes. Stratified randomization is one method that can be used to minimize differences between groups in important prognostic factors, particularly when sample sizes are small. With this method, animals are randomly allocated to intervention groups within strata of an important prognostic factor.17,18 For example, if researchers are interested in conducting a trial of a feed additive for improving average daily gain but believe that sex is an important predictor of average daily gain, then having more male pigs in one intervention group than in the other could bias the trial results. Stratified randomization would involve randomly allocating male pigs to intervention groups and then randomly allocating female pigs to intervention groups as a separate step. This approach will help to balance the number of males and females between intervention groups. Other examples include stratification of piglets within dam in vaccine trials to control for genetics and maternal antibodies, or stratifying based on predefined sections in a barn to reduce the risk that intervention groups will be unevenly placed near fans, which may affect performance outcomes. However, when stratified randomization is used, it is important to adjust for the factors used for stratification in the analysis to provide a valid inference.19
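A minimal Python sketch of this approach, using hypothetical pig identifiers, is shown below; pigs are shuffled within each sex stratum and then split evenly between groups A and B, which keeps the number of males and females balanced across the intervention groups.

```python
# Minimal sketch of stratified randomization: allocation is carried out separately
# within each sex stratum so that sexes are balanced across intervention groups.
import random

random.seed(42)

pigs_by_sex = {
    "male":   [201, 202, 203, 204, 205, 206],   # hypothetical identifiers
    "female": [301, 302, 303, 304, 305, 306],
}

allocation = {}
for sex, ids in pigs_by_sex.items():
    shuffled = ids[:]                # copy so the original list is left unchanged
    random.shuffle(shuffled)         # random order within the stratum
    half = len(shuffled) // 2
    for pig in shuffled[:half]:
        allocation[pig] = "A"
    for pig in shuffled[half:]:
        allocation[pig] = "B"

for pig, group in sorted(allocation.items()):
    print(f"Pig {pig}: group {group}")
```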
Another concern with small sample sizes is that the number of individuals per group can end up substantially different based on chance. Block randomization, also called permuted block randomization, can be used to create an equal number of individuals in each intervention group.16,17 Block randomization consists of dividing the number of study subjects into smaller groups, or blocks, and randomly allocating animals to intervention groups within blocks. For instance, if there were 20 animals and 2 intervention groups, animals could be randomly allocated in 5 blocks of 4 animals each, with an equal number of animals assigned to interventions A and B within each block.
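A minimal Python sketch of permuted block randomization for this example (20 animals, 2 interventions, blocks of 4) is shown below; the animal numbering and seed are illustrative.

```python
# Minimal sketch of permuted block randomization: each block of 4 contains 2 As and
# 2 Bs in a random order, so group sizes stay equal throughout enrollment.
import random

random.seed(7)

n_animals = 20
block_template = ["A", "A", "B", "B"]

sequence = []
while len(sequence) < n_animals:
    block = block_template[:]
    random.shuffle(block)            # random order of interventions within this block
    sequence.extend(block)

for animal, group in enumerate(sequence[:n_animals], start=1):
    print(f"Animal {animal}: group {group}")
```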
Allocation concealment
Allocation concealment is another trial feature used to minimize the potential for bias due to differences in prognostic factors between intervention groups at the start of an experiment. The concept is that random allocation may be circumvented because the person enrolling animals or pens might have a preference for the intervention that a particular animal or pen receives. If acted upon, consciously or subconsciously, such a preference could disrupt the balance between the intervention groups achieved by random allocation. Therefore, allocation concealment refers to methods used to ensure that the person allocating study units to intervention groups and the patients (or animal owners in the case of swine trials) are not aware of the random sequence, ie, they do not know whether the next study subject enrolled will be allocated to group A or B.20 Allocation concealment may involve having a third party not involved in patient recruitment manage the allocation sequence. Once the investigator has enrolled a new study subject into a trial, the third party tells the investigator the intervention assignment. In human trials, allocation concealment is considered a critical trial feature; trials not employing allocation concealment are considered to be at high risk of bias.21 In an evaluation of comprehensive reporting in 31 swine vaccine trials, allocation concealment was not described in any trial.14 However, in many swine trials, it is probable that the owner and the person enrolling pens or animals in the trial and allocating them to intervention groups do not have a preference for the intervention group for any specific animal or pen of animals. This would be true when owners do not have a differential attachment to specific pigs and do not know the potential production value of a specific pig or pen of pigs at the time of study enrollment. If this is the case, then failure to conceal allocation at the time of enrollment may not be associated with bias and allocation concealment may not be an essential trial component.14 Researchers designing a trial should consider whether there is potential for one intervention to be preferred over another for animals or pens and decide whether allocation should be concealed on this basis. If allocation concealment is not used, the decision should be justified in the trial report. Nevertheless, concealing allocation when possible removes doubt about this potential source of bias and usually requires little effort for considerable gain. If allocation is concealed, decision makers will have no concerns about bias due to circumventing randomization, and therefore about incorporating the results of the study into the decision-making process. Thus, results of the trial will not be wasted.
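As a hypothetical illustration of the third-party approach, the Python sketch below keeps the allocation sequence inside a simple object and reveals each assignment only after a study unit has been enrolled and logged; the class, method, and animal names are invented for illustration and do not refer to an existing tool.

```python
# Hypothetical sketch of allocation concealment: the sequence is prepared in advance
# and held by a "third party" object, and an assignment is revealed only after the
# enrollment of that study unit has been recorded.
import random


class ConcealedAllocator:
    def __init__(self, sequence):
        self._sequence = list(sequence)   # known only to the third party
        self._log = []                    # enrollment log: (animal_id, assignment)

    def enroll(self, animal_id):
        """Record the enrollment first, then reveal the next assignment."""
        assignment = self._sequence[len(self._log)]
        self._log.append((animal_id, assignment))
        return assignment


random.seed(123)
prepared_sequence = [random.choice(["A", "B"]) for _ in range(10)]
allocator = ConcealedAllocator(prepared_sequence)

print(allocator.enroll("pig 501"))   # assignment revealed only at enrollment
print(allocator.enroll("pig 502"))
```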
Blinding
The term blinding refers to methods used to prevent individuals involved in a trial from knowing which study units are assigned to which interventions.22 This may include some or all of the following: animal owners, managers, or caregivers; investigators; individuals collecting outcome information (outcome assessors); and individuals conducting the statistical analysis. Blinding is used to prevent the potential for differential assessment of outcomes and differential care of the animals between the intervention groups, which could bias the trial results. When describing the use of blinding, the tasks that are blinded should be articulated rather than using the terms “single” or “double” blind; although these terms are common in the literature, they are ambiguous and may be interpreted differently by different individuals.23 For instance, it is clearer to state that “owners and outcome assessors were blinded to intervention group” rather than “the trial was double blinded.”
In a systematic review of 44 trials evaluating the efficacy of antibiotics to prevent respiratory disease in swine,10 blinding of caregivers and outcome assessors was described in 7 trials (15.9%), nonblinding was explicitly described in 2 trials (4.5%), and no information was provided on whether caregivers and outcome assessors were blinded in 35 trials (79.5%). In swine vaccine trials evaluated for completeness of reporting, blinding of caregivers, individuals administering the interventions, and outcome assessors was reported in 15 of 42 trials (36%) prior to publication of the REFLECT statement and in 12 of 19 trials (63%) after publication.14
Not all trials can be blinded, and lack of blinding does not always lead to a biased result. For instance, if a trial is designed to compare pig stress outcomes when blood sampling is conducted from ear veins as compared to when sampling is conducted from jugular veins, or if the trial was comparing pelleted feeds to mash, the intervention groups would be visibly obvious. However, if blinding is not possible or not used, the potential for bias is less if the outcome can be objectively measured.22
Various methods can be used to blind individuals to intervention allocation. If the intervention is a drug or a biologic, such as a vaccine, it may be possible to have a control group that looks identical but without the active ingredient. This would allow blinding of caregivers and outcome assessors, who may or may not be the same individuals, and also potentially investigators if a third party provides the allocated interventions. An additional layer of caregiver blinding could be used for interventions that are given at a single timepoint, such as vaccines, if the investigator applies the intervention without the caregiver being present. If the analysis is conducted by a statistician or epidemiologist who is not otherwise involved in the trial, it is simple to blind the analyst by coding the interventions as “A” or “B”, rather than naming the actual intervention in the dataset. If blinding can be used, this removes doubt about awareness of intervention group as a source of bias leading to invalid results. The results will be used to their maximum potential, which is surely the goal when using animals and resources for research.
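As a minimal sketch of the analyst-blinding step, the Python code below replaces intervention names with neutral codes before the dataset is passed to the analyst; the column names, intervention labels, and values are illustrative, and the key linking codes to interventions would be held by someone not involved in the analysis.

```python
# Minimal sketch of blinding the analyst by recoding interventions as "A" or "B"
import pandas as pd

trial_data = pd.DataFrame({
    "pen":          [1, 2, 3, 4, 5, 6],
    "intervention": ["Vaccine", "Placebo", "Vaccine", "Placebo", "Vaccine", "Placebo"],
    "adg_kg":       [0.68, 0.64, 0.71, 0.66, 0.69, 0.63],   # hypothetical average daily gain
})

# The key is retained by a third party; only the coded dataset is given to the analyst
key = {"Vaccine": "A", "Placebo": "B"}
blinded = trial_data.assign(intervention=trial_data["intervention"].map(key))

print(blinded)
```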
Selective outcome reporting
It is common for multiple outcomes to be reported in clinical trials; in an evaluation of reporting quality of 100 trials in livestock populations, 91 trials had more than one outcome.12 However, there is evidence from human healthcare evaluations that not all outcomes that have been evaluated in a trial have their results included in the trial report.24,25 Selecting a subset of the outcomes that were evaluated in a trial based on the results is referred to as selective outcome reporting. If the outcomes associated with significant intervention benefit are more likely to be reported, the overall trial results may be misleading. Determining whether selective outcome reporting has occurred requires that an a priori trial protocol is publicly available. The protocol should identify the primary outcome(s) of the trial, as well as any secondary outcomes that will be measured. Then, results for all primary and secondary outcomes should be reported in the trial report. A search of the trial registries in the American Veterinary Medical Association (AVMA) Animal Health Studies Database (https://ebusiness.avma.org/aahsd/study_search.aspx) in March 2022 did not identify any trials conducted in swine. Therefore, the extent to which selective outcome reporting is an issue in swine trials is unknown. However, swine trials conducted by industry groups, pharmaceutical companies, and academics require a trial protocol to receive ethical approval. If researchers posted these protocols to trial registries, such as the AVMA Animal Health Studies Database, it would allow an evaluation of outcome reporting, which would increase confidence in, and therefore the value of, clinical trials in swine.
Implications
- Biased trial results can lead to inappropriate use of interventions.
- Biased trial results may lead to exclusion from decision making.
- Biased trial results do not maximize the research investment.
Acknowledgments
The authors were responsible for developing the ideas presented in this commentary. Partial funding support was obtained from the University of Guelph Research Leadership Chair (Sargeant).
Conflict of interest
None reported.
Disclaimer
Drs O’Sullivan and Ramirez, this journal’s executive editor and editorial board member, respectively, were not involved in the editorial review of or decision to publish this article.
Scientific manuscripts published in the Journal of Swine Health and Production are peer reviewed. However, information on medications, feed, and management techniques may be specific to the research or commercial situation presented in the manuscript. It is the responsibility of the reader to use information responsibly and in accordance with the rules and regulations governing research or the practice of veterinary medicine in their country or region.
References
1. Sargeant JM, Kelton DF, O’Connor A. Study designs and systematic review of interventions: Building evidence across study designs. Zoonoses Public Health. 2014;61(Suppl 1):10-17. https://doi.org/10.1111/zph.12127
2. Lewis SC, Warlow CP. How to spot bias and other potential problems in randomised controlled trials. J Neurol Neurosurg Psychiatry. 2004;75(2):181-187. https://doi.org/10.1136/jnnp.2003.025833
3. Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166-175. https://doi.org/10.1016/S0140-6736(13)62227-8
4. Savović J, Jones H, Altman D, Harris R, Jüni P, Pildal J, Als-Nielsen B, Balk E, Gluud C, Gluud L, Ioannidis J, Schulz K, Beynon R, Welton N, Wood L, Moher D, Deeks J, Sterne J. Influence of reported study design characteristics on intervention effect estimates from randomised controlled trials: Combined analysis of meta-epidemiological studies. Health Technol Assess. 2012;16(35):1-82. https://doi.org/10.3310/hta16350
5. Savovic J, Turner RM, Mawdsley D, Jones HE, Beynon R, Higgins JPT, Sterne JAC. Association between risk-of-bias assessments and results of randomized trials in Cochrane Reviews: The ROBES meta-epidemiologic study. Am J Epidemiol. 2018;187(5):1113-1122. https://doi.org/10.1093/aje/kwx344
6. Dwan K, Gamble C, Williamson PR, Kirkham JJ, Reporting Bias Group. Systematic review of the empirical evidence of study publication bias and outcome reporting bias – an updated review. PLoS One. 2013;8(7):e66844. https://doi.org/10.1371/journal.pone.0066844
7. O’Connor AM, Sargeant JM, Gardner IA, Dickson JS, Torrence ME, Dewey CE, Dohoo IR, Evans RB, Gray JT, Greiner M, Keefe G, Lefebvre SL, Morley PS, Ramirez A, Sischo W, Smith DR, Snedeker K, Sofos J, Ward MP, Wills R; Steering Committee. The REFLECT statement: Methods and processes of creating reporting guidelines for randomized controlled trials for livestock and food safety. J Vet Intern Med. 2010;24(1):57-64. https://doi.org/10.1111/j.1939-1676.2009.0441.x
8. Sargeant JM, O’Connor AM, Gardner IA, Dickson JS, Torrence ME; Consensus Meeting Participants. The REFLECT statement: Reporting guidelines for randomized controlled trials in livestock and food safety: Explanation and elaboration. Zoonoses Public Health. 2010;57(2):105-136. https://doi.org/10.1111/j.1863-2378.2009.01312.x
9. Di Girolamo N, Meursinge Reynders R. Deficiencies of effectiveness of intervention studies in veterinary medicine: A cross-sectional survey of ten leading veterinary and medical journals. PeerJ. 2016;4:e1649. https://doi.org/10.7717/peerj.1649
10. Sargeant JM, Bergevin MD, Churchill K, Dawkins K, Deb B, Dunn J, Hu D, Moody C, O’Connor AM, O’Sullivan TL, Reist M, Wang C, Wilhelm B, Winder CB. A systematic review of the efficacy of antibiotics for the prevention of swine respiratory disease. Anim Health Res Rev. 2019;20(2):291-304. https://doi.org/10.1017/S1466252319000185
11. Burns MJ, O’Connor AM. Assessment of methodological quality and sources of variation in the magnitude of vaccine efficacy: A systematic review of studies from 1960 to 2005 reporting immunization with Moraxella bovis vaccines in young cattle. Vaccine. 2008;26(2):144-152. https://doi.org/10.1016/j.vaccine.2007.10.014
12. Sargeant JM, Elgie R, Valcour J, Saint-Onge J, Thompson A, Marcynuk P, Snedeker K. Methodological quality and completeness of reporting in clinical trials conducted in livestock species. Prev Vet Med. 2009;91:107-115. https://doi.org/10.1016/j.prevetmed.2009.06.002
13. Sargeant JM, Saint-Onge J, Valcour J, Thompson A, Elgie R, Snedeker K, Marcynuk P. Quality of reporting in clinical trials of preharvest food safety interventions and associations with treatment effect. Foodborne Pathog Dis. 2009;6(8):989-999. https://doi.org/10.1089/fpd.2009.0321
14. Moura CAA, Totton SC, Sargeant JM, O’Sullivan TL, Linhares DCL, O’Connor AM. Evidence of improved reporting of swine vaccination trials in the post-REFLECT statement publication period. J Swine Health Prod. 2019;27(5):265–277.
15. Macleod MR, van der Worp HB, Sena ES, Howells DW, Dirnagl U, Donnan GA. Evidence for the efficacy of NXY-059 in experimental focal cerebral ischaemia is confounded by study quality. Stroke. 2008;39(10):2824-2829. https://doi.org/10.1161/STROKEAHA.108.515957
16. Schulz KF, Grimes DA. Generation of allocation sequences in randomised trials: Chance, not choice. Lancet. 2002;359(9305):515-519. https://doi.org/10.1016/S0140-6736(02)07683-3
17. Altman DG, Bland JM. How to randomise. BMJ. 1999;319(7211):703-704. https://doi.org/10.1136/bmj.319.7211.703
18. Kernan WN, Viscoli CM, Makuch RW, Brass LM, Horwitz RI. Stratified randomization for clinical trials. J Clin Epidemiol. 1999;52(1):19-26. https://doi.org/10.1016/s0895-4356(98)00138-3
19. Kahan BC, Morris TP. Improper analysis of trials randomised using stratified blocks or minimisation. Stat Med. 2012;31(4):328-340. https://doi.org/10.1002/sim.4431
20. Schulz KF, Grimes DA. Allocation concealment in randomised trials: Defending against deciphering. Lancet. 2002;359(9306):614-618. https://doi.org/10.1016/S0140-6736(02)07750-4
*21. Higgins JPT, Savovic J, Page MJ, Sterne JAC, on behalf of the RoB2 Development Group. Revised Cochrane risk-of-bias tool for randomized trials (RoB 2). 2019. Accessed March 9, 2022. https://drive.google.com/file/d/19R9savfPdCHC8XLz2iiMvL_71lPJERWK/view
22. Schulz KF, Grimes DA. Blinding in randomised trials: Hiding who got what. Lancet. 2002;359(9307):696-700. https://doi.org/10.1016/S0140-6736(02)07816-9
23. Devereaux PJ, Manns BJ, Ghali WA, Quan H, Lacchetti C, Montori VM, Bhandari M, Guyatt GH. Physician interpretations and textbook definitions of blinding terminology in randomized controlled trials. JAMA. 2001;285(15):2000-2003. https://doi.org/10.1001/jama.285.15.2000
24. Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R, Williamson PR. The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ. 2010;340:c365. https://doi.org/10.1136/bmj.c365
25. Page MJ, McKenzie JE, Kirkham J, Dwan K, Kramer S, Green S, Forbes A. Bias due to selective inclusion and reporting of outcomes and analyses in systematic reviews of randomised trials of healthcare interventions. Cochrane Database Syst Rev. 2014;2014(10):MR000035. https://doi.org/10.1002/14651858.MR000035.pub2