Year: 2020 | Volume: 3 | Issue: 2 | Page: 53-56
The importance of power analysis and effect size in preclinical rodent experimentation
Siddha Central Research Institute, CCRS, Ministry of Ayush, Govt. of India, Chennai, Tamil Nadu, India
Date of Submission: 28-Feb-2022
Date of Acceptance: 16-Mar-2022
Date of Web Publication: 30-May-2022
Dr. Krishnamurthy Venkataraman
Siddha Central Research Institute, CCRS, Ministry of Ayush, Govt. of India, Chennai 600106, Tamil Nadu
Source of Support: None, Conflict of Interest: None
Introduction: The search for novel compounds to treat disease conditions begins with proof-of-concept studies in laboratory animals. However, the experimental design often fails to justify the number of animals distributed across the various groups in an experiment.
Materials and Methods: The habit of using a sample size of 6 across all groups rests on the misconception that it can reliably inform the success of a treatment intervention. Statisticians do not endorse this notion and instead recommend conducting a power analysis for every biological experiment.
Results: A sample size derived from a power analysis that incorporates the effect size can effectively protect an experiment from Type II error. Type II errors occur frequently, and can go unnoticed, in biological experiments in which novel treatments are tested.
Conclusion: It is therefore the moral responsibility of an investigator to estimate the sample size through power analysis. This also benefits the investigator by flagging sample sizes larger than necessary, thereby saving monetary and manpower resources.
Keywords: Experiments, novel compounds, sample size, treatment intervention, type II error
How to cite this article:
Venkataraman K. The importance of power analysis and effect size in preclinical rodent experimentation. J Res Siddha Med 2020;3:53-6
How to cite this URL:
Venkataraman K. The importance of power analysis and effect size in preclinical rodent experimentation. J Res Siddha Med [serial online] 2020 [cited 2022 Aug 16];3:53-6. Available from: http://www.jrsm.in/text.asp?2020/3/2/53/346336
Introduction
Animal experimentation remains in vogue because of the need to discover and validate therapeutics for ailments that threaten the human population. Nevertheless, it draws criticism from animal-activist groups for myriad reasons: lack of welfare for the animals engaged in experimentation, redundant use of animals for testing similar chemicals or new chemical entities, and unjustified animal numbers in experiments. This has prompted regulatory bodies of national and international significance (CPCSEA, AAALAC) to govern animal experimentation, which led to the adoption of the concept of the 3Rs. Of the 3Rs (Replacement, Reduction, and Refinement), Reduction is treated cautiously by the scientific community, which fears backlash when animal utilization is unjustified in terms of the population size employed in the experiments.
The principle of Reduction is, however, not judiciously applied by investigators. The urge to stick to a sample size of 6 is unfounded, and the argument that n = 6 is a statistical requirement is flawed. The belief in 6 as a golden number has never been scientifically validated, yet this has not stopped journals from publishing many experiments that use it. Various research groups have consequently followed suit with the non-validated sample size, citing such publications as precedents.
An investigator can arrive at the right sample size by calculating the power of the employed statistic. To do so, the investigator must set the desired power (80% is generally chosen for experiments in biology) and choose an effect size. The effect size can be estimated by performing a detailed literature survey. In some instances, no prior experiment exists from which to obtain the effect size or standard deviation, which precludes the investigator from using the common methods of sample size determination. This article intends to sensitize readers to the importance of calculating the sample size so as to enhance the quality of preclinical experimentation.
Defining ‘Power’ in an Animal Experiment
The ability of a statistical test (employed in the experiment) to detect a difference between two or more groups is called the power of that test. An example illustrates the idea. An experiment is conducted to compare two groups with respect to an intervention, say a candidate cure for a disease condition. The animals in the control (disease) group receive no treatment, while the animals in the treatment group are administered the drug under investigation. At the end of the experiment, the investigator collects a summary statistic, such as the mean, from both groups. The means are then compared to decide whether the treatment intervention significantly affects the population in terms of disease remediation. Two conclusions are possible: the treatment makes no difference, or the treatment benefits the diseased population. If the employed statistical test fails to detect a difference that actually exists, a Type II error has occurred; its probability is denoted β. Such failures are commonly due to an insufficient sample size. β can be controlled by estimating a sample size that achieves a power (1 − β) of at least 80%. The choice of 80% is a convention, and it can be raised to 90%.
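As a minimal sketch of the idea (not a method from this article), the per-group sample size for a two-sample, two-sided comparison of means can be approximated with the standard normal-based formula n = 2((z₁₋α/₂ + z₁₋β)/d)², where d is the standardized effect size, using only the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sample, two-sided
    comparison of means: n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d) ** 2.
    The normal approximation slightly underestimates the exact
    t-test-based answer for small n."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)            # quantile corresponding to 1 - beta
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "large" effect (d = 0.8) at 80% power and alpha = 0.05:
print(n_per_group(0.8))  # -> 25 per group (exact t-based tools give ~26)
```

Note how far the answer sits from the habitual n = 6: even for a large effect, roughly 25 animals per group are needed to keep β at 20%.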
Defining an effect size of the experiment
The effect size is the minimum difference between two (or more) groups that is considered clinically relevant, that is, a magnitude of difference sufficient to decide on the success of an intervention in an experiment. The effect size can be fixed by convention as 0.2 (small), 0.5 (medium), or 0.8 (large), or it can be calculated from data obtained by perusing previously published studies, with the formula mentioned elsewhere. Reporting effect sizes in manuscripts can guide prospective researchers in constructing and forecasting the course of their experiments.
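For illustration (a hypothetical helper with made-up numbers, not data from this article), the standardized effect size in the two-group case, Cohen's d, is the difference in means divided by the pooled standard deviation:

```python
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: difference in means over the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical endpoint means of 12 vs 10 units, both SDs 2.5, n = 8 each:
print(cohens_d(12, 2.5, 8, 10, 2.5, 8))  # -> 0.8, a "large" effect
```

Such a value, extracted from a published study, is exactly the input the sample-size calculation requires.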
Other factors influencing the preclinical experiment
Besides power and effect size, factors such as the standard deviation, the direction of the effect, and the choice of statistical test influence the experiment. These factors surface naturally during a literature survey; in the zeal to design an experiment, they should not be ignored or missed.
Use of G*Power to Calculate the Sample Size
Many applications allow an investigator to arrive at a sample size from inputs on power and effect size. G*Power is one such application that can advise the investigator on the sample size before the experiment is initiated. Conversely, when the investigator does not have the liberty to change the sample size (for example, because money or labor is limited), a sensitivity analysis can determine the smallest effect size the experiment is capable of detecting, and hence how impactful the experiment can be.
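Under the same normal-approximation assumptions as the earlier sample-size formula (a sketch, not G*Power's exact routine), the sensitivity calculation simply inverts that formula to give the smallest detectable standardized effect for a fixed per-group n:

```python
from math import sqrt
from statistics import NormalDist

def detectable_d(n_per_group, alpha=0.05, power=0.80):
    """Smallest standardized effect size (Cohen's d) detectable at the given
    power in a two-sided, two-sample comparison with n animals per group.
    Normal approximation: d = (z_{1-alpha/2} + z_{1-beta}) * sqrt(2 / n)."""
    z = NormalDist()
    return (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sqrt(2 / n_per_group)

# With the habitual n = 6 per group, only very large effects are detectable:
print(round(detectable_d(6), 2))  # -> 1.62 standard deviations
```

In other words, fixing n = 6 commits the experiment to detecting only effects of about 1.6 SD or more at 80% power, roughly double Cohen's "large" benchmark of 0.8.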
Relevance of sample size calculation in complementary medicine preclinical experiments
The necessity of stating the sample size is enforced by many guidelines that govern animal experimentation. The ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines were developed by the NC3Rs (National Centre for the Replacement, Refinement and Reduction of Animals in Research) and have been adopted by many life-sciences organizations engaged in animal experimentation across Europe and the USA. The guidelines insist that authors reveal the method employed to arrive at the sample size. AAALAC (Association for Assessment and Accreditation of Laboratory Animal Care) International is a nonprofit organization based in North America that grants accreditation to institutes engaged in animal experimentation that are registered with it. AAALAC accreditation is based on the recommendations of The Guide, the resource developed by the National Research Council to advise experimenters who conduct preclinical animal experimentation. The Guide recommends including a power analysis at the time of protocol development when seeking regulatory approval for animal experimentation. There are 26 organizations in India (https://www.aaalac.org/accreditation-program/directory/directory-of-accredited-organizations-search-result/?nocache=1#adv_acc_dir_search) accredited by AAALAC International and thereby required to carry out responsible animal research. Yet the calculation of power to estimate the sample size, and of effect size, is not given its due importance in preclinical phytopharmacology research. Since many clinical experiments are based on the outcomes of preclinical data, omitting power and effect size calculations undermines the robustness of the acquired data.
Conclusion
The quest to discover a drug, or to repurpose an existing one, to treat an ailment begins with preclinical rodent experiments that build trust through proof-of-concept studies before the program is extended to human clinical trials. It is therefore the prerogative of the investigator to choose the right sample size in animal experimentation. Rodent experiments are often designed, on an unjustified assumption, with a sample size of 6 across control and treatment groups; this assumption does not hold true in all kinds of biological experiments. The importance of sample size selection becomes especially evident in experiments that anticipate animal mortality (e.g., diabetes mellitus models), as uncontrolled death of experimental units can jeopardize the experiment.
Investigators commonly deem an experiment successful by relying on the P value: the smaller the P value, the further the treatment mean appears to move from the control mean, suggesting that the intervention resolved the disease condition. However, the P value offers no clue about the effect size. Statistical significance may simply mean that the sample size was large enough to accumulate evidence against the null hypothesis. A significant P value indicates statistical significance, but clinical significance cannot be inferred from it. The investigator is morally obligated to report the exact P value rather than merely stating significance as P < 0.05. A value of P = 0.051 does not differ much from P = 0.049, and neither guarantees that the treatment offers an effective clinical improvement of the disease condition. Many scientific publications report significance (without reporting the P value) as P < 0.05, but such a conclusion says nothing about the estimated size of benefit in the treatment group, resulting in a false sense of satisfaction about a treatment intervention. Relying on the P value alone can also drive up the sample size, compromising the 3Rs of laboratory animal research against the tenets laid down by the CPCSEA.
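The point can be made concrete with a small sketch (hypothetical numbers, Python standard library only): a trivially small standardized effect becomes "significant" once the per-group sample size is large enough, which is exactly why a P value alone says nothing about the size of the benefit.

```python
from math import sqrt
from statistics import NormalDist

def two_sample_z_pvalue(d, n_per_group):
    """Two-sided p-value for a standardized mean difference d observed in a
    two-sample comparison with n animals per group (z-test approximation).
    The test statistic is z = d * sqrt(n / 2)."""
    z_stat = d * sqrt(n_per_group / 2)
    return 2 * (1 - NormalDist().cdf(abs(z_stat)))

# A negligible effect (d = 0.05) is "significant" with 10,000 per group,
# but nowhere near significant with 100 per group:
print(two_sample_z_pvalue(0.05, 10_000))  # well below 0.05
print(two_sample_z_pvalue(0.05, 100))     # far above 0.05
```

The effect size is identical in both calls; only the sample size, and therefore the P value, changes.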
This article recommends that readers arrive at a justified proposal when requesting a sample size for preclinical experimentation. Besides the G*Power application discussed above, many online resources can assist an investigator in estimating the sample size judiciously. A survey of the published literature can give the reader insight into the prevalent effect size, and the effect size can also be calculated with methods recommended elsewhere. Biomedical research is threatened by an uncritical conviction in P value-based rejection of null hypotheses, which creates avenues for Type I (false positive) errors. The investigator cannot shy away from the responsibility of employing the right tools to report the data obtained in a preclinical experiment. Reporting a significant P value without the effect size, and without revealing the method employed to choose the sample size, sets a wrong precedent and misguides prospective investigators who build their hypotheses on such conclusions.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
Pallant J. SPSS Survival Manual: A Step by Step Guide to Data Analysis Using SPSS for Windows Version 10. Buckingham: Open University Press; 2001. p. 1-7.
Devane D, Begley CM, Clarke M. How many do I need? Basic principles of sample size estimation. J Adv Nurs 2004;47:297-302.
Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
Charan J, Kantharia ND. How to calculate sample size in animal studies? J Pharmacol Pharmacother 2013;4:303-6.
Cohen J. Statistical power analysis. Curr Dir Psychol Sci 1992;1:98-101.
Work-learning.com. How to calculate effect sizes from published research articles: A simplified methodology. Available from: http://work-learning.com/effect_sizes.htm. [Last accessed on 15 Feb 2022].
Lakens D. Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Front Psychol 2013;4:863.
Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: The ARRIVE guidelines for reporting animal research. PLOS Biol 2010;8:e1000412.
National Research Council. Guide for the Care and Use of Laboratory Animals. 8th ed. Washington, DC: The National Academies Press; 2011.
The Hong Kong Polytechnic University. Thresholds for interpreting effect sizes. Available from: www.Poly.edu.hk. [Last accessed on 15 Feb 2022].
Thiese MS, Ronna B, Ott U. p value interpretations and considerations. J Thorac Dis 2016;8:E928-31.
Pandis N. The p value problem. Am J Orthod Dentofacial Orthop 2013;143:150-1.