Bias in Homeopathic Clinical Trials

Dr. Pulkesh P. Chothani
Dr. Pratiksha G. Rangani

Case taking in homeopathic science is based on an unprejudiced method: the physician must remain free of bias when selecting the similimum. Every research study likewise requires unbiased, unprejudiced methods, yet various kinds of bias occur in homeopathic clinical trial research.

Clinical trials are experiments on human beings conducted in a scientific way. The objective is to find out whether, when we do something to a group of people, it produces the desired result in many of them. A newly devised or modified regimen is intentionally applied to people with or without disease to determine its efficacy and safety. In homeopathy, therefore, clinical trials are scientific investigations that examine and evaluate the safety and efficacy of medicinal therapies in human subjects. Subjects are randomly allocated to two groups, known as the “experimental” and the “control” group. The experimental group is given the medicine being tested, and the control group is given a placebo, an inert substance such as a sugar pill.
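The random allocation described above can be sketched in a few lines of Python; the subject labels, group sizes, and seed below are illustrative, not from the text:

```python
import random

def allocate(subjects, seed=42):
    """Randomly split a list of subjects into an experimental and a control group."""
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

subjects = [f"S{i:02d}" for i in range(1, 21)]   # 20 hypothetical subjects
experiment, control = allocate(subjects)
print(len(experiment), len(control))  # 10 10
```

Because the split is random rather than chosen by the investigator, neither group is systematically loaded with favourable subjects.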

Consider an example of why this matters. Samples vary from one another, and thus results also vary: if we take a random sample of 20 students in the same class and measure their weight, the mean will differ from the mean of another sample of 20 students in the same class. Variation is an essential feature of human beings.
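This sampling variation is easy to simulate. In the hypothetical sketch below, student weights are drawn from a normal distribution (the mean of 60 kg and SD of 8 kg are assumptions for illustration), and each sample of 20 yields a slightly different mean:

```python
import random

rng = random.Random(1)

def sample_mean_weight(n=20, mu=60.0, sd=8.0):
    """Mean weight (kg) of one random sample of n students."""
    return sum(rng.gauss(mu, sd) for _ in range(n)) / n

# Five repeated samples from the same class give five different means.
means = [sample_mean_weight() for _ in range(5)]
print([round(m, 1) for m in means])
```

Each mean hovers around 60 kg, but no two samples agree exactly; that spread is random error, not bias.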

Key words: Bias, Control and Experimental Group, Odds Ratio, Random


Definition of Bias: Bias is a disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. Biases can be innate or learned. People may develop biases for or against an individual, a group, or a belief. In science and engineering, a bias is a systematic error. Statistical bias results from an unfair sampling of a population, or from an estimation process that does not give accurate results on average.

Bias occurs when the results of a study are systematically different from ‘truth’. For example, if the objective of the study is to estimate the risk of disease associated with an exposure, and the result from the study consistently overestimates the risk, the result is said to be biased. Bias should be distinguished from random error, in that random error cannot be associated with a particular cause and tends to ‘average out’ in repeated sampling. Bias, on the other hand, would repeat the same direction of error in repeated sampling with the same design. Bias results from faulty design. There may be many reasons for bias, and care has to be taken to minimize bias when designing the study, since it is often difficult to separate the true effects from bias. Simply increasing the sample size, on the other hand, can minimize the effect of random error.
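The contrast between bias and random error can be shown with a small simulation. The true risk of 0.20 and the 0.05 design bias below are invented for illustration: increasing the sample size shrinks the random error, but a biased design stays off by the same amount regardless of sample size.

```python
import random

rng = random.Random(0)

TRUE_RISK = 0.20    # true proportion at risk (invented for illustration)
DESIGN_BIAS = 0.05  # systematic overestimate introduced by a faulty design

def estimate(n, bias=0.0):
    """Estimate the proportion from n observations, plus any systematic bias."""
    hits = sum(rng.random() < TRUE_RISK for _ in range(n))
    return hits / n + bias

def mean(xs):
    return sum(xs) / len(xs)

small  = [estimate(50) for _ in range(200)]               # small, unbiased samples
large  = [estimate(5000) for _ in range(200)]             # large, unbiased samples
biased = [estimate(5000, bias=DESIGN_BIAS) for _ in range(200)]

# Random error averages out as n grows, but the biased design stays
# about 0.05 too high no matter how large the sample becomes.
print(round(mean(small), 3), round(mean(large), 3), round(mean(biased), 3))
```

This is why a larger sample cures random error but cannot cure faulty design.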

  1. Nonresponse Bias:

Nonresponse has two types of adverse impact on the results. The first is that the sample size ultimately available for drawing conclusions is reduced, which affects the reliability of the results. This deficiency can be remedied by increasing the sample size to allow for the anticipated nonresponse. The second is more serious. Suppose you select a sample of 3000 out of one million, but only 250 of the 3000 respond; your survey could be severely biased, because these responders may be those who are favourable, or those with strong views.
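A hypothetical sketch of this second effect, assuming (purely for illustration) that people with favourable views are far more likely to reply:

```python
import random

rng = random.Random(7)

# Hypothetical sample of 3000 opinions: 1 = favourable, 0 = unfavourable.
sample = [1 if rng.random() < 0.40 else 0 for _ in range(3000)]

def responds(opinion):
    """Favourable people are assumed far more likely to send back a reply."""
    return rng.random() < (0.15 if opinion == 1 else 0.04)

responders = [op for op in sample if responds(op)]

true_rate = sum(sample) / len(sample)
observed_rate = sum(responders) / len(responders)
# Only a few hundred reply, and they paint a far rosier picture than the truth.
print(len(responders), round(true_rate, 2), round(observed_rate, 2))
```

The shortfall in numbers is the lesser problem; the selective character of who replies is what biases the estimate.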

  2. Bias in Design:

This bias occurs when the case group and control group are not equivalent at baseline, and differentials in factors affecting the results are not properly accounted for at the time of analysis.

  3. Bias in Selection of Subjects:

The subjects included in the study may not truly represent the target population. This can happen either because the sampling was not random, or because the sample size is too small to represent the entire spectrum of subjects in the target population. Studies on volunteers always have this kind of bias. Selection bias can also occur because the serious cases have already died and are not available with the same frequency as the mild cases (survival bias).
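Survival bias can be sketched with invented severity scores, assuming for illustration that the probability of dying before the study begins rises with severity:

```python
import random

rng = random.Random(3)

# Hypothetical cases with severity scores from 1 (mild) to 5 (serious).
cases = [rng.randint(1, 5) for _ in range(10_000)]

def survives(severity):
    """Assume the chance of dying before the study rises with severity."""
    return rng.random() > 0.15 * severity

available = [s for s in cases if survives(s)]

def mean(xs):
    return sum(xs) / len(xs)

# The cases still available for study look systematically milder than reality.
print(round(mean(cases), 2), round(mean(available), 2))
```

Any conclusion drawn from the surviving cases alone understates the true severity of the disease.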

  4. Bias due to Concomitant Medication or Concurrent Disease:

Selected patients may suffer from other apparently unrelated conditions but their response might differ either because of the condition itself or because of medication given concurrently for that condition.

  5. Bias in Detection of Cases:

Error can occur in diagnostic or screening criteria. For example, a laboratory investigation done properly in a hospital setting is less error prone compared to one carried out in a field setting where the study is actually done. In a prostate cancer detection study, if prostate biopsies are not performed in men with normal results after screening, true sensitivity and specificity of the test cannot be determined.

  6. Lead-Time Bias:

All cases are not detected at the same stage of the disease. With regard to cancers, some may be detected at the time of screening, for example by pap smear, and some may be detected when the patients start complaining. But the follow-up is generally from the time of detection. This difference in “lead time” can cause systematic error in the results.
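A minimal worked example of lead time, with invented detection and death years:

```python
# Hypothetical patient: disease begins at year 0, death occurs at year 10.
DEATH_YEAR = 10

screen_detect_year = 3    # found early by screening (e.g. pap smear)
symptom_detect_year = 7   # found only when the patient starts complaining

survival_from_screening = DEATH_YEAR - screen_detect_year   # follow-up from detection
survival_from_symptoms = DEATH_YEAR - symptom_detect_year

# Screening appears to "add" years of survival, but the patient dies at the
# same time either way -- the difference is pure lead time, not benefit.
lead_time = survival_from_screening - survival_from_symptoms
print(survival_from_screening, survival_from_symptoms, lead_time)  # 7 3 4
```

Unless follow-up is counted from a common biological starting point, the screened group looks better for no real reason.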

  7. Contamination in Controls:

Control subjects are generally those that receive placebo or existing therapy. If these subjects are in their homes, it is difficult to know if they have received some other therapy that can affect their status as controls.

  8. Interviewer Bias or Observer Bias:

Interviewer bias occurs when one is able to get better responses from one group of patients (say, those who are educated) than from another (such as illiterates). Observer bias occurs when the observer unwittingly (or even intentionally) exercises more care over one type of response or measurement, such as those supporting a particular hypothesis, than over those opposing it.

  9. Instrument Bias:

This occurs when the measuring instrument is not properly calibrated. A scale may be biased to give a reading higher or lower than the actual value, as when the mercury column of a blood pressure instrument is not empty in the resting position.
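A constant offset, such as mercury left in the column at rest, shifts every reading by the same amount; the readings and the 6 mmHg offset below are illustrative:

```python
# Illustrative miscalibration: the column never returns to zero.
OFFSET = 6  # mmHg left in the column at rest (invented value)

true_readings = [118, 124, 131, 142, 109]
recorded = [bp + OFFSET for bp in true_readings]

# Every reading is shifted identically: a systematic, not random, error.
errors = [r - t for r, t in zip(recorded, true_readings)]
print(recorded, set(errors))  # every error is exactly +6 mmHg
```

Unlike random error, this does not average out over repeated measurements; only recalibration removes it.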

  10. Hawthorne Effect:

If subjects know that they are being observed or investigated, their behaviour and responses can change. In fact, this is the basis for including a placebo group in a trial. Subjects' usual responses are not the same as when they are under the scanner.

  11. Recall Bias:

There are two types of recall bias. One such bias arises from better recall of recent events than those that occurred a long time ago. Also, serious illnesses are easier to recall than mild illnesses. The second type of bias arises when cases suffering from a disease are able to recall events much more easily than the controls if they are apparently healthy subjects.

  12. Mid-Course Bias:

Sometimes the subjects after enrolment have to be excluded if they develop an unrelated condition such as an injury, or become so serious that their continuation in the trial is no longer in the interest of the patient. If a new facility such as a health centre is started or closed for the population being observed for a study, the response may alter. If two independent trials are going on in the same population, one may contaminate the other. An unexpected intervention such as a disease outbreak can alter the response of those who are not affected.

  13. Bias due to Self-Improvement Effect:

Many diseases are self-limiting. Improvement over time occurs irrespective of the intervention, and it may be partially or fully unnecessarily ascribed to the intervention. Diseases such as arthritis and asthma have natural periods of remission that may look like the effect of therapy.

  14. Bias due to Digit Preference:

It is well known that almost all of us have a special love for digits zero and five. Measurements are more frequently recorded ending with these digits. A person aged 69 or 71 is very likely to report one’s age as 70 years. Another manifestation of digit preference is in forming intervals for quantitative data. Blood glucose level categories would be 70–79, 80–89, 90–99, etc., and not 64–71, 72–79, etc. If digit zero is preferred, 88, 89, 90, 91, and 92 can be recorded as 90. Thus, intervals such as 88–92, 93–97, and 98–102, are better to ameliorate the effect of digit preference, and not the conventional 85–89, 90–94, 95–99, etc.
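Digit preference can be modelled as "heaping" values onto the nearest preferred digit; the two-unit window in the sketch below is an assumption for illustration:

```python
def heap_to_preferred(value):
    """Model digit preference: values within 2 of a multiple of 5 are
    recorded as that multiple (so 88..92 -> 90, and 69 or 71 -> 70)."""
    nearest = 5 * round(value / 5)
    return nearest if abs(value - nearest) <= 2 else value

ages = [69, 70, 71, 72, 73, 88, 89, 90, 91, 92]
recorded = [heap_to_preferred(a) for a in ages]
print(recorded)  # [70, 70, 70, 70, 75, 90, 90, 90, 90, 90]
```

This is exactly why intervals centred on the preferred digits, such as 88-92 and 93-97, absorb the heaping better than the conventional 85-89, 90-94 boundaries.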

  15. Attrition Bias:

The pattern of nonresponse can differ from one group to the other in the sense that in one group more severe cases drop out, whereas in another group mostly mild cases drop out.

  16. Recording Bias:

Two types of errors can occur in recording. The first arises due to the inability to properly decipher the writing on case sheets. Physicians are notorious for illegible writing. This can happen particularly with similar looking digits such as 1 and 7, and 3 and 5. Thus the entry of data may be in error. The second arises due to the carelessness of the investigator. A diastolic level of 87 can be wrongly recorded as 78, or a code 4 entered as 6 when memory is relied upon, which can fail to recall the correct code. Wrongly pressing adjacent keys on the computer keyboard is not uncommon either.

  17. Bias in Analysis:

This again can be of two types. The first occurs when the analysis is geared to support a particular hypothesis. For example, when comparing pre- and post-values such as hemoglobin (Hb) levels before and after weekly iron supplementation, the increase may be too small to be detected by a comparison of means, yet it may be detected when evaluated as the proportion of subjects with levels <10 g/dl before and after supplementation. The second can arise from differential interpretation of p-values. When p = 0.055, one researcher may refuse to call the result significant at the 0.05 level, while another may call it marginally significant. Some researchers may even change the level of significance from 5% to 10% if the result is to their liking.
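The first point can be illustrated with invented haemoglobin values: the shift in means is modest, yet the proportion below the anaemia cutoff changes markedly.

```python
# Hypothetical Hb values (g/dl) before and after iron supplementation.
before = [9.2, 9.6, 9.8, 10.1, 10.4, 10.9, 11.2, 11.5, 12.0, 12.3]
after  = [9.9, 10.2, 10.3, 10.4, 10.6, 11.0, 11.3, 11.6, 12.1, 12.4]

def mean(xs):
    return sum(xs) / len(xs)

def prop_below(xs, cutoff=10.0):
    return sum(x < cutoff for x in xs) / len(xs)

# The shift in means is small...
print(round(mean(after) - mean(before), 2))
# ...but the proportion of subjects below the cutoff drops sharply.
print(prop_below(before), prop_below(after))
```

Whether a result "appears" thus depends on which summary the analyst chooses, which is precisely the opening for bias.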

  18. Interpretation Bias:

This arises from the tendency among some research workers to interpret the results in favour of a particular hypothesis ignoring the opposite evidence. This can be intentional or unintentional.

  19. Bias in Presentation of Results:

This can also be of two types. The first is that scales in graphs can be chosen so that a small change looks like a big change, or vice versa. The second is that the researcher may merely state inconvenient findings that contradict the main conclusion, without highlighting them in the way the favourable findings are highlighted.

  20. Publication Bias:

Many journals are far keener to publish reports showing a positive result for the efficacy of a new regimen than negative trials that found no difference. If a vote count is done on the basis of published reports, positive results would hugely outnumber negative results, although the facts may be just the reverse.
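A simulation makes the point: even when a regimen has no effect at all, about 5% of trials come out "positive" by chance, and if only those are published, a vote count of the literature is badly misleading. The sketch below uses a simple two-sample z-test on invented null data:

```python
import math
import random

rng = random.Random(11)

def trial_p_value(n=30):
    """Two-sided normal-test p-value for one null trial (no real effect)."""
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)  # SE of the mean difference
    return math.erfc(abs(z) / math.sqrt(2))           # two-sided p-value

pvals = [trial_p_value() for _ in range(1000)]
published = [p for p in pvals if p < 0.05]   # journals prefer "positive" trials

# Roughly 5% of these null trials are "positive" by chance alone, yet a
# vote count of the published reports would contain only those.
print(len(published), "published as positive,",
      len(pvals) - len(published), "negative trials left in the drawer")
```

The regimen here is worthless by construction; the published record alone would suggest otherwise.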

Steps for Minimising Bias
The purpose of describing the various types of bias in such detail is to create awareness so that they can be avoided, or at least minimised. Everything possible should be done to keep them under control. The following steps can be suggested to minimise bias in the results of a research setup; not all steps apply to every situation.

Specify the trial in full detail.

  1. Assess the validity of the identified target population, and the groups to be included in the study in the context of objectives and the methodology.
  2. Assess the validity of pre-existing factors and outcomes for providing a correct answer to your questions. In addition, there might be other factors at work about which nobody knows: medical science is still very incomplete, and we do not know about many factors that affect health and disease.
  3. Carry out a pilot study and pretest the tools such as questionnaire and laboratory kits. Make changes as needed.
  4. Choose a representative sample, preferably by a random method.
  5. Choose an adequate size of sample in each group.
  6. Researchers and co-workers should be trained in making correct assessments.
  7. Use matching, blinding, masking, and random allocation as needed.
  8. Monitor each stage of research, including periodic checking of data.
  9. Make determined efforts to minimise nonresponse and partial response.
  10. Double check the data and rectify errors in recording, entries, etc.
  11. Analyse the data with proper statistical methods. Use standardised or adjusted rates where needed, perform the stratified analysis, or use mathematical models such as regression to take care of those confounders that could not be ruled out by design.
  12. Interpret the results in an objective manner based on evidence.
  13. Report only the evidence-based results, enthusiastically but dispassionately.
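Step 11 (stratified analysis) can be illustrated with invented counts in the style of the classic confounding examples: the treated group does better within each severity stratum, yet the crude pooled rates suggest the opposite, because severity is unevenly distributed between the groups.

```python
# Hypothetical counts: (recovered, total) per group, stratified by severity.
data = {
    "mild":   {"treated": (81, 87),   "control": (234, 270)},
    "severe": {"treated": (192, 263), "control": (55, 80)},
}

def rate(rec_tot):
    rec, tot = rec_tot
    return rec / tot

# Within each stratum, the treated group recovers more often.
for stratum, groups in data.items():
    print(stratum, round(rate(groups["treated"]), 2),
          round(rate(groups["control"]), 2))

# But the crude (pooled) rates ignore the strata and reverse the conclusion.
crude_t = sum(d["treated"][0] for d in data.values()) / \
          sum(d["treated"][1] for d in data.values())
crude_c = sum(d["control"][0] for d in data.values()) / \
          sum(d["control"][1] for d in data.values())
print("crude", round(crude_t, 2), round(crude_c, 2))
```

Stratified analysis, standardised rates, or regression adjustment each guard against this kind of confounding that the design could not rule out.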

Research conducted with a biased, prejudiced mind will inevitably produce wrong results.



Dr. Pulkesh P. Chothani (MD Homeopath), Government Homeopathic Medical College, Dethali, Siddhpur, Gujarat.
Dr. Pratiksha G. Rangani (P.G. Scholar), Tantia Homeopathic Medical College, Shri Ganganagar, Rajasthan.
Ph: 9825230997
