Published on *Explorable.com* (https://forum.explorable.com)

Experimental error is unavoidable in any experiment, a consequence of the falsifiability [3] principle at the heart of the scientific method.

It is therefore important both to minimize errors and to understand them, so that the results of an experiment can be interpreted correctly. This entails studying the types and magnitudes of error that arise in experimentation.

In statistical testing, errors are classified as either Type I or Type II. Both must be studied in order to manage and report error, so that the conclusion [4] of the experiment can be rightly interpreted.

Type I Error - False Positive
Type II Error - False Negative

A Type I error (α error, false positive) occurs when the null hypothesis [5] (H_{0}) is rejected in favor of the research hypothesis [6] (H_{1}) when, in reality, the null hypothesis is correct. This can be understood in terms of medical tests. For example, suppose a test is used to detect a disease in a person. A Type I error means the test reports that the person has the disease even though he is healthy.

A Type II error (β error, false negative), on the other hand, occurs when the research hypothesis is rejected even though it is in fact correct. In the same medical example, a Type II error means the test fails to detect the disease in a person who is actually suffering from it.
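The medical-test illustration above can be sketched as a small simulation. The sensitivity, specificity, and prevalence figures below are illustrative assumptions, not values for any real diagnostic test:

```python
import random

random.seed(0)

# Hypothetical diagnostic test; these rates are illustrative assumptions.
SENSITIVITY = 0.90   # P(test positive | person has the disease)
SPECIFICITY = 0.95   # P(test negative | person is healthy)
PREVALENCE = 0.10    # P(person has the disease)

n = 100_000
healthy = diseased = 0
false_positives = false_negatives = 0

for _ in range(n):
    has_disease = random.random() < PREVALENCE
    tests_positive = (random.random() < SENSITIVITY if has_disease
                      else random.random() < 1 - SPECIFICITY)
    if has_disease:
        diseased += 1
        if not tests_positive:
            false_negatives += 1   # Type II: disease missed
    else:
        healthy += 1
        if tests_positive:
            false_positives += 1   # Type I: healthy person flagged

print(f"Type I rate among the healthy:   {false_positives / healthy:.3f}")
print(f"Type II rate among the diseased: {false_negatives / diseased:.3f}")
```

With these assumed rates, the observed Type I rate settles near 1 − specificity (0.05) and the Type II rate near 1 − sensitivity (0.10), which is exactly what the two error definitions describe.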

| Scientific Conclusion | H_{0} Accepted | H_{1} Accepted |
|---|---|---|
| Truth: H_{0} | Correct conclusion! | Type I error (false positive) |
| Truth: H_{1} | Type II error (false negative) | Correct conclusion! |

In a Type I error, the research hypothesis is accepted even though the null hypothesis is correct: a false positive that leads to rejecting the null hypothesis when it is in fact true.

When a Type II error occurs, the research hypothesis, though correct, goes undetected. In terms of the null hypothesis, this kind of error leads to accepting the null hypothesis when it is in fact false.

The significance level [7] refers only to the Type I error. We ask, "What is the probability that the correlation we observed arose purely by chance?" When this probability falls below the significance level (typically 5% or 1%), we conclude that the result is unlikely to be a chance occurrence and that the parameters under study are indeed related.
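One way to see what the significance level controls is to simulate many experiments in which the null hypothesis is actually true. A sketch using a simple two-sided z-test with known standard deviation (the sample size and trial count are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(1)

# When H0 is true, a test at significance level alpha = 0.05 should
# reject H0 (commit a Type I error) in about 5% of experiments.
ALPHA = 0.05
Z_CRIT = 1.96          # two-sided critical value for alpha = 0.05
N, TRIALS = 30, 10_000

rejections = 0
for _ in range(TRIALS):
    # Data generated under H0: true mean 0, known standard deviation 1.
    sample = [random.gauss(0, 1) for _ in range(N)]
    z = statistics.mean(sample) * N ** 0.5   # z = mean / (sigma / sqrt(N))
    if abs(z) > Z_CRIT:
        rejections += 1

print(f"Observed Type I error rate: {rejections / TRIALS:.3f}")
```

The observed rejection rate hovers around 0.05: the significance level is precisely the Type I error rate we agree to tolerate.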

Scientific experiments involve a different type of error analysis [8] than statistical experiments. In science, experimental errors may be caused by human inaccuracies, such as a flawed experimental setup in a physical science experiment or a poorly chosen sample of people for a social experiment [9].

Systematic error refers to error that is inherent in the system of experimentation. For example, if you calculate the value of acceleration due to gravity by swinging a pendulum [10], your result will invariably be affected by air resistance, friction at the point of suspension, and the finite mass of the thread.
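The pendulum example can be made concrete. From the small-angle formula T = 2π√(L/g), one recovers g = 4π²L/T². The sketch below assumes, purely for illustration, that air resistance and friction lengthen every measured period by 1%; the resulting estimate of g is biased low, and no amount of averaging over repeated measurements would remove that bias:

```python
import math

G_TRUE = 9.81   # m/s^2
L = 1.0         # pendulum length in metres

# Ideal period from T = 2*pi*sqrt(L/g)
T_ideal = 2 * math.pi * math.sqrt(L / G_TRUE)

# Assumed systematic effect: every measured period is 1% too long.
T_measured = T_ideal * 1.01

# Recover g from the measured period: g = 4*pi^2*L / T^2
g_estimate = 4 * math.pi ** 2 * L / T_measured ** 2
print(f"Estimated g: {g_estimate:.3f} m/s^2 (true value {G_TRUE})")
```

Because the bias enters the same way in every trial, each repetition yields the same low estimate: this is the signature of a systematic error.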

Random errors occur because infinite precision is impossible to achieve in practice. Because a measured value falls randomly above or below the true value, averaging several readings reduces random error.
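This averaging effect can be demonstrated directly. Modelling each reading as the true value plus Gaussian noise (the numbers below are assumptions for illustration), the scatter of an n-reading average shrinks roughly as σ/√n:

```python
import random
import statistics

random.seed(2)

# Each reading = true value + random noise (illustrative numbers).
TRUE_VALUE, SIGMA = 50.0, 2.0

def mean_of_readings(n):
    """Average of n simulated noisy readings."""
    return statistics.mean(
        TRUE_VALUE + random.gauss(0, SIGMA) for _ in range(n)
    )

singles = [mean_of_readings(1) for _ in range(2000)]
averaged = [mean_of_readings(25) for _ in range(2000)]

print(f"stdev of single readings:     {statistics.stdev(singles):.2f}")
print(f"stdev of 25-reading averages: {statistics.stdev(averaged):.2f}")
```

Averaging 25 readings cuts the scatter by about a factor of five (√25), which is why repeated measurement is the standard defence against random error, even though, as noted above, it does nothing against systematic error.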

**Links**

[1] https://forum.explorable.com/experimental-error

[2] https://forum.explorable.com/users/siddharth

[3] https://forum.explorable.com/falsifiability

[4] https://forum.explorable.com/drawing-conclusions

[5] https://forum.explorable.com/null-hypothesis

[6] https://forum.explorable.com/research-hypothesis

[7] https://forum.explorable.com/statistically-significant-results

[8] https://forum.explorable.com/type-I-error

[9] https://forum.explorable.com/social-psychology-experiments

[10] https://forum.explorable.com/pendulum-experiment