## Error Analysis 2

Error refers to uncertainty in measurements that nothing can be done about. A metre rule might measure only to the nearest millimetre. If a measurement is repeated, the lengths obtained may differ, and none of the measurements can be preferred over the others. Although it is not possible to eliminate such errors, they can be characterized using standard statistical methods.

## Classification of Error

Generally, errors can be divided into two broad and rough but useful classes: systematic and random.

Systematic errors are errors which tend to shift all measurements in a systematic way, so that their mean value is displaced. This may be due to such things as incorrect calibration of equipment, consistently improper use of equipment, or failure to properly account for some effect. In a sense, a systematic error is rather like a blunder, and large systematic errors can and must be eliminated in a good experiment. But small systematic errors will always be present; for instance, no instrument can ever be calibrated perfectly.

Random errors are errors which fluctuate from one measurement to the next. They yield results distributed about some mean value. They can occur for a variety of reasons.

• They may occur due to lack of sensitivity. An instrument may not be able to respond to or indicate a sufficiently small change, or the observer may not be able to discern it.

• They may occur due to noise. There may be extraneous disturbances which cannot be taken into account.

• They may be due to imprecise definition of the quantity being measured.

• They may also occur due to statistical processes such as the roll of dice.
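The scatter produced by random errors can be illustrated with a short simulation. In the sketch below, the true value (50.0 mm) and the spread (0.5 mm) are invented for illustration; the point is only that repeated readings fluctuate but cluster about a mean that can be characterized statistically:

```python
import random
import statistics

# Illustrative example: simulate 1000 repeated measurements of a
# length whose true value is 50.0 mm, subject to random error with
# a standard deviation of 0.5 mm (both values are made up).
random.seed(0)
true_length = 50.0
measurements = [random.gauss(true_length, 0.5) for _ in range(1000)]

mean = statistics.mean(measurements)
spread = statistics.stdev(measurements)
print(f"mean = {mean:.2f} mm, spread (std dev) = {spread:.2f} mm")
# Individual readings differ from one another, but they are
# distributed about the mean, as described above.
```

Averaging many readings recovers the true value closely even though no single reading can be preferred.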

## Propagation of Errors

Frequently, the result of an experiment will not be measured directly. Rather, it will be calculated from several measured physical quantities (each of which has a mean value and an error). What is the resulting error in the final result of such an experiment?

For instance, what is the error in $Z = X + Y$, where $X$ and $Y$ are two measured quantities with errors $\Delta X$ and $\Delta Y$ respectively?

A first thought might be that the error in $Z$ would be just the sum of the errors in $X$ and $Y$: $\Delta Z = \Delta X + \Delta Y$. This assumes that, when combined, the errors in $X$ and $Y$ have the same sign and maximum magnitude; that is, that they always combine in the worst possible way. This could only happen if the errors in the two variables were perfectly correlated (i.e., if the two variables were not really independent).

If the variables are independent, then sometimes the error in one variable will happen to cancel out some of the error in the other, and so, on average, the error in $Z$ will be less than the sum of the errors in its parts. A reasonable way to take this into account is to treat the perturbations in $Z$ produced by perturbations in its parts as if they were "perpendicular", adding them according to the Pythagorean theorem:

$$\Delta Z = \sqrt{(\Delta X)^2 + (\Delta Y)^2}.$$

That is, if $A = 100 \pm 3$ and $B = 6 \pm 4$, then $A + B = 106 \pm 5$, since $5 = \sqrt{3^2 + 4^2}$.

This idea can be used to derive a general rule. Suppose there are two measurements, $X$ and $Y$, and the final result is $Z = F(X, Y)$ for some function $F$. If $X$ is perturbed by $\Delta X$, then $Z$ will be perturbed by

$$\frac{\partial F}{\partial X}\,\Delta X,$$

where $\partial F / \partial X$ is the derivative of $F$ with respect to $X$ with $Y$ held constant. Similarly, the perturbation in $Z$ due to a perturbation $\Delta Y$ in $Y$ is $(\partial F / \partial Y)\,\Delta Y$. Combining these by the Pythagorean theorem yields

$$\Delta Z = \sqrt{\left(\frac{\partial F}{\partial X}\right)^2 (\Delta X)^2 + \left(\frac{\partial F}{\partial Y}\right)^2 (\Delta Y)^2}.$$

For $Z = X + Y$, both partial derivatives equal 1,
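The quadrature rule for a sum can be sketched in a few lines. The function name and the numeric values below are illustrative, not part of the original text:

```python
import math

def quadrature_sum_error(dx, dy):
    """Error in Z = X + Y when the errors dx and dy are independent:
    combine them in quadrature (Pythagorean theorem), not by direct sum."""
    return math.sqrt(dx**2 + dy**2)

# Two independent errors of 3 and 4 combine to 5, smaller than the
# worst-case direct sum of 7.
print(quadrature_sum_error(3, 4))  # 5.0
```

Note that the quadrature result is always less than or equal to the worst-case sum $\Delta X + \Delta Y$, reflecting the partial cancellation of independent errors.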

so this gives the same result as before. Similarly, if $Z = X - Y$, then $\partial F / \partial X = 1$ and $\partial F / \partial Y = -1$, so

$$\Delta Z = \sqrt{(\Delta X)^2 + (\Delta Y)^2},$$

which also gives the same result. Errors combine in the same way for both addition and subtraction. However, if $Z = XY$, then $\partial F / \partial X = Y$ and $\partial F / \partial Y = X$, so

$$\Delta Z = \sqrt{Y^2 (\Delta X)^2 + X^2 (\Delta Y)^2},$$

or, dividing through by $Z = XY$,

$$\frac{\Delta Z}{Z} = \sqrt{\left(\frac{\Delta X}{X}\right)^2 + \left(\frac{\Delta Y}{Y}\right)^2};$$

that is, the fractional error in $Z$ is the square root of the sum of the squares of the fractional errors in its parts. (You should be able to verify that the result is the same for division as it is for multiplication.) For example, if $X = 10 \pm 1$ and $Y = 20 \pm 2$, then $Z = XY = 200 \pm 28$, since both fractional errors are 10% and $\sqrt{0.1^2 + 0.1^2} \approx 0.14$.

It should be noted that, since the above applies only when the two measured quantities are independent of each other, it does not apply when, for example, one physical quantity is measured and what is required is its square. If $Z = X^2$, then the perturbation in $Z$ due to a perturbation $\Delta X$ in $X$ is

$$\Delta Z = 2X\,\Delta X,$$

so the fractional error is $\Delta Z / Z = 2\,\Delta X / X$, twice the fractional error in $X$, not $\sqrt{2}$ times it as the rule for independent quantities would give.
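The general rule above can be sketched numerically: estimate the partial derivatives by central finite differences, then combine the two contributions in quadrature. The function `propagate` and the numeric values are assumptions for illustration only:

```python
import math

def propagate(F, x, y, dx, dy, h=1e-6):
    """General rule: dZ = sqrt((dF/dX * dX)^2 + (dF/dY * dY)^2),
    with the partial derivatives estimated by central differences."""
    dFdx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    dFdy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    return math.sqrt((dFdx * dx) ** 2 + (dFdy * dy) ** 2)

# Product Z = X * Y with X = 10 +/- 1 and Y = 20 +/- 2
# (illustrative values): both fractional errors are 10%, so the
# fractional error in Z is sqrt(0.01 + 0.01), about 14%.
dz = propagate(lambda x, y: x * y, 10, 20, 1, 2)
print(round(dz, 1))  # ~28.3

# Dependent case Z = X^2 with X = 10 +/- 1: the correct error is
# 2 * X * dX = 20, not sqrt(2) * X * dX as the independence rule
# would (wrongly) suggest if X * X were treated as two quantities.
dz_sq = propagate(lambda x, y: x * x, 10, 0, 1, 0)
print(round(dz_sq, 1))  # 20.0
```

Because `propagate` takes a single function of both variables, the $X^2$ case is handled correctly: the single perturbation $\Delta X$ enters once, through $\partial F / \partial X = 2X$.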