# Experimental Errors and Uncertainty

By aasim786
Dec 03, 2012

No physical quantity can be measured with perfect certainty; there are always errors in any measurement. This means that if we measure some quantity and then repeat the measurement, we will almost certainly measure a different value the second time. How, then, can we know the “true” value of a physical quantity? The short answer is that we can’t. However, as we take greater care in our measurements and apply ever more refined experimental methods, we can reduce the errors and thereby gain greater confidence that our measurements approximate the true value ever more closely. “Error analysis” is the study of uncertainties in physical measurements, and a complete description of error analysis would require much more time and space than we have in this course. However, by taking the time to learn some basic principles of error analysis, we can:

1) Understand how to measure experimental error,
2) Understand the types and sources of experimental errors,
3) Clearly and correctly report measurements and the uncertainties in those measurements, and
4) Design experimental methods and techniques and improve our measurement skills to reduce experimental errors.

Two excellent references on error analysis are:

• John R. Taylor, An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements, 2nd Edition, University Science Books, 1997
• Philip R. Bevington and D. Keith Robinson, Data Reduction and Error Analysis for the Physical Sciences, 2nd Edition, WCB/McGraw-Hill, 1992

Accuracy and Precision

Experimental error is the difference between a measurement and the true value or between two measured values. Experimental error is itself measured by its accuracy and precision.

Accuracy measures how close a measured value is to the true or accepted value. Since a true or accepted value for a physical quantity may be unknown, it is sometimes not possible to determine the accuracy of a measurement.

Precision measures how closely two or more measurements agree with each other. Precision is sometimes referred to as “repeatability” or “reproducibility.” A measurement which is highly reproducible tends to give values which are very close to each other.

Figure 1 defines accuracy and precision by analogy to the grouping of arrows in a target.

© G.A. Carlson, 2000 - 2002


Figure 1 – Accuracy and Precision

[Figure 1 shows four targets: arrows tightly clustered at the center (high precision, high accuracy); arrows spread evenly around the center (low precision, high accuracy); arrows tightly clustered away from the center (high precision, low accuracy); and arrows widely scattered away from the center (low precision, low accuracy).]

Types and Sources of Experimental Errors

When scientists refer to experimental errors, they are not referring to what are commonly called mistakes, blunders, or miscalculations. Sometimes also referred to as “illegitimate,” “human,” or “personal” errors, these mistakes can result from measuring a width when the length should have been measured, measuring the voltage across the wrong portion of an electrical circuit, misreading the scale on an instrument, or forgetting to divide the diameter by 2 before calculating the area of a circle with the formula A = πr². Such errors are surely significant, but they can be eliminated by performing the experiment again, correctly, the next time. Experimental errors, on the other hand, are inherent in the measurement process and cannot be eliminated simply by repeating the experiment, no matter how carefully it is done. There are two types of experimental errors: systematic errors and random errors.

Systematic Errors

Systematic errors are errors that affect the accuracy of a measurement. Systematic errors are “one-sided” errors: in the absence of other types of errors, repeated measurements yield results that differ from the true or accepted value by the same amount. The accuracy of measurements subject to systematic errors therefore cannot be improved by repeating those measurements, and systematic errors cannot easily be analyzed by statistical analysis. Systematic errors can be difficult to detect, but once detected they can be reduced only by refining the measurement method or technique. Common sources of systematic errors are faulty calibration of measuring instruments, poorly maintained instruments, and faulty reading of instruments by the user. A common form of this last source is “parallax error,” which results from the user reading an instrument at an angle, producing a reading that is consistently high or consistently low.


Random Errors

Random errors are errors that affect the precision of a measurement. Random errors are “two-sided” errors: in the absence of other types of errors, repeated measurements yield results that fluctuate above and below the true or accepted value. Measurements subject to random errors differ from each other due to random, unpredictable variations in the measurement process. The precision of measurements subject to random errors can be improved by repeating those measurements, and random errors are easily analyzed by statistical analysis. Random errors can be easily detected and can be reduced by repeating the measurement or by refining the measurement method or technique. Common sources of random errors are the difficulty of estimating a quantity that lies between the graduations (the lines) on an instrument and the inability to read an instrument because the reading fluctuates during the measurement.

Calculating Experimental Error

When a scientist reports the results of an experiment, the report must describe the accuracy and precision of the experimental measurements. Some common ways to describe accuracy and precision are described below.

Significant Figures

The least significant digit in a measurement depends on the smallest unit which can be measured using the measuring instrument. The precision of a measurement can then be estimated by the number of significant digits with which the measurement is reported. In general, any measurement is reported to a precision equal to 1/10 of the smallest graduation on the measuring instrument, and the precision of the measurement is said to be 1/10 of the smallest graduation. For example, a measurement of length using a meterstick with 1-mm graduations will be reported with a precision of ±0.1 mm. A measurement of volume using a graduated cylinder with 1-ml graduations will be reported with a precision of ±0.1 ml. Digital instruments are treated differently.
Unless the instrument manufacturer indicates otherwise, measurements made with digital instruments are reported with a precision of ±½ of the smallest unit of the instrument. For example, if a digital voltmeter reads 1.493 volts, the precision of the voltage measurement is ±½ of 0.001 volts, or ±0.0005 volt.

Percent Error

Percent error (sometimes referred to as fractional difference) measures the accuracy of a measurement by the difference between a measured or experimental value E and a true or accepted value A. The percent error is calculated from the following equation:

% Error = |E − A| / A × 100%    (1)
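Equation (1) can be sketched as a small Python helper. The sample values below (a measured gravitational acceleration of 9.95 m/s² against the accepted 9.81 m/s²) are hypothetical, chosen only for illustration:

```python
def percent_error(measured, accepted):
    """Accuracy of a single measurement, per equation (1):
    the difference between the experimental value E and the
    accepted value A, as a percentage of A."""
    return abs(measured - accepted) / abs(accepted) * 100.0

# Hypothetical example: measured g = 9.95 m/s^2 vs accepted g = 9.81 m/s^2
print(round(percent_error(9.95, 9.81), 2))  # → 1.43
```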

Percent Difference

Percent difference measures the precision of two measurements by the difference between the measured or experimental values E1 and E2, expressed as a fraction of the average of the two values. The equation used to calculate the percent difference is:

% Difference = |E1 − E2| / ((E1 + E2)/2) × 100%    (2)

Mean and Standard Deviation

When a measurement is repeated several times, we see the measured values are grouped around some central value. This grouping or distribution can be described with two numbers: the mean, which measures the central value, and the standard deviation, which describes the spread of the measured values about the mean.

For a set of N measured values for some quantity x, the mean of x is represented by the symbol x̄ and is calculated by the following formula:

x̄ = (1/N) Σ xi = (1/N)(x1 + x2 + x3 + ⋯ + xN)    (3)
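Equation (2) can likewise be sketched in Python; the pair of repeated measurements used below is hypothetical:

```python
def percent_difference(e1, e2):
    """Precision of two measurements, per equation (2): their
    difference as a percentage of their average."""
    average = (e1 + e2) / 2.0
    return abs(e1 - e2) / average * 100.0

# Hypothetical pair of repeated measurements of the same quantity
print(round(percent_difference(1.23, 1.27), 2))  # → 3.2
```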

where xi is the i-th measured value of x. The mean is simply the sum of the measured values divided by the number of measured values.

The standard deviation of the measured values is represented by the symbol σx and is given by the formula:

σx = √[ (1/(N − 1)) Σ (xi − x̄)² ]    (4)

The standard deviation is sometimes referred to as the “root-mean-square deviation” and measures how widely spread the measured values are on either side of the mean. The meaning of the standard deviation can be seen from Figure 2, which is a plot of data with a mean of 0.5; SD represents the standard deviation. As seen in Figure 2, the larger the standard deviation, the more widely spread the data are about the mean. For measurements which have only random errors, the standard deviation means that 68% of the measured values are within σx of the mean, 95% are within 2σx, and over 99% are within 3σx.

Figure 2: Measured Values of x

[Figure 2 shows three distributions of measured values of x, each centered on the mean of 0.5, with SD = 0.1, 0.2, and 0.3; the vertical axis is the fraction of total measurements and the horizontal axis is the measured value of x, running from 0 to 1.]
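Equations (3) and (4), and the 68% rule illustrated in Figure 2, can be sketched with Python's standard-library statistics module (statistics.stdev uses the N − 1 denominator of equation (4)). The data here are simulated, not taken from the text:

```python
import random
import statistics

# Hypothetical data: simulate repeated measurements subject only to
# random errors, centered on a "true" value of 0.5 (as in Figure 2)
random.seed(42)
data = [random.gauss(0.5, 0.1) for _ in range(1000)]

mean = statistics.mean(data)   # equation (3)
sd = statistics.stdev(data)    # equation (4), N - 1 denominator

# Fraction of values within one standard deviation of the mean;
# for purely random errors this should come out near 68%
within_1sd = sum(1 for x in data if abs(x - mean) <= sd) / len(data)
print(round(mean, 2), round(sd, 2), round(within_1sd, 2))
```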


Reporting the Results of an Experimental Measurement

When a scientist reports the result of an experimental measurement of a quantity x, that result is reported with two parts. First, the best estimate of the measurement is reported; the best estimate of a set of measurements is usually reported as the mean x̄ of the measurements. Second, the variation of the measurements is reported; the variation in the measurements is usually reported by the standard deviation σx of the measurements. The measured quantity is then known to have a best estimate equal to the mean, but it may also vary from x̄ + σx to x̄ − σx. Any experimental measurement should then be reported in the following form:

x = x̄ ± σx    (5)

Example

Consider Table 1, which lists 30 measurements of the mass m of a sample of some unknown material.

Table 1: Measured Mass (kg) of Unknown

1.09  1.11  1.14  1.15  1.08  1.06
1.01  1.04  1.03  1.06  1.07  1.12
1.10  1.16  1.17  1.12  1.14  1.00
1.14  1.13  1.09  1.08  1.11  1.10
1.16  1.17  1.09  1.20  1.05  1.07

We can represent these data on a type of bar chart called a histogram (Figure 3), which shows the number of measured values which lie in a range of mass values with the given midpoint.

Figure 3: Mass of Unknown Sample

[Figure 3 is a histogram of the measured masses, with bins centered at 1.02 through 1.20 kg on the horizontal axis and the number of measurements (0 to 6) on the vertical axis.]


For the 30 mass measurements, the mean mass is given by:

m̄ = (1/30)(33.04 kg) = 1.10 kg    (6)

We see from the histogram that the data does appear to be centered on a mass value of 1.10 kg. The standard deviation is given by:

σm = √[ (1/(30 − 1)) Σ (mi − 1.10 kg)² ] = 0.05 kg    (7)

We also see from the histogram that the data does indeed appear to be spread about the mean of 1.10 kg, so that approximately 70% (= 20/30 × 100) of the values are within σm of the mean. The measured mass of the unknown sample is then reported as:

m = 1.10 ± 0.05 kg    (8)
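The worked example can be checked with a few lines of Python using the Table 1 data (statistics.stdev applies the N − 1 denominator of equation (4)):

```python
import statistics

# The 30 mass measurements (kg) from Table 1
masses = [1.09, 1.11, 1.14, 1.15, 1.08, 1.06,
          1.01, 1.04, 1.03, 1.06, 1.07, 1.12,
          1.10, 1.16, 1.17, 1.12, 1.14, 1.00,
          1.14, 1.13, 1.09, 1.08, 1.11, 1.10,
          1.16, 1.17, 1.09, 1.20, 1.05, 1.07]

mean = statistics.mean(masses)   # equation (6)
sd = statistics.stdev(masses)    # equation (7), N - 1 denominator

# How many values fall within one standard deviation of the mean?
within = sum(1 for m in masses if abs(m - mean) <= sd)

print(f"m = {mean:.2f} ± {sd:.2f} kg")  # m = 1.10 ± 0.05 kg
print(f"{within}/{len(masses)} values within one standard deviation")
```

This reproduces the reported result and the 20-of-30 count quoted above.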

