Module 3. Characteristics of instruments and measurement systems
Lesson 6
Static characteristics of measuring instruments - I
6.1 Introduction
When considering a measurement instrument, it is important to have a clear understanding of all the parameters involved in defining the characteristics of the measurement device. By knowing the accuracy and resolution requirements of your application, you can compute the total error of the measurement device you are considering and verify that it satisfies your needs. There are a number of important performance parameters discussed in the following text, but they should not be considered the ultimate or only parameters to take into account. It is worthwhile to ask the instrument supplier to clarify the meaning of the specifications in the instrument data sheet. Not knowing the true performance of your instrument could lead you to incorrect readings, and the cost of such an error could be very high.
The performance characteristics of instruments and measurement systems can be divided into two distinct categories, viz., the Static characteristics and the Dynamic characteristics. Some applications involve the measurement of quantities that are either constant or vary very slowly with time. Under these circumstances, it is possible to define a set of criteria that gives a meaningful description of the quality of measurement without resorting to dynamic descriptions involving differential equations. The characteristics in this set of criteria are called Static Characteristics. Thus the static characteristics of a measurement system are those which must be considered when the system or instrument is used under conditions not varying with time.
However, many measurements are concerned with rapidly varying quantities. In such cases we must examine the dynamic relations which exist between the output and the input. This is normally done with the help of differential equations or other methods. Performance criteria based upon dynamic relations constitute the Dynamic Characteristics.
6.2 Static Calibration
All the static performance characteristics are obtained in one form or another by a process called static calibration. The calibration procedure involves a comparison of the particular characteristic with either a primary standard, a secondary standard of higher accuracy than the instrument to be calibrated, or an instrument of known accuracy. Calibration checks the instrument against a known standard and thereby reveals errors in accuracy. In fact, all measuring instruments must be calibrated against reference instruments of higher accuracy. These reference instruments must in turn be calibrated against instruments of a still higher grade of accuracy, or against a primary standard, or against other standards of known accuracy. It is essential that any measurement made must ultimately be traceable to the relevant primary standards.
6.3 Static Characteristics
The main static characteristics include:
(i) Accuracy, (ii) Sensitivity, (iii) Reproducibility,
(iv) Drift, (v) Static error, and (vi) Dead zone
The qualities (i), (ii) and (iii) are desirable, while (iv), (v) and (vi) are undesirable. The above characteristics have been defined in several different ways, and the generally accepted definitions are presented here. A few more quantities, essential for understanding these characteristics, are also defined.
6.3.1 Scale range and scale span
In an analogue indicating instrument, the measured value of a variable is indicated on a scale by a pointer. The choice of a proper range of instrument is important in measurement. The region between the limits within which an instrument is designed to operate for measuring, indicating or recording a physical quantity is called the range of the instrument. The Scale Range of an instrument is thus defined as the difference between the largest and the smallest reading of the instrument. Supposing the highest point of calibration is Xmax units while the lowest is Xmin units and the calibration is continuous between the two points, then the instrument range is between Xmin and Xmax. Many times it is also said that the instrument range is Xmax. The instrument span is the difference between the highest and the lowest points of calibration. Thus
Span = Xmax – Xmin
For example, for a thermometer calibrated from 100°C to 400°C, the range is 100°C to 400°C (or simply 400°C) but the span is 400 – 100 = 300°C.
The same is true of digital instruments. There is another factor that must be considered while determining the range of an instrument. This is the Frequency Range, which is defined as the range of frequencies over which measurements can be performed with a specified degree of accuracy. For example, a moving iron instrument may have a 0-250 V voltage range and a 0-135 Hz frequency range.
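The range and span definitions above can be sketched in a few lines of Python (an illustrative sketch using the thermometer example; the function name is ours, not the text's):

```python
# Span = Xmax - Xmin, the difference between the highest and lowest
# points of calibration. Range is the pair (Xmin, Xmax).

def scale_span(x_min, x_max):
    """Return the instrument span for calibration limits x_min..x_max."""
    return x_max - x_min

x_min, x_max = 100.0, 400.0                 # thermometer, in deg C
print("Range:", (x_min, x_max))             # Range: (100.0, 400.0)
print("Span :", scale_span(x_min, x_max))   # Span : 300.0
```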
6.3.2 True value
The true value of a variable quantity being measured may be defined as the average of an infinite number of measured values when the average deviation due to the various contributing factors tends to zero. Such an ideal situation is impossible to realize in practice, and hence it is not possible to determine the true value of a quantity by experimental means. The reason for this is that there are several disturbing factors such as lags, loading effects, wear, noise pick-up, etc. Normally an experimenter can never know whether the value obtained by experimental means is the 'true value' of the quantity or not.
6.3.3 Accuracy
Accuracy is the closeness with which an instrument reading approaches the true value of the quantity being measured. Thus accuracy of a measurement means conformity to truth. The accuracy of an instrument may be expressed in many ways: as point accuracy, percent of true value, or percent of scale range. Point accuracy is stated for one or more points in the range; for example, a scale of length may be read within ± 0.2 mm. Another common way is to specify that the instrument is 'accurate to within ±x percent of instrument span' at all points on the scale. Yet another way of expressing accuracy is based upon the instrument range.
Accuracy is often confused with Precision, but there is a difference between the two terms. The term 'precise' means clearly or sharply defined. For example, an ammeter may possess a high degree of precision by virtue of its clearly legible, finely divided, distinct scale and a knife-edge pointer with mirror arrangement to remove parallax. As an example of the difference in meaning of the two terms, suppose the above ammeter can read to 1/100 of an ampere. Now if its zero adjustment is wrong, then every reading taken with this ammeter is inaccurate, since it does not conform to the truth on account of the faulty zero adjustment, even though the ammeter is as precise as ever: its readings are consistent, clearly defined, and readable down to 1/100 of an ampere. The instrument can be calibrated to remove the zero error. Thus the accuracy of an instrument can be improved by calibration, but not its precision.
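The two common ways of expressing accuracy mentioned above (percent of true value versus percent of span) can give quite different figures for the same absolute error. A minimal Python illustration, with assumed reading and span values chosen only for this example:

```python
# The same +/-0.2 mm point accuracy expressed two ways.
# The reading (50 mm) and span (100 mm) below are assumptions for
# illustration; only the 0.2 mm figure comes from the text.

def pct_of_true(error, true_value):
    """Accuracy expressed as a percentage of the true value."""
    return abs(error) / true_value * 100.0

def pct_of_span(error, span):
    """Accuracy expressed as a percentage of the instrument span."""
    return abs(error) / span * 100.0

error = 0.2         # mm
true_value = 50.0   # mm (assumed reading)
span = 100.0        # mm (assumed scale span)

print(round(pct_of_true(error, true_value), 3))  # 0.4  (% of true value)
print(round(pct_of_span(error, span), 3))        # 0.2  (% of span)
```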
6.3.4 Static error
Measurements made with an instrument always involve errors; no measurement is free from error. If the precision of the equipment is adequate, no matter what its accuracy is, a discrepancy will always be observed between two measured results. Since the accuracy of an instrument is assessed in terms of its error, an understanding and evaluation of the errors is essential.
Static error is defined as the difference between the best measured value and the true value of the quantity. Then:
Es = Am – At
Where, Es = error,
Am = measured value of quantity, and
At = true value of quantity.
Es is also called the absolute static error of quantity A. The absolute value of the error does not by itself indicate the accuracy of a measurement. For example, an error of ±2 A is negligible when the current being measured is of the order of 1000 A, while the same error is highly significant if the current under measurement is 10 A. Thus another term, relative static error, is introduced. The relative static error is the ratio of the absolute static error to the true value of the quantity under measurement. Thus the relative static error Er is given by:
Er = Es / At = (Am – At) / At
Percentage static error % Er = Er x 100
Static Correction
It is the difference between the true value and the measured value of the quantity, or
δC = At - Am = - Es
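The point made above about the ±2 A error can be checked with a short Python sketch (the function name is ours; the formula is the relative static error Er = Es / At defined in this section):

```python
# Relative static error: the same absolute error of 2 A is negligible
# at 1000 A but very significant at 10 A.

def relative_error(Es, At):
    """Relative static error Er = Es / At."""
    return Es / At

print(round(relative_error(2.0, 1000.0) * 100, 3))  # 0.2  (% of true value)
print(round(relative_error(2.0, 10.0) * 100, 3))    # 20.0 (% of true value)
```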
6.4 Numericals
1. A meter reads 115.50 V and the true value of the voltage is 115.44 V. Determine the static error, and the static correction for this instrument.
Solution:
The error is: Es = Am – At = 115.50 – 115.44 = +0.06 V
Static correction δC = At - Am = -0.06 V.
2. A thermometer reads 71.5 °C and the static correction given is +0.5°C. Determine the true value of the temperature.
Solution:
True value of the temperature
At = Am + δC = 71.5+ 0.5 = 72.0°C.
3. A thermometer is calibrated for the range of 100°C to 150°C. The accuracy is specified as within ±0.25 percent of span. What is the maximum static error?
Solution:
Span of thermometer = 150 – 100 = 50°C
Maximum static error = ± (0.25/100) x 50 = ± 0.125°C
4. An analogue indicating instrument with a scale range of 0 – 2.50 V shows a voltage of 1.46 V. The voltage has a true value of 1.50 V. What are the values of absolute error and correction? Express the error as a fraction of the true value and of the full scale deflection.
Solution:
Absolute error Es = Am – At
= 1.46 – 1.50 = -0.04 V
Absolute correction δC = At – Am = +0.04 V
Relative error = (Am – At) / At = -0.04/1.50 = -0.0267 or -2.67 % of true value
Relative error expressed as a percentage of full scale deflection
= (-0.04/2.50) x 100 = -1.6 %
5. A pressure indicator showed a reading of 22 bar on a scale range of 0-25 bar. If the true value was 21.4 bar, determine:
i) Static error
ii) Static correction
iii) Relative static error
Solution:
i) Static error = 22 – 21.4 = + 0.6 bar
ii) Static correction = - (+0.6) = - 0.6 bar
iii) Relative error = 0.6 / 21.4 = 0.028 or 2.8 %
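The five solutions above can be cross-checked with a short Python sketch using the definitions from this lesson (the variable names are ours; the data are from the numericals):

```python
# Cross-check of the worked numericals: Es = Am - At, dC = At - Am = -Es,
# Er = Es / At.

def static_error(Am, At):
    return Am - At

def static_correction(Am, At):
    return At - Am

# 1. Voltmeter: Am = 115.50 V, At = 115.44 V
Es1 = static_error(115.50, 115.44)       # +0.06 V
dC1 = static_correction(115.50, 115.44)  # -0.06 V

# 2. Thermometer: Am = 71.5 degC, dC = +0.5 degC  ->  At = Am + dC
At2 = 71.5 + 0.5                         # 72.0 degC

# 3. Maximum static error for +/-0.25 % of a 100-150 degC span
max_err3 = 0.25 / 100 * (150 - 100)      # 0.125 degC

# 4. Voltmeter: Am = 1.46 V, At = 1.50 V, full scale 2.50 V
Es4 = static_error(1.46, 1.50)           # -0.04 V
rel4 = Es4 / 1.50                        # about -0.0267, i.e. -2.67 %
fsd4 = Es4 / 2.50                        # -0.016, i.e. -1.6 % of full scale

# 5. Pressure gauge: Am = 22 bar, At = 21.4 bar
Es5 = static_error(22.0, 21.4)           # +0.6 bar
rel5 = Es5 / 21.4                        # about 0.028, i.e. 2.8 %
```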