A basic assumption when designing a filter is that the noise frequency can be distinguished from the frequency of the signal to be measured. A low-pass filter is commonly used to eliminate high-frequency noise. The introduction of a low-pass filter effectively limits the useful speed of the instrument because any signal changing faster than the cut-off frequency of the filter is significantly attenuated.
The low-pass filter may be a standalone circuit, or it may be inherent in the design of the A/D. For instance, the basic design of the dual-slope A/D converter allows it to easily eliminate 'periodic' noise. Often it is assumed that the primary source of conducted noise is at power line frequencies. In this case, the noise is eliminated by time averaging (integrating) over an integral multiple of 1/50 or 1/60 second periods, depending upon which country you live in.
The average value of the periodic noise over this period is zero. Note that this also limits the A/D speed by requiring one measurement to consume at least one power-line cycle of time. As a practical matter, useful noise reduction often requires the integration period to be several power-line cycles.
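The cancellation described above can be illustrated numerically. The sketch below (an illustration only, with an assumed sampling density, not a model of any particular converter) averages a unit-amplitude 50 Hz sinusoid over an integer number of cycles and shows that the mean vanishes:

```python
import math

def mean_over_cycles(freq_hz, n_cycles, samples_per_cycle=1000):
    """Average a unit-amplitude sinusoid over an integer number of cycles."""
    n = n_cycles * samples_per_cycle
    total = 0.0
    for i in range(n):
        t = i / (freq_hz * samples_per_cycle)  # time in seconds
        total += math.sin(2 * math.pi * freq_hz * t)
    return total / n

# Integrating over whole power-line cycles drives the periodic term to zero
print(abs(mean_over_cycles(50.0, 1)) < 1e-9)   # one 50 Hz cycle
print(abs(mean_over_cycles(50.0, 3)) < 1e-9)   # several cycles also work
```

Averaging over a non-integer number of cycles would leave a residual, which is why the integration period must be an integral multiple of the line period.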
Accuracy and resolution
A common belief, often encouraged by instrument manufacturers, is that resolution and accuracy are the same. There is a strong tendency to conflate the two because the resolution of an instrument can be expressed in the same units as accuracy, and the resolution specification always looks better than the accuracy specification.
Consider the following statement by a major instrument supplier: ". . .A 12-bit system delivers accuracy of one part in 4096, and while 32-bit resolution is more accurate, there are few applications that need to be accurate to one part in 4 294 967 296." This statement seems to imply a stronger relationship between resolution and accuracy than there really is.
It is true that 2^12 = 4096, but the accuracy of an A/D having this number of bits cannot be one part in 4096. In fact, you will find a measurement uncertainty 20 times greater than the resolution to be more typical after all error components are summed. One important reason for this is that the calibration process always leaves at least one bit of error.
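The arithmetic behind these figures is easy to check. The short sketch below works out the 12-bit resolution in parts-per-million and applies the rule of thumb above (the 20x factor is the article's typical figure, not a property of any specific instrument):

```python
bits = 12
counts = 2 ** bits                  # 4096 distinguishable levels
resolution_ppm = 1e6 / counts       # one part in 4096, as parts-per-million

# Rule of thumb from the text: total uncertainty ~20x the resolution
typical_uncertainty_ppm = 20 * resolution_ppm

print(counts)                        # 4096
print(round(resolution_ppm))         # about 244 ppm of full scale
print(round(typical_uncertainty_ppm))  # about 4883 ppm, i.e. roughly 0,5%
```

In other words, a "12-bit" instrument may realistically be accurate only to about half a percent of full scale once all error terms are summed.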
Sometimes one is only interested in the repeatability of the measurements. In controlling a process, for instance, an operator may know from experience that when the displayed value is 3,87, the process is producing an excellent product; but if this value varies more than 0,10 from 3,87, then the resulting product is not acceptable. In this instance, it is important that the operator be sure that when a change of 0,10 occurs, it is due to the process and not to the instrument's temperature drift, time drift or short-term repeatability.
Now suppose that a new instrument is put in place or that the whole process needs to be replicated elsewhere. In this case, the absolute accuracy is important. If the instrument has reasonable accuracy, then you are assured of measurements that are not only repeatable from measurement to measurement but also from instrument to instrument.
The absolute accuracy is important when the measurements are to be used to compare measurements with other processes or standards. If the data is to provide useful comparisons, it must be traceable to some standard, eg, the standards maintained by the National Institute of Standards & Technology (NIST).
The effect of noise filtering on accuracy
A usable reading requires both the elimination of noise and a measurement that has the accuracy required by the application. While noise can be reduced to an acceptable level using either hardware or software filtering, accuracy is achieved by careful analog design using a judicious choice of components.
There is a tendency to expect filtering to achieve the required accuracy of the measurement. One might choose a fast but less accurate A/D converter so that the speed is available when necessary. In this case, when more accurate measurements are required, users tend to simply average several readings. This does not work! While averaging certainly reduces errors due to random noise, the reading still reflects the basic inaccuracy inherent in the measurement hardware. As always, it is necessary to determine the accuracy required for your application, and then ensure that your hardware does the job.
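The point can be demonstrated with a simple simulation. The sketch below (with assumed, illustrative values for the noise and the systematic error) averages many noisy readings from a hypothetical instrument with a fixed offset error: the random scatter shrinks, but the offset survives untouched.

```python
import random

random.seed(1)

TRUE_VALUE = 1.000
OFFSET_ERROR = 0.020   # systematic hardware error (assumed for illustration)
NOISE_SD = 0.010       # random noise per reading (assumed for illustration)

def one_reading():
    """A single reading: true value, plus fixed offset, plus random noise."""
    return TRUE_VALUE + OFFSET_ERROR + random.gauss(0.0, NOISE_SD)

# Averaging 10 000 readings all but eliminates the random component...
avg = sum(one_reading() for _ in range(10_000)) / 10_000

# ...but the residual error is still essentially the 0,020 offset
print(abs(avg - TRUE_VALUE))
```

However many readings are averaged, the result converges on the offset-shifted value, not on the true value; only better analog design removes that error.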
For more information contact Van Zyl Koegelenberg, Spescom MeasureGraph, 011 266 1572, email@example.com
Accuracy: Accuracy is an expression relating the difference between an indicated value and an accepted standard (the 'true value').
Calibration accuracy: is the accuracy of the instrument immediately following calibration, before any conditions change. Often this is called 24-hour accuracy.
Total instrument accuracy: is a statement of the maximum operating error that could be expected under worst case conditions. All known error terms, and effects of drift with time and ambient temperature are incorporated. This is distinguished from a run-of-the-mill accuracy specification because it is complete with all relevant contributions to measurement error, rather than simply stating the calibration accuracy.
Repeatability: is an expression quantifying an instrument's ability to reproduce a reading of the same signal under the same conditions at different times. This assumes each reading is made within a relatively short time span, say, the 24-hour specification period. Factors that affect repeatability are noise inherent in the design of the analog to digital converter, and hysteresis from sources such as dielectric absorption of capacitors.
Resolution: is the incremental input signal that yields the smallest distinguishable reading or output. In digital instruments this is the least significant digit.