
# Accuracy versus Resolution in Analyzing System Errors

- Created on Monday, 01 December 2008

### Failing to distinguish between accuracy and resolution can lead to misjudging system needs.

To ensure a system’s accuracy meets its requirements, system error budgets must be an integral part of system design. Considerations should include the necessary accuracy of each system element, as well as issues such as compatibility between software algorithm calculations and measurement accuracy – meaning resolution must also be taken into account.

Accuracy is the degree of absolute correctness of a measurement device; resolution is the smallest number that the device can display or record. In the following examples, the digital device quantizing error (±1 bit minimum) in the least significant digit is assumed to be zero. Remember that a measurement device with a specified accuracy of ±0.015% actually gives an output between 0.99985 and 1.00015 times the actual true value.

1. Measure a voltage source known to be exactly 5.6430 volts with a digital voltmeter that is (somehow) 100% accurate but has only 3 display digits, defined as “3-digit resolution.” The reading is 5.64 volts, which does not represent the actual voltage value although both the source and the instrument are 100% accurate. Resolution here is 10mV.

2. Measure the precise 5.6430-volt source using a 5-digit display digital voltmeter with a specified accuracy of ±0.015%. The reading is between 5.6421 and 5.6438 volts. This is closer to the actual voltage (5.6430), but still not 100% accurate. Resolution in this case is 0.1 mV.

Measuring 1 volt within ±0.015% accuracy requires a 6-digit instrument capable of displaying five decimal places. The fifth decimal place represents a resolution of 10 microvolts. Using any instrument with less than 6 digits, “accuracy” is determined by the resolution of the reading instrument and the acceptance of the observer.
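The two voltmeter examples can be sketched numerically. This is a minimal illustration using the values from the examples above; the variable names are the only assumptions:

```python
# Quantize the true 5.6430 V source to a 3-digit display, and bound
# the ±0.015% accuracy band of the 5-digit meter, per the examples.

true_v = 5.6430

# 3-digit meter (10 mV resolution): displays 5.64 V even though
# both source and meter are "100% accurate"
reading_3 = round(true_v, 2)

# 5-digit meter, ±0.015% specified accuracy: the reading can fall
# anywhere inside this band around the true value
acc = 0.015 / 100
lo, hi = true_v * (1 - acc), true_v * (1 + acc)

print(f"3-digit reading: {reading_3:.2f} V")
print(f"5-digit band   : {lo:.4f} V to {hi:.4f} V")
```

The band reproduces the article's 5.6421–5.6438 V spread: the reading is closer to the truth than the 3-digit display, but still not exact.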

Table 1 displays some different system “accuracy” correction calculations. Since errors are random and carry ± values, RMS calculations are often used instead of worst-case maximum and minimum. RMS error is defined as the square root of the sum of each error squared: √((E1)² + (E2)² + (E3)²).
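As a quick illustration of the root-sum-square formula, here is a sketch with three hypothetical error terms (the values are invented, not from the article):

```python
import math

# Root-sum-square (RSS) combination of independent ± errors,
# per the formula in the text, versus the worst-case straight sum.
errors_pct = [0.015, 0.020, 0.010]   # hypothetical ± errors, in %

rms_error = math.sqrt(sum(e**2 for e in errors_pct))
worst_case = sum(errors_pct)

print(f"RSS error : ±{rms_error:.4f} %")
print(f"Worst case: ±{worst_case:.4f} %")
```

The RSS figure is smaller than the worst case because independent random errors are unlikely to all peak in the same direction at once.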

Analog-to-digital converters (ADCs) are advertised as having “n”-bit resolution – often misunderstood to mean accuracy. The effective accuracy of an n-bit ADC is not equal to ADC resolution, which is defined as approximately 1/(2^n − 1). Figure 2 shows a conceptual system used to convert an analog signal to a digital representation. Semiconductor switches select analog input signals, which are captured and held in a sample-and-hold amplifier function block (SHA). An n-bit counter then begins to count, and the counter contents are converted to an analog voltage using switched resistors or current sources. When the analog and SHA signals are equal, counting stops and the counter contents become available as a digital representation of the sampled analog input value. The process, however, includes sources of error that collectively degrade true accuracy.
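The counting-converter scheme described above can be sketched as follows. This is a simplified model assuming an ideal DAC and comparator; the function name, bit count, and reference voltage are illustrative assumptions:

```python
def counting_adc(v_in, n_bits=8, vref=10.0):
    """Count up until the DAC output reaches the held sample."""
    lsb = vref / (2**n_bits - 1)   # resolution, per the 1/(2^n - 1) definition
    count = 0
    # Counter runs until its DAC reconstruction meets or exceeds the
    # SHA-held input (the comparator stopping the count)
    while count < 2**n_bits - 1 and count * lsb < v_in:
        count += 1
    return count

lsb = 10.0 / (2**8 - 1)
code = counting_adc(5.643, n_bits=8, vref=10.0)
print(f"code = {code}, reconstructed = {code * lsb:.4f} V")
```

Even with every stage ideal, the reconstructed value differs from 5.6430 V by a fraction of one LSB: that residual is the quantizing error, before any of the analog error sources below are added.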

Errors associated with typical ADC schemes:

- *Sampling Speed*. From the Nyquist sampling theorem, if the analog signal changes rapidly, the ADC must sample at least twice as fast as the highest frequency component of the input. Sampling slower than twice the signal frequency will result in aliased, inaccurate readings.
- *Input Multiplexer*. Input multiplexer circuits may have OpAmp buffers on each input line that can introduce voltage offset, current bias, and linearity errors. In addition, multiplexers can create crosstalk between channels and signal attenuation.
- *Sample and Hold Amplifier*. This function is an OpAmp-based circuit with components designed to switch, buffer, and hold the sampled analog voltage value. Consequently, linearity, gain, power supply shifts, voltage offsets, charge injection, and input bias currents all contribute errors.
- *Converter*. The counter, comparator, and conversion circuitry introduces errors such as overall linearity, quantizing error (uncertainty in the least significant bit), and power supply shifts.
- *Temperature*. All analog circuit functions within the ADC unit are subject to temperature-induced errors.
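To see how such sources combine into an overall figure, here is a purely hypothetical error budget rolled up by the root-sum-square rule described earlier; every numeric value is invented for illustration:

```python
import math

# Hypothetical per-source ± errors, in percent of full scale.
budget_pct = {
    "multiplexer offset/linearity": 0.010,
    "sample-and-hold":              0.020,
    "converter linearity":          0.015,
    "quantizing (12-bit, 1/2 LSB)": 100 * 0.5 / (2**12 - 1),
    "temperature drift":            0.025,
}

# Root-sum-square of the independent error terms.
total = math.sqrt(sum(e**2 for e in budget_pct.values()))

for name, e in budget_pct.items():
    print(f"{name:30s} ±{e:.4f} %")
print(f"{'RSS total':30s} ±{total:.4f} %")
```

Note that the 12-bit quantizing term is comparable to the analog terms here, which is exactly why n-bit resolution alone cannot be read as accuracy.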

Obviously, many factors, including resolution, must be considered to determine overall system accuracy. Often, the dominant errors in a total system error budget come from the industrial sensors used in process control and data acquisition systems, because sensors can have accuracies much lower than signal conditioning modules (SCMs) or ADC units.

*This article was written by John Lehman, Engineering Manager, at Dataforth Corporation. For more information, visit //info.hotims.com/15144-121.*
