Electrical Test Fundamentals
Good measurement practice and collecting high-quality data can mean many things to different people. However, most practitioners would agree that the ability to create a test setup suitable for the intended measurement outcome is paramount. Frequently, this involves a test scenario in which the electrical characteristics of a device or material are being determined. The test equipment can range from a simple setup, such as using a benchtop digital multimeter (DMM) to measure resistance values, to more complex systems that involve fixturing, special cabling, etc. When determining the desired performance of the test system, important criteria include measurement accuracy, sensitivity, and speed. One must recognize that these criteria involve not just the performance of the measurement instrument, but also the limitations imposed by the cabling, connectors, test fixture, and even the environment in which tests are performed.
When considering a particular measurement instrument for an application, the specification or data sheet is the first place to look for information on its performance and how that affects test results. Still, data sheets are not always easy to interpret because they typically use specialized terminology. Additionally, as alluded to above, instrument specifications describe only one component of the test system and should not be the sole consideration in determining whether a piece of test equipment will meet application requirements. Characteristics of the material or device under test can also have a major impact on measurement quality.
Four-Step Measurement Process. The process of designing and characterizing the performance of any test setup can be broken down into four essential steps. Following this process will greatly increase the chances of building a system that meets requirements and eliminates unpleasant and costly surprises.
Step 1 – The first step, before specifying a piece of equipment, is to define the system's required measurement performance. This is a necessary prerequisite to designing, building, verifying, and ultimately using a test system that meets the user's requirements. Defining the necessary level of performance involves understanding terminology such as resolution, accuracy, repeatability, rise time, and sensitivity.
Resolution: This is the smallest portion of the signal being measured that can actually be observed. It is determined by the analog-to-digital (A/D) converter in the measurement device. There are several ways to characterize resolution: bits, digits, counts, etc. The more bits or digits there are, the finer the device's resolution. The resolution of most benchtop instruments is specified in digits, such as a 6½-digit DMM. Keep in mind that the ½-digit terminology means the most significant digit has less than a full range of 0 to 9. As a general rule, ½ digit means the most significant digit can only take the values 0, 1, or 2. By comparison, data acquisition boards are usually specified by the number of bits their A/D converters have. Here is how these different resolution specifications compare (a short calculation sketch follows the list):
12-bit A/D – 4,096 counts – approx. 3½ digits
16-bit A/D – 65,536 counts – approx. 4½ digits
18-bit A/D – 262,144 counts – approx. 5½ digits
22-bit A/D – 4,194,304 counts – approx. 6½ digits
25-bit A/D – 33,554,432 counts – approx. 7½ digits
28-bit A/D – 268,435,456 counts – approx. 8½ digits
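To make the bits-to-digits relationship concrete, here is a minimal Python sketch (an illustration, not from the original article) that converts an A/D converter's bit count into counts and an approximate half-digit figure, assuming the usual convention that an "N½-digit" display spans 2×10^N counts:

```python
import math

def adc_resolution(bits):
    """Return (counts, approximate half-digit figure) for an ideal A/D converter."""
    counts = 2 ** bits  # total number of discrete codes
    # An "N 1/2-digit" display spans 2 * 10**N counts (e.g., 4 1/2 digits = 20,000 counts)
    n = math.floor(math.log10(counts / 2))
    return counts, n + 0.5

for bits in (12, 16, 18, 22, 25, 28):
    counts, digits = adc_resolution(bits)
    print(f"{bits}-bit A/D - {counts:,} counts - approx. {digits} digits")
```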
Sensitivity: Although the terms sensitivity and accuracy are often treated as synonymous, they do not mean the same thing. Sensitivity refers to the smallest change in the measurement that can be detected and is specified in units of the measured quantity, such as volts, ohms, amps, or degrees. The sensitivity of an instrument is equal to its lowest range divided by the resolution. Therefore, the sensitivity of a 16-bit A/D on a 2V scale is 2 divided by 65,536, or approximately 30 microvolts. A variety of instruments are optimized for making highly sensitive measurements, including nanovoltmeters, picoammeters, electrometers, and high-resolution DMMs. Here are some examples of how to calculate the sensitivity for A/Ds of varying levels of resolution (a short calculation sketch follows the list):
3½ digits (2,000 counts) on 2V range = 1mV
4½ digits (20,000 counts) on 2Ω range = 100µΩ
16-bit (65,536 counts) A/D on 2V range = 30µV
8½ digits on 200mV range = 1nV
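As a quick check of these figures, sensitivity is simply the full-scale range divided by the number of counts; a minimal Python sketch:

```python
def sensitivity(full_scale, counts):
    """Smallest detectable change = lowest range / resolution, in the range's units."""
    return full_scale / counts

print(sensitivity(2.0, 2_000))        # 3 1/2 digits on 2 V range    -> 0.001 V (1 mV)
print(sensitivity(2.0, 20_000))       # 4 1/2 digits on 2 ohm range  -> 0.0001 ohm (100 µΩ)
print(sensitivity(2.0, 65_536))       # 16-bit A/D on 2 V range      -> ~3.05e-05 V (~30 µV)
print(sensitivity(0.2, 200_000_000))  # 8 1/2 digits on 200 mV range -> 1e-09 V (1 nV)
```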
Accuracy: There are two types of accuracy to consider – absolute accuracy and relative accuracy. Absolute accuracy indicates the closeness of agreement between the result of a measurement and its true value, as traceable to an accepted national or international standard value. Measurement devices are typically calibrated by comparing them to a known standard value. Most countries have their own standards institute where national standards are kept. Relative accuracy is the degree to which a measurement accurately reflects the relationship between an unknown and a locally established reference value. In the calibration of an instrument to either kind of standard, an important consideration is calibration drift. The drift of an instrument refers to its ability to retain calibration over time for a given range of temperatures.
The implications of these terms are demonstrated by the challenge of ensuring the absolute accuracy of a temperature measurement of 100.00°C to ±0.01°C, versus measuring a change in temperature of 0.01°C. Measuring the change is far easier than ensuring absolute accuracy to this tolerance, and often, that is all a user requires.
Repeatability: This is the ability to measure the same signal input and get the same value over and over. Ideally, the repeatability of measurements should be better than the accuracy. If repeatability is high, and the sources of error are known and quantified, then high-resolution, repeatable measurements are often acceptable for many applications. Such measurements may have high relative accuracy with low absolute accuracy.
Step 2 – This step gets into the actual process of designing the measurement system, including the selection of equipment, fixtures, cabling, etc. As mentioned previously, interpreting a data sheet to establish which specifications are relevant to a system can be daunting. The following explanations can help.
Accuracy: Instrument manufacturers do not have a uniform approach to specifying accuracy. In the case of Keithley Instruments, accuracy specifications are normally given in two parts – (1) as a proportion of the value being measured, and (2) as a proportion of the scale on which the measurement is taken. These two components of accuracy (i.e., measurement uncertainty) can be expressed as ±(gain error + offset error), as ±(% of reading + % of range), or as ±(ppm of reading + ppm of range). Accuracy specifications for high-quality measurement devices may be given for 24 hours, 90 days, 12 months, two years, or even five years from the time of last calibration. Basic accuracy specifications often assume usage within 90 days of calibration.
Temperature coefficient: Accuracy specifications are normally guaranteed within a specific temperature range, such as 23°C ±5°C. Within that temperature span, for a given instrument measurement range, the accuracy specification might be given, for example, as ±(50ppm of reading + 35ppm of range). When carrying out measurements at temperatures outside this range, it is necessary to add temperature-related uncertainty. For the instrument and measurement example just given, the additional temperature-related error might be stated as ±2ppm over 0–18°C and ±6ppm over 28–50°C. Determining measurement uncertainty becomes especially difficult when ambient temperatures are unstable or fall outside the manufacturer's stated temperature ranges.
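Putting these pieces together, the uncertainty of a single reading can be estimated from a ±(ppm of reading + ppm of range) specification plus any additional temperature-related term. The sketch below reuses the hypothetical figures from the example above and, as an assumption, applies the extra temperature-related ppm to the reading; it illustrates the arithmetic, not any particular manufacturer's formula:

```python
def reading_uncertainty(reading, rng, ppm_reading, ppm_range, extra_ppm=0.0):
    """Uncertainty (in the reading's units) for a ±(ppm of reading + ppm of range) spec."""
    base = reading * ppm_reading * 1e-6 + rng * ppm_range * 1e-6
    temp_term = reading * extra_ppm * 1e-6  # assumed extra error outside the 23°C ±5°C band
    return base + temp_term

# 1.5 V measured on the 2 V range, within the specified temperature band:
print(reading_uncertainty(1.5, 2.0, 50, 35))                # -> 1.45e-04 V (±145 µV)
# Same measurement with an assumed extra ±6 ppm temperature-related error:
print(reading_uncertainty(1.5, 2.0, 50, 35, extra_ppm=6))   # -> 1.54e-04 V
```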
Instrumentation error: Some measurement uncertainty is a function of instrument design. For a given signal level and measurement range, a 6½-digit DMM with a 22-bit A/D converter will be inherently more accurate than a 3½-digit DMM or a 12-bit data acquisition board. Care must be taken even when comparing, for example, two 6½-digit DMMs from different manufacturers. A manufacturer's abbreviated specifications frequently provide only the gain error, but offset error can be the most important factor when measuring values at the low end of a measurement range. Remember,
Accuracy = ±(% of reading + % of range) = ±(gain error + offset error).
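As a numerical illustration (the percentages below are hypothetical, not taken from any specific data sheet), the offset (range) term dominates at the low end of a range:

```python
def pct_error_of_reading(reading, rng, gain_pct, offset_pct):
    """Total error from a ±(% of reading + % of range) spec, as a percentage of the reading."""
    error = reading * gain_pct / 100 + rng * offset_pct / 100
    return 100 * error / reading

# Hypothetical ±(0.005% of reading + 0.003% of range) spec on a 10 V range:
print(pct_error_of_reading(10.0, 10.0, 0.005, 0.003))  # near full scale   -> 0.008 %
print(pct_error_of_reading(0.1, 10.0, 0.005, 0.003))   # low end of range  -> 0.305 %
```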
Noise: Instrument sensitivity (the smallest observable change that can be detected) may be limited either by noise or by the instrument's digital resolution. The level of instrument noise is typically specified as a peak-to-peak or RMS value, sometimes within a certain bandwidth. It is important that the sensitivity figures from the data sheet match your application requirements, but also consider the noise figures, as these especially affect low-level measurements. Accurate measurements become increasingly difficult as changes in the signal level approach the instrument's noise level.
Measurement Settling Time: For a given level of accuracy, settling time affects test system speed or throughput. Obviously, automated test equipment with PC-controlled instruments enables quicker measurements than taking them manually, which can be especially important in a manufacturing environment. Nevertheless, the instrument reading, which goes from one level (before the signal is measured) to another (the desired measurement value), must have settled sufficiently close to its final value. Put another way, there is always a tradeoff between the speed at which measurements are made and their accuracy.
The rise time of an analog instrument (or analog output) is usually defined as the time necessary for the output to rise from 10% to 90% of the final value when the input signal rises instantaneously from zero to some fixed value. Rise time affects the accuracy of the measurement when it is of the same order of magnitude as the period of the measurement. If the length of time allowed before taking the reading is equal to the rise time, an error of approximately 10% will result, because the signal will have reached only about 90% of its final value. To reduce the error, more time must be allowed. To reduce the error to 1%, about two rise times must be allowed; reducing the error to 0.1% requires roughly three rise times (or nearly seven time constants).
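These figures follow from simple first-order (RC-like) settling: the remaining error after time t is e^(-t/τ), and the 10%–90% rise time is about 2.2 time constants. A short sketch of that arithmetic:

```python
import math

RISE_TIME_IN_TAU = math.log(0.9 / 0.1)  # 10%-90% rise time ≈ 2.2 time constants

def settling_error(rise_times_waited):
    """Fraction of the final value still unsettled after waiting N rise times."""
    return math.exp(-rise_times_waited * RISE_TIME_IN_TAU)

for n in (1, 2, 3):
    print(f"{n} rise time(s): ~{settling_error(n) * 100:.2f}% error remaining")
# 1 rise time  -> ~11% error
# 2 rise times -> ~1.2% error
# 3 rise times -> ~0.14% error (about 6.6 time constants)
```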
Step 3 – This step involves the actual building of the test system and verifying its performance. A major component of this process is adopting appropriate measurement techniques that can improve results.
At this point the test system builder has selected appropriate equipment, cables, and fixtures, and has determined that the equipment's specifications can meet the measurement requirements. Now it is time to assemble the test system and verify its performance. It is necessary to first check that every measurement instrument has been calibrated and remains within its specified calibration period, which is typically one year.
Pretest checks: If the instrument will be used for making voltage measurements, place a short across the inputs of the meter to check for offset error. This can be compared to the specifications from the data sheet, and can generally be nulled out using the instrument's ZERO or REL function. Similarly, if the instrument will be used for current measurements, check whether there is an offset current reading on the meter with an open circuit on the input. Again, this can be compared to specifications, and there are provisions for zeroing the meter. Next, add the system cabling and repeat the pretest checks. Then do the same after adding the test fixture. Finally, add the device under test (DUT), repeating the pretest checks. This stepwise procedure of assembling and checking the test system helps identify the source of offset errors and other problems in the system. (Pinpointing and correcting sources of errors are covered in more detail later.)
Measurement settling time: Make sure there is sufficient delay between application of the signal and taking a measurement. The goal is to reach a suitable tradeoff between measurement accuracy and test system throughput. Overemphasis on speed can result in insufficient delay time, which is a common source of error in test systems. It is especially evident when running the test at high speed produces a different result than performing the test manually, or in a step-by-step fashion.
Besides an instrument's settings and inherent design, cabling and other sources of reactance in the test circuit can affect measurement settling time. Generally, capacitance is the most likely source of the problem. Large systems with a lot of cabling (i.e., high cable capacitance), and/or those measuring high impedances, may require relatively long delay times because of a lengthy system time constant (τ = RC). To handle this problem, many instruments have a programmable trigger delay. In a manual system, a delay of 0.25 to 0.5 seconds will seem instantaneous. However, in automated test equipment, steps are typically executed in a millisecond or less. Even the best of systems may require delays of 5 to 10 milliseconds after a change in stimulus in order to get accurate results.
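The required delay can be estimated from the system time constant τ = RC. A minimal sketch, assuming a hypothetical 10 MΩ source resistance, 300 pF of cable capacitance, and settling to within 0.1% of the final value:

```python
import math

def required_delay(resistance_ohms, capacitance_farads, settle_fraction=0.001):
    """Delay needed for an RC circuit to settle to within settle_fraction of its final value."""
    tau = resistance_ohms * capacitance_farads
    return tau * math.log(1.0 / settle_fraction)

# Hypothetical 10 MΩ source resistance with 300 pF of cable capacitance, settling to 0.1%:
delay = required_delay(10e6, 300e-12)
print(f"required trigger delay ≈ {delay * 1e3:.1f} ms")  # ≈ 20.7 ms
```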
Minimizing the Effects of Error Sources: Guarding of the test leads or cabling is one technique for dealing with capacitance issues, and it reduces both leakage errors and response time. Guarding consists of a conductor driven by a low-impedance source surrounding the lead of a high-impedance signal. The guard voltage is kept at or near the potential of the signal voltage [1]. Some instruments have built-in guard circuits.
Test lead resistance is a common source of error in 2-wire low resistance measurements. It can be minimized by using a 4-wire (Kelvin) test lead setup [2]. Instruments with this type of setup provide one pair of leads that supply a known test current to the unknown resistance, and a second pair of leads to measure the voltage across the resistance. Since very little current flows in the voltage measurement leads, the resistance of these leads has minimal effect on the measurement. The unknown resistance is then determined from Ohm's Law. If, however, the unknown resistance is extremely high, approaching the input resistance of the voltmeter circuit, then an electrometer or a specialized meter with extremely high input resistance will be required.
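In a 4-wire measurement the unknown resistance follows directly from Ohm's Law using the forced current and the sensed voltage; a minimal sketch with hypothetical values:

```python
def four_wire_resistance(source_current_amps, sensed_voltage_volts):
    """R = V / I: the sense leads carry almost no current, so their resistance barely matters."""
    return sensed_voltage_volts / source_current_amps

# Force 1 mA through the DUT and read 12.3 mV across it with the sense pair:
print(four_wire_resistance(1e-3, 12.3e-3))  # -> 12.3 ohms
```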
Thermoelectric EMFs are present in virtually any measurement system. These create voltage offsets, which result from connections between dissimilar metals that act as a thermocouple. The magnitude of the resulting offset voltage error depends on the Seebeck coefficient of the two metals and the ambient temperature. For instance, the connection between a clean copper lead and a copper test fixture that has become oxidized (i.e., a Cu-CuO connection) has a Seebeck coefficient of about 1mV/°C. Therefore, at a room temperature of 25°C, the thermoelectric EMF generated is 25mV, which can be significant compared to the value to be measured. It is therefore highly desirable to use only clean Cu-Cu connections in a test circuit, which have a Seebeck coefficient of less than 0.2µV/°C. For dissimilar metal connections that cannot be avoided, some instruments provide an offset-compensated ohms measurement technique that minimizes the error from thermal EMFs.
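The offset is simply the Seebeck coefficient multiplied by the temperature involved; a short sketch repeating the Cu-CuO and Cu-Cu figures quoted above (the 25°C value follows the article's example):

```python
def thermoelectric_emf(seebeck_v_per_degc, temperature_degc):
    """Offset voltage generated at a junction of dissimilar metals."""
    return seebeck_v_per_degc * temperature_degc

print(thermoelectric_emf(1e-3, 25))    # Cu-CuO junction at 25°C     -> 0.025 V (25 mV)
print(thermoelectric_emf(0.2e-6, 25))  # clean Cu-Cu junction at 25°C -> 5e-06 V (5 µV)
```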
RFI/EMI: Radio frequency interference (RFI) or electromagnetic interference (EMI) can introduce AC noise and DC offsets into a measurement. AC noise can directly obscure low-level AC measurements. DC offset errors can result from the rectification of RFI/EMI in the test circuit or instrument. The most common source of external noise is 50Hz or 60Hz power line pick-up, depending on where in the world the measurements are being made. Picking up millivolts of noise is not uncommon, especially when measurements are made near fluorescent lights.
AC noise superimposed on a DC signal being measured can result in highly inaccurate and fluctuating measurements. To avoid this, many modern instruments allow users to set the integration period of the A/D converter in terms of the number of power line cycles (NPLC). For instance, a setting of 1 NPLC will result in the measurement being integrated for 20 milliseconds (for 50Hz power) or 16.67 milliseconds (for 60Hz). A 1 NPLC integration period will largely eliminate noise induced from the power line. While the performance improvement from this feature can be dramatic, it also limits the system measurement speed to a certain degree.
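The integration time implied by an NPLC setting is just NPLC divided by the power-line frequency; a minimal sketch:

```python
def integration_time_ms(nplc, line_freq_hz):
    """A/D integration period in milliseconds for a given NPLC setting."""
    return 1000.0 * nplc / line_freq_hz

print(integration_time_ms(1, 50))  # 1 NPLC at 50 Hz -> 20.0 ms
print(integration_time_ms(1, 60))  # 1 NPLC at 60 Hz -> ~16.67 ms
```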
Step 4 – Once the test system has been built using appropriate instruments and measurement techniques, and verified in Step 3, it can produce reliable measurement results. However, it is important to recheck the performance of any test setup periodically. Because of component and temperature drifts, the accuracy of an instrument will vary over time, and it should be recalibrated regularly.
References. The following references provide more information on guarding, 4-wire measurements, and other techniques to reduce sources of error in electrical measurements:
1. “Low Level Measurements Handbook”, 6th Edition, 2004, pp. 2-5 to 2-10; available online at http://www.keithley.com/knowledgecenter/knowledgecenter_pdf/LowLevMsHandbk_1.pdf.
2. MacLachlan, Derek, “Getting Back to the Basics of Electrical Measurements”, Keithley Instruments White Paper, available online at http://www.keithley.com/data-asset=54359.