Let me start with “I hate it!” It is okay for simple tasks, but calibration and metrology are not simple tasks. Often, what we do in automation is very complex—getting into the low-level settings of our lab standards so we can configure them to make the most accurate measurements. Then, we want to take several measurements so we can evaluate how close the measurement is to the true value.
This is where emulation modes break down. For example, on a Keysight E4440A spectrum analyzer, I have to look at the status bytes when I want to check the calibration status. That command for the Keysight PSA/E444xA spectrum analyzers is “STAT:QUES:CAL?” The returned result contains information on whether the spectrum analyzer needs alignment and which alignments need to be performed.
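As a minimal sketch, here is how a status check like this might be decoded in Python. The `decode_cal_status()` helper and the bit position are hypothetical placeholders, not the E4440A's actual register layout; consult the instrument's programmer's guide for the real bit assignments.

```python
# Sketch of decoding a questionable-calibration status register value.
# ALIGNMENT_NEEDED_BIT is a hypothetical bit position, not the E4440A's
# documented layout.
ALIGNMENT_NEEDED_BIT = 1 << 0

def decode_cal_status(status: int) -> bool:
    """Return True if the register value indicates an alignment is needed."""
    return bool(status & ALIGNMENT_NEEDED_BIT)

# With a real instrument (e.g. via PyVISA) the value would come from:
#   status = int(inst.query("STAT:QUES:CAL?"))
status = 0  # simulated response: no bits set, no alignment needed
print(decode_cal_status(status))
```

The point is that the meaning of each bit is tied to this specific instrument's hardware, which is exactly what an emulation mode struggles to reproduce.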
It would be difficult for another manufacturer to emulate low-level operations like the questionable status registers, which indicate that the spectrum analyzer’s measurement quality may be degraded. Emulating low-level commands like this is challenging because the status bits are closely tied to the hardware design of the E4440A spectrum analyzer. Yet the query is critical if you want to know the state of the hardware before you make a measurement.
In Metrology.NET®, we check the hardware status before each measurement. If the instrument status is questionable, we can run the self-calibration, equipment zero, or whatever operation is needed to enable the hardware to make the best measurements.
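The check-then-correct flow described above can be sketched as follows. This is not Metrology.NET's actual API; the instrument class and method names (`cal_questionable`, `self_calibrate`, `measure`) are illustrative stand-ins, and a fake instrument is used so the sketch runs standalone.

```python
# Illustrative check-then-correct loop with a fake instrument; the method
# names are hypothetical, not a real driver API.
class FakeDmm:
    def __init__(self):
        self.questionable = True
    def cal_questionable(self) -> bool:
        return self.questionable
    def self_calibrate(self):
        self.questionable = False  # pretend the self-cal cleared the flag
    def measure(self) -> float:
        return 1.000023            # simulated reading, volts

def measure_with_status_check(inst) -> float:
    # Check hardware status before every measurement; if the status is
    # questionable, run the corrective operation first.
    if inst.cal_questionable():
        inst.self_calibrate()
    return inst.measure()

dmm = FakeDmm()
print(measure_with_status_check(dmm))
```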
Several things can affect the quality of a measurement, especially temperature drift. Most high-accuracy measurement circuits are affected by temperature. Every good calibration lab logs the temperature and humidity. Plus, all of your lab standards will specify the operating temperature ranges with associated uncertainties.
Some instruments take it a step further by embedding temperature sensors inside the hardware. The Hewlett-Packard 3458A, for example, has an internal temperature sensor that measures the temperature inside the instrument. The instrument’s accuracies are specified based on the internal temperature at the time of calibration and the temperature at the time of the last ACAL (autocalibration).
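A sketch of that temperature check, assuming a 1 °C drift limit between the current internal temperature and the last ACAL temperature; treat the threshold as an assumption to verify against the 3458A manual, which specifies accuracies relative to the ACAL temperature.

```python
# Sketch of a 3458A-style internal temperature check. The 1 degree C
# threshold is an assumption to verify against the instrument's specs.
ACAL_TEMP_LIMIT_C = 1.0  # re-run ACAL when drift exceeds this

def acal_needed(current_temp_c: float, last_acal_temp_c: float) -> bool:
    return abs(current_temp_c - last_acal_temp_c) > ACAL_TEMP_LIMIT_C

# With real hardware, the current internal temperature would come from
# the instrument's temperature query.
print(acal_needed(24.3, 23.1))  # 1.2 C of drift -> True
```

A multimeter that emulates the 3458A command set but lacks an internal sensor simply cannot answer this question, which is the point of the paragraph above.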
To further my stance on hating emulation modes: several digital multimeters emulate the command set of the HP 3458A, but none of them implement the temperature queries. The reason is that their multimeters were designed differently, and their uncertainties are not based on the calibration and ACAL temperatures; some of them don’t even have a temperature sensor inside the instrument.
Those are just some examples of emulation modes failing because the equipment manufacturer didn’t add support for all the commands implemented by the equipment they are trying to emulate. I’m sure there are thousands of additional examples. However, the biggest problem for metrology is related to measurement uncertainties.
Most hardware specifies uncertainties using a percent-of-reading plus percent-of-range formula. It may be specified in parts per million (ppm) of reading plus a floor value instead of ppm of range, but either way, it is a slope + offset calculation. So, even if you can substitute one device for another because the emulation mode works, you still have the problem of uncertainty calculations, such as: What range was the device on when it made the measurement? How far above the range can the instrument make a measurement: 1%, 5%, 10%?
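The slope + offset calculation above can be written as a small helper. The example numbers (20 ppm of reading + 5 ppm of range) are made up for illustration and are not any instrument's actual specification.

```python
# The slope + offset accuracy formula: ppm of reading is the slope term,
# ppm of range (or a fixed floor) is the offset term. Example ppm values
# are illustrative, not a real instrument's spec.
def spec_uncertainty(reading: float, rng: float,
                     ppm_reading: float, ppm_range: float) -> float:
    return reading * ppm_reading / 1e6 + rng * ppm_range / 1e6

# 1 V measured on the 10 V range: the range term dominates, which is why
# knowing the actual range the instrument was on matters so much.
u = spec_uncertainty(1.0, 10.0, 20, 5)
print(f"{u * 1e6:.0f} uV")  # 70 uV
```

Note that the same 1 V reading taken on a hypothetical 1 V range would carry a much smaller range term, so an emulated device that hides its true range state makes this calculation unreliable.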
In metrology, we need to know the exact state of our reference standards to make an accurate measurement and calculate the measurement uncertainties. This is why I hate the idea of using emulation modes in metrology software and hardware.