09-08-2012, 03:28 PM
Calibration Process
Abstract
Calibration refers to the act of evaluating and adjusting the precision and accuracy of measurement equipment. In ultrasonic testing, several forms of calibration must occur. First, the electronics of the equipment must be calibrated to ensure that they are performing as designed. This operation is usually performed by the equipment manufacturer and will not be discussed further in this material. It is also usually necessary for the operator to perform a "user calibration" of the equipment. This user calibration is necessary because most ultrasonic equipment can be reconfigured for use in a large variety of applications. The user must "calibrate" the system, which includes the equipment settings, the transducer, and the test setup, to validate that the desired levels of precision and accuracy are achieved. The term calibration standard is usually only used when an absolute value is measured, and in many cases the standards are traceable back to standards at the National Institute of Standards and Technology.
In ultrasonic testing, there is also a need for reference standards. Reference standards are used to establish a general level of consistency in measurements and to help interpret and quantify the information contained in the received signal. Reference standards are used to validate that the equipment and the setup provide similar results from one day to the next and that similar results are produced by different systems. Reference standards also help the inspector to estimate the size of flaws. In a pulse-echo type setup, signal strength depends on both the size of the flaw and the distance between the flaw and the transducer. The inspector can use a reference standard with an artificially induced flaw of known size, at approximately the same distance from the transducer, to produce a signal. By comparing the signal from the reference standard to that received from the actual flaw, the inspector can estimate the flaw size.
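As a rough sketch of this comparison, one common simplification assumes that the echo amplitude from a small, flat reflector is proportional to its area when the metal path is the same. The function below is a hypothetical illustration of that idea, not a procedure from any standard; the amplitudes and hole size are example values.

```python
import math

def estimate_flaw_diameter(flaw_amplitude, ref_amplitude, ref_diameter):
    """Estimate an equivalent flat-bottom-hole diameter for a flaw.

    Assumes echo amplitude is proportional to reflector area (a
    common simplification for small, flat reflectors at the same
    metal path), so: d_flaw = d_ref * sqrt(A_flaw / A_ref).
    """
    return ref_diameter * math.sqrt(flaw_amplitude / ref_amplitude)

# Example: a flaw echo at one quarter the amplitude of a 4/64"
# reference hole maps to an equivalent diameter of 2/64".
equivalent_d = estimate_flaw_diameter(0.25, 1.0, 4 / 64)
```

Because real flaws reflect sound less efficiently than machined holes, an estimate like this is a lower bound in practice, which is the point the next section makes about artificially induced defects.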
Introduction to the Common Standards
Calibration and reference standards for ultrasonic testing come in many shapes and sizes. The type of standard used depends on the NDE application and on the form and shape of the object being evaluated. The material of the reference standard should be the same as the material being inspected, and the artificially induced flaw should closely resemble the actual flaw. This second requirement is a major limitation of most standard reference samples. Most use drilled holes and notches that do not closely represent real flaws. In most cases the artificially induced defects in reference standards are better reflectors of sound energy (due to their flatter and smoother surfaces) and produce indications that are larger than those that a similar-sized flaw would produce. Producing more "realistic" defects is cost prohibitive in most cases and, therefore, the inspector can only make an estimate of the flaw size. Computer programs that allow the inspector to create computer-simulated models of the part and flaw may one day lessen this limitation.
Area-Amplitude Blocks
Area-amplitude blocks are also usually purchased in an eight-block set and look very similar to distance/area-amplitude blocks. However, area-amplitude blocks have a constant 3-inch metal path distance, and the hole sizes are varied from 1/64" to 8/64" in 1/64" steps. The blocks are used to determine the relationship between flaw size and signal amplitude by comparing signal responses for the different sized holes. Sets are commonly sold in 4340 vacuum-melt steel, 7075-T6 aluminum, and Type 304 corrosion-resistant steel. Aluminum blocks are fabricated per the requirements of ASTM E127, Standard Practice for Fabricating and Checking Aluminum Alloy Ultrasonic Standard Reference Blocks. Steel blocks are fabricated per the requirements of ASTM E428, Standard Practice for Fabrication and Control of Steel Reference Blocks Used in Ultrasonic Inspection.
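Under the same amplitude-proportional-to-area simplification used above, the expected amplitude difference between any two holes in the set can be expressed in decibels. The sketch below is illustrative only (it is not part of ASTM E127 or E428); it enumerates the eight hole diameters and the predicted dB of each relative to the smallest hole.

```python
import math

def amplitude_difference_db(d1, d2):
    """Predicted dB difference between echoes from two flat-bottom
    holes of diameters d1 and d2 at the same metal path, assuming
    amplitude proportional to hole area: dB = 40 * log10(d2 / d1).
    (Area goes as diameter squared, hence 40 rather than 20.)
    """
    return 40.0 * math.log10(d2 / d1)

# Eight-block set: hole diameters 1/64" through 8/64" in 1/64" steps.
hole_diameters = [n / 64 for n in range(1, 9)]

# dB of each hole's echo relative to the 1/64" hole; doubling the
# diameter (1/64" -> 2/64") predicts about +12 dB.
relative_db = [amplitude_difference_db(hole_diameters[0], d)
               for d in hole_diameters]
```

Plotting measured responses against such a prediction is one way an inspector can check that the instrument's amplifier is responding linearly over the range of interest.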