Strain Sensor Resistances Explained
Why are there different strain sensor resistances?
The story of strain gage electrical resistance goes back many decades. Originally, 120 Ω was used because that value matched the impedance characteristics of the instrumentation available at the time, and because it was convenient to manufacture in common gage sizes using 25 µm diameter wire. Modern instrumentation eliminates the 120 Ω requirement and allows much higher resistances where needed (for example, 5000 Ω).

Today, gage resistance is driven more by the overall size of the gage grid and by certain performance characteristics, along with a minor impact on price. Smaller gage sizes have traditionally required lower resistance values so that the grid design remains easy to manufacture. With Micro-Measurements' new Advance Sensors technology, however, this restriction is greatly relaxed, and very small grids (<1 mm x 1 mm) can readily be produced at 350 Ω or 1000 Ω. Depending on the actual gage design and construction, lower resistance values can also improve fatigue life (resistance to failure under high-cycle loading).

When a choice exists, the higher-resistance strain sensor (gage) is generally preferable, because the reduced current flow associated with higher resistance reduces gage self-heating. Since gage factor desensitization (signal attenuation) depends on the relative values of lead-wire and gage resistance, for any given lead-wire resistance, higher gage resistances (350 Ω and 1 kΩ) reduce the signal attenuation. If the strain gage circuit is in an electrically noisy environment, the signal-to-noise ratio can be improved with higher-resistance gages by using a higher excitation voltage without fear of gage self-heating (zero instability). This, however, has its limits: very high gage resistances (10 kΩ) can act like an antenna and pick up certain electrical noise, for example, 60 Hz hum from fluorescent lighting.
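The two effects described above can be put into numbers. As a minimal sketch (the formulas are standard, but the specific lead resistance and excitation values below are illustrative assumptions, not from this article): lead-wire desensitization scales the effective gage factor by R_gage / (R_gage + R_lead), and the power dissipated in one gage of an equal-arm quarter bridge is E² / (4·R_gage).

```python
def attenuation(r_gage: float, r_lead: float) -> float:
    """Fraction of the strain signal that survives lead-wire desensitization.

    r_lead is the total lead resistance in series with the gage in its
    bridge arm. Closer to 1.0 means less attenuation.
    """
    return r_gage / (r_gage + r_lead)


def gage_power(r_gage: float, excitation: float) -> float:
    """Power (W) dissipated in one gage of an equal-arm quarter bridge.

    P = E^2 / (4 * R_g); lower power means less self-heating.
    """
    return excitation ** 2 / (4.0 * r_gage)


# Illustrative comparison: 2 ohm total lead resistance, 5 V excitation.
for rg in (120.0, 350.0, 1000.0):
    print(f"{rg:6.0f} ohm gage: signal factor {attenuation(rg, 2.0):.4f}, "
          f"self-heating {gage_power(rg, 5.0) * 1e3:.1f} mW")
```

With these assumed values, the 1000 Ω gage both attenuates the signal least and dissipates the least power, which is the trade-off the text describes.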
Read more: Strain Gage (Strain Gauge) Technology (Tech Notes & Papers)