How it works
Gross errors arise from human mistakes such as misreading a scale, selecting the wrong range, or recording a digit incorrectly; they are reduced by care and by taking repeated readings.

Systematic (deterministic) errors have a fixed, reproducible pattern: zero error, span error, loading error (a voltmeter drawing current alters the circuit it measures), and environmental errors (e.g. temperature drift in a bridge).

Random errors follow a Gaussian distribution; the mean μ and standard deviation σ are estimated from N repeated readings as x̄ = Σxi/N and S = √(Σ(xi − x̄)²/(N − 1)).

Error propagation: if Z = f(X, Y) with independent errors δX and δY, then δZ = √((∂f/∂X·δX)² + (∂f/∂Y·δY)²).
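A minimal numeric sketch of these estimators and the propagation rule. The readings, the choice Z = X·Y, and the uncertainty values δX and δY are illustrative assumptions, not from the text:

    import math

    # N repeated readings of the same quantity (illustrative values)
    x = [24.6, 24.8, 24.7, 24.9, 24.5]
    N = len(x)

    x_bar = sum(x) / N                                          # x̄ = Σxi/N
    S = math.sqrt(sum((xi - x_bar) ** 2 for xi in x) / (N - 1)) # sample std, N−1 divisor

    # Error propagation for Z = X·Y with independent errors:
    # ∂Z/∂X = Y and ∂Z/∂Y = X, so δZ = √((Y·δX)² + (X·δY)²)
    X, dX = 24.7, 0.1    # e.g. a voltage and its uncertainty (assumed)
    Y, dY = 2.0, 0.05    # e.g. a current and its uncertainty (assumed)
    dZ = math.sqrt((Y * dX) ** 2 + (X * dY) ** 2)

    print(f"mean = {x_bar:.3f}, S = {S:.3f}, δZ = {dZ:.3f}")

For a sum Z = X + Y the same rule reduces to δZ = √(δX² + δY²), which is why independent errors add in quadrature rather than directly.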
Key points to remember
Accuracy refers to how close a measurement is to the true value; precision refers to repeatability. A precise instrument can still be inaccurate if it carries a systematic error.

Percentage error = (|measured − true|/true) × 100%. For a voltmeter reading 24.7 V when the true value is 25 V, percentage error = (0.3/25) × 100% = 1.2%.

The probable error is 0.6745σ; it defines the interval within which 50% of the readings lie.

Loading error in a voltmeter is minimised by a high internal resistance: a 20 kΩ/V meter loads a circuit far less than a 1 kΩ/V meter on the same range, because its internal resistance (sensitivity × range) is 20 times higher.

Resolution is the smallest change the instrument can detect, while threshold is the minimum input that produces a detectable output; the two are often confused in exam answers.
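A short sketch of the percentage-error and loading-error arithmetic. The supply voltage, divider resistors, and 10 V range are assumptions chosen for illustration:

    # Percentage error for the voltmeter example above
    true, measured = 25.0, 24.7
    pct_err = abs(measured - true) / true * 100      # → 1.2 %
    print(f"percentage error = {pct_err:.1f} %")

    # Loading error: a meter of sensitivity s (Ω/V) on a given range has
    # internal resistance R_m = s × range. Placed across R2 in a divider:
    def reading(V_s, R1, R2, R_m):
        R_p = R2 * R_m / (R2 + R_m)                  # R2 in parallel with the meter
        return V_s * R_p / (R1 + R_p)

    V_s, R1, R2 = 20.0, 100e3, 100e3                 # true voltage across R2 = 10 V
    for s in (1e3, 20e3):                            # 1 kΩ/V vs 20 kΩ/V, 10 V range
        R_m = s * 10
        print(f"{s / 1e3:.0f} kΩ/V meter reads {reading(V_s, R1, R2, R_m):.2f} V")

With these assumed values the 1 kΩ/V meter reads about 1.67 V against a true 10 V, while the 20 kΩ/V meter reads about 8.0 V: the same circuit, drastically different loading error.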
Exam tip
Examiners frequently ask you to distinguish accuracy from precision using the target-diagram analogy, and then to give a numerical example of loading error when a voltmeter of known internal resistance is connected across a resistor in a circuit.
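A worked example of the kind the tip describes, with all values chosen purely for illustration: a 10 kΩ/V meter on its 10 V range has R_m = 10 kΩ/V × 10 V = 100 kΩ. Connected across the lower 100 kΩ resistor of a divider made of two 100 kΩ resistors across a 20 V supply, the true voltage is 10 V, but R2 ∥ R_m = 50 kΩ, so the meter reads 20 × 50/(100 + 50) = 6.67 V, a loading error of about 33%.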