Decimation vs Interpolation in DSP

A MEMS microphone outputs audio at 3.072 MHz using PDM; getting it down to 48 kHz for a codec requires dividing the sample rate by 64 — that is decimation. Going the other direction, a CD player reads 44.1 kHz PCM and must drive a DAC at 176.4 kHz for smoother reconstruction — that is interpolation. Both operations are the backbone of multirate DSP and show up in every modern audio, modem, and software-defined radio pipeline.

Side-by-side comparison

| Parameter | Decimation | Interpolation in DSP |
|---|---|---|
| Operation | Reduce sample rate by factor M (downsample) | Increase sample rate by factor L (upsample) |
| Process | Lowpass filter → keep every M-th sample | Insert L−1 zeros between samples → lowpass filter |
| Filter Role | Anti-aliasing: remove energy above the new Nyquist | Anti-imaging: remove spectral images above the original Nyquist |
| Typical Factor | M = 2, 4, 8, 64 (PDM mic to PCM) | L = 2, 4 (CD 44.1 kHz → 176.4 kHz) |
| Effect on Spectrum | Spectrum compressed; images fold in if not filtered | Spectrum stretched; images appear at multiples of the original fs |
| Computational Load | Filter at high rate, then downsample — polyphase reduces ops | Upsample first, then filter — polyphase reduces ops |
| Nyquist Frequency After | fs_new/2 = fs_old/(2M) | fs_new/2 = L×fs_old/2 |
| Risk if Filter Omitted | Aliasing — high-frequency content folds into baseband | Imaging — spectral copies appear in the output |
| Common DSP ICs | ADAU1777 decimation filter, CIC decimators in sigma-delta ADCs | Interpolation filter in the CS43L22 audio DAC |
| Cascade Structure | Multistage: decimate by 2 three times rather than once by 8 | Multistage: interpolate by 2 repeatedly for efficiency |

Key differences

Decimation by M keeps one sample out of every M, but without lowpass filtering first, any signal component above fs/(2M) folds back as aliasing, permanently corrupting the decimated signal. Interpolation by L inserts L−1 zeros and then filters to remove spectral images at multiples of the original fs; omitting the filter leaves images that sound like harmonic distortion. Note the structural symmetry: in both cases the naive implementation filters at the higher of the two rates, decimation's anti-aliasing filter before the rate drop and interpolation's anti-imaging filter after the rate rise, which is what makes both expensive. Polyphase decomposition moves the arithmetic to the lower rate in each case, cutting computation by a factor of M or L.
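Both structures can be sketched in a few lines of NumPy. This is an illustrative model, not a production design: the windowed-sinc filter, tap count, and Hamming window are arbitrary choices, and the filter order shown (lowpass before downsampling, zero-stuffing before lowpass) is the point.

```python
import numpy as np

def lowpass_fir(num_taps, cutoff):
    # Windowed-sinc lowpass; cutoff is normalized to the sample rate (0..0.5).
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)
    return h * np.hamming(num_taps)

def decimate(x, M, num_taps=101):
    h = lowpass_fir(num_taps, 0.5 / M)   # anti-aliasing: cutoff at the NEW Nyquist
    y = np.convolve(x, h, mode="same")   # filter at the high rate first...
    return y[::M]                        # ...then keep every M-th sample

def interpolate(x, L, num_taps=101):
    up = np.zeros(len(x) * L)
    up[::L] = x                              # insert L-1 zeros between samples
    h = L * lowpass_fir(num_taps, 0.5 / L)   # anti-imaging; gain L restores amplitude
    return np.convolve(up, h, mode="same")
```

Swapping the filter and the rate changer in either function reproduces exactly the aliasing and imaging failures described above.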

When to use Decimation

Use decimation when interfacing high-rate digital sensors (PDM microphones, sigma-delta ADCs outputting at 3 MHz) to low-rate digital processors. CIC (Cascaded Integrator-Comb) filters are the standard decimation structure in these paths.
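A CIC decimator needs no multipliers at all, which is why it dominates these high-rate front ends. The following is a behavioral Python sketch: real CIC hardware uses fixed-width registers and relies on wrap-around two's-complement arithmetic, which this int64 model glosses over.

```python
import numpy as np

def cic_decimate(x, R, N=3):
    # N-stage CIC: N integrators at the high rate, downsample by R,
    # then N combs (first differences) at the low rate.
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):
        y = np.cumsum(y)                     # integrator: y[n] = y[n-1] + x[n]
    y = y[::R]                               # downsample by R
    for _ in range(N):
        y = np.diff(y, prepend=np.int64(0))  # comb: y[n] - y[n-1]
    return y                                 # DC gain is R**N; divide to normalize
```

Feeding a constant input shows the DC gain settling to R^N, which is why a scaling (or bit-growth budget of N·log2(R) bits) follows the comb section in practice.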

When to use Interpolation in DSP

Use interpolation when a DAC requires a higher sample rate than the digital source provides, such as driving the CS43L22 at 176.4 kHz from a 44.1 kHz I²S stream, or in digital upconversion before RF transmission in SDR systems.
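The imaging risk in such a DAC path is easy to demonstrate: zero-stuff a tone from 44.1 kHz to 176.4 kHz and skip the anti-imaging filter. The tone frequency and FFT length below are illustrative choices.

```python
import numpy as np

fs, L, f0 = 44100, 4, 1000.0
n = np.arange(4096)
x = np.sin(2 * np.pi * f0 * n / fs)

up = np.zeros(len(x) * L)
up[::L] = x                       # zero-stuffing only: the spectrum is replicated

spec = np.abs(np.fft.rfft(up * np.hanning(len(up))))
freqs = np.fft.rfftfreq(len(up), d=1.0 / (L * fs))

# An image of the tone sits near fs - f0 = 43.1 kHz, well above the original
# Nyquist, and it is as strong as the wanted 1 kHz component.
image_band = (freqs > 40e3) & (freqs < 46e3)
```

Running the anti-imaging lowpass over `up` before the FFT removes everything in `image_band`, leaving only the 1 kHz tone.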

Recommendation

Understand both as a pair — real systems chain them. Choose decimation after oversampled ADCs and interpolation before DACs or RF upconverters. Always implement the anti-aliasing or anti-imaging lowpass filter; skipping it is the single most common error in multirate design.

Exam tip: Examiners ask you to draw the decimation block diagram showing the order of lowpass filter then down-sampler, and interpolation as up-sampler then lowpass filter — reversing the order is a common wrong answer.

Interview tip: Interviewers at audio IC companies and modem designers ask why multistage decimation (e.g., ÷2 three times) is more efficient than a single ÷8 stage — the answer is that the FIR filter in each stage operates at a progressively lower rate, reducing total multiply-accumulate operations.
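The saving can be put in rough numbers with fred harris's rule of thumb for FIR length, N ≈ A_dB / (22 · Δf/fs_in). The passband edge (150 kHz) and 60 dB attenuation below are assumed values for illustration only; the structural conclusion does not depend on them.

```python
import numpy as np

# fred harris's rule of thumb for estimating FIR filter length.
def fir_taps(fs_in, f_pass, f_stop, atten_db=60.0):
    return int(np.ceil(atten_db / (22.0 * (f_stop - f_pass) / fs_in)))

fs, fp = 3.072e6, 150e3  # PDM-style input rate; assumed final passband edge

# Single-stage decimate-by-8: stopband at the new Nyquist, filter runs at fs.
n_single = fir_taps(fs, fp, fs / 16)
macs_single = n_single * fs  # multiply-accumulates per second

# Three decimate-by-2 stages: each stage only needs to stop the band that
# aliases onto the FINAL passband, so the early filters can be very short,
# and each successive stage runs at half the rate of the previous one.
macs_multi, rate = 0, fs
for _ in range(3):
    out_rate = rate / 2
    macs_multi += fir_taps(rate, fp, out_rate - fp) * rate
    rate = out_rate
```

Under these assumptions the single stage needs roughly 200 taps at the full 3.072 MHz rate, while the cascade's total MAC rate comes out an order of magnitude lower.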
