Interview questions & answers
Q1. What is resolution in an ADC and how does it affect measurement accuracy?
Resolution is the number of bits the ADC uses to represent the analog input; an N-bit ADC divides its full-scale input range into 2^N discrete levels, so a 12-bit ADC with a 3.3V reference has a step size of 3.3/4096 ≈ 0.8 mV. The MCP3204, a 12-bit SPI ADC, therefore resolves steps of about 0.8 mV from a 3.3V reference when measuring a pressure sensor output. Increasing resolution from 12 to 16 bits shrinks the LSB step by 16×, but also slows conversion in successive approximation ADCs because more clock cycles are needed per sample.
Follow-up: How does ADC resolution affect the dynamic range of a measurement system in dB?
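As a quick numeric check, the step-size arithmetic from the answer can be sketched in plain Python (values taken from the answer; nothing device-specific):

```python
# LSB (step) size of an ideal N-bit converter: Vref / 2**N
def lsb_volts(vref, bits):
    """Smallest voltage step an ideal N-bit converter can resolve."""
    return vref / (1 << bits)

lsb_12 = lsb_volts(3.3, 12)   # ~0.0008 V, i.e. ~0.8 mV per step
lsb_16 = lsb_volts(3.3, 16)   # 2**4 = 16x smaller than the 12-bit step
```

The 12-to-16-bit comparison shows why each added bit halves the step: the divisor doubles.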
Q2. What is quantization error in an ADC?
Quantization error is the difference between the actual analog input value and the nearest discrete digital level the ADC can represent, ranging from −0.5 LSB to +0.5 LSB in a properly calibrated converter. For a 10-bit ADC with a 5V reference, one LSB equals 4.88 mV, so quantization error is at most ±2.44 mV. Quantization error is irreducible and sets the fundamental noise floor of any ADC regardless of how low the circuit noise is.
Follow-up: How does oversampling and averaging reduce the effective quantization error in a measurement?
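A minimal sketch of ideal quantization (the rounding is the whole effect; clipping at the rails is assumed, no noise modeled):

```python
def quantize(v, vref, bits):
    """Round an input voltage to the nearest ideal ADC code and back."""
    lsb = vref / (1 << bits)
    code = min(max(round(v / lsb), 0), (1 << bits) - 1)  # clip to valid codes
    return code, code * lsb

# 10-bit, 5 V reference: LSB = 4.88 mV, so error is at most +/- 2.44 mV
code, v_quantized = quantize(2.500, 5.0, 10)
error = 2.500 - v_quantized   # bounded by +/- 0.5 LSB
```

For any in-range input the reconstruction error stays within ±0.5 LSB, matching the bound in the answer.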
Q3. Explain how a successive approximation register (SAR) ADC works.
A SAR ADC uses a DAC and a comparator in a binary search loop: it sets the MSB of a trial register, compares the DAC output against the input, keeps or clears that bit based on the comparison result, then repeats for each lower bit down to the LSB. The AD7980, a 16-bit SAR ADC from Analog Devices, resolves one bit per trial and so needs 16 comparator decisions per conversion. SAR ADCs are the dominant architecture in microcontroller embedded ADCs because they require minimal hardware and achieve medium speed with high resolution.
Follow-up: What is the role of the sample-and-hold circuit at the input of a SAR ADC?
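The binary-search loop described above can be simulated directly; this is an idealized model (perfect comparator and DAC, held input), not any specific part:

```python
def sar_convert(vin, vref, bits):
    """Successive-approximation conversion: one bit trial per clock,
    MSB first, keeping the bit only if the trial DAC level fits under vin."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)               # tentatively set this bit
        if trial * vref / (1 << bits) <= vin:   # comparator: DAC level <= input?
            code = trial                        # keep the bit
    return code

# 12-bit, 3.3 V reference: mid-scale input lands at code 2048
print(sar_convert(1.65, 3.3, 12))  # 2048
```

Note the loop runs exactly `bits` times, which is why an N-bit SAR conversion takes N comparator cycles.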
Q4. What is a flash ADC and when would you use it instead of a SAR ADC?
A flash ADC uses 2^N − 1 comparators in parallel, each connected to a different reference voltage tap on a resistor ladder, so all bit decisions are made simultaneously within a single comparator delay. Flash converters are chosen when conversion speed matters more than power or die area, which is why they appear in oscilloscope, video, and radar front ends; the TLC5510, an 8-bit 20 MSPS semi-flash converter, is a typical video-rate example. Because the comparator count doubles for every additional bit, practical flash ADCs rarely exceed 8 to 10 bits; higher-resolution high-speed parts such as the 12-bit 65 MSPS AD9226 use a pipeline architecture instead.
Follow-up: Why does a 12-bit flash ADC require 4095 comparators?
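The comparator-bank behavior can be modeled in a few lines (idealized thermometer-code converter, no specific device implied):

```python
def flash_convert(vin, vref, bits):
    """Flash conversion: 2**bits - 1 comparators fire in parallel; the
    output code is the count of ladder taps at or below the input."""
    n_comparators = (1 << bits) - 1
    taps = [(i + 1) * vref / (1 << bits) for i in range(n_comparators)]
    return sum(1 for t in taps if vin >= t)     # thermometer code -> binary

print((1 << 12) - 1)  # 4095 comparators needed for a 12-bit flash
```

The `2**bits - 1` tap list makes the follow-up's arithmetic concrete: each added bit doubles the comparator count.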
Q5. What is a sigma-delta ADC and what applications is it best suited for?
A sigma-delta ADC oversamples the input at a very high rate with a 1-bit quantizer and uses digital filtering and decimation to produce a high-resolution output at a lower sample rate, trading speed for resolution. The ADS1256 is a 24-bit sigma-delta ADC that achieves noise-free resolution beyond 21 bits at 10 SPS for precision weighing scales. Sigma-delta ADCs dominate audio codecs and precision measurement instruments because noise shaping pushes quantization noise above the signal band, where the decimation filter removes it.
Follow-up: What is the decimation filter in a sigma-delta ADC and why is it necessary?
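A toy first-order modulator shows the oversample-then-average idea; this sketch uses a plain running average in place of a real decimation filter, and assumes an ideal integrator and DC input:

```python
def sigma_delta(vin, vref, n_samples):
    """First-order sigma-delta modulator: the integrator accumulates the
    error between the input and the 1-bit feedback DAC; the density of
    ones in the bitstream tracks vin / vref."""
    integrator, bit, ones = 0.0, 0, 0
    for _ in range(n_samples):
        integrator += vin - bit * vref      # loop filter integrates the error
        bit = 1 if integrator >= 0 else 0   # 1-bit quantizer
        ones += bit
    return ones / n_samples * vref          # averaging stands in for decimation

estimate = sigma_delta(1.0, 4.0, 4000)      # converges toward 1.0 V
```

More samples per output (a higher oversampling ratio) tighten the estimate, which is the speed-for-resolution trade described above.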
Q6. What is an R-2R ladder DAC and how does it work?
An R-2R ladder DAC uses a resistor network of only two resistance values (R and 2R) arranged in a ladder topology so that each successive bit contributes exactly half the current of the previous bit, producing an output voltage proportional to the binary input word. The DAC0808 is an 8-bit R-2R ladder DAC that produces an output current from 0 to 255/256 of the full-scale reference current. The R-2R network is preferred over a binary-weighted resistor DAC because only two resistor values are needed regardless of resolution, making accurate matching practical.
Follow-up: What is the main source of nonlinearity in an R-2R ladder DAC?
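The halving-per-bit weighting can be checked numerically; this models the ideal ladder only (perfect resistor matching assumed):

```python
def r2r_dac_out(code, vref, bits):
    """Ideal R-2R ladder output: bit b contributes vref / 2**(bits - b),
    i.e. each bit carries half the weight of the bit above it."""
    assert 0 <= code < (1 << bits)
    return sum(((code >> b) & 1) * vref / (1 << (bits - b))
               for b in range(bits))

# Full scale of an 8-bit ladder is 255/256 of the reference, as for the DAC0808
v_full = r2r_dac_out(255, 5.0, 8)
```

Summing the per-bit weights reproduces Vref × code / 2^N, which is why the full-scale output stops one LSB short of the reference.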
Q7. What is the Nyquist theorem and how does it apply to ADC selection?
The Nyquist theorem states that a signal must be sampled at a rate at least twice its highest frequency component to be reconstructed without aliasing. An audio ADC such as the PCM1808 sampling at 48 kHz can in principle capture content up to 24 kHz, and its anti-aliasing filter must roll off sharply above the ~20 kHz audio band. If an ADC's sample rate is below twice the input signal bandwidth, high-frequency components fold back into the baseband and appear as spurious signals indistinguishable from real data.
Follow-up: What is the purpose of an anti-aliasing filter and at what frequency should it be set for a 10 kHz bandwidth system?
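The fold-back arithmetic is easy to compute; this helper assumes a real (not complex) sampled signal, so aliases reflect into 0 to fs/2:

```python
def alias_frequency(f_in, f_sample):
    """Apparent frequency of an undersampled tone after spectral folding."""
    f = f_in % f_sample
    return min(f, f_sample - f)   # reflect into the first Nyquist zone

print(alias_frequency(60_000, 44_100))  # a 60 kHz tone aliases to 15900 Hz
```

A tone already below fs/2 passes through unchanged, which is the no-aliasing case the theorem guarantees.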
Q8. What is integral nonlinearity (INL) and differential nonlinearity (DNL) in a DAC?
DNL is the deviation of any step size from the ideal 1 LSB step between adjacent output codes, while INL is the cumulative deviation of the actual transfer curve from the ideal straight line between zero-scale and full-scale. In a DAC, a DNL of −1 LSB means two consecutive input codes produce the same output voltage, and DNL more negative than −1 LSB makes the output non-monotonic, which is fatal in control applications (a "missing code" is the analogous defect in an ADC). INL greater than ±0.5 LSB in a DAC used for motor current control produces a repeatable, code-dependent position error in servo systems.
Follow-up: What DNL bound guarantees that a DAC output remains monotonic?
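Both figures can be computed from a measured code-by-code sweep; this sketch compares against the ideal LSB directly (a simplification — datasheets usually use an endpoint or best-fit line for INL):

```python
def dnl_inl(outputs, lsb):
    """DNL/INL in LSB from measured DAC output voltages, one per code,
    against an ideal step of `lsb` volts and an ideal zero-based line."""
    dnl = [(outputs[i + 1] - outputs[i]) / lsb - 1
           for i in range(len(outputs) - 1)]
    inl = [(v - i * lsb) / lsb for i, v in enumerate(outputs)]
    return dnl, inl

# A 4-code DAC whose step from code 1 to 2 is only half the ideal size
dnl, inl = dnl_inl([0.0, 1.0, 1.5, 2.5], lsb=1.0)
```

Note how one undersized step (DNL = −0.5) shifts every later INL value: DNL errors accumulate along the code axis into INL.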
Q9. What is the difference between a voltage-output DAC and a current-output DAC?
A current-output DAC produces an output current proportional to the digital code and requires an external transimpedance amplifier to convert it to voltage, while a voltage-output DAC includes an internal output buffer and drives the load directly. The DAC0808 is a current-output DAC that needs an op-amp like the LM741 on its output to produce a usable voltage. Current-output DACs are faster because they avoid the settling time of the internal output buffer, making them preferred in high-speed signal synthesis applications.
Follow-up: What op-amp specification is most critical when choosing a transimpedance amplifier for a current-output DAC?
Q10. What is glitch energy in a DAC and how is it minimized?
Glitch energy occurs when multiple bits of a DAC switch at a code transition and settle at slightly different rates, causing a momentary spurious spike on the output; the more bits that switch at once, the worse the potential glitch. The worst glitch in an R-2R DAC occurs at the major carry transition (01111111 to 10000000), when all 8 bits change simultaneously. A deglitching sample-and-hold circuit on the DAC output holds the previous value during the transition and acquires the new one only after full settling, eliminating glitch artifacts in audio and waveform synthesis.
Follow-up: Why is the major carry transition specifically the worst-case glitch point in a binary DAC?
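A quick way to see why the major carry is worst: count the bits that toggle at each transition (a proxy for glitch exposure, not a physical energy model):

```python
def bits_switching(code_a, code_b):
    """Number of DAC bits that change state at a code transition;
    glitch energy tends to peak where this count peaks."""
    return bin(code_a ^ code_b).count("1")

print(bits_switching(0b01111111, 0b10000000))  # 8 - every bit toggles
print(bits_switching(0b00000100, 0b00000101))  # 1 - only the LSB toggles
```

The XOR isolates exactly the changed bits, and no single-step transition toggles more bits than the major carry.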
Q11. What is the ENOB (Effective Number of Bits) of an ADC?
ENOB is the actual resolution the ADC achieves in practice, accounting for all noise and distortion sources, calculated as ENOB = (SINAD − 1.76) / 6.02, where SINAD is signal-to-noise-and-distortion ratio in dB. An ADC0804 nominally rated at 8 bits may achieve only 7.4 ENOB at higher input frequencies due to aperture jitter and comparator noise. ENOB decreases with increasing input frequency, which is why ADC datasheets typically plot ENOB versus input frequency.
Follow-up: How does aperture jitter of the sample-and-hold circuit reduce ENOB at high input frequencies?
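The ENOB formula from the answer, as a one-liner with a sanity check against the ideal case:

```python
def enob(sinad_db):
    """Effective number of bits from SINAD in dB: (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

# An ideal 8-bit converter has SINAD = 6.02 * 8 + 1.76 = 49.92 dB,
# so the formula should recover exactly 8 bits
ideal_8bit = enob(49.92)
```

Inverting the same relation (SINAD = 6.02·N + 1.76) is how the "ideal" dB figures quoted in datasheets are derived.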
Q12. How is a DAC used to generate a sine wave in a direct digital synthesis system?
A phase accumulator increments a phase register each clock cycle; the upper bits index a sine lookup table stored in ROM, and the table output drives a DAC whose output is filtered to remove DAC images. The AD9833 is a DDS waveform generator IC that uses this architecture to produce sine waves from 0 to 12.5 MHz with 28-bit frequency resolution at a 25 MHz clock. The output low-pass filter is critical because the DAC output is a staircase approximation containing images around multiples of the update rate (n·f_clk ± f_out).
Follow-up: What determines the frequency resolution of a DDS system?
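The accumulator-plus-lookup-table loop can be modeled directly; the accumulator/table widths below are illustrative, not the AD9833's internals:

```python
import math

def dds_samples(tuning_word, acc_bits, lut_bits, n):
    """Phase-accumulator DDS: the top `lut_bits` bits of the accumulator
    index a sine table; output frequency = tuning_word * f_clk / 2**acc_bits."""
    lut = [math.sin(2 * math.pi * i / (1 << lut_bits))
           for i in range(1 << lut_bits)]
    acc, out = 0, []
    for _ in range(n):
        out.append(lut[acc >> (acc_bits - lut_bits)])       # coarse phase -> sine
        acc = (acc + tuning_word) & ((1 << acc_bits) - 1)   # wrap the phase
    return out

# Frequency resolution of a 28-bit accumulator at a 25 MHz clock:
print(25e6 / (1 << 28))  # one tuning-word step is roughly 0.093 Hz
```

The wrap-around of the accumulator is what makes the output periodic, and the step `f_clk / 2^acc_bits` answers the follow-up.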
Q13. What causes offset error and gain error in an ADC?
Offset error is a constant shift of the entire transfer curve away from zero caused by comparator input offset voltage or reference bias currents, while gain error is a slope deviation of the transfer curve from ideal caused by reference voltage inaccuracy. A 12-bit ADC with 2 LSB offset error always reads 2 counts high regardless of input level, while a 1% gain error causes the full-scale reading to be off by 41 counts. Both errors can be corrected in firmware using a two-point calibration with known reference voltages applied to the ADC input.
Follow-up: How do you perform a two-point calibration to correct both offset and gain error in an ADC?
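The two-point correction reduces to fitting a line through two known points; the count/voltage pairs below are made-up illustration values:

```python
def two_point_cal(code_lo, v_lo, code_hi, v_hi):
    """Derive gain and offset from two known reference voltages so that
    volts = gain * raw_code + offset cancels both error terms."""
    gain = (v_hi - v_lo) / (code_hi - code_lo)
    offset = v_lo - gain * code_lo
    return gain, offset

# Hypothetical readings: 45 counts at 0.100 V, 3722 counts at 3.000 V
gain, offset = two_point_cal(45, 0.100, 3722, 3.000)
corrected = gain * 2048 + offset   # calibrated voltage for a raw mid-range code
```

Applying the fit to the two calibration points returns the reference voltages exactly, confirming both offset and gain are removed at those points.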
Q14. What is the difference between single-ended and differential ADC inputs?
A single-ended ADC input measures voltage relative to a common analog ground, making it susceptible to ground noise, while a differential input measures the voltage difference between two signal lines, rejecting common-mode noise that appears equally on both lines. The ADS1115 provides differential input measurement with over 90 dB common-mode rejection, making it suitable for sensing thermocouple voltages in noisy industrial environments. For measurements where signal cables run more than 30 cm near switching power supplies, a differential input is strongly advisable.
Follow-up: What is common-mode rejection ratio (CMRR) and how does it affect ADC measurement accuracy in a noisy environment?
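A sketch of how a CMRR figure translates into measurement error (idealized: CMRR treated as a single frequency-independent number):

```python
def cm_error_volts(v_common_mode, cmrr_db):
    """Input-referred error that a common-mode voltage produces,
    given the front end's CMRR in dB (20*log10 voltage ratio)."""
    return v_common_mode / (10 ** (cmrr_db / 20))

# 1 V of common-mode noise through 90 dB of rejection leaks only ~32 uV
leak = cm_error_volts(1.0, 90)
```

Comparing the leaked error against one LSB of the target converter shows how much CMRR a given noise environment actually demands.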
Q15. What is pipeline ADC architecture and what advantage does it have over SAR?
A pipeline ADC splits the conversion into multiple stages, each of which digitizes a few bits and passes an amplified residue to the next stage, allowing a new sample to enter the first stage every clock cycle, so throughput equals the clock rate despite multi-cycle latency. The AD9224, a 12-bit 40 MSPS pipeline ADC, keeps several samples in flight across its stages at once, delivering one result per clock after an initial latency of a few clock cycles. Pipeline ADCs achieve much higher throughput than SAR ADCs of the same resolution because successive samples are processed in parallel across the stages.
Follow-up: What is pipeline latency and why does it matter in closed-loop control system ADC selection?
Common misconceptions
Misconception: Higher ADC resolution always means higher measurement accuracy.
Correct: Resolution determines the smallest detectable step, but accuracy also depends on INL, offset error, noise, and reference voltage stability, which can degrade a 16-bit ADC to 12-bit effective accuracy.
Misconception: A DAC output is a perfect analog signal.
Correct: A DAC output is a staircase waveform requiring a reconstruction low-pass filter to remove the sampling-frequency images before it can be used as a clean analog signal.
Misconception: Sampling at exactly twice the signal frequency satisfies the Nyquist criterion in practice.
Correct: Exactly 2× sampling is the theoretical minimum; practical systems sample at 5× to 10× or use sigma-delta oversampling to allow realizable anti-aliasing filter roll-off slopes.
Misconception: An R-2R ladder DAC needs as many different resistor values as it has bits.
Correct: An R-2R ladder uses only two resistor values regardless of resolution, which is why it is preferred over binary-weighted resistor DACs beyond 4 bits.