
Memory Types Interview Questions

Memory types questions appear in interviews at semiconductor companies like Samsung and Qualcomm, at embedded systems roles in Bosch and Texas Instruments, and in IT company aptitude-plus-technical rounds at TCS and Infosys. The topic typically surfaces in the first or second technical round and connects directly to microprocessor architecture, embedded firmware design, and SoC memory subsystem questions.

EEE, ECE, EI

Interview questions & answers

Q1. What is the difference between SRAM and DRAM?

SRAM stores each bit in a 6-transistor flip-flop and holds data for as long as power is applied, without refreshing, while DRAM stores each bit as charge on a capacitor that leaks and must be refreshed every few milliseconds. An ISSI IS61WV5128BLL is a 4 Mbit SRAM used as cache in embedded processors because its access time is under 10 ns with no refresh overhead. DRAM's need for a refresh controller, and the timing complexity that adds, is why SRAM is the standard choice for on-chip cache while DRAM serves as large off-chip main memory.
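The typical refresh interval can be estimated with simple arithmetic. The figures below are assumptions (a 64 ms retention window and 8192 rows are common DDR4 values, not taken from a specific datasheet):

```python
# Back-of-envelope DRAM refresh timing (assumed figures, typical of DDR4:
# all rows must be refreshed within a 64 ms window, 8192 rows round-robin).
RETENTION_MS = 64        # retention window within which every row needs refresh
ROWS = 8192              # rows covered by the refresh counter

# Average interval between refresh (REF) commands, known as tREFI
trefi_us = RETENTION_MS * 1000 / ROWS
print(f"tREFI = {trefi_us:.4f} us")   # 7.8125 us between REF commands
```

This is why DDR4 controllers issue a refresh command roughly every 7.8 µs under normal operating temperature.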

Follow-up: Why does DRAM require refresh and at what typical interval does a DDR4 module perform it?

Q2. What is the difference between NOR flash and NAND flash memory?

NOR flash connects each cell between the bit line and ground, allowing random byte-level reads with fast access (typically around 90 ns), while NAND flash strings many cells in series, requiring page-level reads but achieving much higher density and lower cost per bit. The W25Q128 is a NOR flash used for storing MCU firmware because code can execute in place (XIP) directly from it. NAND flash (such as a Micron MT29F) is used for mass storage in SSDs and eMMC, where sequential large-block writes dominate and random-access latency is acceptable.

Follow-up: Why can firmware not execute in place from NAND flash without a copy-to-RAM step?

Q3. What is EEPROM and how does it differ from flash memory?

EEPROM allows individual bytes to be erased and rewritten electrically without erasing an entire sector, while flash memory requires erasing a minimum block (typically 4 KB to 256 KB) before writing. The AT24C256 is a 256 Kbit I2C EEPROM used to store calibration constants in sensor modules because a single byte can be updated without disturbing adjacent data. Flash is preferred for large firmware storage because its block architecture achieves higher density, but EEPROM is chosen whenever byte-granular updates are required.
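The practical cost of the architectural difference can be shown with a toy model. The 4 KB sector size is an assumption, and this is a cost sketch, not a flash driver:

```python
# Toy cost model: bytes touched to change ONE byte of stored data.
SECTOR = 4 * 1024   # assumed minimum flash erase unit (4 KB)

def flash_update_one_byte(sector_size=SECTOR):
    # Flash: read the whole sector, erase it, write it all back modified.
    bytes_read = sector_size
    bytes_erased = sector_size
    bytes_written = sector_size
    return bytes_read + bytes_erased + bytes_written

def eeprom_update_one_byte():
    # EEPROM: byte-granular write, adjacent data untouched.
    return 1

print(flash_update_one_byte())   # 12288 bytes of traffic for a 1-byte change
print(eeprom_update_one_byte())  # 1
```

The model also hints at the endurance implication: every single-byte update on flash consumes one erase cycle of an entire sector.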

Follow-up: What is the typical endurance rating of an EEPROM in write cycles and why does it matter?

Q4. What is the difference between volatile and non-volatile memory?

Volatile memory loses its data when power is removed, while non-volatile memory retains data indefinitely without power. SRAM and DRAM are volatile, whereas Flash, EEPROM, and ROM are non-volatile. In an STM32 microcontroller, the 256 KB Flash retains the firmware image through power cycles while the 64 KB SRAM holds runtime variables and stack that are re-initialized at every reset.

Follow-up: Name a memory technology that is non-volatile and also has byte-level random-access write capability comparable to SRAM.

Q5. What is cache memory and what problem does it solve?

Cache is a small, fast SRAM buffer placed between the CPU and slower main memory that stores recently used instructions and data, reducing average memory access latency by exploiting temporal and spatial locality. A Cortex-M7 processor has a 16 KB instruction cache and a 16 KB data cache that allow code in external SDRAM to run at near-full 400 MHz processor speed instead of being limited by the SDRAM's 60 ns access time. Without cache, every instruction fetch from off-chip DDR4 would stall the CPU for 40–100 cycles.
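The benefit can be quantified with the standard average memory access time (AMAT) formula; the hit time, miss rate, and miss penalty below are illustrative, not measured on any specific part:

```python
# AMAT = hit_time + miss_rate * miss_penalty (all in CPU cycles here).
def amat(hit_time_cycles, miss_rate, miss_penalty_cycles):
    return hit_time_cycles + miss_rate * miss_penalty_cycles

with_cache = amat(1, 0.05, 60)   # 1-cycle hit, 5% miss rate, 60-cycle refill
no_cache   = amat(60, 0.0, 0)    # every access pays the external-memory latency
print(with_cache, no_cache)      # 4.0 vs 60
```

Even with a modest 95% hit rate, the average access cost drops by an order of magnitude, which is the whole argument for the cache.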

Follow-up: What is cache coherency and why does it become a problem in multi-core processors?

Q6. What is a memory hierarchy and why is it organized as a pyramid?

A memory hierarchy organizes storage from the fastest, smallest, most expensive (registers → L1 cache → L2 cache → L3 cache → DRAM → SSD) to the slowest, largest, cheapest, with each level acting as a buffer for the next. An Intel Core i7 has 32 KB of L1 cache per core, 256 KB of L2, and 8 MB of shared L3 cache before reaching DDR4 main memory. The pyramid shape reflects the fundamental tradeoff that higher-speed storage needs larger cells (a 6-transistor SRAM cell versus a 1-transistor, 1-capacitor DRAM cell), giving lower density and higher cost per bit.

Follow-up: How does the principle of locality justify organizing memory as a hierarchy?

Q7. What is ROM and what are its different types?

ROM (Read-Only Memory) stores data permanently written during manufacturing or programming; types include mask ROM (programmed in fabrication), PROM (one-time programmable by the user with fuses), EPROM (erasable with UV light), and EEPROM (electrically erasable). The 27C256 is a UV-erasable EPROM used in older embedded systems where firmware was burned once and rarely changed, requiring a quartz window on the package to expose the die to UV light. Modern designs have replaced discrete EPROM with on-chip Flash because it eliminates the UV erasing step and the need for a separate programmer.

Follow-up: Why does an EPROM require UV light for erasure while EEPROM uses electrical signals?

Q8. What is DDR4 and how does it differ from DDR3 RAM?

DDR4 operates at a supply voltage of 1.2 V versus DDR3's 1.5 V, uses bank groups to allow four simultaneous bursts instead of DDR3's two, and runs at data rates from 2133 to 3200 MT/s versus DDR3's 800 to 2133 MT/s. A DDR4-3200 DIMM transfers data at 25.6 GB/s on a 64-bit-wide bus, compared to 12.8 GB/s for a DDR3-1600 DIMM. The bank-group architecture in DDR4 reduces the minimum latency between back-to-back accesses to different banks, which is critical for modern multi-core processor workloads.
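The bandwidth figures above follow directly from transfer rate times bus width, which makes a quick sanity check easy:

```python
# Peak bandwidth = transfer rate (MT/s) x bus width (bytes per transfer).
def peak_bw_gb_s(mt_per_s, bus_bits=64):
    bytes_per_transfer = bus_bits // 8
    return mt_per_s * bytes_per_transfer / 1000   # MB/s -> GB/s

print(peak_bw_gb_s(3200))  # 25.6  (DDR4-3200 on a 64-bit DIMM)
print(peak_bw_gb_s(1600))  # 12.8  (DDR3-1600 on a 64-bit DIMM)
```

Note these are theoretical peaks; sustained bandwidth is lower once refresh, bank conflicts, and turnaround cycles are accounted for.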

Follow-up: What is CAS latency in DDR memory and how do you compare two DDR4 kits with different speeds and CAS latencies?

Q9. What is wear leveling in flash memory and why is it necessary?

Wear leveling is a firmware algorithm in flash controllers that distributes write and erase operations evenly across all flash blocks to prevent any single block from reaching its erase cycle limit before others. NAND flash cells wear out after 3,000 to 10,000 program-erase cycles; without wear leveling, file system metadata blocks written frequently would fail while the rest of the flash remained unused. The flash controller in a Samsung 970 EVO SSD implements dynamic and static wear leveling to ensure uniform block aging across the entire 1 TB capacity.
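A minimal sketch of the dynamic wear-leveling idea follows; the block numbers and erase counts are made up, and a real controller also remaps logical addresses through a flash translation layer:

```python
# Dynamic wear leveling, reduced to its core decision: when a write needs a
# fresh block, allocate the FREE block with the fewest program/erase cycles.
erase_counts = {0: 12, 1: 3, 2: 3, 3: 40}   # block number -> P/E cycles so far
free_blocks = {1, 2, 3}                     # blocks currently holding no data

def pick_block(free, counts):
    # Least-worn free block wins, so wear spreads across the device.
    return min(free, key=lambda b: counts[b])

blk = pick_block(free_blocks, erase_counts)
print(blk)   # block 1 or 2 (3 erases each) -- never the hot block 3
```

Static wear leveling (the follow-up question) goes further by also relocating cold, rarely rewritten data out of low-count blocks so those blocks rejoin the free pool.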

Follow-up: What is the difference between dynamic wear leveling and static wear leveling?

Q10. What is the difference between synchronous and asynchronous memory?

Synchronous memory (such as SDRAM or synchronous burst SRAM) uses a clock signal to coordinate data transfers with the processor bus, allowing pipelined access and burst modes, while asynchronous memory responds to address and control signals after a fixed propagation delay, with no clock. PSRAM like the ESP-PSRAM64H used with the ESP32 is synchronous, allowing burst reads at the SPI clock rate, whereas an old EPROM like the 27C256 is asynchronous, responding within its access time (typically 70–200 ns) after the address is valid. Modern processors use synchronous memory almost exclusively because it allows precise timing margins and burst pipelining.

Follow-up: What is a burst access in synchronous SRAM and how does it improve effective bandwidth?

Q11. What is FRAM (Ferroelectric RAM) and what are its advantages?

FRAM stores bits as polarization states of a ferroelectric capacitor, providing non-volatile data retention, byte-level write granularity, and endurance exceeding 10^14 write cycles — far higher than the 10^5 cycles of Flash or EEPROM. The FM25V10 is a 1 Mbit SPI FRAM from Cypress that writes at full SPI clock speed with no erase cycle delay, unlike Flash. FRAM is used in energy-metering ICs and industrial data loggers where high-frequency non-volatile writes are needed without the endurance limitations of Flash or EEPROM.

Follow-up: What is the main disadvantage of FRAM compared to NAND flash for high-density storage applications?

Q12. What is virtual memory and how is it related to physical RAM?

Virtual memory uses the MMU to map each process's virtual address space to physical RAM locations, allowing processes to use more memory than physically installed by swapping pages to disk and giving each process an isolated address space. A Linux system with 4 GB RAM can run processes requiring 8 GB total virtual address space because inactive pages are swapped to a disk partition. Without the MMU performing virtual-to-physical translation, a bug in one process could corrupt another process's memory, making virtual memory essential for multi-process operating system stability.
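The translation step can be sketched with a single-level lookup. This is a simplification (a real MMU walks multi-level page tables and checks permission bits), and the mappings below are invented for illustration:

```python
# Simplified virtual -> physical translation with 4 KB pages.
PAGE = 4096
page_table = {0x00002: 0x0007A, 0x00003: 0x00011}  # VPN -> PFN (made-up entries)

def translate(vaddr):
    vpn, offset = vaddr // PAGE, vaddr % PAGE      # split address at page boundary
    if vpn not in page_table:
        # In a real OS this traps to the kernel, which maps or swaps in the page.
        raise MemoryError("page fault")
    return page_table[vpn] * PAGE + offset

print(hex(translate(0x2ABC)))   # 0x7aabc: VPN 2 maps to PFN 0x7A, offset kept
```

The page offset passes through untranslated, which is why page sizes are powers of two: the split is just a shift and a mask in hardware.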

Follow-up: What is a TLB (Translation Lookaside Buffer) and why is it necessary for virtual memory performance?

Q13. How does ECC memory work and when is it required?

ECC (Error Correcting Code) memory adds extra bits (typically 8 for a 64-bit word) storing a Hamming code that allows the memory controller to detect and correct single-bit errors and detect double-bit errors on every read. A registered ECC DDR4 DIMM used in a server corrects soft errors caused by cosmic ray-induced bit flips that occur roughly once per 1 GB of DRAM per month. ECC is mandatory in servers, medical equipment, and aerospace systems where silent data corruption causes catastrophic failures; consumer PCs omit it to reduce cost.
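The correction mechanism can be illustrated with a toy Hamming(7,4) code, the same principle an ECC controller applies per 64-bit word with 8 check bits; this is a teaching sketch, not a memory-controller implementation:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits at positions 1, 2, 4.
def encode(d):                        # d = list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def correct(c):                       # c = 7-bit codeword, at most 1 bit flipped
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # re-check parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # re-check parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # re-check parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3   # nonzero = 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]   # recover the 4 data bits

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate a radiation-induced bit flip
print(correct(word))                  # [1, 0, 1, 1] -- corrected transparently
```

SECDED codes used in real DIMMs extend this with one extra overall parity bit so that double-bit errors are detected (and reported) rather than silently miscorrected.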

Follow-up: What is the difference between SECDED ECC (single error correct, double error detect) and Chipkill ECC?

Q14. What is LPDDR memory and where is it used?

LPDDR (Low Power Double Data Rate) is a DRAM standard optimized for mobile applications, with lower operating voltage (1.1 V for LPDDR5) and support for partial-array self-refresh to power down unused banks during idle periods. Samsung LPDDR5 memory integrated in the Exynos 2200 SoC operates at 6400 MT/s while consuming significantly less power than desktop DDR5 under similar workloads. LPDDR is used predominantly in smartphones, tablets, and thin laptops, where battery life and thermal constraints dominate the memory selection criteria.

Follow-up: What is the self-refresh mode in LPDDR and how does it reduce power consumption during standby?

Q15. What is the difference between word-addressable and byte-addressable memory?

Byte-addressable memory assigns a unique address to every individual byte, while word-addressable memory assigns one address to a multi-byte word (typically 16 or 32 bits), requiring extra handling to access individual bytes. Almost all modern processors, including ARM Cortex-M and x86, use byte addressing, where a 32-bit read at address 0x20000000 fetches the bytes at 0x20000000 through 0x20000003 in one access. Word-addressable DSP processors like the TMS320C28x use 16-bit word addressing, which must be accounted for when porting byte-oriented C string code.
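A tiny sketch of the address arithmetic a compiler must emit on a 16-bit word-addressable machine (illustrative of the idea, not the actual TMS320C28x ABI):

```python
# On a 16-bit word-addressable machine, byte N of a byte stream lives at
# word address N // 2, in the low or high byte lane of that word.
def word_access_for_byte(n):
    return n // 2, n % 2     # (word address, byte lane: 0 = low, 1 = high)

print(word_access_for_byte(5))   # (2, 1): byte 5 is the high lane of word 2
print(word_access_for_byte(4))   # (2, 0): byte 4 is the low lane of word 2
```

This divide-and-select overhead on every byte access is exactly what makes porting byte-oriented string code to such DSPs error-prone (on the C28x, `sizeof(char)` is 1 but a char occupies 16 bits).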

Follow-up: What is an unaligned memory access and why can it cause a fault on ARM Cortex-M processors?

Common misconceptions

Misconception: Flash memory and EEPROM are the same thing because both are non-volatile and electrically erasable.

Correct: EEPROM supports byte-level erase and write, while Flash requires erasing a minimum sector before writing, making them architecturally distinct despite both being electrically erasable.

Misconception: SRAM is faster than DRAM only because it does not need refresh.

Correct: SRAM is faster primarily because its 6-transistor flip-flop cell has lower access latency than DRAM's capacitor charge sensing; the absence of refresh is a secondary advantage.

Misconception: Cache memory is a type of storage that supplements RAM when RAM is full.

Correct: Cache is a speed buffer between the CPU and RAM that stores recently accessed data to reduce average access latency; it does not extend memory capacity like virtual memory does.

Misconception: NOR flash is better than NAND flash for all embedded applications.

Correct: NOR flash is better for code storage requiring random read access, but NAND flash offers far higher density and lower cost per bit for data storage, making each architecture optimal for different use cases.

Quick one-liners

Which memory type requires periodic refresh to retain data? DRAM, because charge on its storage capacitors leaks and must be restored every few milliseconds.
What is the minimum erase unit in NAND flash? A block, typically ranging from 128 KB to 4 MB depending on the flash geometry.
Which memory type is used for firmware code storage in STM32 microcontrollers? NOR flash, integrated on-chip alongside the ARM Cortex-M core.
What does XIP stand for and which memory type supports it? Execute In Place — NOR flash supports it because it allows random byte-level reads at any address.
Name one non-volatile memory type with endurance exceeding 10^14 write cycles. FRAM (Ferroelectric RAM), such as the FM25V10 from Cypress.
What is the voltage used by DDR4 SDRAM? 1.2 V, reduced from DDR3's 1.5 V to lower power consumption.
What is wear leveling used for in SSD controllers? It distributes write operations evenly across all flash blocks to prevent premature failure of frequently written blocks.
Which processor subsystem performs virtual-to-physical address translation? The Memory Management Unit (MMU).
What does ECC memory correct that standard memory cannot? Single-bit errors caused by noise or radiation-induced bit flips, using a Hamming code stored in extra parity bits.
Name the I2C EEPROM commonly used to store calibration data in sensor modules. The AT24C256, a 256 Kbit EEPROM from Microchip/Atmel.
