Memory Systems in Embedded and IoT Devices

Embedded and IoT devices operate under constraints that desktop and server platforms never encounter: power budgets measured in milliwatts, physical footprints measured in square millimeters, and deployment lifespans that may exceed 10 years without manual servicing. Memory architecture in these environments is not an optimization exercise — it is a primary design constraint that determines whether a device is manufacturable, certifiable, and commercially viable. This page covers the classification of memory types used in embedded and IoT contexts, the operational mechanisms that govern their behavior, the scenarios where each class is applied, and the decision criteria that guide selection among competing technologies.


Definition and scope

Embedded memory systems are storage and retrieval architectures integrated into or closely coupled with a fixed-function or constrained-function processing unit. Unlike general-purpose computing environments, embedded systems typically execute a defined task set from a fixed firmware image, which shapes every layer of the memory hierarchy — from on-chip SRAM to external NOR flash.

The scope of this domain covers microcontroller-class devices (typically 8-bit to 32-bit architectures), System-on-Chip (SoC) platforms used in IoT gateways, and purpose-built industrial controllers. The JEDEC Solid State Technology Association, which publishes the primary interoperability standards for semiconductor memory, classifies embedded memory into volatile categories (SRAM, DRAM variants such as LPDDR) and non-volatile categories (NOR flash, NAND flash, EEPROM, and emerging technologies such as MRAM and ReRAM).

Within IoT specifically, the ETSI EN 303 645 standard for consumer IoT cybersecurity (published by the European Telecommunications Standards Institute) identifies secure storage as a baseline requirement, making memory architecture a compliance concern, not merely a technical one.

The full taxonomy of volatile and non-volatile options is examined at Volatile vs. Nonvolatile Memory, which provides the foundational classification framework that embedded design builds upon.


How it works

Embedded memory systems function through a layered architecture that balances speed, persistence, and energy consumption across three operational tiers:

  1. On-chip SRAM — Integrated directly into the microcontroller die, SRAM provides the fastest read/write access (typically single-cycle at the processor's clock frequency) and serves as the primary working memory for the stack, heap, and runtime variables. SRAM is fully volatile: all content is lost when power is removed. Capacities in microcontroller-class devices range from 2 KB in the smallest 8-bit parts to several megabytes in high-end Cortex-M-class SoCs.

  2. Non-volatile program storage — Firmware and application code are stored in NOR flash (preferred for execute-in-place, or XIP, capability) or NAND flash (preferred for high-density data logging). NOR flash offers byte-addressable reads with access times around 35–85 nanoseconds, making it suitable for direct code execution. NAND flash requires page-based reads (typically 2–4 KB pages) and a flash translation layer (FTL) to manage wear leveling and bad-block mapping.

  3. Data retention memory — EEPROM (Electrically Erasable Programmable Read-Only Memory) provides byte-level erasure for configuration storage, calibration constants, and device identity data. Endurance ratings in vendor datasheets commonly target 100,000 erase cycles for EEPROM cells. MRAM (Magnetoresistive RAM) is an emerging alternative that combines non-volatile retention with effectively unlimited write endurance.
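The endurance figure in tier 3 drives a common firmware pattern: rotating writes of a frequently updated value across multiple EEPROM slots so that no single cell absorbs every erase cycle. The C sketch below simulates that rotation against an in-memory array; the slot count, record layout, and the use of a zeroed buffer as the "erased" state are illustrative assumptions, and a real driver would program actual EEPROM cells (which typically erase to 0xFF) through vendor-specific calls.

```c
#include <stdint.h>

/* Hypothetical geometry: one config record rotated across N slots,
 * so each cell sees only ~1/N of the total write traffic. */
#define SLOT_COUNT 16

typedef struct {
    uint32_t sequence;  /* monotonically increasing; 0 = erased/empty here */
    uint32_t value;     /* the configuration word being stored */
} config_slot_t;

/* Simulated EEPROM; zero-initialized, standing in for the erased state.
 * (Sequence wraparound at 2^32 is ignored in this sketch.) */
static config_slot_t eeprom[SLOT_COUNT];

/* Find the slot holding the highest sequence number (the newest copy). */
static int find_newest(void)
{
    int newest = -1;
    uint32_t best_seq = 0;
    for (int i = 0; i < SLOT_COUNT; i++) {
        if (eeprom[i].sequence != 0 && eeprom[i].sequence >= best_seq) {
            best_seq = eeprom[i].sequence;
            newest = i;
        }
    }
    return newest;
}

/* Write a new config value into the next slot in round-robin order. */
void config_write(uint32_t value)
{
    int newest = find_newest();
    uint32_t next_seq = (newest < 0) ? 1 : eeprom[newest].sequence + 1;
    int target = (newest + 1) % SLOT_COUNT;
    eeprom[target].sequence = next_seq;
    eeprom[target].value = value;
}

/* Read back the most recently written value (0 if nothing stored yet). */
uint32_t config_read(void)
{
    int newest = find_newest();
    return (newest < 0) ? 0 : eeprom[newest].value;
}
```

With 16 slots, each cell absorbs roughly one sixteenth of the write traffic, multiplying the effective endurance of the stored value accordingly.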

Power consumption across these tiers differs by orders of magnitude. SRAM standby current in low-power microcontrollers can reach sub-microamp levels (per vendor datasheets for Cortex-M0+-class parts), while active NAND flash operations may draw 10–25 milliamps during write cycles.

The Memory Hierarchy Explained reference on this site covers the broader architectural principles governing how these tiers interact across latency and capacity trade-offs.


Common scenarios

Embedded and IoT memory configurations cluster around four recognizable deployment patterns:

Bare-metal microcontroller firmware — A device such as a temperature sensor or motor controller runs a single firmware image from internal NOR flash, uses on-chip SRAM for all runtime state, and stores calibration data in a few kilobytes of EEPROM. No operating system manages memory; the linker script defines the memory map statically.
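The static memory map in this pattern is usually expressed as a GNU ld linker script. The fragment below is a representative sketch only; the region origins and sizes are hypothetical and correspond to no particular part, and section names beyond the standard .text/.data/.bss are illustrative.

```ld
/* Hypothetical memory map for a small Cortex-M part:
 * 256 KB of internal NOR flash, 32 KB of SRAM. */
MEMORY
{
  FLASH (rx)  : ORIGIN = 0x08000000, LENGTH = 256K
  RAM   (rwx) : ORIGIN = 0x20000000, LENGTH = 32K
}

SECTIONS
{
  .text : { *(.vectors) *(.text*) *(.rodata*) } > FLASH
  .data : { *(.data*) } > RAM AT > FLASH  /* stored in flash, copied to RAM at boot */
  .bss  : { *(.bss*) } > RAM              /* zeroed by startup code */
}
```

The `AT > FLASH` load region is what makes initialized data survive power loss: the values live in flash and the startup code copies them into SRAM before main() runs.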

RTOS-based IoT node — Devices running FreeRTOS, Zephyr RTOS (a Linux Foundation project), or Mbed OS require enough SRAM to support multiple task stacks and a TCP/IP stack. Zephyr's documented minimum RAM requirement for a basic networked build is approximately 20 KB. Memory protection units (MPUs), defined in the ARMv7-M and ARMv8-M editions of the ARM Architecture Reference Manual, partition SRAM regions between tasks to prevent a stack overflow in one task from corrupting kernel state.
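An ARMv7-M MPU region must be a power of two in size (32 bytes minimum) and naturally aligned, with the size encoded in the RASR register such that the region spans 2^(SIZE+1) bytes. The helpers below sketch that arithmetic only; the actual register writes are device-setup code and are omitted here.

```c
#include <stdint.h>

/* Compute the ARMv7-M MPU RASR SIZE field for a region of the given
 * byte size, where region_bytes = 2^(SIZE+1). Returns -1 if the size
 * is below the 32-byte architectural minimum or not a power of two. */
int mpu_size_field(uint32_t region_bytes)
{
    if (region_bytes < 32)
        return -1;                       /* below the architectural minimum */
    if (region_bytes & (region_bytes - 1))
        return -1;                       /* not a power of two */
    int size = 0;
    while ((2u << size) < region_bytes)  /* find SIZE with 2^(SIZE+1) == bytes */
        size++;
    return size;
}

/* A region base address is valid only if aligned to the region size. */
int mpu_base_valid(uint32_t base, uint32_t region_bytes)
{
    return (base & (region_bytes - 1)) == 0;
}
```

An RTOS port typically runs checks like these at task creation, rejecting stack regions whose size or alignment the MPU cannot express.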

IoT gateway with Linux — A gateway aggregating data from 50 or more sensor nodes typically runs a Linux-based OS on a Cortex-A-class SoC. These platforms pair LPDDR-class DRAM (typically 128 MB to 1 GB) with eMMC or UFS flash storage, following the same memory management principles documented in Memory Management Techniques.

Safety-critical embedded systems — Automotive ECUs, medical infusion pumps, and industrial PLCs require memory subsystems certified under IEC 61508 (functional safety) or ISO 26262 (automotive). At higher safety integrity levels, these standards effectively require error detection and correction (ECC) on SRAM and flash interfaces. The IEC publishes IEC 61508 in seven parts, with Part 2 covering hardware requirements, including memory integrity.
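ECC works by storing extra parity bits alongside each data word so that the memory controller can detect and correct bit flips on read. The toy Hamming(7,4) code below shows the principle on a 4-bit value; production SRAM and flash controllers use wider single-error-correct, double-error-detect (SECDED) codes over 32- or 64-bit words, but the syndrome-based correction is the same idea.

```c
#include <stdint.h>

/* Toy Hamming(7,4) ECC: encodes 4 data bits into a 7-bit codeword and
 * corrects any single flipped bit on decode. */

static int bit(uint8_t w, int pos) { return (w >> (pos - 1)) & 1; } /* 1-indexed */

uint8_t hamming74_encode(uint8_t data /* d1..d4 in bits 0..3 */)
{
    int d1 = data & 1, d2 = (data >> 1) & 1;
    int d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
    int p1 = d1 ^ d2 ^ d4;   /* parity over codeword positions 1,3,5,7 */
    int p2 = d1 ^ d3 ^ d4;   /* parity over positions 2,3,6,7 */
    int p3 = d2 ^ d3 ^ d4;   /* parity over positions 4,5,6,7 */
    /* codeword positions 1..7 hold p1 p2 d1 p3 d2 d3 d4 */
    return (uint8_t)(p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) |
                     (d2 << 4) | (d3 << 5) | (d4 << 6));
}

/* Recompute the parities; a nonzero syndrome is the 1-indexed position
 * of the flipped bit. Returns the corrected 4-bit data value. */
uint8_t hamming74_decode(uint8_t cw)
{
    int s1 = bit(cw,1) ^ bit(cw,3) ^ bit(cw,5) ^ bit(cw,7);
    int s2 = bit(cw,2) ^ bit(cw,3) ^ bit(cw,6) ^ bit(cw,7);
    int s3 = bit(cw,4) ^ bit(cw,5) ^ bit(cw,6) ^ bit(cw,7);
    int syndrome = s1 | (s2 << 1) | (s3 << 2);   /* 0 = no error */
    if (syndrome)
        cw ^= (uint8_t)(1u << (syndrome - 1));   /* flip the faulty bit */
    return (uint8_t)(bit(cw,3) | (bit(cw,5) << 1) |
                     (bit(cw,6) << 2) | (bit(cw,7) << 3));
}
```

In hardware this runs transparently on every access; the safety standards additionally require that corrected and uncorrectable error counts be reported so the system can react before faults accumulate.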


Decision boundaries

Selecting a memory architecture for an embedded or IoT product involves five discrete trade-off axes:

  1. Volatility requirement — If the device must retain state through power loss without a backup power source, non-volatile storage is mandatory. SRAM alone cannot satisfy this requirement.

  2. Endurance vs. density — NOR flash endurance is typically 100,000 program/erase cycles per JEDEC standards, while NAND flash endurance ranges from 3,000 cycles (MLC NAND) to 100,000 cycles (SLC NAND). High-write-frequency applications (data loggers, over-the-air update systems) must calculate projected lifetime write loads against rated endurance.

  3. Execute-in-place (XIP) necessity — Devices that lack the RAM to hold a copy of their code at runtime require NOR flash for direct execution. NAND flash cannot be used for XIP without shadow-copying the code to RAM first, which raises the RAM capacity requirement significantly.

  4. Power budget — LPDDR4 DRAM in self-refresh mode draws approximately 0.5–2 milliwatts depending on capacity and manufacturer, while MRAM retains data with zero standby power. Battery-powered devices operating on coin cells (CR2032, nominally 240 mAh) have hard limits on cumulative memory access energy.

  5. Security and certification — Devices subject to ETSI EN 303 645, NIST SP 800-213 (NIST Special Publication 800-213, IoT Device Cybersecurity Guidance), or IEC 61508 may require hardware memory isolation, secure boot with verified flash integrity, or ECC-protected SRAM. These constraints eliminate the lowest-cost memory options and must be resolved before component selection.
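Axes 2 and 4 reduce to back-of-envelope arithmetic that is worth doing before parts are chosen. The helpers below sketch the two calculations; the workload and current figures passed in are hypothetical inputs that would come from the product's own duty-cycle analysis, and the coin-cell estimate deliberately ignores self-discharge and cutoff voltage, so it is an upper bound.

```c
/* Years until a flash sector wears out, given its rated program/erase
 * endurance and the projected number of writes it absorbs per day. */
double flash_lifetime_years(double writes_per_day, double rated_cycles)
{
    return rated_cycles / writes_per_day / 365.0;
}

/* Years a coin cell lasts at a given average current draw in microamps.
 * Optimistic: ignores self-discharge and the cell's cutoff voltage. */
double coin_cell_lifetime_years(double capacity_mah, double avg_current_ua)
{
    double hours = capacity_mah / (avg_current_ua / 1000.0);
    return hours / (24.0 * 365.0);
}
```

For example, 100 sector writes per day against a 100,000-cycle rating gives roughly 2.7 years, and a 240 mAh cell at a 5 µA average draw lasts at most about 5.5 years — both figures that must exceed the product's service life with margin.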
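For axis 5, the structural shape of a boot-time integrity check can be shown without a cryptographic library. The sketch below uses CRC-32, which detects accidental flash corruption only; a secure boot chain of the kind ETSI EN 303 645 or NIST SP 800-213 contemplates would verify a digital signature instead, but with the same control flow: compute over the image, compare against a stored value, and refuse to jump to the application on mismatch.

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 (reflected polynomial 0xEDB88320, as used by zlib
 * and Ethernet). Table-driven versions are faster but larger. */
uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

/* 1 if the image matches the CRC stored alongside it, else 0. A boot
 * loader would call this before transferring control to the image. */
int image_valid(const uint8_t *image, size_t len, uint32_t stored_crc)
{
    return crc32(image, len) == stored_crc;
}
```

Upgrading this shape to genuine secure boot means replacing the CRC with a signature check rooted in a key the attacker cannot rewrite, which is why the standards push key storage into protected or one-time-programmable memory.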

For a structured comparison of flash memory categories relevant to embedded storage, Flash Memory Systems details the NOR/NAND classification boundary and endurance characteristics. The broader landscape of memory architectures across computing domains is indexed at Memory Systems Authority.


References