Part Number: TDA4VM
Last Updated: January 20th, 2026
This post tracks commonly asked questions about the LPDDR4 interface on TDA4x, DRA8x, AM68x, and AM69x devices. It will be updated as needed.
Supported DDR Memory
- Does TI provide an approved vendor list for the LPDDR4 interface?
- No, TI does not provide an approved vendor list for the LPDDR4 interface on TDA4x, DRA8x, AM68x, or AM69x devices. These TI devices support JEDEC-compliant LPDDR4 x16-mode memories.
- The memory part used in our system has reached EOL (end-of-life). Can you recommend a replacement part?
- It is recommended that you reach out to the memory supplier for a replacement part so that they can ensure longevity of the new DRAM, as well as identify a part with compatible architecture (density, ranks, etc.).
- Does the DDR sub-system support LPDDR4x?
- No, the interface only supports LPDDR4 memories requiring a 1.1 Volt VDDQ. It does not support LPDDR4x memories requiring a 0.6 Volt VDDQ.
- Does the DDR sub-system support DDR4?
- No, the interface only supports LPDDR4 memories requiring a 1.1 Volt VDDQ. It does not support DDR4 memories requiring a 1.2 Volt VDDQ.
- Does the DDR sub-system support x8 "byte mode" memories?
- No, the interface does not support x8 "byte mode" memories.
Customizing Software for Custom Board
- How do I customize software to initialize the DDRSS interface on my custom board?
- Please see the steps below:
- Pre-requisites:
- LPDDR4 datasheet for the memory on the custom board
- Schematic of the custom board (or knowledge of which DDRSS instances of the TI processor are connected to LPDDR4 memory)
- Step 1: Use the DDR Register Configuration tool to generate custom settings based on the LPDDR4 datasheet and board configuration (see links below).
- J7200: https://dev.ti.com/sysconfig/?product=TDA4x_DRA8x_AM67x-AM69x_DDR_Config&device=J7200_DRA821_SR1.0_alpha
- J721S2: https://dev.ti.com/sysconfig/?product=TDA4x_DRA8x_AM67x-AM69x_DDR_Config&device=J721S2_TDA4VE_TDA4AL_TDA4VL_AM68x
- J784S4: https://dev.ti.com/sysconfig/?product=TDA4x_DRA8x_AM67x-AM69x_DDR_Config&device=J784S4_TDA4AP_TDA4VP_TDA4AH_TDA4VH_AM69x
- J722S: https://dev.ti.com/sysconfig/?product=TDA4x_DRA8x_AM67x-AM69x_DDR_Config&device=J722S_TDA4VEN_TDA4AEN_AM67
- Step 2: Integrate the new register settings into the corresponding SDK.
- SPL:
- Add DTSI output file to <U-BOOT>/arch/arm/dts/
- Reference the new DTSI in device tree files (see examples below)
- J721E:
- File k3-j721e-r5-common-proc-board.dts references k3-j721e-ddr-evm-lp4-4266.dtsi
- https://git.ti.com/cgit/ti-u-boot/ti-u-boot/tree/arch/arm/dts/k3-j721e-r5-common-proc-board.dts?h=11.00.09#n9
- J7200:
- File k3-j7200-r5-common-proc-board.dts references k3-j7200-ddr-evm-lp4-2666.dtsi
- https://git.ti.com/cgit/ti-u-boot/ti-u-boot/tree/arch/arm/dts/k3-j7200-r5-common-proc-board.dts?h=11.00.09#n9
- J721S2:
- File k3-j721s2-r5-common-proc-board.dts references k3-j721s2-ddr-evm-lp4-4266.dtsi
- https://git.ti.com/cgit/ti-u-boot/ti-u-boot/tree/arch/arm/dts/k3-j721s2-r5-common-proc-board.dts?h=11.00.09#n9
- J784S4:
- File k3-j784s4-r5-evm.dts references k3-j784s4-ddr-evm-lp4-4266.dtsi
- https://git.ti.com/cgit/ti-u-boot/ti-u-boot/tree/arch/arm/dts/k3-j784s4-r5-evm.dts?h=11.00.09#n9
- J722S:
- File k3-j722s-r5-evm.dts references k3-j722s-ddr-lp4-50-4000.dtsi
- https://git.ti.com/cgit/ti-u-boot/ti-u-boot/tree/arch/arm/dts/k3-j722s-r5-evm.dts?h=11.00.09#n10
- SBL:
- Add header file to <PDK>/packages/ti/board/src/<SOC>_evm/include/
- Additional Considerations for SPL:
- The SPL DDR driver may not use information from the register configuration file to determine which DDR sub-systems to initialize. If the custom board does not use ALL of the available DDRSS of the device superset part, please be sure to review the referenced E2E thread below.
- DDR is initialized in the R5 bootloader, but the A72 image passes the total DDR memory size to the operating system. If the total available DDR memory size does not match the TI EVM for the corresponding superset part, please be sure to review the referenced E2E thread below.
- Reference E2E Thread: https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1476355/am68a-single-chip-lpddr4-configuration
Accessing LPDDR4 Mode Registers
- I want to read the LPDDR4 mode registers to read out a vendor ID or similar information. Is this possible?
- The controller supports issuing a software-initiated mode register read (MRR) command. The bootloader driver code includes a function (lpddr4_getmrregister) which can be used directly or viewed as an example: https://git.ti.com/cgit/ti-u-boot/ti-u-boot/tree/drivers/ram/k3-ddrss/lpddr4.c?h=08.06.00.007#n288
- The function input parameter, readmoderegval, corresponds to DDRSS_CTL_160[24:8]:
- Bits (7:0) define the memory mode register
- Bits (15:8) define the chip select
- Bit (16) triggers the read
- For example, the readModeRegVal variable can be set as 0x10000 | ((rank & 1) << 8) | (mr & 0xFF), where "rank" is the chip select of interest and "mr" is the mode register number of interest.

Training
- The training placement does not appear optimal in a software-based eye or physical measurement. Why?
- This is expected.
- Eye shape can depend on several factors, including data patterns and the number of transactions. While training attempts to find the best placement for delay / VREF, the algorithms still have to be practical for end applications. In particular, boot time (including DDR initialization) matters to end use-cases, so training operates with reduced data traffic compared to a software-based eye tool or physical measurement: training completes in the millisecond range, while software-based eye or physical measurements may be captured over a much longer period of time. As a result, training can have a more optimistic view of the eye, and the trained placement may not be perfectly centered in the eye captured by a software-based tool or by an oscilloscope and probe.
- In addition, some trained values such as VREF are shared across several IO. Thus, the training algorithms have to find a suitable value that works for all the IO sharing the same setting. This can also contribute to the trained value appearing to be non-optimal for a given signal.
- If the system / board under test is experiencing functional issues or if there is concern that the delay / VREF placement is too close to the eye edge, a first step would be to turn on DBI (if not already enabled) and evaluate whether the issue or concern still exists. In addition, it is recommended to ensure that the board design follows the "LPDDR4 Board Design and Layout Guidelines" to help identify any possible improvements to PCB design.
System Address to Physical LPDDR4 Memory Mapping
- What is the max LPDDR4 address space that the device can support?
- Each DDR sub-system within the TI SOC can support a max 8GB of memory. To achieve 8GB for a single DDRSS, this requires that the connected LPDDR4 device is dual-channel (32 DQ IO) and dual-rank. Thus, the max addressable DDR space will be dependent on how many DDRSS are available and enabled in the system.
- How do I access the LPDDR4 memory?
- The LPDDR4 memory is mapped to the TI SOC memory map, and split into two regions: a lower order memory region (2GB), and a higher order memory region (32GB, effectively 30GB). It is important to note that the actual address space physically mapped to LPDDR4 will be dependent on the number of DDRSS enabled and the size of the selected LPDDR4 memory on the board. The regions shown below are reserved for LPDDR4 memory assuming 4x DDRSS each interfaced to 8GB LPDDR4 memory.
- Lower Order Memory Region: 0x00 8000 0000 to 0x00 FFFF FFFF ( 2 GB)
- Higher Order Memory Region: 0x08 0000 0000 to 0x0F FFFF FFFF (32 GB)
- Why does software use a start address of 0x08 8000 0000 for the higher order memory region if it starts at 0x08 0000 0000?
- The first 2GB of the higher order memory region is disabled within the MSMC. This is always true (by hardware design) and not software programmable. Thus, effectively the higher order memory region is limited to 30GB, starting at address 0x08 8000 0000.
- The SOC I am using supports more than 1x DDRSS. What is the difference between the following two interleave settings: "DDRSS0 / 1 Interleaved + DDRSS2" and "DDRSS0 / 1 / 2 Interleaved + DDRSS2"?
- The only difference between these two settings is whether DDRSS2 is interleaved with DDRSS0 and DDRSS1, or entirely separated. In the case of "DDRSS0 / 1 Interleaved + DDRSS2" , the memory connected to DDRSS2 is entirely separated from DDRSS0 and DDRSS1. In the case of "DDRSS0 / 1 / 2 Interleaved + DDRSS2", the memory connected to DDRSS2 is interleaved with the memory connected to DDRSS0 and DDRSS1. The reason why both options show "+ DDRSS2" is because any available physical memory attached to DDRSS2 that is NOT used in the interleave region will be available after the interleave region. This would apply in situations where the physical memory connected to DDRSS2 is larger than the physical memory connected to DDRSS0 and DDRSS1. Consider the following example(s):
Assume:
- DDRSS0 has 2GB memory
- DDRSS1 has 2GB memory
- DDRSS2 has 3GB memory

Memory Mapping if "DDRSS0 / 1 Interleaved + DDRSS2" Selected:
- 4GB mapped to both DDRSS0 and DDRSS1 (interleaved at selected granularity)
- 3GB mapped to DDRSS2 (not interleaved with other DDRSS)

Memory Mapping if "DDRSS0 / 1 / 2 Interleaved + DDRSS2" Selected:
- 6GB mapped to DDRSS0, DDRSS1, and DDRSS2 (interleaved at selected granularity)
- 1GB mapped to DDRSS2 (not interleaved with other DDRSS)
- What is the physical mapping between system address and a specific DDRSS LPDDR4 memory?
- This is dependent on the number of DDRSS enabled, the MSMC DDRSS interleave configuration (for devices using more than 1x DDRSS), and the size of the LPDDR4 memory attached to each DDRSS.
- Single DDRSS Solution
- For a single DDRSS solution, the LPDDR4 memory will be mapped starting in the lower order memory region and continue up to the size of the memory. Please consider the following 2 examples:
- 1GB memory connected to DDRSS0:
- DDRSS0 mapped to 0x00 8000 0000 to 0x00 BFFF FFFF (1 GB)
- 4GB memory connected to DDRSS0:
- DDRSS0 mapped to 0x00 8000 0000 to 0x00 FFFF FFFF (2 GB)
- DDRSS0 mapped to 0x08 8000 0000 to 0x08 FFFF FFFF (2 GB)
- Multi DDRSS Solution
- Similar to the single DDRSS solution, the LPDDR4 memory will begin starting in the lower order memory region. In this case, the MSMC DDRSS interleave configuration needs to be considered. Consider the following examples:
- 3x DDRSS (2GB per DDRSS), fully interleaved w/ granularity of 128 bytes
- DDRSS0 mapped to 0x00 8000 0000 to 0x00 8000 007F (128 bytes)
- DDRSS1 mapped to 0x00 8000 0080 to 0x00 8000 00FF (128 bytes)
- DDRSS2 mapped to 0x00 8000 0100 to 0x00 8000 017F (128 bytes)
- DDRSS0 mapped to 0x00 8000 0180 to 0x00 8000 01FF (128 bytes)
- DDRSS1 mapped to 0x00 8000 0200 to 0x00 8000 027F (128 bytes)
- DDRSS2 mapped to 0x00 8000 0280 to 0x00 8000 02FF (128 bytes)
- ....
- DDRSS0 mapped to 0x09 7FFF FE80 to 0x09 7FFF FEFF (128 bytes)
- DDRSS1 mapped to 0x09 7FFF FF00 to 0x09 7FFF FF7F (128 bytes)
- DDRSS2 mapped to 0x09 7FFF FF80 to 0x09 7FFF FFFF (128 bytes)
- 3x DDRSS (2GB per DDRSS), DDRSS0 / 1 interleaved w/ granularity of 128 bytes, DDRSS2 separated
- DDRSS0 mapped to 0x00 8000 0000 to 0x00 8000 007F (128 bytes)
- DDRSS1 mapped to 0x00 8000 0080 to 0x00 8000 00FF (128 bytes)
- DDRSS0 mapped to 0x00 8000 0100 to 0x00 8000 017F (128 bytes)
- DDRSS1 mapped to 0x00 8000 0180 to 0x00 8000 01FF (128 bytes)
- ....
- DDRSS0 mapped to 0x08 FFFF FF00 to 0x08 FFFF FF7F (128 bytes)
- DDRSS1 mapped to 0x08 FFFF FF80 to 0x08 FFFF FFFF (128 bytes)
- DDRSS2 mapped to 0x09 0000 0000 to 0x09 7FFF FFFF (2 GB)