DRA7xx large (4GB) DDR Memory Configuration

Other Parts Discussed in Thread: DRA744

Background:

We’re using the DRA744 and are deviating from the DRA7x EVM in that we have twice the DDR memory, as in the previous post.

System Memory:

EMIF1: 2 GB DDR3

EMIF2: 2 GB DDR3

Starting development point: TI BSP for Android 6AJ.1.3

omapedia.org/.../6AJ.1.3_Release_Notes

We have followed the points explained here:

e2e.ti.com/.../2019877

Enable ARM LPAE support in the kernel
Configure the MPU Memory Adapter to enable high memory interleaving
Configure the MPU Memory Adapter/DMM LISA map sections
- we enabled "LPAE" support in the kernel Kconfig (the TI OMAP2 support has to be disabled)

- in U-Boot (based on the TI BSP for Android), we configured the MPU Memory Adapter to enable high-memory interleaving

- we configured the LISA map:

const struct dmm_lisa_map_regs lisa_map_4G_x_2_x_2 = {
	.dmm_lisa_map_0 = 0x0,
	.dmm_lisa_map_1 = 0x0,
	.dmm_lisa_map_2 = 0x80740300,
	.dmm_lisa_map_3 = 0xFF020100,
	.is_ma_present  = 0x1,
};
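
For reference, a minimal sketch of how such a map is hooked into the EMIF initialization, assuming the emif_get_dmm_regs() hook used by TI's U-Boot (in mainline trees of that era it lives in arch/arm/cpu/armv7/omap5/sdram.c; the exact location may differ in the 2013.04 BSP tree):

void emif_get_dmm_regs(const struct dmm_lisa_map_regs **dmm_lisa_regs)
{
	/* 2 GB on EMIF1 + 2 GB on EMIF2, interleaved through the MA */
	*dmm_lisa_regs = &lisa_map_4G_x_2_x_2;
}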

but we are facing a kernel hang during boot, with the following prints (visible only with early printk enabled):

..
..
cma: CMA: reserved 32 MiB at 9d000000
cma: CMA: reserved 112 MiB at a8800000
Memory policy: ECC disabled, Data cache writealloc

It seems that the LPAE support in the kernel referenced by 6AJ.1.3 needs some patching to be really usable.

If we disable LPAE, the system starts with 2 GB of RAM visible.

Is there a guide, or a more specific setup to follow, to get working access to the full 4 GB of RAM starting from the TI Android BSP?

Thanks in advance

Paolo, Dario, Michele

  • Hi,

    I've pinged experts to comment here.

    Regards,
    Mariya
  • Thank you for the moment.

    I've added some debug info to better show the symptoms of the issue.

    Following are the early-printk traces of the hanging kernel, with some more debug prints added in "arch/arm/mm/mmu.c" here:

    	/*
    	 * Create a mapping for the machine vectors at the high-vectors
    	 * location (0xffff0000).  If we aren't using high-vectors, also
    	 * create a mapping at the low-vectors virtual address.
    	 */
    printk("__phys_to_pfn\n");
    	map.pfn = __phys_to_pfn(virt_to_phys(vectors));
    printk("__phys_to_pfn done\n");
    	map.virtual = 0xffff0000;
    	map.length = PAGE_SIZE;
    	map.type = MT_HIGH_VECTORS;
    printk("create_mapping HIGH\n");
    	create_mapping(&map, false);

    and also here:

    static void __init create_mapping(struct map_desc *md, bool force_pages)
    {
    	unsigned long addr, length, end;
    	phys_addr_t phys;
    	const struct mem_type *type;
    	pgd_t *pgd;
    
    	if (md->virtual != vectors_base() && md->virtual < TASK_SIZE) {
    		printk(KERN_WARNING "BUG: not creating mapping for 0x%08llx"
    		       " at 0x%08lx in user region\n",
    		       (long long)__pfn_to_phys((u64)md->pfn), md->virtual);
    		return;
    	}
    
    	if ((md->type == MT_DEVICE || md->type == MT_ROM) &&
    	    md->virtual >= PAGE_OFFSET &&
    	    (md->virtual < VMALLOC_START || md->virtual >= VMALLOC_END)) {
    		printk(KERN_WARNING "BUG: mapping for 0x%08llx"
    		       " at 0x%08lx out of vmalloc space\n",
    		       (long long)__pfn_to_phys((u64)md->pfn), md->virtual);
    	}
    
    	type = &mem_types[md->type];
    
    #ifndef CONFIG_ARM_LPAE
    	/*
    	 * Catch 36-bit addresses
    	 */
    	if (md->pfn >= 0x100000) {
    		create_36bit_mapping(md, type);
    		return;
    	}
    #endif
    
    	addr = md->virtual & PAGE_MASK;
    	phys = __pfn_to_phys(md->pfn);
    	length = PAGE_ALIGN(md->length + (md->virtual & ~PAGE_MASK));
    
    
    
    	if (type->prot_l1 == 0 && ((addr | phys | length) & ~SECTION_MASK)) {
    		printk(KERN_WARNING "BUG: map for 0x%08llx at 0x%08lx can not "
    		       "be mapped using pages, ignoring.\n",
    		       (long long)__pfn_to_phys(md->pfn), addr);
    		return;
    	}
    
    printk("map for 0x%08llx at 0x%08lx len 0x%08lx\n",(long long)__pfn_to_phys(md->pfn), addr, length);
    
    	pgd = pgd_offset_k(addr);
    	end = addr + length;
    
    printk(" pdg 0x%08lx end 0x%08lx\n",pgd, end);
    	do {
    		unsigned long next = pgd_addr_end(addr, end);
    printk(" next 0x%08lx\n",next);
    		alloc_init_pud(pgd, addr, next, phys, type, force_pages);
    
    		phys += next - addr;
    		addr = next;
    printk(" phys 0x%08lx - addr 0x%08lx\n",phys, addr);
    
    	} while (pgd++, addr != end);
    }
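
    Note: with CONFIG_ARM_LPAE enabled, phys_addr_t becomes 64-bit, so passing phys straight to a %08lx conversion misaligns the varargs on ARM EABI and garbles these debug prints (which would explain the strange "phys 0x00000000" / "phys 0x00000002" values in the output below). An LPAE-safe form of that print would be:

    	printk(" phys 0x%08llx - addr 0x%08lx\n",
    	       (unsigned long long)phys, addr);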
    

    OUTPUT:

    Starting kernel...
    Booting Linux on physical CPU 0x0
    Linux version 3.8.13-00125-g230ac8a-dirty-01.06.00.01 (root@rebuildhawk-4) (gcc version 4.7 (GCC) ) #11 SMP PREEMPT Fri Nov 4 13:24:34 CET 2016
    CPU: ARMv7 Processor [412fc0f2] revision 2 (ARMv7), cr=30c7387d
    CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
    Machine: Jacinto6 evm board, model: TI DRA7
    mta_hw_type from cmdline is =  213
     :::::::::: u-boot_ver : 2013.04-MTA_01_06_00.01-00051-gd5fcabe-dirty :::::::::::
    cma: CMA: reserved 32 MiB at 9d000000
    cma: CMA: reserved 112 MiB at a8800000
    Memory policy: ECC disabled, Data cache writealloc
    map for 0x80000000 at 0xc0000000 len 0x2f800000
     pdg 0xc0003018 end 0xef800000
     next 0xef800000
     phys 0x00000000 - addr 0xaf800000
    map for 0x9d000000 at 0xdd000000 len 0x02000000
     pdg 0xc0003018 end 0xdf000000
     next 0xdf000000
     phys 0x00000002 - addr 0x9f000000
    map for 0xa8800000 at 0xe8800000 len 0x07000000
     pdg 0xc0003018 end 0xef800000
     next 0xef800000
     phys 0x00000002 - addr 0xaf800000
    devicemaps_init
    early_trap_init
    early_trap_done
    __phys_to_pfn
    __phys_to_pfn done
    create_mapping HIGH
    map for 0xa87b6000 at 0xffff0000 len 0x00001000
     pdg 0xc0003018 end 0xffff1000
     next 0xffff1000

    Can you suggest something to check in more depth, or at least something to test, so we can investigate this issue further?

    Thanks

  • Hi
    Complete support for LPAE was added only as part of kernel version 4.4. Older kernel versions will not be able to handle more than 2 GB of memory without LPAE support. Back-porting will not be straightforward, and it is advisable that >2 GB memory support be taken up with the latest SDK version (based on kernel version 4.4).



    Regards
    Sriram
  • Hi, unfortunately the system is in production and we need to add the functionality on kernel 3.8.

    To get an idea of the amount of work involved: can the LPAE support be ported in the kernel only, without touching the bootloader?


    BR

    Fabio

  • Hi Fabio

    As per management, this functionality cannot be supported on v3.8, nor has any analysis of the work involved been done for it.
    It is an old code base and is not being considered for new functionality as significant as LPAE.
    For complete LPAE support the recommendation is again the same: migration to kernel version 4.4 has to be done.

    As the answer to the problem is clear, we will close this thread.

    Best regards
    Lucy
  • Hi Lucy, we understand, but my previous question went unanswered.

    Is it mandatory to also migrate the bootloader? Or can we keep the old bootloader and enable LPAE and the 4 GB only at the kernel level?

    BR,
    Fabio
  • Fabio

    The memory configuration is handled during the early stages of the boot process, since the bootloader relocates itself to run out of DDR. The Linux kernel is also loaded into, and runs out of, DDR; hence any memory-configuration-related changes have to be carried out early, in the bootloader itself.

    In short, the memory configuration (EMIF timing parameters) is programmed from the bootloader, so you will need to update the bootloader (for the updated timing parameters and the memory adapter/LISA mapping) in addition to enabling LPAE support in the kernel. The Linux kernel relies on the memory configuration done by U-Boot; LPAE is enabled in the kernel to access higher-order memory addresses (above the 32-bit physical address space).

    Hope this helps

    Regards

    Sriram

  • Hi,
    Is there a performance impact if we enable LPAE?
    Have you measured the performance impact with LPAE enabled vs. disabled?

  • Hi

    We haven't specifically done performance measurements. Enabling LPAE by itself should not introduce performance regressions. If the additional memory available in the system is now mapped as high memory in Linux and accessed extensively through highmem regions, then there could be additional software overhead in managing these mappings at run time.
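
    For illustration, a minimal sketch of what such an access looks like from kernel code (generic kernel highmem API, not specific to this BSP; the helper name is ours). The per-access mapping set-up and tear-down is the overhead in question:

    	#include <linux/highmem.h>
    	#include <linux/string.h>

    	/* Zero a page that may live in high memory. kmap_atomic() creates a
    	 * short-lived kernel mapping (a page-table update on every call);
    	 * that per-access mapping cost is the run-time overhead mentioned
    	 * above. Lowmem pages need no such temporary mapping. */
    	static void zero_highmem_page(struct page *page)
    	{
    		void *vaddr = kmap_atomic(page);

    		memset(vaddr, 0, PAGE_SIZE);
    		kunmap_atomic(vaddr);
    	}

    The kernel's own clear_highpage() helper does essentially this.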

    Regards

    Sriram