8148 2GB DDR3

Hello,

We have a custom 6148 board with 2GB DDR3. What are the LISA_MAP_X register values to access this complete 2GB?

Thanks,

Mike

  • Hi Mike,

    Do you mean a custom DM8148 board (not 6148)?

    BR
    Pavel
  • Hi Pavel,

    My bad sorry, I meant custom DM8148.

    Thanks,

    Mike

  • Mike,

    One possible configuration is:

    DMM_LISA_MAP__0 = 0
    DMM_LISA_MAP__1 = 0
    DMM_LISA_MAP__2 = 0x80640300
    DMM_LISA_MAP__3 = 0xC0640320

    See the below wiki for more info regarding this configuration:
    processors.wiki.ti.com/.../EZSDK_Memory_Map

    Another possible configuration is:

    DMM_LISA_MAP__0 = 0
    DMM_LISA_MAP__1 = 0
    DMM_LISA_MAP__2 = 0
    DMM_LISA_MAP__3 = 0x80740300

    See DM814x TRM, chapter 6 DMM/TILER
    6.2.1.6 Section Mapping
    6.3.1 DMM Basic Register Setup
    6.3.4 Address Management Using LISA Sections

    See also DM814x silicon errata, advisory 3.0.31
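
    For reference, here is a minimal sketch of how these values could be programmed from early board init code (U-Boot style). The DMM base address (0x4E000000), the LISA_MAP register offsets (0x40 + 4*i) and the helper/function names are only my reading of TRM chapter 6 and are illustrative, so please verify them before use:

    #include <stdint.h>

    #define DMM_BASE        0x4E000000u            /* assumed DMM base, see TRM ch. 6 */
    #define DMM_LISA_MAP(i) (DMM_BASE + 0x40u + 4u * (uint32_t)(i))

    /* bare-metal/U-Boot style register write */
    static inline void reg_write(uint32_t addr, uint32_t val)
    {
        *(volatile uint32_t *)addr = val;
    }

    /* 2GB DDR3: 1GB per EMIF, interleaved across both EMIFs */
    void config_dmm_2gb(void)
    {
        reg_write(DMM_LISA_MAP(0), 0x00000000);    /* unused section */
        reg_write(DMM_LISA_MAP(1), 0x00000000);    /* unused section */
        reg_write(DMM_LISA_MAP(2), 0x80640300);    /* 1GB at 0x80000000, interleaved */
        reg_write(DMM_LISA_MAP(3), 0xC0640320);    /* 1GB at 0xC0000000, interleaved */
    }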

    BR
    Pavel
  • Hi Pavel,

    Please let me know whether the LISA settings listed above (defining 1 or 2 mapped regions) can influence DDR read/write performance, especially for EDMA transfers?

    For optimal system performance, it is recommended to enable interleaving between the 2 EMIF banks and thus have the same amount of memory on both EMIF banks. Example: 512MB of DDR3 on EMIF bank 0 and 512MB of DDR3 on EMIF bank 1, for a total system DDR3 memory of 1GB.

    For optimal system performance, a symmetric configuration is HIGHLY recommended. Unless dictated by system cost and other constraints, an asymmetrical distribution of memory should not be considered.
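
    To see why the first configuration is the interleaved, symmetric one, DMM_LISA_MAP__2 = 0x80640300 decodes as follows against the section-mapping fields (my reading of TRM 6.2.1.6, please double-check):

    SYS_ADDR     [31:24] = 0x80 -> section starts at system address 0x80000000
    SYS_SIZE     [22:20] = 0x6  -> 1GB section
    SDRC_INTL    [19:18] = 0x1  -> 128-byte interleaving
    SDRC_ADDRSPC [17:16] = 0x0
    SDRC_MAP     [9:8]   = 0x3  -> mapped across both EMIFs
    SDRC_ADDR    [7:0]   = 0x00 -> starts at the bottom of the SDRAM space

    DMM_LISA_MAP__3 = 0xC0640320 is the same except SYS_ADDR = 0xC0 and SDRC_ADDR = 0x20, i.e. the second 1GB, so the full 2GB is spread evenly over both EMIF banks.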

    BR
    Pavel
  • Hi Pavel,

    Does that mean:
    DMM_LISA_MAP__0 = 0
    DMM_LISA_MAP__1 = 0
    DMM_LISA_MAP__2 = 0x80640300
    DMM_LISA_MAP__3 = 0xC0640320

    Is this option optimal?

    Thanks,
    Mike
  • Yes, I think so.

    BR
    Pavel
  • Hi Pavel,

    Thanks for your response.

    I had one more question: the kernel runs out of memory. How do I configure low memory?

    cat /proc/meminfo
    MemTotal: 1561076 kB
    MemFree: 1483556 kB
    Buffers: 0 kB
    Cached: 41464 kB
    SwapCached: 0 kB
    Active: 18396 kB
    Inactive: 36520 kB
    Active(anon): 13456 kB
    Inactive(anon): 564 kB
    Active(file): 4940 kB
    Inactive(file): 35956 kB
    Unevictable: 0 kB
    Mlocked: 0 kB
    HighTotal: 1207296 kB
    HighFree: 1144800 kB
    LowTotal: 353780 kB
    LowFree: 338756 kB
    SwapTotal: 0 kB
    SwapFree: 0 kB
    Dirty: 0 kB
    Writeback: 0 kB
    AnonPages: 13476 kB
    Mapped: 17120 kB
    Shmem: 568 kB
    Slab: 7644 kB
    SReclaimable: 2884 kB
    SUnreclaim: 4760 kB
    KernelStack: 656 kB
    PageTables: 588 kB
    NFS_Unstable: 0 kB
    Bounce: 0 kB
    WritebackTmp: 0 kB
    CommitLimit: 780536 kB
    Committed_AS: 90180 kB
    VmallocTotal: 540672 kB
    VmallocUsed: 375696 kB
    VmallocChunk: 133052 kB

    Thanks,
    Mike
  • # dmesg |grep Memory:
    [ 0.000000] Memory: 364MB 270MB 908MB 1MB = 1543MB total
    [ 0.000000] Memory: 1560900k/1560900k available, 71356k reserved, 1207296K highmem

    I want to increase the lowmem as the kernel is failing to allocate memory.

    My bootargs are:
    console=ttyO0,115200n8 rootwait ubi.mtd=14,2048 rootfstype=ubifs root=ubi0:rootfs rw mem=364M@0x80000000 mem=270M@0x9FC00000 mem=960M@0xC0000000 vmalloc=500M notifyk.vpssm3_sva=0xBF900000

    Is there a config option to do it?

    Thanks,
    Mike
  • Already discussed in the below e2e thread:

    e2e.ti.com/.../459590

    BR
    Pavel
  • Hi Pavel,

    I am still unclear as to why there is so much highmem despite specifying the memory in the bootargs with mem=.

    Kernel command line: console=ttyO0,115200n8 rootwait ubi.mtd=14,2048 rootfstype=ubifs root=ubi0:rootfs rw mem=364M@0x80000000 mem=320M@0x9FC00000 mem=960M@0xC0000000 vmalloc=500M notifyk.vpssm3_sva=0xBF900000

    [ 0.000000] PID hash table entries: 2048 (order: 1, 8192 bytes)
    [ 0.000000] Dentry cache hash table entries: 65536 (order: 6, 262144 bytes)
    [ 0.000000] Inode-cache hash table entries: 32768 (order: 5, 131072 bytes)
    [ 0.000000] Memory: 364MB 320MB 908MB 1MB = 1593MB total
    [ 0.000000] Memory: 1560900k/1560900k available, 71356k reserved, 1207296K highmem
    [ 0.000000] Virtual kernel memory layout:
    [ 0.000000] vector : 0xffff0000 - 0xffff1000 ( 4 kB)
    [ 0.000000] fixmap : 0xfff00000 - 0xfffe0000 ( 896 kB)
    [ 0.000000] DMA : 0xffc00000 - 0xffe00000 ( 2 MB)
    [ 0.000000] vmalloc : 0xd7000000 - 0xf8000000 ( 528 MB)
    [ 0.000000] lowmem : 0xc0000000 - 0xd6c00000 ( 364 MB)
    [ 0.000000] pkmap : 0xbfe00000 - 0xc0000000 ( 2 MB)
    [ 0.000000] modules : 0xbf000000 - 0xbfe00000 ( 14 MB)
    [ 0.000000] .init : 0xc0008000 - 0xc0034000 ( 176 kB)
    [ 0.000000] .text : 0xc0034000 - 0xc0532000 (5112 kB)
    [ 0.000000] .data : 0xc0532000 - 0xc0589f80 ( 352 kB)

    Thanks,
    Mike
  • Mike,

    Around 50MB of RAM is reserved for the FB driver by default, so if, for example, you set "mem=200M" you will get "Memory: 150MB = 150MB total".

    You can use HIGHMEM support to allocate more memory to Linux. This can be enabled/disabled via menuconfig -> Kernel Features -> High Memory Support.

    Refer to the below wiki for more info regarding HIGHMEM:

    processors.wiki.ti.com/.../TI81XX_PSP_04.04.00.02_Feature_Performance_Guide
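
    In the resulting kernel .config this shows up roughly as the following option (name taken from the mainline ARM Kconfig; please verify against your PSP kernel tree):

    CONFIG_HIGHMEM=y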

    BR
    Pavel
  • Hi Pavel,

    If I am not wrong, highmem is user space and lowmem is kernel space? So I need to increase kernel space? Is there an option to do it?

    Thanks,
    Mike
  • Default kernel build is set up with 3G/1G split for User/Kernel space. In addition, "High Memory" support in kernel (CONFIG_HIGHMEM) is enabled by default to accommodate larger physical memory/address space.

    It should be possible to allow larger direct mapped memory into kernel space by changing User/Kernel split to 2/2 or 1/3. Please note that these are NOT TESTED and may lead to unpredictable behavior - particularly some applications may fail.
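
    On ARM kernels of this generation the split is selected under menuconfig -> Kernel Features -> Memory split. For a 2G/2G split the .config would contain roughly the following (option names from the mainline ARM Kconfig; as noted above, this is untested on this PSP):

    # CONFIG_VMSPLIT_3G is not set
    CONFIG_VMSPLIT_2G=y
    # CONFIG_VMSPLIT_1G is not set
    CONFIG_PAGE_OFFSET=0x80000000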

    It is possible to indicate to the kernel that the usable RAM spans holes in between. This is achieved by passing multiple "mem=<size>@<start-address>" arguments to the kernel.

    Even when passing memory with holes, the kernel reserves contiguous space incorporating the whole memory passed through all 'mem' arguments. This means the actual lowmem mapped will be more than the total size of all memory arguments combined.

    Without HIGHMEM support, the vmalloc and lowmem sizes are interdependent: they depend on the RAM available to the kernel (as specified by the 'mem=<size-in-MB>M' boot argument) and the vmalloc size required, and are restricted by the amount of space that can be directly mapped into the kernel. As you give the kernel more memory to map, the vmalloc space will shrink; the reverse is true when the vmalloc region is enlarged by passing the 'vmalloc=<size-in-MB>M' argument.

    Though the HIGHMEM configuration is enabled by default, the mapping for highmem will only be created on a need basis, in cases where the total space comprising the specified RAM, vmalloc area and memory holes (see the section on Memory Holes Configuration) exceeds the directly mappable space (that is, 888MB).
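
    As a rough sanity check against the log earlier in this thread: the kernel virtual space between 0xc0000000 and 0xf8000000 (about 896MB) has to hold both lowmem and the vmalloc area, so 896MB - 528MB (vmalloc=500M plus padding) - a small guard gap leaves roughly 364MB of lowmem, which matches the "lowmem : 0xc0000000 - 0xd6c00000 ( 364 MB)" line; everything else registered via mem= ends up as highmem. Reducing the vmalloc= argument (or changing the memory split) is therefore what grows lowmem.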

    BR
    Pavel