Advice on EDMA usage

Hi,

I have an array of buffers in L1D, used in a circular fashion. Buffer n is being processed; buffer n-1 holds data that has just been processed and needs to be written back to DDR; buffer n+1 needs to be filled with data from DDR for the next processing cycle.

Can these operations be performed in parallel? That is, if I submit two transfer requests, one for buffer n-1 and one for buffer n+1, to different transfer controllers, will they actually run in parallel, or will they still wait on each other because of some shared resource?

Otherwise, it might be simpler to chain the different requests, so that a single QDMA trigger transfers everything.
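For reference, here is a rough sketch of the loop I have in mind; edma_submit_copy()/edma_wait() are hypothetical wrappers around whatever EDMA3 driver API is used, not a real TI API:

    #include <stdint.h>

    #define NBUF        4
    #define BUF_BYTES   2048

    extern int8_t  l1d_buf[NBUF][BUF_BYTES];   /* placed in L1D via the linker command file */
    extern int8_t *ddr_in, *ddr_out;           /* streaming data in DDR                     */

    /* hypothetical wrappers; edma_wait() blocks on the handle returned by submit */
    int  edma_submit_copy(void *dst, const void *src, uint32_t bytes, int queue);
    void edma_wait(int handle);
    void process(int8_t *buf, uint32_t bytes);

    void stream_loop(uint32_t blocks)   /* assumes the first blocks were primed beforehand */
    {
        uint32_t i;
        int n, wr, rd;

        for (i = 1; i + 1 < blocks; i++) {
            n = (int)(i % NBUF);

            /* write back buffer n-1 and prefetch into buffer n+1; the two
             * requests go to different event queues so they reach different TCs */
            wr = edma_submit_copy(ddr_out + (i - 1) * BUF_BYTES,
                                  l1d_buf[(n + NBUF - 1) % NBUF],
                                  BUF_BYTES, /* queue */ 0);
            rd = edma_submit_copy(l1d_buf[(n + 1) % NBUF],
                                  ddr_in + (i + 1) * BUF_BYTES,
                                  BUF_BYTES, /* queue */ 1);

            process(l1d_buf[n], BUF_BYTES);     /* CPU works on buffer n meanwhile */

            edma_wait(wr);
            edma_wait(rd);
        }
    }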

 

Kind regards,

 

Remco Poelstra

 

  • Hi Remco,

    Thanks for your post.

    I assume you are using an OMAP-L13x/C6748 device in your board.

    We cannot perform the two transfer operations (for the n-1 and n+1 buffers) in parallel, since they use the same shared resources. Instead, we can prioritize the transfer requests within the EDMA3 transfer controllers.

    As you suggest, the channel-chaining capability of the EDMA3 allows the completion of one EDMA3 channel transfer to trigger another EDMA3 channel transfer. The purpose is to let you chain several transfers from one event occurrence. Please refer to Section 16.2.8 of the C6748 DSP TRM given below:

    http://www.ti.com/lit/spruh79

    Also, please refer to Section 16.3.4.5 of the same TRM for EDMA3 transfer chaining examples. A rough sketch of the idea is given below.
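    For illustration, a minimal sketch of the chaining setup (not TI-provided code): the write-back channel's OPT field gets its transfer completion code (TCC) set to the prefetch channel number with TCCHEN enabled, so completing the first transfer automatically triggers the second. The PaRAM base address and the channel numbers below are assumptions; please verify them against the TRM and your own channel allocation.

        #include <stdint.h>

        #define EDMA3_CC0_PARAM_BASE  0x01C04000u   /* EDMA3 CC0 PaRAM; verify against the TRM */

        typedef volatile struct {
            uint32_t opt;
            uint32_t src;
            uint32_t a_b_cnt;        /* ACNT [15:0], BCNT [31:16] */
            uint32_t dst;
            uint32_t src_dst_bidx;
            uint32_t link_bcntrld;
            uint32_t src_dst_cidx;
            uint32_t ccnt;
        } edma3_param_t;

        #define PARAM(n)      (((edma3_param_t *)EDMA3_CC0_PARAM_BASE) + (n))

        /* OPT bit fields (see the PaRAM OPT description in the TRM) */
        #define OPT_TCC_MASK  (0x3Fu << 12)
        #define OPT_TCC(c)    (((c) & 0x3Fu) << 12)
        #define OPT_TCCHEN    (1u << 22)   /* chain on final transfer completion */

        #define CH_WRITEBACK  8            /* example channel numbers (assumed)  */
        #define CH_PREFETCH   9

        void chain_writeback_to_prefetch(void)
        {
            edma3_param_t *wb = PARAM(CH_WRITEBACK);

            /* The prefetch channel's PaRAM must already be configured.
             * When the write-back completes, its TCC (set to the prefetch
             * channel number) chains to that channel, so one trigger moves
             * both blocks.                                                 */
            wb->opt = (wb->opt & ~OPT_TCC_MASK) | OPT_TCC(CH_PREFETCH) | OPT_TCCHEN;
        }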

    Thanks & regards,

    Sivaraj K


  • Hi Sivaraj K,

    Can you please let us know what you mean by "the same shared resources" in the current scenario?

    Do you mean L1D, or some other SoC entity that imposes this limitation (like the EDMA engine in this case)?



    Ashish Mishra

    [Bangalore / India]

  • Hi Ashish,

    Sorry for the misinterpretation.

    What I mean is that the EDMA TCs serve the transfer requests one at a time, because they contend for the same resource, which cannot be shared between the TCs. Instead, we can prioritize the transfer requests within the EDMA transfer controllers (see the sketch below). So the two transfer operations (for buffers n-1 and n+1) cannot be performed in parallel in the scenario Remco raised in his post.
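    To illustrate what "prioritize within the transfer controllers" means in practice, here is a minimal sketch (not TI-provided code) that maps the two channels to different event queues (each queue feeds one TC) and sets the queue priorities through QUEPRI. The base address, register offsets and channel numbers are assumptions; please verify them against the EDMA3 CC register map in the TRM.

        #include <stdint.h>

        #define EDMA3_CC0_BASE   0x01C00000u                 /* assumed; check the TRM */
        #define REG32(a)         (*(volatile uint32_t *)(a))
        #define DMAQNUM(n)       REG32(EDMA3_CC0_BASE + 0x0240u + 4u * (n))
        #define QUEPRI           REG32(EDMA3_CC0_BASE + 0x0284u)

        /* Map DMA channel 'ch' (0..31) to event queue 'q' (0 or 1 on C6748). */
        static void edma3_map_channel_to_queue(unsigned ch, unsigned q)
        {
            unsigned reg   = ch >> 3;          /* 8 channels per DMAQNUM register */
            unsigned shift = (ch & 7u) * 4u;   /* 4-bit field per channel         */

            DMAQNUM(reg) = (DMAQNUM(reg) & ~(0x7u << shift)) | ((q & 0x7u) << shift);
        }

        void setup_priorities(void)
        {
            edma3_map_channel_to_queue(8, 0);  /* write-back channel -> Q0/TC0 (assumed) */
            edma3_map_channel_to_queue(9, 1);  /* prefetch channel   -> Q1/TC1 (assumed) */

            /* Lower value = higher priority; give Q0 priority 0 and Q1 priority 4. */
            QUEPRI = (QUEPRI & ~0x77u) | (0u << 0) | (4u << 4);
        }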

    Thanks & regards,
    Sivaraj K

  • Hi Sivaraj,

    Thanks for the input.

    Along the same lines, can you please let us know whether the same memory [in this case L1D, or say DDR] can be accessed by two separate engines in parallel?

    i.e.

    Can MASTER 1 fetch the nth buffer while MASTER 2 fetches the (n+1)th buffer?

    [We have a query where the request involves the DSP [accessing DDR memory "X"] and the EDMA [accessing DDR memory "Y"] in parallel, and without any other handshaking between the two engines, when accessing the DDR memory on the OMAP-L13x/C6748.]


    Thank you,

    Ashish Mishra

    [Bangalore / India]

  • Hi Ashish,

    I don't think it is possible for the DSP and an EDMA master to access the DDR memory in parallel, let alone without any handshaking. We can only prioritize the requests from the different masters across the system and serve them in that order.

    Regards,

    Sivaraj K


  • Hi Sivaraj,

    Thanks for the input. Can you please help me understand the point below, and please correct me if my understanding is wrong.

    a) " We can only prioritize the requests from different devices across the system and serve the requests." 

         -> In a scenario where we have Master1 & Master2 both continuously and parallely trying to access same

              resource [memory], by setting priority we can only ensure , which master gets access when we

              have simultaneous request to access the same resource [memory]

        -> Lets Master 1 with continuous  request of M1_Rq1 , M1_Rq2, M1_Rq3, M1_Rq4, M1_Rq5 , M1_Rq6 etc of high priority

             than Master 2 with continuous  request of M2_Rq1 , M2_Rq2, M2_Rq3, M2_Rq4, M2_Rq5 , M2_Rq6 etc. 

             at time T1 , T2 ,T3 , T4 ,T5,T6 etc

         

             Lets us consider that Rq3,Rq4,Rq5 for both master M1 and M2 occurred simultaneously and parallely at T3 , T4 ,T5. 

             So since Master1 is of higher priority than Master2 ,[ (M1_Rq3) ,( M1_Rq4), (M1_Rq5) ] will be serviced 

             But at the same time request from Master 2 ,[ (M2_Rq3) ,( M2_Rq4), (M2_Rq5) ] will be "DISCARDED" 

             so we will loose these request. So Master 2 can be serviced from M2_Rq6 and so on ...depending on 

             system load .


    Am i correct in this understanding or i am missing any crucial point...

    Thank You,

    Ashish Mishra 

  • Hi Ashish,

    Your understanding is absolutely correct. However, my suggestion is to avoid discarding the requests from Master 2 [(M2_Rq3), (M2_Rq4), (M2_Rq5)], which is set to the lower priority, and to serve those requests as well by locking access to the DDR2 resource with semaphore or mutex calls. That is, lock the DDR2 resource to first serve the requests from Master 1 (the higher-priority master), then have Master 1 release the lock and let Master 2 acquire it to serve its requests, so that requests from lower-priority masters are not discarded. A usage sketch is given below.
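    For illustration only (ddr_lock()/ddr_unlock()/dump_to_ddr() are hypothetical names, standing in for whatever semaphore or mutex primitive your software provides):

        /* self identifies the master, e.g. 0 = Master 1 (ARM), 1 = Master 2 (DSP) */
        void ddr_lock(unsigned self);
        void ddr_unlock(unsigned self);
        void dump_to_ddr(const void *src, unsigned bytes);

        /* Called independently on each master. A request is never discarded:
         * the lower-priority master simply blocks until the lock is released. */
        void master_dump(unsigned self, const void *src, unsigned bytes)
        {
            ddr_lock(self);               /* wait until the DDR region is free */
            dump_to_ddr(src, bytes);      /* serve this master's request       */
            ddr_unlock(self);             /* let the other master proceed      */
        }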

    Thanks for your understanding.

    Regards,

    Sivaraj K

  • Hi Sivaraj, 

    1. My problem is that I have Master 1 accessing DDR [section 1] from the DSP core and Master 2 accessing DDR [section 2] from the ARM core.

    2. Both will dump data to DDR, and while doing so I don't have any known and proven mechanism (at least none that I am aware of) for synchronization in such a case.

    3. So, as you confirmed above, if I allow both engines [Master 1 from the DSP and Master 2 from the ARM] to access DDR, I get data corruption.

    4. So I am treating a known memory location in DDR as a flag. Depending on the state of the flag, MASTER X either dumps the data or that dump operation is DISCARDED [as access to DDR is not allowed].

    Is there any better mechanism for this operation?

    Thanks

    Ashish Mishra

    [Bangalore / India]

  • Hi Ashish,

    As I mentioned in my previous post, there is no mechanism for synchronization between two masters accessing the same DDR device other than semaphore/mutex calls implemented in software in your application. In this way you can mutually synchronize the DDR accesses of the ARM and DSP masters; a minimal sketch of such a software lock is shown below.
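    For illustration, here is a minimal sketch of one way such a software lock could be built for exactly two masters (Peterson's algorithm), using only ordinary loads and stores to a shared DDR location; it is not TI-provided code. It assumes that: (1) the structure sits at the same address for both cores, (2) the region is non-cacheable on both sides (or caches are explicitly written back/invalidated around every access), and (3) stores become visible to the other master in program order. The address used is just an example.

        #include <stdint.h>

        typedef struct {
            volatile uint32_t want[2];   /* want[i] != 0: master i wants the lock */
            volatile uint32_t turn;      /* whose turn it is to wait              */
        } ddr_lock_t;

        /* Assumed location of the lock in a shared, non-cached DDR window. */
        #define DDR_LOCK  ((ddr_lock_t *)0xC7FFFF00u)   /* example address only */

        /* self = 0 on one master (e.g. the ARM) and 1 on the other (the DSP). */
        void ddr_lock(unsigned self)
        {
            unsigned other = 1u - self;

            DDR_LOCK->want[self] = 1;
            DDR_LOCK->turn = other;
            /* Busy-wait while the other master also wants the lock and it is
             * our turn to yield; exactly one master falls through at a time. */
            while (DDR_LOCK->want[other] && DDR_LOCK->turn == other)
                ;
        }

        void ddr_unlock(unsigned self)
        {
            DDR_LOCK->want[self] = 0;
        }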

    Regards,

    Sivaraj K
