
AIF2 DB module on C6670 reports errors

Hi ,

I am using the C6670 AIF2 module in LTE 20 MHz, CPRI mode. Sometimes I get an error report from the DB module, db_ee_i_fifo_ovfl_err, and then the AIF2 RX side loses some symbol data.

I previously posted this problem here:

http://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639/t/315721.aspx


Normally, I receive the symbols in sequence, from 0 to 139, in LTE mode. But when the error happens, I lose some symbols.

I tried these ways to optimize the usage of AIF2.

1: Changed the AIF2 PKTDMA priority to the highest value (0), and set all other PKTDMA modules to 1.

2: Placed the AIF2 RX descriptors in L2 SRAM and the TX descriptors in DDR.

3: Increased the DB buffer size to the maximum value, CSL_AIF2_DB_FIFO_DEPTH_QW256.
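For reference, step 2 can be done with the TI codegen section pragmas; here is a minimal sketch (the symbol names, section names, and sizes are assumptions for illustration — the actual placement happens in the linker .cmd file):

```c
#include <stdint.h>

#define NUM_DESC  128   /* descriptor count: assumed for illustration */
#define DESC_SIZE 64    /* bytes per host descriptor: a common choice */

/* TI compiler only: route each descriptor pool into a named output
 * section; the linker .cmd file then places .rxDescL2 into L2SRAM and
 * .txDescDDR into DDR3. Other compilers skip these pragmas. */
#ifdef _TMS320C6X
#pragma DATA_SECTION(rxDescRegion, ".rxDescL2")
#pragma DATA_ALIGN(rxDescRegion, 64)
#endif
uint8_t rxDescRegion[NUM_DESC * DESC_SIZE];

#ifdef _TMS320C6X
#pragma DATA_SECTION(txDescRegion, ".txDescDDR")
#pragma DATA_ALIGN(txDescRegion, 64)
#endif
uint8_t txDescRegion[NUM_DESC * DESC_SIZE];
```

In the linker command file this would pair with something like `.rxDescL2 > L2SRAM` and `.txDescDDR > DDR3` in the SECTIONS directive.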

These methods have some effect and the error is now rare, but it still disturbs me sometimes. Is there any way to solve this problem once and for all?

Thanks for your reply.

Regards,

ziyang.

  • Hi Ziyang,

    Sorry to hear that you are still having trouble making it run. It looks like you still have VBUS data or command transaction trouble. Here are my additional recommendations for your problem.

    1. Place the AIF2 RX descriptors in MSMC (instead of L2) and see what happens.

    2. Stop all other tasks except AIF2. If this is fine, increase the number of other data transactions one by one until you see the problem.

    3. Turn off AIF2 egress traffic and run only ingress traffic to see whether the problem comes from VBUS command contention or pure data contention. If you see good Rx results when you stop traffic on the Tx side, it is a VBUS command traffic issue, and you may need to place the Tx-related memory in a different location.

    4. Reduce the number of AxCs for both ingress and egress; try this workaround just to confirm whether your current AIF2 load is manageable or too heavy.

    Let me know your test results for all of these.

    Regards,

    Albert  


  • Hi, Albert Bae. Thanks for your help.

    I have tried your suggestions and got some results:

    1: Placing the RX descriptors in MSMC has no effect; the error still happens.

    2: I will describe it below.

    3: I do not know how to turn off egress without affecting ingress; if I stop TX, I think the link goes down. I did not test this.

    4: Actually, we previously had just one active link with two AxCs in it, and the error I mentioned happened rarely. Now I need to use two active links, each with 2 AxCs. The throughput is doubled, and this error now happens every time. I do not think it exceeds the max throughput of the DSP. We are using LTE 20 MHz, 4x CPRI mode.


    I tried your advice 2 and found something weird. I traced it to a code block that is executed every 10 ms in our LTE physical uplink process. It is a loop; I picked out some of the core code here and marked the loop count value.

    This code is executed on DSP Core 0, and the AIF2 RX descriptors are located in Core 1's L2 SRAM. I also checked the Bandwidth Management Registers: their priority is the default value, which is lower than the AIF2 PKTDMA priority (I set the AIF2 PKTDMA priority to 0).

    Disabling this loop, or just deleting the _loll and _hill intrinsics in the inner loop, has the same effect: no error happens.
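    For context, here is a hypothetical sketch of the kind of inner loop being described. The thread's actual code was not preserved, so fftc_data, conj_temp, the sizes, and the loop body are all assumptions; host-side fallbacks are provided for the TI C6000 _loll/_hill intrinsics so the sketch compiles off-target:

    ```c
    #include <stdint.h>

    #ifndef _TMS320C6X
    /* host stand-ins for the C6000 intrinsics: _loll/_hill extract the
     * low/high 32 bits of a 64-bit value */
    #define _loll(x) ((uint32_t)((uint64_t)(x) & 0xFFFFFFFFu))
    #define _hill(x) ((uint32_t)((uint64_t)(x) >> 32))
    #endif

    /* hypothetical: packed 16-bit I/Q samples from the FFTC, two pairs
     * per 64-bit word; in the thread this array lived in uncached DDR */
    static int64_t fftc_data[2048];
    static int64_t conj_temp[2048];

    void conj_loop(int n)
    {
        int i;
        for (i = 0; i < n; i++) {
            /* when fftc_data sits in *uncached* DDR, each of these
             * 64-bit loads is a separate VBUS transaction every 10 ms,
             * competing with the AIF2 PKTDMA for the shared bus/bridge */
            int64_t  v  = fftc_data[i];
            uint32_t lo = _loll(v);   /* low word: first I/Q pair  */
            uint32_t hi = _hill(v);   /* high word: second I/Q pair */
            /* illustrative reassemble-and-store; the real math differed */
            conj_temp[i] = ((int64_t)hi << 32) | lo;
        }
    }
    ```

    The point of the sketch is only the access pattern: with caching disabled on the source array, every iteration issues external-memory transactions, which is why removing the loads made the symptom disappear.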

    Why does this code affect the AIF2 data receiving, or is it not the true reason? I hope to get your suggestion.

    Regards,

    ziyang.


  • Hi,

    I think it depends on where the conj_temp and *fftc_data values are defined. If they are defined in Core 0's L2 RAM, they do not cause trouble for the Core 1 AIF2 Rx operation, but if one of them is located in MSMC or DDR, it will consume a certain amount of VBUS bandwidth every 10 ms. Even if those variables are located in L2, if you have any dependency between Core 0 and Core 1 that makes Core 1 access the L2 frequently, it may cause VBUS access trouble.

    Please check your application in detail and let me know if you find anything new.

    Regards,

    Albert  

  • Hi, Albert Bae;

    You are right, I think. I looked into the data sections of those two variables: conj_temp is a local variable in the function, but fftc_data is an array defined in DDR WITHOUT cache enabled. After I enabled the cache, no errors have happened so far. Thanks for your help.
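    As a side note on this fix: on C66x, cacheability of external memory is controlled per 16 MB region by the MAR registers (MAR0 at 0x01848000 per the CorePac user guide). A small sketch of locating the MAR that covers a given DDR buffer follows; the register addresses should be checked against the device data manual, and the CSL call name is an assumption from csl_cacheAux.h:

    ```c
    #include <stdint.h>

    #define MAR_BASE   0x01848000u   /* MAR0 address on C66x CorePac */
    #define MAR_REGION 0x01000000u   /* each MAR covers one 16 MB region */

    /* index of the MAR register covering a given address */
    static uint32_t mar_index(uint32_t addr)
    {
        return addr / MAR_REGION;    /* same as addr >> 24 */
    }

    /* memory-mapped address of that MAR register */
    static uint32_t mar_reg_addr(uint32_t addr)
    {
        return MAR_BASE + 4u * mar_index(addr);
    }

    /* On target, caching of the region is enabled by setting the PC bit,
     *   *(volatile uint32_t *)mar_reg_addr(addr) |= 1;
     * or via CSL, CACHE_enableCaching(mar_index(addr)), followed by the
     * usual cache writeback/invalidate before first use of the buffer. */
    ```

    For DDR3 at 0x80000000 this lands on MAR128, i.e. the register at 0x01848200.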

    But it is confusing me now. Even with no cache enabled, I would think the priority (defined in the PKTDMA module and the CorePac Bandwidth Management Registers) is the major factor. The AIF2 PKTDMA has the system's highest priority, 0, and no other module has priority 0, so the AIF2 should not be delayed by any other master. If direct DSP core access to DDR can block the AIF2, why has the EDMA had no influence so far? In the Bandwidth Management Registers there is also a MAXWAIT value; could it cause this? I just can't get the point.

    In your previous post you said, "even if those variables are located in L2, if you have any dependency between core0 and core1 and it makes core1 access the L2 frequently, it may cause VBUS access trouble". I do not understand what this means. Why would these situations override the priority settings? Are there any documents that discuss this?

    Hoping to get your help.

    Regards,

    Ziyang.

  • AIF2 is a streaming-based interface: it continuously transfers data without any rest time. If priority were the only factor, all other lower-priority masters would never get a chance. Some interfaces and modules share data and command buses, and some bridges are also shared (e.g., bridge7 is shared by AIF2, EDMA TC2, and BCP DDR access), which can cause data contention when multiple masters try to access data at the same time. Additionally, we do not recommend using DDR memory for AIF2-related operation, to avoid any delay from DDR page thrashing.

    When I mentioned core dependency, I meant that a period of data congestion can occur when the cores share a bus, memory, or interface peripheral.

    When you say "Bandwidth Management Register", which register are you pointing to? (Which document and which page?)

    Regards,

    Albert

  • Hi, Albert Bae.

    Will access from AIF2 to the Rx descriptors located in L2 also be blocked, as you said? Or does it only block TX access to DDR?

    The "Bandwidth Management Registers" are described in a document called "C66x CorePac manual.pdf"; there is a main section about them. The filename may not be exactly right, but I will not be in the office for a few days. I will post the file as soon as I get back to work.

    Thanks for all your help. :-)

    Regards,

    ziyang

  • Will access from AIF2 to the Rx descriptors located in L2 also be blocked, as you said? Or does it only block TX access to DDR? [TI] Mainly, DDR access can cause this problem even when the DDR is used only for egress (Tx) operation, but sometimes even L2 traffic can cause issues when many data-access processes are running heavily and consuming a lot of bandwidth.

    The Bandwidth Management Registers are used only for CorePac DMA control, not for the EDMA or for interface-peripheral masters. The PKTDMA module has its own priority setup register, and you may have to use that register to control the priority of each IP master that has a PKTDMA inside.

    Regards,

    Albert