McASP dead periods when streaming through network interface

Other Parts Discussed in Thread: TLV320AIC3106

This is a continuation of my previous post; however, I'm approaching the problem in a more detailed way now, so I thought it best to start a new post.

I am trying to stream audio to the AM335x EVM. I was having problems with network performance, especially over Wi-Fi. The TI employee helping me in my post on the Wi-Fi forum said it might be a resource issue; however, this system should be able to handle the load, as the AM335x is a faster processor than the one used in the Streaming Audio Reference Design.

I have now removed the daughterboard from the EVM and am working with the McASP1 port directly. I plan to hook up an external codec, but first I am debugging the issue using a logic analyzer and software-based I2S decoding.

I followed this example to create a new driver for my codec, and I am successfully decoding the I2S data, converting it into a .wav file, and playing it back on my laptop.

The problem is that whenever I try to stream across a network interface - even loopback via localhost - there are 30.72 ms gaps in my I2S data stream. The clock and frame sync keep going, but the data line stays low during that time.

I streamed a sine wave and I can see that the samples are being separated - not dropped. In other words, if I were to manually delete the 30.72 ms gaps, the sound would be correct.

I've also tried streaming locally while exercising the Wi-Fi module with iperf to see if it was a contention or resource-conflict issue, but that doesn't seem to be the case. So it seems to be something specific to using GStreamer from a network interface (even loopback) to the audio device.
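For reference, the background load was along these lines (the server address and duration here are just placeholders):

    # Keep the Wi-Fi link busy with iperf traffic while streaming via loopback
    # (assumes an iperf server is reachable at 192.168.1.100 - made up)
    iperf -c 192.168.1.100 -t 300 &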

Here is a picture of the issue on the logic analyzer:

Here is a picture of what a sine wave looked like after decoding the I2S:

All help is appreciated!

I'm using the latest version of the SDK (01.00.00.03), and the kernel is Linux am335x-evm 3.14.43-g875c69b.

Andrew

  • Hi,

    I will forward this to the SW team.
  • I've traced through the sound and McASP code and haven't been able to find anything that is different between the two cases.

    I also enabled EDMA debug messages, and the trace is exactly the same for the two cases (standard playback vs. streaming via loopback), other than:

    dma-dma-engine edma-dma-engine.0: vchan ec91c9e0: txd eb236800[4]: submitted

    which comes from line 35 of virt-dma.c, and I think the numbers are supposed to change based on what else is happening in the DMA engine.

    How can I further debug this?
  • Does the SW team have any suggestions? I have spent this entire week on it and I'm not getting anywhere. I'm at the point where I need to look at alternative platforms from other vendors.
  • Hi Andrew,

    Sorry for the delay. I have escalated the request.
  • The first thing I would do is:

    - replace GStreamer with something else.

    Something simpler, something easy to debug. Some handmade code.

    For streaming, there must be some buffering of data, and it must be done right.

    regards

    Wolfgang

  • Thank you.

    According to this, it is likely a problem with the alsasink element of GStreamer. How did the streaming audio reference design get around this? I'm trying to test it with PulseAudio but not getting any output at all.

  • I just found this: http://rawplay.prosys.com.tw/

    Apparently TI's reference design wasn't actually made by TI; it was made by a third party in Taiwan using their own software? That's pretty misleading on your website.
  • Andrew, I would try what Wolfgang suggests and do some tests without GStreamer. At one point, I was doing some low-latency network audio tests with the AM335x, and I had a difficult time getting good results with GStreamer, so I ended up writing some simple custom code that worked very well for me. Also, here are some of the GStreamer pipelines I created for the first round of audio streaming demos: http://processors.wiki.ti.com/index.php/Sitara_Linux_Audio_gstreamer. I'm sure it's possible to get GStreamer working well for your application, but maybe there are other routes you can take that will have better performance and require a little less debugging.

    Also, I did some tests with PulseAudio and made some demo tests between the AM335x and a Linux PC. The main thing I had to do was put it in system mode, as in this wiki: http://processors.wiki.ti.com/index.php/Sitara_Linux_Audio_pulseaudio. From there, I could configure a network audio device in PulseAudio and send and receive data from a Linux PC.
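    The system-mode invocation is roughly this (a sketch from memory - the wiki above covers the full setup, including the modules loaded via /etc/pulse/system.pa):

        # Run PulseAudio as a single system-wide daemon rather than per-user
        pulseaudio --system --daemonize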

  • Hi,

    Thank you for your reply. I did get PulseAudio working in the end; it's just that it seems to use too much CPU, so it creates skips/gaps itself.

    I was hoping to use GStreamer, as I'm going to have a lot of channels to synchronize across multiple devices, so going custom would mean a lot of code - basically implementing a lot of GStreamer ourselves, which seems kind of silly.

    I found the pipelines you put on that wiki page helpful when I was first getting started - thank you.

    Andrew

  • Yeah, it definitely would be best to utilize some existing libraries and tools if you aren't able to find a solution for the GStreamer issues you are seeing. It was a while ago, but I did test out MPD (like in this example: www.hifiwigwam.com/showthread.php) and it worked well. I'm not sure if it fits your needs, but maybe it's worth looking into. Hopefully there are a decent number of alternatives that could prove useful to you.

    Also, maybe you said this and I missed it, but have you tried running your GStreamer tests on an EVM with the very latest Processor SDK? Furthermore, have you tried using an Ethernet connection instead of Wi-Fi as a baseline test? If you are using WL8, I specifically remember having a lot of issues in my streaming demos that I wouldn't see when using an Ethernet connection or even with some USB Wi-Fi dongles.
  • Hi,

    I'm evaluating MPD for another project - it is a good piece of software...but not quite right for this purpose.

    I am using the latest SDK (1.00.00.03), and yes, I am using it with the Wi-Fi. I did get it working quite well over Ethernet, despite the apparent bug in alsasink...but once I wrote my own driver for I2S out, even Ethernet showed problems with gaps in the output.

    It's really frustrating that TI advertised this as a streaming audio reference design when it is clearly not working well enough and they have not gotten back to me in days.

    Thanks for your suggestions!

    Andrew
  • Interesting. I have a few more questions:

    1. If you use the default codec on the EVM (AIC310x), do you still see those gaps? Or do you only see them when you implement your own codec driver using the Sitara Audio DAC example?
    2. Are you certain that you are setting up the proper parameters in your custom driver and in your ALSA sink? Are the number of channels, sampling rate, and bit depth what you expect them to be? (One quick way to check is shown after this list.)
    3. Have you verified with an o-scope that your BCLK and WCLK are what you would expect them to be for the parameters described above?
    4. If you were to take the big patch at the bottom of the Audio DAC wiki, apply it, and then use it as the default audio sink, do you still see those issues? It might be a good test to run. I have tested that patch many times and haven't run into that issue, although most of my tests were on the 3.12 kernel.
    5. When you say local loopback, do you mean playing a file from the filesystem? Have you tried using aplay to play the local file, or just gstreamer?
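    For the parameter check in question 2, ALSA exposes what it actually negotiated while a stream is open (the card/device numbers here are assumptions - adjust them for your setup):

        # While audio is playing, dump the negotiated hw_params for
        # playback device 0 on card 0 (shows format, rate, channels, etc.)
        cat /proc/asound/card0/pcm0p/sub0/hw_params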
  • Hi,

    1. Using the default codec, I can hear the gaps with headphones plugged in - that's what started this whole thing. They are more random, though. When I made my own driver (using the example driver from the wiki here), the gaps became more deterministic.
    2. Yes - see 3.
    3. Yes - I'm using a logic analyzer to capture the I2S signals.
    5. aplay and gstreamer from a local file work fine, both with my driver and the default one. When I say loopback, I mean one gstreamer instance doing a udpsink to localhost:5000 and another gstreamer instance doing a udpsrc from localhost:5000 (sketched below).
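    To be explicit, the loopback test is a pair of pipelines along these lines (a reconstruction, not my exact command lines - the element choices and caps here are assumptions):

        # Sender: payload a test tone as L16 RTP audio and push it to
        # localhost port 5000
        gst-launch-1.0 audiotestsrc ! audioconvert ! \
            audio/x-raw,format=S16BE,rate=48000,channels=2 ! \
            rtpL16pay ! udpsink host=127.0.0.1 port=5000

        # Receiver: pick the stream up on port 5000 and hand it to alsasink
        gst-launch-1.0 udpsrc port=5000 \
            caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=L16,channels=2" ! \
            rtpL16depay ! audioconvert ! alsasink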

    Andrew
  • Andrew,

    Can you post a zoomed-in capture that shows your clock and frame sync more clearly? From your sine wave above, it appears that you are using 32-bit data at a 96 kHz sampling rate. Is this correct?

    What bit clock rate and frame sync frequency are you trying to get, and what are you actually seeing on the scope capture?

    Jason Reeder
  • Hi,

    I don't have the zoomed-in picture at the moment, but I was using 24-bit data at 96 kHz. I also went lower to see if it was a bandwidth / data-transfer issue; the problem persisted even down to 8 kHz. The correct number of bits per frame, etc., was coming out when I made those changes.

    Andrew

  • Does your frame sync frequency match the sampling rate of the data being played?

    Sorry if I've missed this so far, but what happens if you store the streamed file as a .wav file on your EVM and then use aplay instead of gstreamer to play it from the EVM to your codec?

    Jason Reeder
  • Hi,

    I believe it does match. I can double check.

    Streaming from GStreamer to a .wav file works great - I can play that .wav just fine with aplay.

    Andrew
  • My working assumption after seeing your scope capture was that the frame sync frequency may have been faster than the sampling rate of the audio, which caused the gaps in the output until the next sample was available for transmit. However, after hearing that you can record the stream to a .wav file and then aplay that file just fine, I'm less convinced that this is what is happening. I'd still like to see a zoomed-in capture that shows the bit clock and frame sync frequency compared to the audio sample rate, though, if possible.

    Since you can successfully save a .wav file from the stream over both Wi-Fi and Ethernet, this leads me to believe that network issues are not the problem.

    Since you can play the stored .wav files through the McASP using your codec, this seems to rule out any issues with ALSA's ability to interface with your codec.

    It is beginning to seem that the issue lies in gstreamer's ability to connect the audio stream to the McASP correctly. Whether that is a buffering issue or a data format issue, I am not sure yet. Do you agree?

    Jason Reeder
  • I definitely agree.

    I should note that the clock and frame sync keep going when there are gaps on the data line...and I modified the alsasink GStreamer plugin to detect when the buffer was full of only zero values. It was never full of zeros during a "stream" from disk (via filesrc), but it was often full of zeros during a stream even over localhost (udpsink, udpsrc, localhost:5000).

    I'm not sure if you saw my link above to a known bug in alsasink: link

    I tried using the PulseAudio sink, and it seems the dev board does not have enough CPU power to handle that.

    Andrew

  • Hi,

    I finally got a chance to measure the frame sync frequency and it did NOT match the sampling rate.

    For a 96 kHz file, it was 250 kHz. The ratio of 250/96 matched the duty cycle of sound vs. blank spots, so I think you were right to suspect this issue.

    I went back through the example and found that I had set the clock frequency of the McASP in the device tree to 12 MHz instead of 24 MHz. I did that based on advice I found somewhere while debugging, but I think it was bad advice.
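    A quick way to double-check what the McASP functional clock is actually running at (this assumes debugfs is mounted and the kernel exposes the common clock framework summary; the clock names vary by kernel):

        # List the McASP-related clocks and their current rates
        grep -i mcasp /sys/kernel/debug/clk/clk_summary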

    Anyway, now when I play my 96 kHz file through gstreamer, the frame sync is at 100 kHz. When I play a 48 kHz file using aplay, the frame sync is at 50 kHz.

    During the 96 kHz file, there are again gaps - 96% of the period is data, with 4% at zero, which must correspond to the frame clock being 100 kHz vs. the 96 kHz sample rate.

    What do I have wrong that would cause that small difference?

    The example documents where it sets the BCLK divider, and says it does not set MCLK or the BCLK/LRCLK ratio:

    Notice that we are only setting clock divider "1"; this is because the 24 MHz MCLK (which comes directly from the 24 MHz sysclk) doesn't need to be altered for our configuration. Clock divider "2" is the bit clock / frame sync ratio, but it will be updated automatically when we set the bit clock based on the format of the audio to be played, so we don't need to set it here.

    Andrew

  • If you drive the McASP with a 24 MHz clock, and you have an I2S frame of 2 x 24 = 48 bits and a sample frequency of 48 kHz, you need a divider of:

    24000000 / 48 / 48000 = 10.4166666.

    The McASP driver will select the nearest divider of 10, and this will give you a 50 kHz sample frequency, i.e. a 50 kHz frame rate.

    If you want exactly 48 kHz, you will need a McASP clock frequency of n * 2304000 Hz (48 bits x 48000 Hz). You might need a PLL connected to the McASP clock input to do this.

    This requirement is why audio codec chips have a PLL building block: you can generate the McASP clock from your codec.
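    The same arithmetic explains the 100 kHz frame sync you measured with the 96 kHz file (a quick check with bc; 48 bits per frame matches your 2 x 24-bit I2S frame):

        # Ideal divider for 96 kHz with a 24 MHz McASP clock, 48 bits/frame
        echo 'scale=4; 24000000 / 48 / 96000' | bc    # -> 5.2083
        # The nearest integer divider is 5, which gives the observed rate
        echo '24000000 / 48 / 5' | bc                 # -> 100000 (100 kHz)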

     

  • Andrew,

    Wolfgang is right. Starting from the internal 24 MHz clock it is difficult (if not impossible) to get to some of the standard audio sampling rates if you want the AM335x to be the clock master and source the bit clock and frame sync clock.

    Can the codec that you are planning to interface with source the bit clock and the frame sync clock? This is what we do with the TLV320AIC3106 codec that we put on our EVMs. That codec can use a fractional divisor, which allows it to create the exact frame sync frequencies for most standard audio formats.

    It is possible to bring an additional clock into the McASP and use that instead of the 24 MHz internal clock, but the codec route is usually the most straightforward.

    Jason Reeder

  • Hi,

    Sorry for the late reply - I was out sick.

    Thank you, Wolfgang and Jason - that makes complete sense. Yes, our codec can drive the clock...I just don't have one ready to test yet...but it seems like once that is in place, things might actually be okay...and then I can get back to the original problem of streaming over Wi-Fi :-)

    Andrew

  • Can you check the Interface Configuration Register at 0x4C000054? What's its value at run-time?

  • Did you ever get a chance to check this register? This sounds very similar to issues I have encountered many times in the past related to command starvation in the EMIF. There's an entire section devoted to this topic in the TRM, and yet the Linux SDK does not implement the extremely simple workaround that fixes it. I suspect this might be at play in your case. The Linux team won't add the change I'm proposing because they've never seen evidence of it being a real issue; maybe, just maybe, you are the case in point. My change is simply to adjust the least significant byte of the above register from FF to FE (or perhaps something smaller yet, like 0x20).
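    If it helps, the register can be inspected and adjusted from user space along these lines (a sketch - devmem2 is assumed to be available, and the upper bytes must be preserved from the value you read back):

        # Read the current value of the EMIF Interface Configuration register
        devmem2 0x4C000054
        # Write it back with only the least significant byte changed FF -> FE
        devmem2 0x4C000054 w 0x<upper-bytes-unchanged>FE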
  • Thank you for following up.

    I haven't yet; unfortunately, I've been focused on other projects. I will take a look when I can get back to it.

    Andrew

  • This is quite a lengthy thread, and I admit I had not read all of it! I just talked with Jason a bit, and I have to agree with what he and Wolfgang are saying here. I think I may have gone off the rails with my previous reply! The key thing here seems to be getting the proper frequency generated.
  • No problem, thank you.