
HFP AG

Hi, I'm fairly new to this stack and would like confirmation on what I'm planning to do. I'm using a CC256X and a DK-TM4C129X with Bluetopia 1.2 R2, and I'm trying to send audio from my cellphone to the board using HFP.

First question: What callback should I use to see the audio I receive from the cellphone? (I was thinking of the audio data indication event.)

Second question: Is it possible to set up a connection between my board and my headset and forward the same audio data I receive from my cellphone? (I was thinking of sending the data with Send_Audio_Data after setting up the connection with the headset.)

  • Roberto,

    1. You are correct: audio data is received in the HFRE_Event_Callback via the etHFRE_Audio_Data_Indication event. However, by default the audio is sent over the I2S lines. To configure the stack and radio to send the audio over the HCI port instead, use the following code any time after stack initialization:

       int Result;
      
       Result = SCO_Set_Physical_Transport(BluetoothStackID, sptHCI);

       if(!Result)
       {
          printf("Status: SCO_Set_Physical_Transport() Success.\r\n");
       }
       else
       {
          printf("Error: SCO_Set_Physical_Transport() Result: %d.\r\n", Result);
       }

    This code only needs to be run once each time the stack is reset (i.e. after every BSC_Initialize()).

    2. If you are receiving audio data from the phone then you can also send data to the phone. You are right: you can use the HFRE_Send_Audio_Data() function to send the data. Please let me know if this isn't the information you were looking for.

    Thanks,
    Samuel

  • Hi Samuel,

    First thanks for the answers.

    • I tried your solution and SCO_Set_Physical_Transport() returned success (as expected).

    • I then tried to set up the music stream again from the phone to the Tiva, but the music kept coming out of the phone, and the only messages I received on the Tiva were the ones from when I pressed start and stop on my cellphone. Likewise, if I get a telephone call and talk through my phone, I do not receive the audio on the Tiva.

    • If I read the callback correctly it should print the audio data I receive to the screen, but that's not happening. Do you have any idea why?

    • I noticed that when I connect my headset to my phone, the Windows Phone tab tells me "music and voice connection", while when I connect the phone to the Tiva it only says "voice". Maybe there is some class-related problem?

    • It's probably worth knowing that my headset is supposed to support only the Hands-Free and Headset profiles.

    • Finally, I don't want to send the audio I receive back to the phone but to a headset, but that's another problem.

  • Roberto,

    "If I read the callback correctly it should prompt to video the audio data I receive but that's not happening, do you have any idea why?"

    Did you answer the call using your phone or using the Tiva? I believe if you answer with your phone the audio will by default output to the phone's speaker. If you answer with the Tiva I believe the phone will initiate an audio connection. You can answer the call with the HFPDemo using the AnswerCall command. You can also initiate an audio connection from HFPDemo using the ManageAudio call.

    "I noticed that when I connect my headset to my phone the WP tab tell me "music and voice connection" while when I connect the phone to the tiva it only says voice, maybe there is some class related problem?"

    What kind of cellphone are you using? Music likely indicates an A2DP profile connection and voice likely indicates an HFP profile connection. With your headset both profiles are likely connected, but with HFPDemo only HFP is connected.

    "Finally, I don't want to send the audio I receive to the phone but to an headset but this is annother problem."

    Is the other device a COTS device or your own device? Tiva currently only supports the HFP Hands-free role, it doesn't support the Audio Gateway role. This will be added in an upcoming release. If your other device is your own device then you don't need to use the Hands-free profile to forward the audio and this is not a problem.

    Thanks,
    Samuel

  • "Did you answer the call using your phone or using the Tiva? I believe if you answer with your phone the audio will by default output to the phone's speaker. If you answer with the Tiva I believe the phone will initiate an audio connection. You can answer the call with the HFPDemo using the AnswerCall command. You can also initiate an audio connection from HFPDemo using the ManageAudio call."

    I tried both. While I keep talking I see only a couple of "HFRE Control Audio control" messages. I was expecting to see a lot more of them, since it should print all the data I send while speaking. Am I right?

    "What kind of cellphone are you using? Music likely indicates an A2DP profile connection and voice like indicates an HFP profile connection. With your headset both profiles are likely connected but with HFPDemo only HFP is connected."

    I am using a Windows Phone 8.1 device (is the model important?). My manual was wrong: looking at the online catalog I now see my headset uses A2DP. Can I do what I'm planning to do using A2DP? (Receive audio from the phone on the Tiva, then send the audio from the Tiva to the headset?)


    "Is the other device a COTS device or your own device? Tiva currently only supports the HFP Hands-free role, it doesn't support the Audio Gateway role. This will be added in an upcoming release. If your other device is your own device then you don't need to use the Hands-free profile to forward the audio and this is not a problem."

    It is a COTS component indeed, so with HFP I can't receive audio from the phone and send it to a COTS headset, right? (I did know that AG was not supported, but I thought using the audio sent from the cellphone could be a workaround.)

  • Roberto,

    "I tried both, while I keep talking i see only a couple of "HFRE Control Audio control" i was expecting to see a lot more of them since it should print all the data I send speaking am I right?"

    You should see audio data indications continuously, even when no one is talking. Do you receive the etHFRE_Audio_Connection_Indication event? Have you tried the VS_Write_SCO_Configuration vendor-specific command? It's not included in BTVS.c but you can add it. Here's the necessary code:

    int BTPSAPI VS_Write_SCO_Configuration(unsigned int BluetoothStackID, Byte_t ConnectionType, Byte_t TxBufferSize, Word_t TxBufferMaxLatency, Byte_t AcceptBadCRC)
    {
       int    ret_val;
       Byte_t Length;
       Byte_t Status;
       Byte_t ReturnBuffer[3];
       Byte_t CommandBuffer[sizeof(ConnectionType) + sizeof(TxBufferSize) + sizeof(TxBufferMaxLatency) + sizeof(AcceptBadCRC)];
       Byte_t OGF;
       Word_t OCF;
    
       if(BluetoothStackID)
       {
          ASSIGN_HOST_BYTE_TO_LITTLE_ENDIAN_UNALIGNED_BYTE(&(CommandBuffer[0]), ConnectionType);
          ASSIGN_HOST_BYTE_TO_LITTLE_ENDIAN_UNALIGNED_BYTE(&(CommandBuffer[1]), TxBufferSize);
          ASSIGN_HOST_WORD_TO_LITTLE_ENDIAN_UNALIGNED_WORD(&(CommandBuffer[2]), TxBufferMaxLatency);
          ASSIGN_HOST_BYTE_TO_LITTLE_ENDIAN_UNALIGNED_BYTE(&(CommandBuffer[4]), AcceptBadCRC);
    
          Length  = sizeof(ReturnBuffer);
          OGF     = VS_COMMAND_OGF(VS_WRITE_SCO_CONFIGURATION_COMMAND_OPCODE);
          OCF     = VS_COMMAND_OCF(VS_WRITE_SCO_CONFIGURATION_COMMAND_OPCODE);
    
          ret_val = HCI_Send_Raw_Command(BluetoothStackID, OGF, OCF, sizeof(CommandBuffer), (Byte_t *)(CommandBuffer), &Status, &Length, ReturnBuffer, TRUE);
    
          ret_val = MapSendRawResults(ret_val, Status, Length, ReturnBuffer);
       }
       else
          ret_val = BTPS_ERROR_INVALID_PARAMETER;
    
       return(ret_val);
    }

    Call it with the following parameters:

       int Result;
       
       Result = VS_Write_SCO_Configuration(BluetoothStackID, 1, 0x00, 0x00, 0xFF);
    
       if(!Result)
       {
          printf("Status: VS_Write_SCO_Configuration() Success.\r\n");
       }
       else
       {
          printf("Error: VS_Write_SCO_Configuration() Result: %d.\r\n", Result);
       }

    "Can I do what I'm planning to do using A2DP? (Receive audio from phone on tivia, send audio I receive from tivia to headset?)"

    Phones don't support sending phone call audio over A2DP, only audio in your music library is sent over A2DP.

    "It is a COST component indeed, so with HFP I can't receive audio from the phone and send it to a COST headset right?"

    What is the other device? If the other device also supports the HFP Audio Gateway role then you can do this, no problem. As an alternative you could send the audio to the other device using A2DP; there would be more latency than with HFP, plus a slight quality loss from encoding the audio.

    Thanks,
    Samuel

  • What I want to do: I need to use Bluetopia to send music from my phone to 2 headsets (LogiLink BT0029).

    What I tried: I tried to use HFP to receive music and voice data, but I don't see continuous audio indications, only very sporadic notifications when I push buttons.

    • If I answer a call I see an audio connection notification and some other sporadic notifications (the voice stops coming out of my phone).
    • If I play music I see notifications only when I issue a command (start/stop or similar), and the music doesn't stop coming from my phone.

    What I would like to understand: Where should I look to learn that by default the stack is configured to send audio over I2S? And why should I use that vendor-specific function? Shouldn't the callback suffice to show me that I'm receiving data (maybe I'm missing some important document)?

    What I need clarified:

    • Am I correct in saying that, since the AG role is not supported for HFP, I can't send audio data from the Tiva to my headset (LogiLink BT0029) and expect it to work?
    • Is it possible to receive music from my phone on the Tiva and send it to the headset using A2DP?

    Tomorrow I'll try your solution and provide a detailed log for the voice data.

    Thanks again for your availability, and sorry if I seem silly.

  • After I added your suggested code I see the continuous stream you were talking about while I use voice commands (HFRE control indicator status indications and some unknown HFRE events).

    I'll switch to A2DP since you said AG is not supported (thus I can't use HFP for what I need to do).
    I would be very grateful if you could answer my previous questions.

    If needed I'll open a new post on A2DP after some trials.

    Thanks again Samuel.
  • Hi Roberto,

    "Where should I look to know that by default the stack is configured to send message to I2S?"

    Use the SCO_Set_Physical_Transport() function to control if the stack is configured to send and receive audio over I2S versus HCI.

    "And why should I use that vendor specific function?"

    The vendor-specific function is needed to tell the radio to send audio over HCI instead of I2S.

    "Shouldn't the one on the callback suffice to show me that I'm receiving data (maybe I'm missing some important paper)?"

    The callback is enough to show you that you are receiving data, but without the SCO_Set_Physical_Transport() call and the vendor-specific command you won't receive audio over HCI; that's why you weren't seeing audio in the callback before.
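
    For reference, the complete routing setup then looks like this, run once after each BSC_Initialize(). This is a self-contained sketch: the two functions below are stubs standing in for the real SCO_Set_Physical_Transport() (from SCOAPI.h) and the VS_Write_SCO_Configuration() addition, and sptHCI is a placeholder value; only the call order and the parameters are the point.

```c
/* Stubs standing in for the real Bluetopia calls, so the setup order can be
   shown self-contained. In your project, delete these and use the real
   SCO_Set_Physical_Transport() and VS_Write_SCO_Configuration(). */
#define sptHCI 1 /* placeholder; use the real SCO_Physical_Transport_t value */

static int SCO_Set_Physical_Transport(unsigned int StackID, int Transport)
{
   (void)StackID; (void)Transport;
   return 0;
}

static int VS_Write_SCO_Configuration(unsigned int StackID, unsigned char ConnectionType,
                                      unsigned char TxBufferSize, unsigned short TxBufferMaxLatency,
                                      unsigned char AcceptBadCRC)
{
   (void)StackID; (void)ConnectionType; (void)TxBufferSize;
   (void)TxBufferMaxLatency; (void)AcceptBadCRC;
   return 0;
}

/* Route SCO audio over HCI instead of I2S. Call once after every
   BSC_Initialize(), before any audio connection is established. */
int ConfigureHCIAudio(unsigned int BluetoothStackID)
{
   /* 1. Tell the stack to move SCO data over the HCI transport. */
   int Result = SCO_Set_Physical_Transport(BluetoothStackID, sptHCI);

   /* 2. Tell the radio the same thing via the vendor-specific command. */
   if(!Result)
      Result = VS_Write_SCO_Configuration(BluetoothStackID, 1, 0x00, 0x00, 0xFF);

   return(Result);
}
```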

    "Am i correct when I say that since AG is not supported for the HFP, I can't send audio data from the Tiva to my headset (logilink bt0029) and expect it to work?"

    Yes, you are correct. You need the AG role to send HFP audio to a headset.

    "Is it possible to receive music from my phone on the tiva and send it to the headset using A2DP ?"

    Yes, you can. There would be more latency than there would be with HFP, though. If full-duplex real-time phone-call audio is your target application then there might be some latency issues with this approach.

    Another concern is support for the A2DP profile in headsets. Most, if not all, newer headsets support A2DP, but you may run into a few older ones that don't. You can check the Bluetooth SIG's qualified listings to see what profiles a device supports: https://www.bluetooth.org/tpg/listings.cfm. I checked for your logilink bt0029 device but couldn't find it on their site, maybe it has another model number?

    Thanks,
    Samuel

  • Roberto,

    HFP is for two-way voice audio and is of a much lower quality. You should only use that when you need voice input from the headset to make it back to the phone. Based on your aim to use Bluetopia to send music directly from your phone to 2 headsets, A2DP is very likely what you want to do.

    I can tell you that this is entirely possible, and I've done it. It works well as long as you have a fast enough UART and do not need to perform any sample rate/codec conversions or mix any other audio into it. I used the GAVD API, which is a bit tedious. You might look into the AUD API that the newer demo projects use instead, but I'm not as familiar with that API yet--not sure if it allows the same fine-grained control over how big your audio packets can be.

    I did this by registering one GAVD endpoint of type tspSNK and two GAVD endpoints of type tspSRC. I had to limit my service capabilities to the target headset's capabilities (i.e. by hardcoding max bitpool of 45 and MediaInMTU of 895), because otherwise the phone would send audio data with too big of a bitpool or too large of a packet size, and I wouldn't be able to directly echo from the phone to the headsets.
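
    To put numbers on why that bitpool cap matters: the SBC frame-length and bitrate formulas below follow the A2DP specification's SBC calculation, but the parameter choices (16 blocks, 8 subbands, joint stereo, 44.1 kHz, bitpool 45) are my assumptions for a typical phone stream, not values read from the stack:

```c
/* SBC frame length in bytes for joint stereo (per the A2DP spec's SBC
   calculation): 4 header bytes, 4*subbands*channels/8 scale-factor bytes,
   then ceil((subbands + blocks*bitpool)/8) audio payload bytes. */
static unsigned SBCFrameLengthJointStereo(unsigned blocks, unsigned subbands,
                                          unsigned bitpool)
{
   return 4 + (4 * subbands * 2) / 8 + (subbands + blocks * bitpool + 7) / 8;
}

/* Average bitrate in bits/second: 8 * frame_length * fs / (subbands * blocks). */
static unsigned SBCBitrate(unsigned frameLength, unsigned fs,
                           unsigned blocks, unsigned subbands)
{
   return (unsigned)(8ull * frameLength * fs / (subbands * blocks));
}
```

    At bitpool 45 this works out to 103-byte frames and roughly 284 kbps per stream; an 895-byte MediaInMTU then holds eight such frames plus the RTP header, which is presumably why those two limits go together.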

    The steps to registering a GAVD endpoint are numerous... something like this (it doesn't include setting up the SDP record, let me know if you need help setting that up too):

    A2DP_SBC_Codec_Specific_Information_Element_t A2DPSpecificInfo;
    GAVD_Media_Codec_Info_Element_Data_t *pMediaCodecInfo;
    GAVD_Service_Capabilities_Info_t Capabilities[2];
    int MyLSEIDs[3];
    int i;
    GAVD_Local_End_Point_Info_t MyEndpoints[3];
    
    Capabilities[0].ServiceCategory = scMediaTransport;
    Capabilities[1].ServiceCategory = scMediaCodec;
    pMediaCodecInfo = &Capabilities[1].InfoElement
                          .GAVD_Media_Codec_Info_Element_Data;
    pMediaCodecInfo->MediaType = mtAudio;
    pMediaCodecInfo->MediaCodecType = A2DP_MEDIA_CODEC_TYPE_SBC;
    pMediaCodecInfo->MediaCodecSpecificInfoLength =
        A2DP_SBC_CODEC_SPECIFIC_INFORMATION_ELEMENT_SIZE;
    pMediaCodecInfo->MediaCodecSpecificInfo = (Byte_t *) &A2DPSpecificInfo;
    
    /* Set up supported SBC configuration */
    BTPS_MemInitialize(&A2DPSpecificInfo, 0,
                       A2DP_SBC_CODEC_SPECIFIC_INFORMATION_ELEMENT_SIZE);
    /* SNK must support both 44.1 and 48 KHz. */
    A2DP_SBC_ASSIGN_SAMPLING_FREQUENCY(
        &A2DPSpecificInfo,
        (
        A2DP_SBC_SAMPLING_FREQUENCY_16_KHZ_VALUE |
        A2DP_SBC_SAMPLING_FREQUENCY_32_KHZ_VALUE |
        A2DP_SBC_SAMPLING_FREQUENCY_44_1_KHZ_VALUE |
        A2DP_SBC_SAMPLING_FREQUENCY_48_KHZ_VALUE));
    
    A2DP_SBC_ASSIGN_CHANNEL_MODE(&A2DPSpecificInfo,
                                 (A2DP_SBC_CHANNEL_MODE_MONO_VALUE |
                                 A2DP_SBC_CHANNEL_MODE_JOINT_STEREO_VALUE |
                                 A2DP_SBC_CHANNEL_MODE_STEREO_VALUE |
                                 A2DP_SBC_CHANNEL_MODE_DUAL_CHANNEL_VALUE));
    
    A2DP_SBC_ASSIGN_BLOCK_LENGTH(&A2DPSpecificInfo,
                                 (A2DP_SBC_BLOCK_LENGTH_FOUR_VALUE |
                                 A2DP_SBC_BLOCK_LENGTH_EIGHT_VALUE |
                                 A2DP_SBC_BLOCK_LENGTH_TWELVE_VALUE |
                                 A2DP_SBC_BLOCK_LENGTH_SIXTEEN_VALUE));
    
    A2DP_SBC_ASSIGN_SUBBANDS(&A2DPSpecificInfo,
                             (A2DP_SBC_SUBBANDS_FOUR_VALUE |
                             A2DP_SBC_SUBBANDS_EIGHT_VALUE));
    
    A2DP_SBC_ASSIGN_ALLOCATION_METHOD(
        &A2DPSpecificInfo,
        (A2DP_SBC_ALLOCATION_METHOD_SNR_VALUE |
        A2DP_SBC_ALLOCATION_METHOD_LOUDNESS_VALUE));
    
    A2DP_SBC_ASSIGN_MINIMUM_BIT_POOL_VALUE(&A2DPSpecificInfo, 2);
    A2DP_SBC_ASSIGN_MAXIMUM_BIT_POOL_VALUE(&A2DPSpecificInfo, 45);
    
    /* Set up the endpoint structures.
     * We have one sink (for receiving from phone)
     * and two sources (for playing to headsets) */
    MyEndpoints[0].TSEP = tspSNK;
    MyEndpoints[1].TSEP = tspSRC;
    MyEndpoints[2].TSEP = tspSRC;
    
    for(i = 0; i < 3; i++)
    {
        /* There are two capabilities: [0] transport, [1] codec */
        MyEndpoints[i].NumberCapabilities = 2;
        /* Array decays to a pointer here; &Capabilities would be the wrong type. */
        MyEndpoints[i].CapabilitiesInfo = Capabilities;
        MyEndpoints[i].MediaType = mtAudio;
        MyEndpoints[i].MediaInMTU = 895;
        MyEndpoints[i].ReportingInMTU = 0;
        MyEndpoints[i].RecoveryInMTU = 0;
    
        /* Return value is the LSEID and is required for future GAVD
           calls to this endpoint. */
        MyLSEIDs[i] = GAVD_Register_End_Point(stackid, &MyEndpoints[i], MyEventCB, i);
    }

    I also had to connect the headsets to the GAVD endpoints myself, since the headsets I used would not discover/connect to the endpoints on their own--this may not necessarily be the case for your headsets if they are smart enough.

    Here's the gist of setting up the headset streams.

    --> GAVD_Connect(stackid, address, callback function, callback parameter)

    You'll need to know the LSEID of one of your local tspSRC endpoints that is not currently in use (use a bit somewhere to mark it as used once you start a GAVD_Connect). If you tie the callback parameter that you pass here to the LSEID that you've chosen, or to an array index in a structure where you keep such data, it makes things a little simpler to hook up and start streaming later.

    <-- callback with event type etGAVD_Connect_Confirmation

    You'll receive a GAVDID in this event, keep track of it in memory (not temporary) as you need it for the next function calls.

    If Event_Data.GAVD_Connect_Confirmation_Data->Status == GAVD_STATUS_SUCCESS, request a discovery of endpoints:

    --> GAVD_Discover_End_Points(stackid, gavdid)

    <-- etGAVD_Discover_Confirmation

    Loop through the endpoints from Event_Data.GAVD_Discover_Confirmation_Data->RemoteEndPoints, look for any that are .MediaType == mtAudio, .TSEP == tspSRC, and .InUse == FALSE. The RSEID for the first such endpoint that matches those three conditions is the one to hook up to. You need to retrieve the capabilities of this endpoint to make sure it can receive A2DP (which is almost certainly the case). Keep track of this RSEID too, you need it for connecting your local endpoint to the headset's endpoint.
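
    That selection loop can be sketched like this; the struct and enums below are simplified stand-ins for the real GAVD types, kept only detailed enough to show the filtering logic described above:

```c
#include <stddef.h>

typedef enum { mtAudio, mtVideo } MediaType_t; /* stand-ins for the GAVD enums */
typedef enum { tspSRC, tspSNK } TSEP_t;

/* Simplified stand-in for the endpoint records delivered in
   Event_Data.GAVD_Discover_Confirmation_Data->RemoteEndPoints. */
typedef struct
{
   unsigned    RSEID;
   MediaType_t MediaType;
   TSEP_t      TSEP;
   int         InUse;
} RemoteEndPoint_t;

/* Return the RSEID of the first free audio endpoint matching the three
   conditions above, or -1 if none matches. */
static int FindFreeAudioEndpoint(const RemoteEndPoint_t *eps, size_t count)
{
   size_t i;

   for(i = 0; i < count; i++)
   {
      if((eps[i].MediaType == mtAudio) && (eps[i].TSEP == tspSRC) && (!eps[i].InUse))
         return((int)eps[i].RSEID);
   }

   return(-1);
}

/* Worked example: the third endpoint is the first one that is audio,
   the right endpoint type, and not in use. */
static int SelectEndpointDemo(void)
{
   RemoteEndPoint_t eps[] =
   {
      { 1, mtAudio, tspSRC, 1 }, /* in use: skip    */
      { 2, mtVideo, tspSRC, 0 }, /* not audio: skip */
      { 3, mtAudio, tspSRC, 0 }, /* first match     */
   };

   return(FindFreeAudioEndpoint(eps, sizeof(eps) / sizeof(eps[0])));
}
```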

    --> GAVD_Get_End_Point_Capabilities(stackid, gavdid, RSEID)

    <-- etGAVD_Get_Capabilities_Confirmation

    Check the capabilities, then set up the configuration. For the config, you need a GAVD_Service_Capabilities_Info_t Config[2]

    Config[0].ServiceCategory = scMediaTransport

    Config[1].ServiceCategory = scMediaCodec

    This is similar to setting up the endpoint capabilities, but since it is a configuration and not a capability list, your A2DP specific info needs to assign only one single sample rate, one single subband, one single channel mode, etc., as opposed to the multiple values that are OR'd together in the capabilities. If you just re-send the same array that you set up at the time of registering the endpoint, the headset will indicate failure to connect the endpoint (that one had me stumped for a few days). Once you have the configuration array set up,

    --> GAVD_Connect_Remote_End_Point(stackid, LSEID, address, RSEID, 2, &Config)

    <-- etGAVD_Open_End_Point_Confirmation

    Hopefully, that confirmation returns success. If so, then start the stream:

    --> GAVD_Start_Stream_Request(stackid, 1, &LSEID)

    <-- etGAVD_Start_Confirmation

    if Event_Data.GAVD_Start_Confirmation_Data->ErrorCode == GAVD_AVDTP_ERROR_SUCCESS, you can then stream to that headset.

    <-- etGAVD_Data_Indication from the phone:

    You'll need to forward the same parameters that you just received in Event_Data.GAVD_Data_Indication_Data. Let's call that pData.

    For each connected headset (i.e. each unique LSEID that is in use and is of type tspSRC), call

    --> GAVD_Data_Write(stackid, LSEID, pData->Marker, pData->PayloadType, pData->TimeStamp, pData->DataLength, pData->DataBuffer)

    You will need at least a 921600 bps UART connection, maybe even higher. I ran it at 2-3 Mbit with no problem, but I did need large memory buffers: about 30 KB of Bluetopia heap (in BTPSKRNL), and I think I cranked up the HCITRANS UART buffers to 8 KB each (DEFAULT_INPUT_BUFFER_SIZE and DEFAULT_OUTPUT_BUFFER_SIZE in HALCFG.h in the demo projects).
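
    A back-of-the-envelope check of those UART figures, assuming 8N1 framing (10 wire bits per payload byte) and roughly 284 kbps per SBC stream at bitpool 45 (both assumptions, not measured values): one stream in from the phone plus two out to the headsets all share the same HCI UART, so 921600 baud is already tight, which is consistent with 2-3 Mbit running comfortably:

```c
/* Payload bandwidth of a UART with 8N1 framing: each byte costs 10 bits. */
static unsigned UartPayloadBps(unsigned baud)
{
   return(baud / 10 * 8);
}

/* Total A2DP bandwidth on the HCI UART for a given number of SBC streams
   (1 inbound from the phone + N outbound to headsets), before L2CAP/HCI
   packet overhead. The per-stream rate is an assumed figure. */
static unsigned NeededBps(unsigned streams, unsigned perStreamBps)
{
   return(streams * perStreamBps);
}
```

    UartPayloadBps(921600) is 737280 bps, below the roughly 852 kbps that three 284 kbps streams need before protocol overhead, while a 2 Mbit UART leaves comfortable headroom.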

    Hope that helps.

  • To Samuel

    Using the website you suggested I did find BT0029

    • www.bluetooth.org/tpg/QLI_viewQDL.cfm?qid=23705, but the description says it is a Motorola cellphone, which doesn't feel right.
    • This is the LogiLink website where you can see the BT0029 ID: www.logilink.eu/images/products/katalog/LogiLink_2013_english/files/assets/basic-html/page134.html

    Thanks a lot for the clarifications, they will come in handy. My aim is to stream music, so I'll try the solution posted by dwf.

    To dwf

    Thank you dwf, your solution looks like exactly what I'm looking for.

    As I said, I started working with Bluetooth/Bluetopia and the Tiva only a couple of weeks ago, and I'm still new to this particular framework; any help you would be so kind to post to speed up the process would be really appreciated. I don't know what an SDP record is yet, but I'll try to figure it out. If it doesn't cost you much time, please add the SDP configuration too.

    Again thanks both of you.


  • Service Discovery Protocol is basically how other devices discover what kinds of things your Bluetooth application does. In this case, you would at least have A2DP sink (for receiving from the phone) and A2DP source (for sending to the headsets) in your SDP record. A device can look at that SDP record and see "ooh, I see a service I can use to [send/receive] A2DP, I like this device" and either allow you to connect to it, or connect automatically to you if it so desires.

    An even smarter device can then negotiate with your GAVD endpoint manager and figure out that it has an endpoint it can use. This is typically what a smartphone will do when you do a GAVD_Connect with it--in which case it'll go through all the motions I described above (discovering your endpoints, connecting, etc.), and at the end you'll receive etGAVD_Set_Configuration_Indication. At that point, it will tell you which configuration it has chosen (sample rate, bit pool, etc). You may wish to verify the contents of the configuration with your own endpoint capabilities for good measure. Once verified, you call GAVD_Set_Configuration_Response(stackid, Event_Data.GAVD_Set_Configuration_Indication_Data->LSEID, scNone, 0) to tell it that the configuration is successful, and then the phone will switch its audio output to that stream and you'll get A2DP audio data over etGAVD_Data_Indication.
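
    That verification is a simple bit check: a valid configuration field has exactly one bit set, and that bit must fall inside the capability mask you advertised for the same category. The 0x0F/0x04 masks below are illustrative values, not the real A2DP_SBC_* constants:

```c
/* A configuration field is valid when exactly one bit is set and that bit is
   also present in the capability mask advertised for the same category. */
static int ValidConfigField(unsigned char config, unsigned char capabilities)
{
   return((config != 0) &&
          ((config & (config - 1)) == 0) &&     /* exactly one bit set     */
          ((config & capabilities) == config)); /* bit is in capabilities  */
}
```

    For example, with a capability mask of 0x0F (four sampling frequencies OR'd together), 0x04 passes, while 0x0C (two rates OR'd together, i.e. a re-sent capability array) and 0x10 (a rate outside the mask) both fail.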

    Here is a snippet that should set up your GAVD SDP record. I'm not sure if I'm missing something from this SDP record that makes headsets have a hard time finding me, so if anyone passing by spots anything, I'd appreciate a heads up.

    Call this code after you've successfully registered GAVD endpoints.

    GAVD_SDP_Service_Record_t SDPRecordInfo;
    SDP_UUID_Entry_t SDPUUIDEntry[2];
    SDP_Data_Element_t SDPProfileInfo[4];
    DWord_t SDPRecordHandle;
    int Result;
    
    /* Set up the SDP record for A2DP sink and source (two UUID entries) */
    SDPRecordInfo.NumberServiceClassUUID = 2;
    SDPRecordInfo.SDPUUIDEntries = &SDPUUIDEntry[0];
    
    SDPUUIDEntry[0].SDP_Data_Element_Type = deUUID_16;
    SDP_ASSIGN_ADVANCED_AUDIO_DISTRIBUTION_AUDIO_SOURCE_UUID_16(
        SDPUUIDEntry[0].UUID_Value.UUID_16);
    
    SDPUUIDEntry[1].SDP_Data_Element_Type = deUUID_16;
    SDP_ASSIGN_ADVANCED_AUDIO_DISTRIBUTION_AUDIO_SINK_UUID_16(
        SDPUUIDEntry[1].UUID_Value.UUID_16);
    
    /* No additional protocols that need to be added */
    SDPRecordInfo.ProtocolList = NULL;
    
    /* Build the Bluetooth profile descriptor list. */
    SDPProfileInfo[0].SDP_Data_Element_Type = deSequence;
    SDPProfileInfo[0].SDP_Data_Element_Length = 1;
    SDPProfileInfo[0].SDP_Data_Element.SDP_Data_Element_Sequence =
        &(SDPProfileInfo[1]);
    
    SDPProfileInfo[1].SDP_Data_Element_Type = deSequence;
    SDPProfileInfo[1].SDP_Data_Element_Length = 2;
    SDPProfileInfo[1].SDP_Data_Element.SDP_Data_Element_Sequence =
        &(SDPProfileInfo[2]);
    
    SDPProfileInfo[2].SDP_Data_Element_Type = deUUID_16;
    SDPProfileInfo[2].SDP_Data_Element_Length = UUID_16_SIZE;
    SDP_ASSIGN_ADVANCED_AUDIO_DISTRIBUTION_PROFILE_UUID_16(
        SDPProfileInfo[2].SDP_Data_Element.UUID_16);
    
    SDPProfileInfo[3].SDP_Data_Element_Type =
        deUnsignedInteger2Bytes;
    SDPProfileInfo[3].SDP_Data_Element_Length = WORD_SIZE;
    SDPProfileInfo[3].SDP_Data_Element.UnsignedInteger2Bytes =
        0x0100;
    
    /* Point it to the profile descriptor list we just set up */
    SDPRecordInfo.ProfileList = SDPProfileInfo;
    
    /* Add the GAVD SDP record to the SDP database */
    Result = GAVD_Register_SDP_Record(stackid,
                                      &SDPRecordInfo,
                                      "A2DP Source & Sink",
                                      &SDPRecordHandle);

  • Thanks man, you are an angel.
    I'll finish reading the PDF about A2DP and give it a test; I'll let you know how it goes in a couple of days.
    I've got another question about HFP (to better understand how Bluetooth works): wouldn't it be possible to implement the AG functionality manually using Bluetopia's low-level APIs?
  • Roberto,

    Everything dwf said is correct. One thing to note, though, is that the AUD API is available to you and it does everything that dwf describes by default.

    1. It registers the SDP record.
    2. It registers GAVD endpoints.
    3. It automatically opens an endpoint for devices that connect to you.

    You only need to use the AUD_Initialize() and AUD_Open_Remote_Stream() functions to create an A2DP connection; the rest is handled internally.

    Examples of using the API as an audio source can be found in the A2DPDemo or A3DPDemo_SRC apps.

    You could implement the HFP AG role on your own with the lower level APIs such as HCI, L2CAP, SCO, RFCOMM, and SPP, but it may cost you a lot of time.

    Thanks,
    Samuel

  • I'll try your solution first then and I'll give you feedback.
  • Hi, after talking with my boss we decided to test the A2DP/A3DP capabilities first and then try to write our own AG gateway for HFP. I'll open a new post on A2DP and come back here with more information on HFP when/if I'm the one assigned to it.

    Thanks again guys.
  • Hi dwf, I'm finally getting to study your code. I encountered an error on the definition of the SDP record: the return value of GAVD_Register_SDP_Record() is -1000, which should mean invalid parameters. I'm trying to find a solution on my own, but please help if you can.
  • Roberto,

    Any reason you chose to take the GAVD path rather than AUD? AUD will save you a lot of time.

    1. It registers the SDP record for you: you don't need to call GAVD_Register_SDP_Record() yourself, AUD_Initialize() handles it.
    2. It also registers GAVD endpoints.
    3. It also automatically opens an endpoint for devices that connect to you.

    With AUD you only need to use the AUD_Initialize() and AUD_Open_Remote_Stream() functions to create an A2DP connection; the rest is handled internally.

    Examples of using the API as an audio source can be found in the A2DPDemo or A3DPDemo_SRC apps.

    Thanks,
    Samuel

  • Hi Samuel,
    The reason I'm trying to make GAVD work is that I tried the A3DP multi-room demo (which should do something close to my aim), but I couldn't make it work, so I opted for the beaten path to get results.

    I'll try to solve this a little longer, then I'll try a clean AUD setup.
  • Roberto,

    It might be easier for you if you let us help you with the multiroom problem instead of trying to use GAVD. It's difficult to inter-op with hundreds of different types of devices with GAVD. That's what AUD is for.

    Thanks,
    Samuel
  • I understand, please check my other post for the multiroom demo so I'll keep this conversation clean from that --> e2e.ti.com/.../1524712
  • Okay, I responded on that thread.