PGA305: Data representation / Coefficients calculation

Part Number: PGA305


Hello,

I have spent quite some time going through the documentation (and this forum!) while playing around with my PGA305EVM-034 kit - but I still can't get certain parts of the PGA305 to work properly. I hope I'm not missing something super obvious. I'd like to make my own calibration program for the PGA305 in order to automate everything and generally try and make the calibration and verification processes easier. I have about a million different questions at this point, but I guess the two most pressing ones are:

1) I'm having a hard time figuring out the data representation for the coefficients, PADC and TADC values, etc. Take PADC, for example: how do I translate LSB, MID and MSB into a voltage in the table found under "Debug->ADC Settings->P. Sensor & P. ADC"?

The Gain is currently set to 25, and the bridge supply voltage is 2.5 V. At this point I can't get from A to B using any of the formulas that I can find in the PGA305 datasheet.


2) I found a DLL and a code example for generating coefficients in this thread:
https://e2e.ti.com/support/sensors/f/1023/t/837458?PGA305-Calculating-coefficients-without-EVM-GUI

But I can't quite figure out how to use the DLL. There are three functions in there called "coeffgen24bit", "coeffgen24bit_initialize" and "coeffgen24_terminate". I guess the "coeffgen24bit" function corresponds to the "Calibration_Coeff_Gen" function found in the code, so I think I have managed to figure out the parameters to that particular function. It doesn't seem to produce anything, though. I also haven't figured out how to use the other two functions. Do you have an accompanying header file for that DLL by any chance? Maybe even some documentation for it? If not, then maybe some more code examples?


Any help with this will be greatly appreciated!
Thanks a lot! :-) 

Sincerely,
Mads

  • Hello Mads,

    I wasn't part of the team that originally developed the GUI, so I'm not entirely sure how the voltage numbers in the table you mentioned are derived. I have had similar trouble myself correlating them to the PADC/TADC data. The safest approach is to ignore the voltage readings from the GUI and calculate the values yourself from the formulas in the datasheet.
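
    In case it helps, here is a minimal C sketch of how the three PADC bytes can be combined into one signed code before you apply the datasheet formula. Treat it as a sketch only: the two's-complement interpretation is an assumption on my part, and the final code-to-voltage scaling still has to come from the datasheet.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Combine the three PADC bytes (MSB/MID/LSB) into one signed 24-bit
     * two's-complement code. Treating the result as two's complement is my
     * assumption here; the code-to-voltage step itself should come from the
     * datasheet transfer function. */
    static int32_t padc_code(uint8_t msb, uint8_t mid, uint8_t lsb)
    {
        int32_t code = ((int32_t)msb << 16) | ((int32_t)mid << 8) | (int32_t)lsb;
        if (code & 0x800000)      /* sign-extend bit 23 */
            code -= 0x1000000;
        return code;
    }

    int main(void)
    {
        /* 0xFF 0xF5 0xC2 -> -2622, matching a raw PADC value quoted later
         * in this thread. */
        printf("%ld\n", (long)padc_code(0xFF, 0xF5, 0xC2));
        return 0;
    }
    ```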

    As for the coefficient calculation DLL, there aren't currently any other C-based code examples, although I do have a LabVIEW VI that you are free to use to test the DLL: https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/1023/Test-DLL-Output.vi

    When you open the code, you will have to point the Call Library Function node to the proper DLL. Just right-click it and select "Configure" to open the menu where you can set the path. This is a quick way to test out the DLL. Example data is already populated, and it is in signed integer format.
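
    If you would rather poke at the DLL outside of LabVIEW, a quick sanity check is to load it and resolve the exported functions directly. This is only a sketch using the standard Win32 loader calls and the function names you quoted; I am not asserting the argument list of coeffgen24bit here, so the sketch stops at resolving the symbols.

    ```c
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* File name of the coefficient-generation DLL from the linked thread;
         * adjust the path to wherever you saved it. */
        HMODULE dll = LoadLibraryA("PGAcoeffgen24.dll");
        if (!dll) {
            printf("Could not load the DLL (error %lu)\n", GetLastError());
            return 1;
        }

        /* Resolve the three exports mentioned in this thread. Their exact
         * signatures are not documented here, so this only checks that they
         * are present. */
        FARPROC init = GetProcAddress(dll, "coeffgen24bit_initialize");
        FARPROC gen  = GetProcAddress(dll, "coeffgen24bit");
        FARPROC term = GetProcAddress(dll, "coeffgen24_terminate");

        printf("coeffgen24bit_initialize: %p\n", (void *)init);
        printf("coeffgen24bit:            %p\n", (void *)gen);
        printf("coeffgen24_terminate:     %p\n", (void *)term);

        FreeLibrary(dll);
        return 0;
    }
    ```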

    Regards,

  • (See update below!)

    Hi Scott,

    Thanks for the quick reply!

    This looks very promising! It also looks very much like a LabVIEW program that I made myself earlier. :-)

    Maybe I have the wrong DLL (the VI wants a file called "Win32Project1.dll", but the one that I found in the other thread is called "PGAcoeffgen24.dll"), because it does not seem to work right out of the box. The VI expects the DLL to contain a function called coeffget16bit, while the "PGAcoeffgen24.dll" DLL function is called "coeffgen24bit".

    I would have guessed that the two might be compatible, but if I use the coeffgen24bit function from the "PGAcoeffgen24.dll" file, LabVIEW crashes on me. 

    Any ideas as to what I might be doing wrong?

    Thanks again!

  • Update!

    Looks like the LabVIEW program doesn't crash, after all. It just takes ~30 seconds to execute. (Is it supposed to take this long? I have a fairly fast desktop PC.)

    So anyway, I now have a bunch of coefficients plus all the T gain/offset, P gain/offset and Fit_Error values.

    1) How do I translate the generated TC_data values (doubles) into data that I can store in the EEPROM? I guess this question might relate to my first question regarding data representation. If you don't know the answer, then maybe you can point me in the direction of someone who does?

    2) How do I interpret the Fit_Error output?

    Thanks! :-)

  • Hi Mads,

    It is expected that it will take a little while for the computation to complete. I have the same behavior when I run the code on my system.

    The TC parameters for the coefficients should all be multiplied by 2^30 and then converted to hex for programming into the EEPROM.

    I need to look into the fit error again, and I will get back to you on that.

    Regards,

  • Hi Scott,

    I still can't make sense of this. The PGA305 EEPROM only has three bytes for each coefficient. The LabVIEW program (at least when used with PGAcoeffgen24.dll) produces a series of coefficients that, when multiplied by 2^30, cannot fit into just three bytes.

    Example: If a calculated coefficient is 0.307126, then
    0.307126 * (2^30) = 329774031 (0x13A7F3CF)

    • Could it be that I need to multiply by something else (like 2^22) when using the 24-bit version of the DLL?
    • Could you maybe give me an example of how to calculate the coefficients generated by the LabVIEW code?
    • I can't figure out what to do with index 1 of the TC_size array - do you know what it is for?

    Thanks! :-)

  • Hi Mads,

    It looks like that VI was originally configured for 16-bit coefficients, and the default for the NormScaleBits was not updated. Change the NormScaleBitsOut to 22 and you should end up with the correct output. I recommend double-checking it against the output from the PGA305 GUI as well. It is a bit of a chore to manually enter the data into the ADC & DAC Calibration page, but you should end up with the same results if everything went right with the DLL calculation.
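
    For reference, here is a small sketch of the float-to-hex step, assuming the 24-bit coefficients are scaled by 2^22 (matching NormScaleBits = 22) and stored as signed 24-bit two's-complement values in the three EEPROM bytes (the two's-complement packing is my assumption based on the 3-byte fields):

    ```c
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Convert a floating-point coefficient from the DLL into the value to
     * program into the EEPROM. scale_bits is 22 for the 24-bit flow
     * (NormScaleBits = 22), as opposed to the 30 mentioned earlier. Negative
     * coefficients are assumed to be stored as 24-bit two's complement, and
     * anything with |coeff| >= 2^(23 - scale_bits) will not fit in 3 bytes. */
    static uint32_t coeff_to_eeprom(double coeff, int scale_bits)
    {
        long scaled = lround(ldexp(coeff, scale_bits));  /* coeff * 2^scale_bits */
        return (uint32_t)scaled & 0xFFFFFF;              /* keep the low 3 bytes */
    }

    int main(void)
    {
        /* The coefficient from the earlier example: 0.307126 * 2^22 rounds to
         * 1288180 = 0x13A7F4, which fits in 3 bytes (unlike the 2^30 scaling). */
        printf("0x%06X\n", (unsigned)coeff_to_eeprom(0.307126, 22));

        /* A negative coefficient packs as two's complement (top bit set). */
        printf("0x%06X\n", (unsigned)coeff_to_eeprom(-0.00034, 22));
        return 0;
    }
    ```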

    The actual coefficient calculation algorithm is unfortunately something I cannot share, since it is proprietary to TI.

    Let me know if you run into any other trouble.


    Regards,

  • Hi Scott,

    I'm afraid I did run into more trouble. My colleague and I have been playing around with this for some time now, and we still can't figure out how to use the DLL properly. Here's an example:

    We did a quick calibration using the GUI's Guided Calibration tool with the following parameters:
    ADC Calibration Mode: 3P-1T
    Number of Temperatures for DAC: 1
    Output Mode Selection: Voltage
    Temperature Points: 25
    Temperature Unit: Celsius
    Pressure Points: 0, 0.5, 1
    Pressure Unit: Bar
    DAC Data: [(0x508, 1.0V), (0xF1D, 3.0V), (0x1934, 5.0V)]
    VBRDG_CTRL: 2.5 V
    P_GAIN: 25.00
    P_INV: No
    T_GAIN: 5.00
    T_INV: No
    TEMP_MUX_CTRL: VINTP-VINTN
    Normal pressure lower value: 0
    Normal pressure upper value: 100
    Clamp value (low): 0
    Clamp value (high): 100
    Diagnostics Enable: Off
    EEPROM lock status: Unlock
    AFEDIAG_CFG: 0x66
    AFEDIAG bit mask configuration: 0x55
    ADC_24BIT_ENABLE: Enable
    OFFSET_ENABLE: Disable
    (I left out a bunch of stuff. Let me know if I'm missing something relevant.)

    We captured the following values during calibration:
    DAC: 0x508, 0xF1D, 0x1934 (1288, 3869, 6452)
    PADC: 0xFFFFF5C2, 0x34ADA6, 0x636B23 (-2622, 3452326, 6515491)
    TADC: 0x1CD2D0, 0x1CD260, 0x1CD256 (1888976, 1888864, 1888854)

    And the GUI calculated the following coefficients:
    h0: 0x18410A (1589514)
    g0: 0x149247 (1348167)
    n0: 0xFAA2E0 (16425696)
    PADC Gain: 1
    PADC Offset: 0xCE4F8E

    You said earlier that in order to convert coefficients from floating-point representation to hex values, I should multiply the coefficients by 2^30, so I guess I can go the other way and determine h0, g0 and n0 like this:
    h0: 1589514 / (2^30) = 0.00148
    g0: 1348167 / (2^30) = 0.001256
    n0: 16425696 / (2^30) = 0.015298

    I would expect to be able to generate these three coefficients using the DLL, too. So I entered the same DAC, PADC and TADC values into my LabVIEW VI:

    It looks like I need to enter zeros for all of the "unused" DAC, PADC and TADC values - can you confirm this? The function returns immediately with "Fit_error" = 1 if I leave the unused fields blank. Also, I'm still not sure what to make of all the inputs and outputs of the coefficient generator function found in the DLL. This is what I got:

    Unlike the GUI, which produces three coefficients - h0, g0 and n0 - the DLL outputs 16 coefficients. And none of them looks anything like the data generated by the GUI.

    So my questions at this point are:

    1) Is this even the correct approach? Am I missing something?

    2) Do you have some better documentation for the DLL? For example, I don't know why "CalPoint" is 2 in the VI you provided. I don't know whether or not I should use "OffEn" - the C# code from the other thread suggests that I should do both and then pick the best fit? I'm still not 100% sure about "NormScaleBits" - my guess is that it should always be 22 when dealing with the 24-bit functions? Finally, I still don't know how to interpret "Fit_error".

    3) Of all the coefficients that the DLL creates, how can I tell which ones are h0, h1, etc.?

    4) Maybe all of this is easier to discuss via phone or Skype? Can you maybe walk us through this via remote desktop? (I guess this could save a lot of time for all of us.)

    Thanks, Scott! I appreciate you taking the time to help me figure this out! :-)

    Sincerely,
    Mads

  • Hi Scott,

    Any news? I'd really like to find a solution to this fast, so if there is more information that you need from me or maybe something that you'd like me to test, please let me know!

    Have a nice weekend!

    Sincerely,
    Mads

  • (I may have replied to my own reply before, so this is just to make sure that you get notified of activity in this thread.)

  • Hi Mads,

    Sorry for the delay. In answer to your questions:

    Yes, you will need to input 0s for all of the unused inputs.

    The issue at the moment appears to be the order in which you're inputting the data, which is admittedly a fault of the documentation and the example. The pressure data is arranged into 4 rows of 4, where each row holds the pressure values for a single temperature and each column corresponds to a pressure point (there is a small C sketch of this layout at the end of this post).

    For a 3P1T measurement, the data would be as follows:

    [P1T1, P2T1, 0, P3T1;
     0, 0, 0, 0;
     0, 0, 0, 0;
     0, 0, 0, 0]

    For a 3P3T measurement it would look like this:

    [P1T1, P2T1, 0, P3T1;
     P1T2, P2T2, 0, P3T2;
     0, 0, 0, 0;
     P1T3, P2T3, 0, P3T3]

    The CalPoint input selects the calibration scheme:

    0 - 3P1T Calibration
    1 - 3P3T Calibration
    2 - 4P4T Calibration

    OffsetEn should be used if your sensor has a high offset; it just optimizes the algorithm for that case. For the 24-bit calibration, NormScaleBits should always be 22. The fit error gives an idea of the error you could see between actual results and the model outside of the calibration points: the higher the number, the larger the error.

    The coefficients are output in the order that they are shown in the GUI and datasheet:

    h0, h1, h2, h3,
    g0, g1, g2, g3,
    n0, n1, n2, n3,
    m0, m1, m2, m3
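
    To make the ordering concrete, here is how I would lay the data out in C before handing it to the DLL. This is only an illustration of the ordering described above: the exact argument list of coeffgen24bit is not spelled out here, and the PADC numbers are just the raw values you captured earlier in this thread.

    ```c
    #include <stdio.h>

    /* CalPoint values as listed above. */
    enum CalPoint { CAL_3P1T = 0, CAL_3P3T = 1, CAL_4P4T = 2 };

    /* Position of each coefficient in the 16-element output, following the
     * GUI/datasheet order listed above. */
    enum CoeffIndex { H0, H1, H2, H3, G0, G1, G2, G3,
                      N0, N1, N2, N3, M0, M1, M2, M3 };

    int main(void)
    {
        /* 3P1T: only the first temperature row is populated, the third column
         * stays 0, and every unused entry is 0, exactly as in the matrix
         * above. The three values are the raw PADC readings captured at the
         * three calibration pressures earlier in this thread. */
        double padc[4][4] = {
            { -2622.0, 3452326.0, 0.0, 6515491.0 },  /* P1T1, P2T1, 0, P3T1 */
            {     0.0,       0.0, 0.0,       0.0 },
            {     0.0,       0.0, 0.0,       0.0 },
            {     0.0,       0.0, 0.0,       0.0 },
        };

        /* After the call, the 16-element coefficient array is read back with
         * the indices above, e.g. coeffs[H0], coeffs[G0], coeffs[N0]. */
        printf("CalPoint=%d, first row: %g %g %g %g\n", (int)CAL_3P1T,
               padc[0][0], padc[0][1], padc[0][2], padc[0][3]);
        printf("h0 is element %d, g0 is element %d, n0 is element %d\n",
               (int)H0, (int)G0, (int)N0);
        return 0;
    }
    ```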

    Regards,

  • Hi Scott,

    Thanks for all the new information! I would never have figured this out myself.

    Still, while it looks like I'm closer to the goal, I'm not there yet.

    I just did a fresh 3P1T calibration at ~25 degrees Celsius, and this is the data that I got:

    DAC: 0x508 (1288), 0xF1D (3869), 0x1934 (6452)

    PADC: 0xFFFFF8E9 (-1815), 0x34A845 (3450949), 0x64D76F (6608751)

    TADC: 0x1CB91F (1882399), 0x1CB6D1 (1881809), 0x1CB59B (1881499)

    The resulting coefficients calculated by the GUI were:

    h0: 0x187F31 (1605425), so h0 = 1605425 / (2^30) = 0.001495

    g0: 0x1448A0 (1329312), so g0 = 1329312 / (2^30) = 0.001238

    n0: 0xFA64EE (16409838, or -367378 as a signed 24-bit value), so n0 = -367378 / (2^30) = -0.00034

    Now, when I enter those same DAC, PADC and TADC values into the DLL, I get this:

    As you can see, h0, g0 and n0 are different than the ones calculated by the GUI. Have I still managed to mess up the order of the DAC, PADC and TADC inputs?

    Also, what are the P_GAIN and T_GAIN outputs used for? Would it be easier if I sent you my VI and the 24-bit DLL that I'm using? (It's the one that you posted in this forum at some point.)

    Thanks!

    Sincerely,

    Mads

  • Hi Scott,

    If it makes sense I'll be happy to do some real-time desktop sharing so that we can get this fixed.

    I'm guessing there's probably a simple solution to this, so just let me know what I can do to speed things up.

    Thanks!

    Sincerely,

    Mads

  • Hi Scott,

    Have you had a chance to look more into this?

    Let me know if there is something that I can test. Or if you need more information from me.

    Sincerely, 

    Mads

  • Hi Mads,

    Yes, it might be helpful for you to send the exact files that you are using so that I can better replicate what you are seeing. 

    The P_GAIN and T_GAIN outputs are digital gains applied to the raw ADC output before it goes through the compensation algorithm. Typically you want them to be 1 (the mapping is direct, so a decimal value of 1 is a gain of 1). If the value is greater than that, it's best to change the analog gain in the PGA to make better use of the ADC's input range. The offsets work the same way: they are digital values added to the raw ADC output before it enters the compensation algorithm.
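
    In other words, the raw ADC code is adjusted roughly like this before the h/g/n/m compensation sees it (just a sketch of the idea; the order of the gain and offset and the fixed-point formats inside the device are not something I am asserting here):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the pre-compensation adjustment described above: a digital
     * gain and a digital offset applied to the raw ADC code before the
     * compensation algorithm. With a gain of 1 and an offset of 0 the code
     * passes through unchanged, which is the typical target. */
    static int32_t precondition(int32_t raw_adc, double digital_gain, int32_t digital_offset)
    {
        return (int32_t)(raw_adc * digital_gain) + digital_offset;
    }

    int main(void)
    {
        /* Using one of the raw PADC values from this thread: unchanged. */
        printf("%ld\n", (long)precondition(3450949, 1.0, 0));
        return 0;
    }
    ```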

    Regards,