
AWR1243: on the validity and algorithm of the range bias and Rx channel phase/gain calibrations

Part Number: AWR1243
Other Parts Discussed in Thread: AWR1642

Hi, 

When I was reading the Doxygen documentation of the mmw demo for the AWR1642 in mmwave_sdk_02_00_00_04, I came across the different calibrations that are required for range, velocity, and angle estimation. One of them is the so-called "range bias and Rx channel phase/gain calibration".

I searched through all the threads about them and still could not find answers to the following questions:

1/ Why, in the range bias calibration, do we need to sum the square roots of the received range-Doppler vectors from all antennas? I think the range bias is unique to each receiver chain! This understanding comes from the statement here that:

"Because of antenna routing length differences and other factors that are specific to each EVM and some delays in the RF chain ..."

2/ Is the range bias the same for all ranges? It should be, based on the measurement reported here.

3/ Is there any need to correct the range bias for the azimuth calculation? There are two cases: either the target is in the near field or in the far field. What would be the answer in each of these cases?

4/ The phase/gain mismatch correction for all receivers does not make sense to me as explained in the Doxygen! It says:

"The rx channel phase and gain estimations are done by finding the minimum of the magnitude squared of the virtual antennas and this minimum is used to scale the individual antennas so that the magnitude of the coefficients is always less than or equal to 1. ..."

I could not understand the term "magnitude squared of the virtual antennas", but what I inferred is that it refers to the vector at zero Doppler across all ranges in the range-Doppler map. Please clarify that for me as well.

5/ How can we make sure that the temperature stays fixed so that the calibration coefficients remain usable? The issue is mentioned here.

Thank you in advance, and Merry Christmas,

Mostafa 

  • Hello Mostafa,
    Please give us some time to get back to you on this.

    Regards,
    Jitendra
  • #1, #2: The range bias models the baseline delay that is suffered equally by all tx-rx paths; most of it is due to RF path delays internal to the SoC. The additions on top of this, e.g. due to routing length variations among the antenna paths, are modeled by the phase corrections obtained from the calibration procedure. The range bias is not range dependent. The link you referred to in #2 seems to involve shorter distances, so near-field estimation applies, which is described in the doxygen; we take the range bias estimated by the calibration procedure into account in the near-field calculation if calibration was performed [note that the calibration procedure itself assumes far field, hence the recommended target distance is > 1 m]. The rest of the error in that post may just be the range accuracy limited by the FFT bin size [it can be improved by interpolation but will ultimately be limited by SNR].
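    The estimation step above can be sketched as follows. This is an illustrative mock-up, not SDK code: names like `profiles`, `range_res_m`, and `true_range_m`, and all numeric values, are assumptions. Since the bias is common to all paths, the per-antenna magnitudes are combined non-coherently only to improve the SNR of the peak search.

```python
import numpy as np

# Hypothetical sketch: estimate the range bias from a calibration target
# placed at a known far-field distance (> 1 m recommended).
num_bins = 256
range_res_m = 0.044          # assumed range resolution per FFT bin
true_range_m = 1.5           # known calibration-target distance

# Fake zero-Doppler range profiles for 8 virtual antennas: a common peak
# shifted by an internal RF path delay (~7 cm here, purely illustrative).
bias_true_m = 0.07
peak_bin = int(round((true_range_m + bias_true_m) / range_res_m))
rng = np.random.default_rng(0)
profiles = 0.1 * (rng.standard_normal((8, num_bins))
                  + 1j * rng.standard_normal((8, num_bins)))
profiles[:, peak_bin] += 5.0

# The bias is common to all tx-rx paths, so combine magnitudes
# non-coherently across antennas just to sharpen the peak search.
combined = np.abs(profiles).sum(axis=0)
est_bin = int(np.argmax(combined))

# Estimated bias = measured range minus the known target range
# (quantized to the FFT bin size unless interpolation is added).
range_bias_m = est_bin * range_res_m - true_range_m
print(f"estimated range bias: {range_bias_m * 100:.1f} cm")
```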

    #3: The range bias itself is not required for the azimuth calculation in the far field (because range is not involved in the far-field azimuth calculation). In the near field, however, the angle estimation depends on range itself, so the range bias needs to be taken into account, i.e. you have to subtract the range bias before doing the geometry calculations. We have seen a bias of around 6-8 cm on our boards, and in the near field these distances are not insignificant, so your near-field (range and) azimuth could be significantly in error if you did not perform and apply the calibration.
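    A toy calculation, with assumed numbers, of why an uncorrected bias matters at near-field distances: at 30 cm, a 7 cm bias is roughly a quarter of the measured range, and any position derived from that range inherits the error.

```python
import math

# Illustrative only: subtract the calibrated range bias before any
# near-field geometry. Values below are assumptions, not measurements.
measured_range_m = 0.30      # raw range from the FFT peak
range_bias_m = 0.07          # in the 6-8 cm ballpark seen on EVMs

corrected_range_m = measured_range_m - range_bias_m  # apply before geometry

# Simple plane geometry: cross-range position error if the bias is left in.
azimuth_deg = 20.0
x_raw = measured_range_m * math.sin(math.radians(azimuth_deg))
x_fix = corrected_range_m * math.sin(math.radians(azimuth_deg))
print(f"cross-range error from uncorrected bias: {(x_raw - x_fix)*100:.1f} cm")
```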

    #4: The calculation is only for the calibration target. Once its range position has been found (by searching for the peak within the specified configuration window around the target in the detection matrix), the data is taken from the radar cube at that range position and at the 0th Doppler (during calibration the target is assumed to be stationary). That data is an array of size = #virtual antennas, and it is used to estimate the calibration vector. The calibration vector is such that, if it were applied (element-wise complex multiply) to this data, it would bring all the points to zero azimuth angle (boresight). So basically each coefficient is the element-wise complex conjugate, scaled by the ratio of the minimum of the magnitudes to the magnitude squared of that element. You can look at the source code for the exact formula, or read the more recent documentation in the 3.x release (it is described in the DPC (object detection HWA) documentation, where we document the formula).
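    That description can be written out as a short sketch. The snapshot `x` stands in for the radar-cube data at the target's range bin and 0th Doppler (one complex sample per virtual antenna); the mismatch values are simulated, and the variable names are illustrative, not the SDK's.

```python
import numpy as np

# Simulated per-antenna gain and phase mismatch on top of a common
# boresight return (illustrative values).
rng = np.random.default_rng(1)
num_virt = 8
gains = rng.uniform(0.7, 1.3, num_virt)
phases = rng.uniform(-np.pi, np.pi, num_virt)
x = gains * np.exp(1j * phases)

# Coefficient per the description above:
#   c_i = conj(x_i) * min_k |x_k| / |x_i|^2
coeffs = np.conj(x) * np.min(np.abs(x)) / np.abs(x) ** 2

# Applying the vector (element-wise complex multiply) brings every
# virtual antenna to the same real amplitude at zero phase (boresight),
# and the largest coefficient magnitude is exactly 1.
y = coeffs * x
```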

    #5: Internal to the SoC, there is handling of temperature variation; there is a self-calibration application note that you can locate from the product page for more information. External to the SoC, i.e. routing-length related, this can change with temperature, but I believe we haven't characterized it on our EVMs [we don't really need to, as our boards are for evaluation purposes, not meant to be deployed in the field as-is]. Whether you need to handle this depends on your application -- how much temperature change is expected and the desired range accuracy (accuracy is ultimately limited by SNR). I am not an expert in this area, but some possible solutions I can think of:

    1. The more obvious one: a temperature-controlled enclosure, but this may be impractical/expensive.

    2. Perform a temperature characterization of the board (maybe run the manual calibration procedure at different temperatures to get different vectors, averaging several runs per temperature because of noise in the measurements), then code this as a formula or table in the software, and look up/calculate the compensation vector from the measured board temperature. (There is also a temperature sensor inside the SoC, and a mmwavelink API is available to read it; however, since we are talking about effects outside the SoC, a separate external sensor may be needed if the internal one does not represent the external temperature well.) Because lengths probably vary linearly with temperature, you may end up with quite simple coding.
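    A minimal sketch of option 2, under the stated assumption that the variation is roughly linear in temperature: characterize at a few points, then linearly interpolate the per-antenna phase offsets at runtime. The characterization temperatures and phase values below are made up for illustration.

```python
import bisect

# Hypothetical characterization: per-virtual-antenna phase offsets
# (radians) measured at a few board temperatures (values invented).
temps_c = [-20.0, 25.0, 70.0]
phase_tables = {
    -20.0: [0.00, 0.05, 0.10, 0.15],
    25.0:  [0.00, 0.08, 0.16, 0.24],
    70.0:  [0.00, 0.11, 0.22, 0.33],
}

def lookup_phases(temp_c):
    """Linearly interpolate per-antenna phase offsets at temp_c,
    clamping to the characterized range at the extremes."""
    if temp_c <= temps_c[0]:
        return phase_tables[temps_c[0]]
    if temp_c >= temps_c[-1]:
        return phase_tables[temps_c[-1]]
    i = bisect.bisect_right(temps_c, temp_c)
    t0, t1 = temps_c[i - 1], temps_c[i]
    w = (temp_c - t0) / (t1 - t0)
    return [a + w * (b - a)
            for a, b in zip(phase_tables[t0], phase_tables[t1])]

print(lookup_phases(47.5))   # halfway between the 25 C and 70 C tables
```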

    I noticed your post is for the 1243 but you are referring to documentation for the 1642. You may be aware that the mmWave SDK is not for the 1243, i.e. the out-of-box demos are for parts that have either a HWA or a DSP (and newer parts like the 6843 have both), so to do the kind of things we do in the demos, you will need to implement them on whatever external processor you are interfacing with the 1243.

  • One correction in #5: by "...how much temp change is expected and desired range accuracy" I meant "desired angle accuracy". The range bias accuracy will also be affected in general, but the bigger effect will be on angle [the coefficients are for angle (azimuth only on the 1642) accuracy].
  • Thank you so much for your complete answer. There are no more questions left for the 1st, 2nd, and 3rd. But for the 4th question, I could not find the “object detection HWA” document you mentioned.

    For the 5th question, I see why the phase calibration is more affected by temperature variation than the range bias: the group delays of the PA and LNAs depend on temperature. Hence, temperature variation translates into phase variation rather than range variation, and it must be compensated every time the temperature changes.

    The reason I used the mmWave demo document for the AWR1642 is that I am trying to figure out the calibration steps required to work with your mmWave sensors. Nevertheless, I am using the AWR1243.

    Best,
    Mostafa
  • Download SDK 3.x, go to the docs/ folder and open mmwave_sdk_module_documentation.html. This has hyperlinks to the various modules in the SDK; find datapath -> Data Processing Chain (DPC) -> Object detection using HWA. Some additional explanation (not in the documentation; we may add it in a future release) to understand the "why" behind the formula. Two things determine the calculation: 1) We are equalizing the different tx-rx path gains, hence the division by the magnitude squared of each element (so that when you multiply the input with this coefficient, you get the same amplitude on all virtual antennas before the data is fed to the angle FFT); this is the gain part of the "Rx gain and phase compensation". 2) We want to maximally use the 16 bits of fixed-point precision without underflow or overflow in the coefficients, so we normalize to the minimum of the data set, i.e. the compensation vector will have unity amplitude for the minimum element and less for the rest.
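    To illustrate point 2: because the vector is normalized to the minimum-magnitude element, every coefficient has magnitude at most 1, so its real and imaginary parts fit a signed 16-bit Q15-style format without overflow. This is an illustrative sketch, not the SDK's exact number format or code.

```python
import numpy as np

# Same coefficient formula as in the DPC description, random mock data.
rng = np.random.default_rng(2)
x = rng.uniform(0.7, 1.3, 8) * np.exp(1j * rng.uniform(-np.pi, np.pi, 8))
coeffs = np.conj(x) * np.min(np.abs(x)) / np.abs(x) ** 2

# Quantize real/imag parts to a Q15-like 16-bit format: [-1, 1) maps to
# [-32768, 32767]. No clipping is needed except at exactly +1.0, because
# normalizing to the minimum element bounds every |coefficient| by 1.
q15 = np.clip(np.round(coeffs.view(float) * 32768), -32768, 32767)
```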