This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

AWR1642BOOST: [SDK 3.2] Modifying mmWave OOB-Demo to send out additional data via UART

Part Number: AWR1642BOOST
Other Parts Discussed in Thread: IWR1642

Hi all,

I want to alter the source code of the Demo to continuously obtain additional data along with the information about the detected objects. Preferably, I want to bypass the processing chain as depicted in this thread, or access the data at intermediate steps, e.g. between the 1st and 2nd FFT.

Using the latest SDK (version 3.2), I was able to find 

static void MmwDemo_dssInitTask(UArg arg0, UArg arg1){

in Line 606 of the dss_main.c file, which sets 

dpmInitCfg.ptrProcChainCfg = &gDPC_ObjectDetectionCfg;

in Line 655, which should eventually trigger the mentioned processing chain by calling DPC_ObjectDetection_execute(), in line with the Doxygen documentation.

My hope is now to find the variable containing (or the memory address pointing to) the raw data at the very moment the processing chain kicks off. I then want to send out its content using UART_writePolling(). To stay within the speed/bandwidth limitations of UART, I assume it is sufficient to choose a rather low frame rate in combination with a low number of chirps.

So far, I have not yet succeeded in finding the point at which the raw data is processed, but I will update this thread once I make any progress. In the meantime, I would be happy to get any feedback telling me if I'm on the right track. Additional hints or complete solutions are welcome, too, as I am sure that this topic is of interest to many others.

I'd be glad to see any involvement in this!

  • Hi Raphael,

    The SDK 3.2 OOB demo supports LVDS-based raw data capture using the lvdsStreamCfg command, as detailed in section 3.3.2, "mmWave demo with LVDS-based instrumentation", of the SDK user guide. Sending raw ADC data over UART is not a supported use case for this demo.

    Regards

    -Nitin

  • Hi Nitin,

    Your proposal seems to require additional hardware. However, like many others on this forum, I intend to make use of just the EVM in order to get additional data.

    As described above, I am not interested in high data rates as long as I am able to continuously obtain data, even if only at 1-2 FPS.

    I understand that the Demo is not designed to send raw ADC data over UART, but according to TI this demo is meant to serve as a blueprint for user applications. Hence, I am trying to modify its code for my purpose.

  • === UPDATE ===

    In the DPC_ObjectDetection_execute function (located in objectdetection.c at C:\ti\mmwave_sdk_03_02_00_04\packages\ti\datapath\dpc\objectdetection\objdetdsp\src) I was able to find the function that performs the first processing step, i.e., the 1D range FFT. Its call can be found in Line 2119 as 

    retVal = DPU_RangeProcDSP_process(subFrmObj->dpuRangeObj, &outRangeProc);

    and in the function's implementation I am now trying to make sense of the various buffers and memories used.

    Question: What is the purpose of the SOC_translateAddress function? I also saw it in mss_main.c as part of the communication between MSS and DSS. To me, it seems like the same physical memory space has different addresses in different parts of the code; hence, the address needs to be converted when going from one code part to another. Please confirm or correct this understanding.

    Right now, my goal is either to give the MSS access to the ADC buffer (which does not seem to work?), since it already has UART communication initialized, or, alternatively, to modify the DSS code such that the data of interest (raw data, 1D FFT data) is deposited in the shared memory, which is, in my understanding, used as an interface between DSS and MSS.

  • Hi Raphael,

    Besides the core-specific program and data memories, the IWR1642 includes two memories which can be accessed by both the C674x (DSP) and the Cortex R4F cores. These are the L3 Shared RAM and the HSRAM. 

    The L3 Shared RAM (and similarly the HSRAM), as the name suggests, is a single memory visible to both the DSP and Cortex R4F cores, but the address map differs between the two cores. In other words, both cores access the same memory using different addresses, and the mmWave SDK provides the above function to translate an address when passing it between cores.

    Please refer to the Device data sheet for information on the device memory map. The following threads may also be useful to you:

    CCS/IWR1642BOOST: IWR1642 shared memory allocation

    CCS/IWR1642BOOST: How does 1642 allocate a piece of memory to store data separately?

    Regards

    -Nitin