This thread has been locked.


IWR6843AOPEVM: How can I get output from the IWR6843AOPEVM using Python for People Tracking with Vital Signs?

Part Number: IWR6843AOPEVM
Other Parts Discussed in Thread: IWR6843AOP

Hi,

Below is Python code to read values from the IWR6843AOPEVM for the People Tracking with Vital Signs demo.

Please check whether it parses the values from the IWR6843AOP correctly.

import struct

import serial

SYNC_PATTERN = b'\x02\x01\x04\x03\x06\x05\x08\x07'
HEADER_LEN = 48

mmWaveUARTData = serial.Serial("COM6", 921600, timeout=3.0)

while True:
    header = mmWaveUARTData.read(HEADER_LEN)
    if len(header) < HEADER_LEN:
        continue
    (sync, version, totalPacketLen, platform, frameNumber, subFrameNumber,
     chirpProcessingMargin, frameProcessingMargin, trackProcessTime,
     uartSentTime, numTLVs, checksum) = struct.unpack('<8s9I2H', header)
    if sync != SYNC_PATTERN:
        continue

    print("#################################### found header ##################################")
    print("total length : {}".format(totalPacketLen))
    print("frame number : {}".format(frameNumber))
    print("no of TLVs : {}".format(numTLVs))

    data = mmWaveUARTData.read(totalPacketLen - HEADER_LEN)
    if not data or numTLVs < 1:
        continue

    for cnt in range(numTLVs, 0, -1):
        print("********* tlv cnt : {}".format(cnt))
        if len(data) < 8:
            break
        tlvType, tlvLength = struct.unpack('<2I', data[:8])
        print("tlvType : {}".format(tlvType))
        print("tlvLength : {}".format(tlvLength))
        payload = data[8:tlvLength]  # tlvLength includes the 8-byte TLV header

        if tlvType == 0x06:  # compressed point cloud
            if (tlvLength - 28) % 8 != 0:
                break
            (elevationUnit, azimuthUnit, dopplerUnit,
             rangeUnit, snrUnit) = struct.unpack('<5f', payload[:20])
            print("elevationUnit : {}".format(elevationUnit))
            print("azimuthUnit : {}".format(azimuthUnit))
            print("dopplerUnit : {}".format(dopplerUnit))
            print("rangeUnit : {}".format(rangeUnit))
            print("snrUnit : {}".format(snrUnit))
            points = payload[20:]
            for k in range(0, len(points), 8):
                # Each point is 8 bytes: int8 elevation, int8 azimuth,
                # int16 doppler, uint16 range, uint16 snr (signedness assumed
                # from the lab's TLV description).  Unpack from the raw bytes;
                # unpacking from binascii.hexlify() output, as the original
                # code did, reads ASCII hex characters instead of sensor data
                # and produces garbage values.
                elev, azim, dop, rng, snr = struct.unpack('<2bh2H', points[k:k + 8])
                print("elevation : {}".format(elev * elevationUnit))
                print("azimuth : {}".format(azim * azimuthUnit))
                print("doppler : {}".format(dop * dopplerUnit))
                print("range : {}".format(rng * rangeUnit))
                print("snr : {}".format(snr * snrUnit))

        elif tlvType == 0x07:  # target object list, 112 bytes per target
            for k in range(0, len(payload), 112):
                (tid, posX, posY, posZ, velX, velY, velZ, accX, accY, accZ,
                 *ec_g_conf) = struct.unpack('<I27f', payload[k:k + 112])
                print("tid : {}".format(tid))
                print("posX : {}".format(posX))
                print("posY : {}".format(posY))
                print("posZ : {}".format(posZ))
                print("velX : {}".format(velX))
                print("velY : {}".format(velY))
                print("velZ : {}".format(velZ))

        elif tlvType == 0x08:  # target index
            targetIndex = payload

        elif tlvType == 0x0C:  # presence indication
            presence, = struct.unpack('<I', payload)
            print("Presence : {}".format(presence))

        elif tlvType == 0x0A:  # vital signs
            (unwrap_waveform1, unwrap_waveform2,
             heart_waveform1, heart_waveform2,
             breathing_waveform1, breathing_waveform2,
             heart_rate1, heart_rate2,
             breathing_rate1, breathing_rate2,
             x1, x2, y1, y2, z1, z2, id1, id2,
             range1, range2, angle1, angle2,
             rangeidx1, rangeidx2, angleidx1, angleidx2) = \
                struct.unpack('<22f4H', payload[:96])
            print("heart_rate1 : {}".format(heart_rate1))
            print("heart_rate2 : {}".format(heart_rate2))
            print("breathing_rate1 : {}".format(breathing_rate1))
            print("breathing_rate2 : {}".format(breathing_rate2))

        data = data[tlvLength:]
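One common cause of bad values with a reader like this is losing byte alignment: the loop assumes every 48-byte read starts exactly on a frame boundary. A more robust pattern (a minimal sketch; `find_frame` and the buffering scheme are illustrative, not from the lab) is to accumulate bytes and scan for the 8-byte magic word:

```python
import struct

SYNC = b'\x02\x01\x04\x03\x06\x05\x08\x07'
HEADER_LEN = 48

def find_frame(buf):
    """Scan a byte buffer for the magic word and return (frame, rest),
    or (None, remaining_buf) if no complete frame is available yet."""
    idx = buf.find(SYNC)
    if idx < 0:
        # keep the last 7 bytes in case the sync word is split across reads
        return None, buf[-7:]
    buf = buf[idx:]
    if len(buf) < HEADER_LEN:
        return None, buf
    # totalPacketLen sits right after the 8-byte sync word and 4-byte version
    totalPacketLen = struct.unpack('<I', buf[12:16])[0]
    if len(buf) < totalPacketLen:
        return None, buf
    return buf[:totalPacketLen], buf[totalPacketLen:]
```

Called as `frame, buf = find_frame(buf + port.read(port.in_waiting or 1))` inside the read loop, this resynchronizes automatically after any dropped bytes.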

The output is shown below.

#################################### found header ##################################
total length : 277
frame number : 209455
no of TLVs : 4
********* tlv cnt : 4
tlvType : 6
tlvLength : 76
elevationUnit : 0.009999999776482582
azimuthUnit : 0.009999999776482582
dopplerUnit : 0.0002800000074785203
rangeUnit : 0.0002500000118743628
snrUnit : 0.03999999910593033
elevation : 0.4899999890476465
azimuth : 0.5199999883770943
doppler : 6.9809601864544675
range : 3.5977501708839554
snr : 493.5599889680743
elevation : 1.0099999774247408
azimuth : 0.5399999879300594
doppler : 3.9558401056565344
range : 6.413750304636778
snr : 493.43998897075653
elevation : 0.4899999890476465
azimuth : 0.4899999890476465
doppler : 3.4689200926513877
range : 3.59800017089583
snr : 1048.4399765655398
elevation : 1.0099999774247408
azimuth : 0.9699999783188105
doppler : 3.8841601037420332
range : 6.360250302095665
snr : 493.43998897075653
elevation : 0.47999998927116394
azimuth : 0.9899999778717756
doppler : 3.899000104138395
range : 3.341250158700859
snr : 1027.9599770233035
elevation : 1.0099999774247408
azimuth : 0.9999999776482582
doppler : 3.812480101827532
range : 6.542250310740201
snr : 493.43998897075653
********* tlv cnt : 3
tlvType : 7
tlvLength : 120
tid : 0
posX : 0.1761176735162735
posY : 3.3187012672424316
posZ : 0.9841117262840271
velX : -0.00889516156166792
velY : 0.0030599513556808233
velZ : 0.021570570766925812
********* tlv cnt : 2
tlvType : 8
tlvLength : 21
********* tlv cnt : 1
tlvType : 12
tlvLength : 12
Presence : (1,)
#################################### found header ##################################
total length : 3072
frame number : 209456
no of TLVs : 5
********* tlv cnt : 5
tlvType : 6
tlvLength : 76
elevationUnit : 0.009999999776482582
azimuthUnit : 0.009999999776482582
dopplerUnit : 0.0002800000074785203
rangeUnit : 0.0002500000118743628
snrUnit : 0.03999999910593033
elevation : 0.4899999890476465
azimuth : 0.5199999883770943
doppler : 6.9809601864544675
range : 3.5977501708839554
snr : 493.5599889680743
elevation : 1.0099999774247408
azimuth : 0.5399999879300594
doppler : 3.9558401056565344
range : 6.413750304636778
snr : 493.43998897075653
elevation : 0.4899999890476465
azimuth : 0.4899999890476465
doppler : 3.4689200926513877
range : 3.59800017089583
snr : 1048.4399765655398
elevation : 1.0099999774247408
azimuth : 0.9699999783188105
doppler : 3.8841601037420332
range : 6.360250302095665
snr : 493.43998897075653
elevation : 0.47999998927116394
azimuth : 0.9899999778717756
doppler : 3.899000104138395
range : 3.341250158700859
snr : 1027.9599770233035
elevation : 1.0099999774247408
azimuth : 0.9999999776482582
doppler : 3.812480101827532
range : 6.542250310740201
snr : 493.43998897075653
********* tlv cnt : 4
tlvType : 7
tlvLength : 120
tid : 0
posX : 0.1761176735162735
posY : 3.3187012672424316
posZ : 0.9841117262840271
velX : -0.00889516156166792
velY : 0.0030599513556808233
velZ : 0.021570570766925812
********* tlv cnt : 3
tlvType : 8
tlvLength : 14
********* tlv cnt : 2
tlvType : 12
tlvLength : 12
Presence : (1,)
********* tlv cnt : 1
tlvType : 10
tlvLength : 2802
heart_rate1 : 55.24530029296875
heart_rate2 : 55.24530029296875
breathing1 : -1.043005166759384e+36
breathing2 : -8.122871264504283e-09
#################################### found header ##################################
total length : 289
frame number : 209466
no of TLVs : 4
********* tlv cnt : 4
tlvType : 6
tlvLength : 92
elevationUnit : 0.009999999776482582
azimuthUnit : 0.009999999776482582
dopplerUnit : 0.0002800000074785203
rangeUnit : 0.0002500000118743628
snrUnit : 0.03999999910593033
elevation : 0.4999999888241291
azimuth : 0.47999998927116394
doppler : 3.540040094550932
range : 3.3402501586533617
snr : 997.1599777117372
elevation : 1.0099999774247408
azimuth : 0.9699999783188105
doppler : 3.8841601037420332
range : 3.2780001556966454
snr : 493.43998897075653
elevation : 0.4999999888241291
azimuth : 0.47999998927116394
doppler : 3.683400098379934
range : 3.0850001465296373
snr : 555.2399875894189
elevation : 1.0099999774247408
azimuth : 0.5399999879300594
doppler : 3.9558401056565344
range : 3.2245001531555317
snr : 493.43998897075653
elevation : 0.47999998927116394
azimuth : 0.4999999888241291
doppler : 3.8844401037495118
range : 3.341000158688985
snr : 1036.239976838231
elevation : 1.0099999774247408
azimuth : 0.5099999886006117
doppler : 4.027520107571036
range : 6.222000295529142
snr : 493.43998897075653
elevation : 0.47999998927116394
azimuth : 0.4999999888241291
doppler : 7.110040189902065
range : 3.0845001465058886
snr : 575.5599871352315
elevation : 1.0099999774247408
azimuth : 0.5399999879300594
doppler : 3.9558401056565344
range : 6.2967502990795765
snr : 493.43998897075653
********* tlv cnt : 3
tlvType : 7
tlvLength : 120
tid : 0
posX : 0.1761176735162735
posY : 3.3187012672424316
posZ : 0.9841117262840271
velX : -0.00889516156166792
velY : 0.0030599513556808233
velZ : 0.021570570766925812
********* tlv cnt : 2
tlvType : 8
tlvLength : 17
********* tlv cnt : 1
tlvType : 12
tlvLength : 12
Presence : (1,)
#################################### found header ##################################
total length : 256
frame number : 209467
no of TLVs : 4
********* tlv cnt : 4
tlvType : 6
tlvLength : 60
elevationUnit : 0.009999999776482582
azimuthUnit : 0.009999999776482582
dopplerUnit : 0.0002800000074785203
rangeUnit : 0.0002500000118743628
snrUnit : 0.03999999910593033
elevation : 0.47999998927116394
azimuth : 0.5199999883770943
doppler : 3.9569601056864485
range : 6.350000301608816
snr : 997.279977709055
elevation : 1.0099999774247408
azimuth : 0.9999999776482582
doppler : 3.812480101827532
range : 6.348250301525695



  • tlvType : 06 = Point cloud, 07 = Target object list, 08 = Target index

    But in my output there are two additional tlvTypes: 10 and 12.

    I think tlvType 12 is presence and tlvType 10 is vital signs.

    Is that right?

    However, I think my output values are not correct.

    How can I get correct values from the IWR6843AOPEVM with People Tracking with Vital Signs?

  • Hi Kevin,

    Yes, 10 is Vital Signs and 12 is Presence. 

    For vital signs, I would look at breathing_rate1 and breathing_rate2 instead of using the waveform.

    Presence should be a uint8 type.

    Thank you,

    Angie
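Following that advice, here is a minimal sketch of pulling just the rate fields (and the presence flag) out of those TLV payloads, assuming the 22-float + 4-uint16 vital-signs layout and field order posted earlier in this thread (not verified against the lab documentation):

```python
import struct

def parse_vitals(payload):
    """Unpack the first 96 bytes of a type-10 (vital signs) TLV payload.
    Field order follows the layout assumed earlier in this thread:
    six waveform floats, then heart_rate1/2 and breathing_rate1/2."""
    vals = struct.unpack('<22f4H', payload[:96])
    return {
        'heart_rate':     vals[6:8],    # heart_rate1, heart_rate2
        'breathing_rate': vals[8:10],   # breathing_rate1, breathing_rate2
    }

def parse_presence(payload):
    """Read the type-12 (presence) flag; per the note above it is a
    uint8, so take the first byte rather than decoding a uint32."""
    return payload[0]
```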

  • Hi Angie,

    Do parameters such as profileCfg/chirpCfg in the config file impact the output of the mmWave IWR6843AOPEVM?

    If so, how can I test vital signs and people tracking with those parameters?

    Moreover, please advise me on the important parameters that impact the radar output.

    I have already referred to mmwave_sdk_user_guide.pdf, which explains the parameters.

    Thanks 

    Kevin,

  • Hi Kevin,

    Yes, these parameters affect the output, and you can modify them by editing the .cfg file you send to the device in a text editor.

    The SDK parameters (including profileCfg/chirpCfg) are documented in the SDK user's guide. The detection and tracking layer parameters also have user's guides in the TI Resource Explorer toolbox. They can be found by navigating to:

    Thanks,

    Angie
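In practice, a modified .cfg is pushed to the EVM by writing it line by line over the CLI/configuration UART (usually the lower-rate port at 115200 baud; the COM port name below is a placeholder for your setup). A minimal sketch:

```python
import time

def cfg_commands(text):
    """Yield the command lines from a .cfg file body, skipping blank
    lines and '%' comment lines."""
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith('%'):
            yield line

def send_cfg(cfg_path, cli_port="COM5", baud=115200):
    """Send a TI mmWave .cfg file one command at a time over the CLI UART,
    pausing briefly so the firmware can acknowledge each command."""
    import serial  # pyserial
    cli = serial.Serial(cli_port, baud, timeout=0.5)
    with open(cfg_path) as f:
        for cmd in cfg_commands(f.read()):
            cli.write((cmd + '\n').encode())
            time.sleep(0.05)
            print(cli.read(cli.in_waiting or 1).decode(errors='replace'))
    cli.close()
```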

  • Hi Angie,

    Thanks for your support.

    I am testing the Vital_Signs_With_People_Tracking lab in mmWave Toolbox 4.11, using Python over the RS232 interface to get TLVs such as the point cloud, target list, and breathing and heart rates.

    There is a note in the lab documentation:

    Note: No Source Code Provided

    This lab is provided as a binary file without source code. This can be used to test the application in various different use cases. For more information on the source code and implementation, please reach out to your local TI sales representative.

    Can I get the source code mentioned above from you?

    If not, how can I obtain the source code for Vital_Signs_With_People_Tracking?

     

    Thank you

    Best Regards,

    Kevin


    Seoul, S. Korea.

  • Kevin,

    Angie is out of office today and will get back to you regarding your source code request for the Vital Signs with People Tracking Lab. In the meantime, I have gone ahead and edited your last reply to hide your personal contact info, since this is a public forum and anyone could see it. I have noted it down for Angie when she returns.

    Best Regards,
    Alec

  • Hi Angie,

    While testing people tracking with vital signs using the TI IWR6843AOPEVM, I changed values in the config file that set the detection layer and tracking layer.

    While doing this, a thought came to mind:

    If I change some config parameters, such as the SNR parameters, can I classify a rotating fan and a moving person separately?

    If you have any advice on this, please let me know which config parameters I should change to classify a rotating fan versus a moving person.

    Thank you.

    Best Regards,

    Kevin,

  • Hi Kevin,

    First, I am going to move our conversation on code access to email.

    Second, I am going to look into your request on classification and get back to you on this tomorrow. However, for this type of classification we usually see customers using machine learning algorithms. We do not have any examples of this at this time. 

    Thank you,

    Angie

  • Hi Angie,

    Thanks for your support.

    Vital signs are our concern, not the oscillating fan nearby. I want to get a person's vital signs, but the oscillating fan affects the vital signs I want to measure.

    That is why I asked about classification between an oscillating fan and a moving person.

    May I use deep learning algorithms such as a CNN or LSTM for this issue?

  • Hi Kevin,

    Yes, CNN would work well.

    Thanks,

    Angie

  • Hi Angie,

    Thanks for your support.

    I checked the TI toolbox for the radar sensor. While doing so, I came to understand the classification source code that TI provides in the People Counting lab.

    If so, can I classify a moving non-person object versus a moving person with that classification source instead of a deep learning CNN?

    Thank you.

    Best Regards,

    Kevin,

  • Hi Kevin,

    The kNN classifier designed for the Sense and Direct lab is made for such a use case (removing false people detections such as a rotating fan). However, you would need to do the integration effort and experimentation on your end to determine the most effective way to merge this classifier (or one like it) into your existing source code. If you are trying to use the existing model without retraining, you will need to keep much of your configuration similar to what is used with the Sense and Direct offering to maintain the robustness of the model.

    Best Regards,
    Alec
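As an illustration of the kind of filter described above, here is a small plain-NumPy k-NN sketch; the feature vectors and training values are made-up placeholders, not the Sense and Direct feature set or model:

```python
import numpy as np

# Illustrative per-track features: [num_points, mean_snr, doppler_std, height_spread]
X_train = np.array([
    [40.0, 18.0, 0.35, 0.9],   # person
    [55.0, 22.0, 0.40, 1.1],   # person
    [12.0,  9.0, 1.80, 0.2],   # rotating fan
    [15.0,  8.0, 2.10, 0.3],   # rotating fan
])
y_train = np.array([1, 1, 0, 0])   # 1 = person, 0 = non-person

def knn_predict(x, k=3):
    """Majority vote among the k training samples nearest to x."""
    d = np.linalg.norm(X_train - np.asarray(x, dtype=float), axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())
```

A track classified as 0 would then be dropped before the vital-signs stage; scikit-learn's `KNeighborsClassifier` implements the same idea with more options.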

  • Hi Alec,

    Thanks for your support and information.

    I can write deep learning CNN code for classification or object detection using Python with TensorFlow or PyTorch.

    The Python library scikit-learn also has k-NN algorithms.

    However, if I get data from the IWR6843AOPEVM using the Vital Signs with People Tracking lab, the outputs are the point cloud, target list, target ID, presence, and vital signs.

    To train and evaluate a deep learning or k-NN model, how can I build a dataset from the point cloud? Is the point cloud the right data for the dataset?

    Thanks again 

    Best Regards,

    Kevin,

  • Hello Kevin,

    Please allow us until Tuesday 7/26 to answer this; we appreciate your patience.

    Best Regards,

    Pedrhom Nafisi

  • Hi Kevin,

    To understand how statistics from a point cloud output can be used to train an ML model, please start on slide 7 of the Sense and Direct 68xx Algorithm Overview in the toolbox. This document details the methods used to train the Sense and Direct lab.

    Thank you,

    Angie
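The idea on that slide reduces to summarizing the points associated with each track into a fixed-length statistics vector. A sketch with illustrative statistics (the actual Sense and Direct feature set may differ):

```python
import numpy as np

def track_features(points):
    """Summarize one track's associated point cloud, given as rows of
    [range, azimuth, elevation, doppler, snr], into a fixed-length
    feature vector suitable for a k-NN or CNN classifier."""
    pts = np.asarray(points, dtype=float)
    return np.array([
        float(len(pts)),        # number of points on the track
        pts[:, 4].mean(),       # mean SNR
        pts[:, 3].std(),        # doppler spread (moving limbs vs rigid fan blades)
        np.ptp(pts[:, 0]),      # range extent of the cluster
    ])
```

A dataset is then a matrix of such vectors, one row per track per frame, labeled person/non-person by hand.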

  • Hi Angie,

    Thanks for your information.

    After reviewing the Sense and Direct 68xx Algorithm Overview in the toolbox that you recommended, I understand how the Sense and Direct lab works.

    Meanwhile, my goal is to get vital sign data while tracking multiple persons, with classification between moving objects and moving persons.

    In this regard, the classification between moving objects and moving persons is addressed by the Sense and Direct lab, and vital signs while tracking a person are addressed by the Vital_Signs_With_People_Tracking lab.

    1. Merging the two labs would achieve the goal I described above. Is my logic true or false?

    2. Can the Vital_Signs_With_People_Tracking lab provide multiple persons' vital signs if I change the config file?

    Thank you.

    Kevin,

  • Hi Kevin,

    1. Yes, the merge of these two labs would be required. This is not a small task but that would be the path to getting classification of a moving object vs a moving person.

    2. This lab currently supports single person vitals detection only. If you modify the tracking config to 2 people it will still only be able to detect the vitals of 1 person at a time, whichever person is the first track.

    Thank you,

    Angie

  • Hi Angie,

    Thank you for your answers.

    I understand that the Vital_Signs_With_People_Tracking lab still supports only single-person vitals detection even if I change the config file.

    If so, how about modifying the Vital_Signs_With_People_Tracking lab source code?

    Q. If I modify the source code of the Vital_Signs_With_People_Tracking lab, is it possible to get the vital signs of multiple persons while tracking?

     

    Thank you.

    Kevin,

  • Hi Kevin,

    With modifications to the source code and the memory allocation this is possible, but it takes a deep understanding of how the information needed for the vital signs processing is stored on the device. The source code has a developer's guide which can help with this.

    At this time we only have space for processing the vital signs of 2 people while tracking. However, if the window size is reduced (currently set to 300 frames), you could trade off detecting vital signs for more people against less accurate vital signs measurements.

    Thank you,

    Angie

  • Hi Angie,

    Thanks for your support and Information.

    Based on my reading of the developer's guide for Vital Signs with People Tracking and the source code below, I think vital signs are available as an output TLV for two persons while tracking them.

    However, the viewer doesn't support it. Is that right? I am reviewing the source code very carefully and in detail. Please support me ^^.

    typedef struct Output2DSP_VS_Data_t
    {
        float    x[2];
        float    y[2];
        float    z[2];
        float    id[2];
        float    reserve[2];
        uint16_t rangeidx[2];
        uint16_t Azimuthidx[2];
        uint16_t VSIndexLockEnable;
        uint16_t refresh_frame_count;
        uint16_t window_size;
    } Output2DSP_VS_Data;
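If that struct is emitted as-is over UART, it could be unpacked in Python roughly like this (a sketch assuming a packed little-endian layout with no alignment padding; the on-device layout may differ):

```python
import struct

VS_DATA_FMT = '<10f7H'   # x[2] y[2] z[2] id[2] reserve[2], then 7 uint16 fields
VS_DATA_SIZE = struct.calcsize(VS_DATA_FMT)   # 54 bytes when unpadded

def parse_vs_data(raw):
    """Map the Output2DSP_VS_Data bytes onto named fields."""
    v = struct.unpack(VS_DATA_FMT, raw[:VS_DATA_SIZE])
    return {
        'x': v[0:2], 'y': v[2:4], 'z': v[4:6], 'id': v[6:8],
        'rangeidx': v[10:12], 'azimuthidx': v[12:14],
        'vs_index_lock_enable': v[14],
        'refresh_frame_count': v[15],
        'window_size': v[16],
    }
```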

    Thank you.

    Best Regards,

    Kevin,

  • Hi Kevin,

    Even though these are 2D arrays, the second element is currently being used for debug purposes. You can better see what is being loaded into each of these inputs once you have access to the source code, but I cannot talk about this here since the source code is under NDA and this is a public platform.

    Two TLVs (one per person) would need to be output for multi-person vital signs to work. However, multi-person vitals is possible and is currently being evaluated by our engineers.

    Thank you,

    Angie

  • Hi Angie,

    I am also concerned about getting replies here, because the source code is under NDA.

    In that case, how can I get support from you?

    I will ask you in more detail about the source code later.

    Please support me; I have to modify the source code to get vital signs for two or more persons.

    Our demo for customers is coming soon.

    Thank you.

    Kevin,

  • Hi Kevin,

    Once you have an NDA signed with TI and access to the source code, we can discuss this over the e2e direct messaging function.

    Thanks,

    Angie