
SK-AM62A-LP: Inference accuracy on CPU and C7x

Part Number: SK-AM62A-LP



In a previous question I asked whether it was possible to use the Google MediaPipe facial landmark model with TIDL, and received the answer that it wasn't practical.
I replied that I would search for an alternative model, but before doing so I tried replacing the PReLU nodes with ReLU nodes and running the result.
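
For reference, the node replacement itself can be done with a short script. This is a minimal sketch assuming an ONNX version of the model; the file names are illustrative:

```python
# Minimal sketch: swap every PRelu node for Relu in an ONNX graph.
# Assumes the model was already converted to ONNX; file names are examples.
import onnx

model = onnx.load("face_landmark.onnx")
for node in model.graph.node:
    if node.op_type == "PRelu":
        node.op_type = "Relu"
        del node.input[1:]  # drop the PRelu slope input; Relu takes only one
onnx.checker.check_model(model)
onnx.save(model, "face_landmark_relu.onnx")
```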

face_landmark_relu.zip
Compile: Ubuntu 22.04, EdgeAI-TIDL-tools 10_01_04_00
Run: SK-AM62A-LP, Processor-SDK 10_01_00_05
The inference results are below (left: on the CPU, right: on the C7x).

[Image] Result (left: CPU, right: C7x)

The results show a difference in accuracy between running on the CPU and running on the C7x.
When the input is a frontal face image, the offloaded model returns better results.
Is there any reason for this, or is it just a coincidence?
Calibration doesn't make the model more accurate than the original, right?
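
For reference, the two outputs can also be compared numerically rather than visually. A minimal sketch, assuming the provider names and options used in the edgeai-tidl-tools examples; the file names are illustrative:

```python
# Sketch: run the same preprocessed input through float CPU inference and
# through the TIDL-offloaded model, then measure the output difference.
import numpy as np
import onnxruntime as ort

inp = np.load("face_input.npy")  # hypothetical preprocessed input tensor

cpu = ort.InferenceSession("face_landmark_relu.onnx",
                           providers=["CPUExecutionProvider"])
ref = cpu.run(None, {cpu.get_inputs()[0].name: inp})[0]

c7x = ort.InferenceSession(
    "face_landmark_relu.onnx",
    providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"artifacts_folder": "./model-artifacts"}, {}],
)
out = c7x.run(None, {c7x.get_inputs()[0].name: inp})[0]

print("max abs diff:", np.abs(ref - out).max())
```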

  • Hello,

    This is an interesting observation. It does look like the CPU and C7x each have some instances where one does better than the other. I see what you mean:

    I tried replacing PReLU nodes with ReLU nodes and running it.

    This means that the model you are running on both the CPU and the C7x does not have any PReLU, correct? Can you confirm it is the same model? Did you quantize the model with the tensor_bits setting (the default is 8-bit)? A sketch of where that setting appears is below.
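
    For reference, this is roughly where tensor_bits lives in an edgeai-tidl-tools compilation script. A minimal sketch; the paths and calibration values are illustrative, not your configuration:

    ```python
    # Sketch of compiling with the TIDL compilation provider, showing where
    # tensor_bits is set (option names as in TI's osrt_python examples).
    import onnxruntime as ort

    compile_options = {
        "tidl_tools_path": "/path/to/tidl_tools",
        "artifacts_folder": "./model-artifacts",
        "tensor_bits": 8,  # default; 16-bit trades speed for precision
        "advanced_options:calibration_frames": 20,
        "advanced_options:calibration_iterations": 5,
    }
    sess = ort.InferenceSession(
        "face_landmark_relu.onnx",
        providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
        provider_options=[compile_options, {}],
    )
    # Running calibration images through sess.run(...) drives calibration.
    ```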

    I am inclined to call this a coincidence. It is difficult to say what happens deep within the model when an altered activation function changes the data distributions without any retraining.

    Calibration doesn't make the model more accurate than the original, right?

    Correct. If it appears more accurate, that is likely random chance. Calibration to fixed-point is an inherently lossy process: some information and precision is unavoidably lost, and the goal during quantization is to minimize that loss with good calibration data (along with a few other techniques). The toy example below illustrates the idea.
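
    As a toy illustration (plain NumPy, not TIDL's actual quantization scheme), symmetric 8-bit quantization squeezes every tensor into 256 levels, and whatever falls between those levels is gone:

    ```python
    # Quantize a float tensor to int8 and back, then measure what the
    # round trip destroyed. Not TIDL's scheme, just the basic mechanism.
    import numpy as np

    x = np.random.randn(1000).astype(np.float32)
    scale = np.abs(x).max() / 127.0                # symmetric per-tensor scale
    x_q = np.clip(np.round(x / scale), -128, 127)  # snap to the 8-bit grid
    x_hat = (x_q * scale).astype(np.float32)       # dequantize
    print("mean abs round-trip error:", np.abs(x - x_hat).mean())
    ```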

    Perhaps losing some of that information happened to be beneficial on a subset of your dataset. Ordinarily, that loss of information reduces accuracy, but since this model was modified without retraining, the change surely has some unintended effects, and those can appear beneficial.

    I would not suggest using this model, but it is a good experiment -- the outputs are more reasonable than I had expected.

    BR,
    Reese

  • Hello Reese,

    Thank you for your reply.
    I thought that was probably the case, but it's good to hear an expert's opinion.
    I'll explore this model a bit more.

    Best regards,
    Fumiya