Hello,
I'm working with the **SVC220 sensor** on the **TDA4 VPAC ISP**, and I noticed that the output from the VPAC differs from my CPU implementation, particularly in contrast and sharpness.
I'm using the VPAC to perform demosaicing and undistortion on raw images. When I compare the two results, the VPAC output has higher contrast and appears sharper. Since the output images feed a neural network, my concern is that this mismatch could make the network's results inconsistent depending on which pipeline produced its input.
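For reference, here is a minimal sketch of how the mismatch could be quantified before and after any tuning (the image names are placeholders, and both outputs are assumed to be converted to the same 8-bit BGR format):
```cpp
#include <opencv2/core.hpp>
#include <iostream>

// Compare the two pipeline outputs; "vpacBgr" and "cpuBgr" are placeholder
// names for the VPAC result and the CPU result.
void compareOutputs(const cv::Mat& vpacBgr, const cv::Mat& cpuBgr)
{
    CV_Assert(vpacBgr.size() == cpuBgr.size() && vpacBgr.type() == cpuBgr.type());

    cv::Mat diff;
    cv::absdiff(vpacBgr, cpuBgr, diff);  // per-pixel absolute difference
    std::cout << "mean |diff| per channel: " << cv::mean(diff) << "\n";
    std::cout << "PSNR: " << cv::PSNR(vpacBgr, cpuBgr) << " dB\n";  // assumes 8-bit data
}
```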
Here's the code snippet I'm using for the CPU demosaicing:
```cpp
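// Note: OpenCV names Bayer patterns from the 2x2 block at the second
// row/column, so COLOR_BayerRG2BGR corresponds to a BGGR sensor layout,
// consistent with COLOR_PATTERN 3 in the sensor properties below.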
cv::demosaicing(convImg16.getCpuConst(), demosaicImg16.getCpu(), cv::COLOR_BayerRG2BGR, 3);
```
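Part of the sharpness gap may simply be the demosaic algorithm itself: plain `cv::COLOR_BayerRG2BGR` selects bilinear interpolation, which is noticeably softer than typical hardware CFA interpolation. Whether OpenCV's edge-aware variant actually lands closer to the VPAC output is an assumption I would have to verify, but it is a cheap experiment:
```cpp
// Edge-aware demosaic; 16-bit input should be supported, but verify on
// your OpenCV version. The _VNG variant is another option, though as far
// as I know it is limited to 8-bit input.
cv::demosaicing(convImg16.getCpuConst(), demosaicImg16.getCpu(),
                cv::COLOR_BayerRG2BGR_EA, 3);
```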
**SVC220 Sensor Properties**:
```plaintext
SENSOR_ID 250
PRJ_DIR ../svc_output
SENSOR_NAME svc220
SENSOR_DCC_NAME svc220
SENSOR_WIDTH 1920
SENSOR_HEIGHT 1280
# 0=RGGB; 1=GRBG; 2=GBRG; 3=BGGR; 4=MONO
COLOR_PATTERN 3
# Sensor mode: 0 for linear (no decompanding), 1 for WDR (decompanding)
WDR_MODE 0
# Raw sensor image BIT_DEPTH: 8, 10, or 12 for linear sensors; typically 12 for WDR mode
BIT_DEPTH 12
# WDR BIT_DEPTH: Bit depth after decompanding, typically 20 or 24
WDR_BIT_DEPTH 16
# WDR decompanding knee points (comma-separated without spaces)
WDR_KNEE_X 0,512,837,1162,1487,1812,2137,2462,2787,3112,3437,3762,4087,65535
WDR_KNEE_Y 0,512,1024,2048,4096,8192,16384,32768,65536,131072,262144,524288,1048575,1048575
# Sensor black level to subtract before decompanding (for linear sensors only and some Sony WDR sensors)
BLACK_PRE 0
# Sensor black level to subtract after decompanding (for most WDR sensors and all linear sensors)
BLACK_POST 168
# GAMMA value for compressing 20/24-bit WDR raw to 16-bit ISP internal
# Typically around 50 (0.5) for 24-bit WDR sensors and 70 (0.7) for 20-bit sensors
GAMMA_PRE 50
# LSB location for H3A input bit range (from bit-H3A_INPUT_LSB to bit-H3A_INPUT_LSB+9)
H3A_INPUT_LSB 2
```
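One candidate for the contrast gap visible in this config: the ISP subtracts BLACK_POST (168) from the raw data, so if the CPU path feeds the raw straight into `cv::demosaicing` without removing that pedestal, the two outputs will differ in brightness and contrast. A minimal sketch of matching it on the CPU side (assuming the raw is CV_16U and that 168 is expressed in the same bit domain as the buffer; both assumptions would need to be confirmed against the DCC tuning):
```cpp
// Subtract the sensor black level before demosaicing so the CPU path
// starts from the same pedestal as the ISP (BLACK_POST 168, assumed to
// be in the 12-bit domain of the raw buffer).
cv::Mat raw = convImg16.getCpuConst();  // placeholder accessor, as in the snippet above
cv::Mat rawBlc;
cv::subtract(raw, cv::Scalar::all(168), rawBlc);  // saturating subtract clamps at 0 for CV_16U
cv::demosaicing(rawBlc, demosaicImg16.getCpu(), cv::COLOR_BayerRG2BGR, 3);
```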
**My Question**:
Is there a way to adjust the VPAC's runtime parameters (such as contrast or sharpness) to bring its output closer to the CPU result? Ideally, I'd like consistent output from the two pipelines so that the neural network's results don't depend on which one produced its input.
Any suggestions on where I might tune parameters or settings (perhaps related to gamma or black level) would be greatly appreciated!
Thanks in advance for your help!