
DLP Spectrum Library fails at scan interpretation

Hello everyone,

I've built my PC application with the MinGW x64 GCC compiler on MSYS2. The hidapi and dlpspec libraries were also built with the same compiler and linked into the executable. I've reused API.cpp and USB.cpp from the GUI sources.

In my current setup I've:
1) built hidapi into a DLL using this makefile
2) built libdlpspec into a .a file using the provided build-lib.bat
3) linked both libraries to my executable

Problem: The application works only when the resolution is at the factory default (Column 228 or Hadamard 228). When this setting is changed to any value other than 228, I get a segmentation fault/stack corruption at the dlpspec_scan_interpret() API call. Strange.

The returned size of the serialized TPL for all configurations (default, modified, and new) is 3822 bytes (same as the Qt application). There seems to be no problem with the other API calls.

Here is a short snippet of my code :

// All API calls are checked for errors; the following snippet omits the checks

void *scan_data = malloc(SCAN_DATA_BLOB_SIZE);
scanResults scan_result;

NNO_PerformScan(false);
// Wait for scan to complete
scan_result_bytes = NNO_GetFileSizeToRead(NNO_FILE_SCAN_DATA);
bytes_read = NNO_GetFile((unsigned char *)scan_data, scan_result_bytes);
dlpspec_scan_interpret(scan_data, SCAN_DATA_BLOB_SIZE, &scan_result);

I tried debugging deep into dlpspec and tpl but did not find anything, so I'm posting here in case anyone has come across this problem.

  • Hi,
    Does the failure happen only when the digital resolution is set below 228, or also when it is set above 228? My initial guess is some kind of memory issue (insufficient stack or heap space). Have you been able to include the spectrum library source files (rather than the lib) in your application and debug? That might help you pinpoint exactly where the segmentation fault happens.

    Regards,
    Sreeram
  • Hi,

    Sreeram Subramanian said:
    Hi,
    Does the failure happen only when the digital resolution is set below 228, or also when it is set above 228?

    Yes, the problem also occurs when the resolution is greater than 228. (To correct what I stated in the question: an application built this way does not work for any scan configuration other than the factory-set defaults Column 1 and Hadamard 1.)

    Sreeram Subramanian said:

    Have you been able to include the spectrum library source files (rather than lib) into your application and debug

    I built the application by including the dlpspec library sources alongside my own sources, and the problem persists. (If I'm not wrong, using a static library is the same as building the library sources along with the application.)

    However, I added a debug flag and did some debugging. Here are my findings:

    When the factory-set Column 1/Hadamard 1 scan configs are used, dlpspec_is_slewdatatype() returns false in dlpspec_scan_read_data(), so the TPL data is deserialized using dlpspec_deserialize(pBuf, bufSize, SCAN_DATA_TYPE).

    However, when any other scan configuration is used, dlpspec_is_slewdatatype() returns true. The else branch of the if-construct is then executed, and the segmentation fault occurs at dlpspec_scan.c:579, in dlpspec_deserialize(pHeadBuf, size_cfg_head, SLEW_CFG_HEAD_TYPE).

    At dlpspec_helper.c:476:

    std_cfg_format is S(uc#cccccccjjvvu$(f#f#)c#vccccvc#c#vvcvvi#)

    And the following are the formats read from the NIRscan Nano using tpl_peek:

    150 Resolution S(uc#cccccccjjvvu$(f#f#)c#vccc)
    624 Resolution S(uc#cccccccjjvvu$(f#f#)c#vccc)
    Factory Set Hadamard 1 S(uc#cccccccjjvvu$(f#f#)c#vccccvc#c#vvcvvi#)


  • I found the solution to this problem. I've been building 64-bit applications, but the DLP Spectrum library contains pointer arithmetic that is only valid on 32-bit systems.

    For example, in dlpspec_scan.c:578, in the function dlpspec_scan_read_data():

    pHeadBuf = (void *)((int)pBuf + size_data_head);

    the pBuf pointer is cast to int and incremented by size_data_head bytes. This works on a 32-bit machine, where a pointer and an int are both 32 bits wide. On a 64-bit system, however, a pointer is 64 bits wide while an int is still 32 bits.

    Therefore, when the pointer is cast to int, its 32 most significant bits are truncated; the offset is then added to the truncated value, and the actual pointer is lost.

    This can be fixed, for example, by casting to char * instead of int:

    pHeadBuf = (void *)((char*)pBuf + size_data_head);
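    The truncation can be reproduced without the library at all. This is a minimal sketch (not dlpspec code): it uses a synthetic 64-bit address to stand in for pBuf and compares the broken int-cast pattern against the byte-wise char * arithmetic.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Synthetic 64-bit address with bits set above the 32-bit range,
           standing in for pBuf in dlpspec_scan_read_data(). */
        uintptr_t pBuf_addr = (uintptr_t)0x00007ffd12345678ULL;
        uintptr_t size_data_head = 16;

        /* Broken pattern: narrow to 32 bits first (what "(int)pBuf" does
           on a 64-bit system), then add the offset. */
        uintptr_t bad = (uintptr_t)(uint32_t)pBuf_addr + size_data_head;

        /* Fixed pattern: byte-wise arithmetic, as with (char *)pBuf,
           keeps all 64 bits of the address. */
        uintptr_t good = pBuf_addr + size_data_head;

        assert(good == 0x00007ffd12345688ULL);
        assert(bad  == 0x0000000012345688ULL); /* upper 32 bits lost */

        printf("good=%#llx bad=%#llx\n",
               (unsigned long long)good, (unsigned long long)bad);
        return 0;
    }
    ```

    The same issue would disappear on a 32-bit build, which is presumably why the stock GUI works: there, int and the pointer are the same width, so the cast is lossless.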

    The GUI was installed in C:\Program Files\, so I had assumed the GUI application was 64-bit.

  • Hi Rhik,
    Glad to know that the problem is resolved. Thanks for sharing the details with us so that we can address it during the next release.

    Regards,
    Sreeram