Calibration for DLP NIRscan

Other Parts Discussed in Thread: DLP4500NIR

Dear all,

We have finished upgrading our DLP from version 1.0 to version 2.0 by following the suggested procedure. We then took measurements to compare the spectra of a sample on both DLP NIRscan units. DLP 1 refers to the unit we upgraded from version 1.0 to 2.0, whereas DLP 2 refers to the unit that shipped with version 2.0 from TI.
We used palm oil as the sample and observed that the shapes of the spectra are similar, but the recorded wavelengths differ.
We also found that the absorbance values for DLP 1 are higher than for DLP 2 (as attached below). I would like to seek your kind advice regarding calibration of our DLP NIRscan, as we are struggling to get DLP 1 to reproduce similar results (a reproducibility problem).

I have already shared this problem in the TI E2E community >> https://e2e.ti.com/support/dlp__mems_micro-electro-mechanical_systems/f/983/t/458723

We really appreciate your kind response and advice.

  • Hello Yaa,

    As far as I understand you have two questions:

    • Why do the reported wavelengths differ between two scans defined by the same parameters?
    • What is the expected deviation for absorbance measurement between multiple scans and multiple units?

    To answer the first question, it may help to understand that the optics disperse and focus the spectrum onto the DMD, and we then control individual pixels on the DMD by creating patterns that direct certain wavelength(s) to a single detector. There is therefore a transfer function between the wavelength of light entering the slit and the DMD pixel column it will be focused onto. The coefficients of this function are determined during calibration, as described in section 4 of the DLP Spectrometer Design Considerations application note.
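
    To make the idea of a transfer function concrete, here is a minimal sketch in Python, assuming a low-order polynomial form and using invented calibration points (the wavelengths, pixel columns, and polynomial order below are purely illustrative, not actual NIRscan calibration data):

    ```python
    import numpy as np

    # Hypothetical calibration points: known source wavelengths (nm) and the
    # fractional DMD pixel columns where each was observed (values invented).
    cal_wavelengths = np.array([1000.0, 1150.0, 1300.0, 1450.0, 1600.0])
    cal_columns = np.array([60.2, 290.5, 520.1, 748.8, 976.3])

    # Fit a low-order polynomial transfer function: column = f(wavelength).
    coeffs = np.polyfit(cal_wavelengths, cal_columns, deg=2)

    def wavelength_to_column(wavelength_nm):
        """Map a wavelength to the (fractional) DMD pixel column it lands on."""
        return np.polyval(coeffs, wavelength_nm)
    ```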

    Once the coefficients are found, the user requests a scan defined by parameters including the start and end wavelengths and the number of wavelength points to measure, which translates into the number of groups of pixels that will be selectively measured between those two wavelengths. The system then:

    1. Computes, from the coefficients, the nearest center pixel column for each rectangular group of pixels used to measure each wavelength point.
    2. Generates patterns to display on the DMD.
    3. Displays the patterns and records the measurements.
    4. Computes the wavelength center of each pattern.

    Because we have a finite number of pixels in the array, there will be some rounding in step 1. For instance, we might request a pattern at a wavelength of 1400nm, but that may map to pixel column 245.6. In this case, the pattern would instead be created at pixel column 246, so the measurement is not centered exactly at 1400nm. After the measurement is complete, the wavelength centered on pixel column 246 is computed, which may be 1400.3nm. Finally, since the calibration coefficients differ between units due to opto-mechanical tolerances, 1400nm might map to pixel column 245.6 on one unit and pixel column 225.9 on another. Because the distance from 245.6 to the used column 246 in unit A is larger than the distance from 225.9 to column 226 in unit B, the reported wavelength for this pattern in unit A will be further from 1400nm than in unit B. For instance, unit A may report 1400.3nm while unit B might report 1400.07nm.
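
    The rounding described above can be sketched in the same way. Continuing with the invented calibration fit from the earlier sketch, the round-trip below requests a wavelength, snaps it to the nearest integer pixel column, and reports the wavelength actually centered on that column:

    ```python
    import numpy as np

    # Invented calibration points (same as the earlier sketch).
    cal_wavelengths = np.array([1000.0, 1150.0, 1300.0, 1450.0, 1600.0])
    cal_columns = np.array([60.2, 290.5, 520.1, 748.8, 976.3])

    fwd = np.polyfit(cal_wavelengths, cal_columns, deg=2)  # wavelength -> column
    inv = np.polyfit(cal_columns, cal_wavelengths, deg=2)  # column -> wavelength

    requested_nm = 1400.0
    fractional_col = np.polyval(fwd, requested_nm)  # e.g. 245.6 in the example
    used_col = round(fractional_col)                # DMD columns are integers
    reported_nm = np.polyval(inv, used_col)         # wavelength actually measured

    print(f"requested {requested_nm} nm -> column {fractional_col:.1f} "
          f"-> used column {used_col} -> reported {reported_nm:.2f} nm")
    ```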

    In practice, the quantization introduced by the DMD pixels is much smaller than that of other spectroscopy methods (array detectors), which typically have fewer horizontal resolution elements (256 or 512) than the DLP4500NIR DMD (1824). Furthermore, the optical transfer function of the system is much broader than the wavelength span between two adjacent pixels, so the accuracy is primarily determined by the optics in the system rather than by the resolution of the DMD. See section 2.1 of the aforementioned application note for more information.

    In most applications, it is desirable to compare scans from different instruments. Because of the very slight shift in the wavelength locations of each point between the two scans, it is common to interpolate the data, by spline interpolation or another algorithm, onto a common wavelength vector before performing any matching or constituent identification analysis.
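
    As a minimal sketch of that resampling step, assuming SciPy is available and using placeholder spectra (the array sizes and wavelength ranges below are illustrative only):

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def resample(wavelengths, absorbances, common_grid):
        """Spline-interpolate one scan onto a shared wavelength vector."""
        return CubicSpline(wavelengths, absorbances)(common_grid)

    # Two scans of the same sample whose reported wavelengths differ slightly.
    wl_a = np.linspace(900.0, 1700.0, 228) + 0.3  # unit A's wavelength points
    wl_b = np.linspace(900.0, 1700.0, 228) - 0.1  # unit B's wavelength points
    abs_a = np.random.rand(228)                   # placeholder spectrum A
    abs_b = np.random.rand(228)                   # placeholder spectrum B

    # Choose a common grid inside the overlap of both scans, then resample both.
    grid = np.linspace(901.0, 1699.0, 228)
    abs_a_common = resample(wl_a, abs_a, grid)
    abs_b_common = resample(wl_b, abs_b, grid)
    ```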

    To answer your second question about the magnitude, it is a little hard to identify the exact cause of this difference. When we test units in our lab, we see typical single-scan variability on the order of ±1 milliabsorbance. We have not seen discrepancies between different units as large as the one you show (it looks like around 1-4%?), but there are some things which can affect repeatability even within a single unit. You may try to isolate or control for these items and see if your measured consistency improves:

    • Cuvette or sample angle. As the sample tilts, its effective path length changes. Absorbance, as calculated with a base-10 logarithm, is linear with path length, so an increase in path length causes a proportional increase in absorbance (see the sketch after this list).
    • Increased time between reference and sample measurements. As the time between taking the reference and taking the sample increases, low-frequency drift sources (thermal, ambient conditions, lamp brightness) can cause the computed absorbance to deviate from the actual absorbance. For instance, if the lamp brightness drifts between the reference and sample scans, that drift enters the ratio of the two measured intensities and therefore the computed absorbance.
    • Operation beyond instrument linearity. In regions of very high absorbance (above 2.5), instruments can become non-linear due to very low-level stray light and other effects. This effect may occur at slightly different absorbance levels on different units if manufacturing tolerances change the stray light condition enough. For best results, the path length for transmission measurements should be adjusted so that absorbance stays between 0 and 2.
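
    To make the path-length point in the first bullet concrete: absorbance is computed as A = -log10(I_sample / I_reference), and per the Beer-Lambert law it scales linearly with path length. A rough sketch of the tilt effect, ignoring refraction and using invented numbers:

    ```python
    import numpy as np

    def absorbance(i_sample, i_reference):
        """A = -log10(I_sample / I_reference)."""
        return -np.log10(i_sample / i_reference)

    # Beer-Lambert: A = epsilon * c * l, so A scales with path length l.
    a_nominal = 1.00  # absorbance with the cuvette square to the beam (invented)
    tilt_deg = 10.0
    # A tilted cuvette lengthens the geometric path by roughly 1/cos(theta).
    path_factor = 1.0 / np.cos(np.radians(tilt_deg))
    a_tilted = a_nominal * path_factor
    print(f"{a_tilted:.3f}")  # ~1.015, about a 1.5% increase from tilt alone
    ```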

    Finally, because of the above factors, typical analysis algorithms meant for material identification are usually designed to identify the substance by the shape of the spectrum without being sensitive to the absolute magnitude of the measured spectrum. A simple algorithm that applies this principle is cosine similarity, which normalizes by the magnitudes of the two vectors being compared.
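
    As a minimal sketch, once two spectra have been resampled onto a common wavelength vector (as in the earlier interpolation sketch), cosine similarity can be computed as below; because the dot product is divided by both vector magnitudes, uniformly scaling either spectrum leaves the score unchanged:

    ```python
    import numpy as np

    def cosine_similarity(a, b):
        """Cosine of the angle between two spectra; 1.0 means identical shape."""
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    reference = np.array([0.10, 0.35, 0.80, 0.55, 0.20])  # invented spectrum
    measured = 1.04 * reference  # same shape, 4% higher absolute magnitude
    print(cosine_similarity(reference, measured))  # ~1.0: magnitude ignored
    ```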

  • Thank you! Your response is greatly appreciated ;)