The LP5036 offers a logarithmic-scale dimming option for the LEDx_BRIGHTNESS registers, described on page 14 of the datasheet. The right-hand graph in Figure 11 shows the transfer curve:
I would like to know what electro-optical transfer function (EOTF) was used to produce this curve. It does not match the sRGB EOTF (a linear segment followed by a 2.4 gamma exponent), nor does it match the simplified pure 2.2 gamma variant. I attempted regressions in both ax^b and b^(x-1) form, using the three clearest points - [0,0], [160/255, 0.2], [1,1] - but neither fit produced a curve that matched the graph.
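For reference, the pure-power fit can be reproduced as follows. Since [0,0] and [1,1] force a = 1 in the ax^b form, the middle point alone determines the exponent (this is just what these three points imply, not a claim about the actual hardware curve):

```python
import math

# Anchor points read off the datasheet graph (input fraction, output fraction).
# [0, 0] and [1, 1] reduce y = a * x^b to a pure power law y = x^b,
# so the middle point alone determines the exponent.
x_mid, y_mid = 160 / 255, 0.2
b = math.log(y_mid) / math.log(x_mid)  # exponent implied by the middle point
print(f"implied exponent b = {b:.2f}")  # well above sRGB's ~2.2
```

The implied exponent comes out around 3.4, which is why neither the sRGB curve nor a plain 2.2 gamma matches.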
I would also like clarification on the effects of quantisation in the linear operation mode vs. the logarithmic operation mode. This is a bit involved, so some background is in order:
When operating in linear mode, the 8-bit values map to linear (not perceptual) brightness of the LED. However, the perceptual brightness delta of each step is not constant, since our eyes are much more sensitive to brightness changes in dimmer light than in brighter light, i.e. the step from 1 to 2 is a much more significant perceptual brightness change than the step from 251 to 252.

Gamma correction aims to solve this by mapping between linear and perceptual space. However, if gamma correction is performed in software and the hardware is still controlled via an 8-bit quantised linear brightness value (translated from perceptual space via an EOTF), this results in significant crushing of the lower brightness values, producing stepping and banding.

While bit depth can be increased to alleviate this problem (with trade-offs in driver Tr/Tf requirements), the optimal solution is for the hardware to accept quantised values in perceptual space and utilise non-quantised (or higher bit-depth quantised) adjustment of the constant-current or PWM control loop. This results in each quantised value step (0 to 1, 1 to 2, ... 254 to 255) producing a perceptually constant change in brightness, eliminating the crushing and banding problems. Presumably this was the intent of the logarithmic dimming curve mode in the LP5036.
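The crushing effect is easy to demonstrate numerically. The sketch below assumes a simple gamma-2.2 model of perceived brightness (an approximation for illustration, not the LP5036's actual behaviour):

```python
# Sketch of the crushing effect, assuming perceived brightness is
# approximately linear_brightness ** (1 / 2.2). Not the LP5036's
# actual curve; illustrative only.
GAMMA = 2.2

def perceived(linear: float) -> float:
    """Approximate perceptual brightness of a linear duty-cycle fraction."""
    return linear ** (1 / GAMMA)

# Smallest non-zero 8-bit linear code, and the perceptual jump it causes:
step_low = perceived(1 / 255) - perceived(0)
# For comparison, the step at the top of the range:
step_high = perceived(255 / 255) - perceived(254 / 255)

print(f"perceived step 0 -> 1:     {step_low:.4f}")   # ~0.08, a visible 8% jump
print(f"perceived step 254 -> 255: {step_high:.4f}")  # ~0.0018, imperceptible
```

The first non-black code already lands at roughly 8% perceived brightness, while steps near full scale are tiny; that asymmetry is exactly the stepping and banding described above.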
The datasheet states that "if a special dimming curve is desired, using the linear scale with software correction is the most flexible approach". The linear mode certainly does make it trivial to implement an arbitrary EOTF in software instead of using the hardware logarithmic curve, but the trade-off for that simplicity is that you run into the very problem the logarithmic mode seeks to solve. It seems to me that if I wanted a better-quality result, I should define a function that translates perceptual brightness into the LP5036's hardware-specific logarithmic brightness space. For example, if the hardware's EOTF were H(x) = x^3.0 and my target EOTF were E(x) = x^2.2, I could define E'(x) = x^(2.2/3.0) as an intermediate translation function, such that H(E'(x)) = x^2.2.
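This intermediate translation can be sanity-checked numerically. The exponents 3.0 and 2.2 here are just the illustrative values from the example above, not the real hardware curve:

```python
# Numeric check of the intermediate translation, using the illustrative
# exponents from the example (3.0 for the hypothetical hardware EOTF,
# 2.2 for the target EOTF).
H = lambda x: x ** 3.0                 # hypothetical hardware EOTF
E = lambda x: x ** 2.2                 # target EOTF
E_prime = lambda x: x ** (2.2 / 3.0)   # intermediate translation function

for x in (0.1, 0.25, 0.5, 0.9):
    # H(E'(x)) should equal E(x) up to floating-point error.
    assert abs(H(E_prime(x)) - E(x)) < 1e-12
print("H(E'(x)) matches x^2.2 at all tested points")
```

Algebraically this is just (x^(2.2/3.0))^3.0 = x^2.2, so the composition reproduces the target EOTF exactly; the check confirms the floating-point implementation agrees.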
I tested this in a graphing calculator: https://www.desmos.com/calculator/ismfjbchot
The blue line represents the typical approach of performing software gamma correction and then quantising in linear space. You can see significant stepping at the low end, with a minimum non-black perceptual brightness of around 8%, giving a contrast ratio of just 12.5:1, which is very poor.
The green line represents the alternative approach of creating an intermediate function to map from the desired EOTF into the hardware's EOTF, then quantising in the hardware's nonlinear brightness space. This still produces nonlinearity in the step sizes, but has the benefit of making the steps much smaller in the low brightness region. In all cases where the hardware's internal transfer curve approximates an exponent greater than 1, this produces improved results. The trade-off is increased quantisation error in the high brightness regions, but the overall magnitude of perceptual error is far lower (as little as one third for typical exponents).
However, this assumption of improved performance only holds if the 8-bit quantisation in intermediate gamma space is the only quantisation being applied within the hardware. If the hardware simply translates from 8-bit "log" space back into 8-bit linear space using a LUT, such that the first ~8% of perceptual values map to zero brightness in practice, the crushing effect will occur regardless. If the hardware translates from 8-bit "log" space into a higher bit-depth (e.g. 10-bit or 12-bit) linear space using a LUT, the efficacy of the intermediate gamma translation approach should hold but requires further characterisation. This is what I would like to better understand.
How is the LP5036's "logarithmic" transfer curve implemented in terms of linear quantisation? Does the curve LUT directly map the 8-bit quantised input value (in log space) into a 12-bit value in linear space, which is further separated into 9-bit PWM and 3-bit temporal dithering depths?