Hi Team,
When using the DSI interface, how much of the total bandwidth should be allocated to "overhead" (not video data)?
Thank you,
Jared
Hello Jared,
Non-video data periods would be:
1. Short Packet:
- 32 bits total
2. Long Packet:
- Packet header (32 bits)
- Packet footer (16 bits)
3. Any LP to HS or HS to LP Transition
The time for each of the above periods is dictated by the DSI transmission rate and the transmitter's D-PHY timing configuration. How you account for these overheads varies across the different DSI modes (burst, non-burst with sync pulses, or non-burst with sync events), and also with how the source is configured to handle LP modes throughout the video frame.
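As a rough illustration, here is a sketch that converts those overhead events into time. All numbers are assumed examples, not 941AS-specific: a 1.0 Gbps/lane rate, 4 lanes, and made-up D-PHY LP-to-HS / HS-to-LP durations (real values come from the D-PHY timing configuration):

```python
# Rough per-event overhead times on a DSI link. All numbers here are assumed
# examples for illustration, not taken from any specific device.

LANE_RATE_BPS = 1.0e9   # assumed per-lane HS bit rate
NUM_LANES = 4           # assumed lane count

def bits_to_ns(bits):
    """Time to transmit `bits` of protocol overhead across all lanes, in ns."""
    return bits / (LANE_RATE_BPS * NUM_LANES) * 1e9

SHORT_PACKET_BITS = 32               # 4-byte short packet (e.g. HSS/HSE/VSS/VSE)
LONG_PACKET_OVERHEAD_BITS = 32 + 16  # 4-byte header + 2-byte footer

# Assumed D-PHY transition times; in practice these are set by the
# transmitter's D-PHY timing registers.
T_LP_TO_HS_NS = 145.0
T_HS_TO_LP_NS = 105.0

print(f"short packet:         {bits_to_ns(SHORT_PACKET_BITS):.1f} ns")
print(f"long packet overhead: {bits_to_ns(LONG_PACKET_OVERHEAD_BITS):.1f} ns")
print(f"LP<->HS round trip:   {T_LP_TO_HS_NS + T_HS_TO_LP_NS:.1f} ns")
```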
For example, in the non-burst modes, overhead from all of the above is typically compensated for automatically by the source, based on the timing of the HSS/HSE/VSS/VSE packets in the DSI driver, so that DPI timing can be maintained (absolute timing of sync events, line times, etc.).
For example, if the source is configured to send a specific HSYNC width in µs, the DSI driver may choose to enter LP11 between the HSS and HSE packets, so long as the corresponding HSYNC time in µs elapses between them. Alternatively, the source may send HS blanking packets during the time between HSS and HSE, in which case the number of blanking packets can be chosen so that the HSS-to-HSE timing matches the HSYNC time in µs. Since the HSS and HSE packets themselves have 4 bytes, that means 2 bytes can be removed from the number of blanking packets between HSS and HSE to compensate.
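A minimal sketch of that blanking-packet arithmetic, assuming made-up link and timing numbers (1.0 Gbps/lane, 4 lanes, a 2 µs HSYNC) and a simplified accounting of which packets fall inside the HSS-to-HSE window; the exact accounting is driver-specific:

```python
# Sizing an HS blanking packet so that the HSS->HSE spacing matches a target
# HSYNC width. All values are assumed for illustration only.

LANE_RATE_BPS = 1.0e9
NUM_LANES = 4
LINK_BYTES_PER_US = LANE_RATE_BPS * NUM_LANES / 8 / 1e6  # link bytes per µs

HSYNC_US = 2.0                    # assumed target HSYNC width
HSS_BYTES = 4                     # HSS short packet
HSE_BYTES = 4                     # HSE short packet
BLANK_PKT_OVERHEAD_BYTES = 4 + 2  # blanking long-packet header + footer

window_bytes = HSYNC_US * LINK_BYTES_PER_US
# Blanking payload = the byte budget of the window, minus the sync packets
# themselves and the blanking packet's own header/footer.
blank_payload_bytes = window_bytes - (HSS_BYTES + HSE_BYTES
                                      + BLANK_PKT_OVERHEAD_BYTES)
print(f"blanking payload: {blank_payload_bytes:.0f} bytes over {HSYNC_US} us")
```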
This is just one example and there are many ways to implement DSI at the driver level. Generally those configurations are outside the scope of what is needed to configure 941AS itself. Does that answer your question? Is there a specific use case you are trying to evaluate?
Best Regards,
Casey
Hi Casey,
Thank you for the detailed response! To make sure I understand clearly: do you need to add the short/long packet bits to the LP/HS transition times to get the total non-video data time? Or is your example essentially saying that the short/long packet bits can be embedded within the LP/HS transition times, so that the LP/HS transition times are really the only contributor to the overhead (depending on the mode, of course)?
-Jared
Hey Jared,
There's no one-size-fits-all answer to this, since there are many different ways to implement DSI. At a high level, your takeaway should be that for DSI you typically do not need to treat overhead as a limiting factor when considering how much video bandwidth you can support: because DPI timing must be maintained for all horizontal and vertical video parameters, there is plenty of leftover dead time that can be used to compensate for any overhead from the protocol.
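To put a rough number on that dead time, here is a back-of-envelope calculation assuming a standard 1080p60 DPI timing (2200 x 1125 total vs. 1920 x 1080 active; assumed figures, not from this thread). The blanking intervals are a far larger share of the frame than the few bytes of packet overhead per line:

```python
# Fraction of each frame that is blanking (dead time) for an assumed 1080p60
# DPI timing. This slack is what absorbs DSI packet overhead.

h_total, v_total = 2200, 1125      # assumed total horizontal/vertical timing
h_active, v_active = 1920, 1080    # active video

blanking_fraction = 1 - (h_active * v_active) / (h_total * v_total)
print(f"blanking share of frame: {blanking_fraction:.1%}")  # ~16.2%
```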
This is one of the main differences between CSI-2 and DSI: for CSI-2, there is no requirement in the standard that DPI timing be maintained for horizontal timing parameters. That means a line packet can be sent as quickly as desired, and there is no need to convey HFP, HSYNC, or HBP widths to the sink, which is what makes CSI-2 desirable for aggregation purposes. However, since DSI feeds a display, which will traditionally have requirements for all discrete elements of line timing, you do need to convey timings within horizontal blanking, and this introduces periods of dead time in the transmission.
For 941AS, the two boundaries that you need to consider are a maximum per-lane DSI speed of 1.5 Gbps and a maximum PCLK of 210 MHz. As long as you stay within those bounds, you are OK. If you have a specific case you are trying to analyze, we can help take a look at it.
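If it helps, here is a quick sanity check against those two limits, using an assumed 1080p60 RGB888 source on 4 lanes rather than any specific customer timing:

```python
# Check an assumed video timing against the two 941AS limits mentioned above:
# 1.5 Gbps max per DSI lane and 210 MHz max PCLK.

MAX_LANE_RATE_BPS = 1.5e9
MAX_PCLK_HZ = 210e6

pclk_hz = 148.5e6   # assumed: standard 1080p60 pixel clock
bpp = 24            # assumed: RGB888
lanes = 4           # assumed lane count

lane_rate_bps = pclk_hz * bpp / lanes   # raw video bits per lane per second
print(f"PCLK      {pclk_hz / 1e6:6.1f} MHz,  ok={pclk_hz <= MAX_PCLK_HZ}")
print(f"lane rate {lane_rate_bps / 1e9:6.3f} Gbps, "
      f"ok={lane_rate_bps <= MAX_LANE_RATE_BPS}")
# The headroom between ~0.891 Gbps and 1.5 Gbps per lane is what covers
# packet overhead and LP<->HS transitions.
```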
Best Regards,
Casey