Hi
It seems the receiver of the eUSCI in UART mode requires the number of stop bits selected by UCSPB in order to receive correctly.
Is this correct?
The documentation does not explicitly describe how the UCSPB setting influences TX and RX individually.
The only connection between RX and stop bits we can find is in the description of the framing error:
“Framing error
UCFE
A framing error occurs when a low stop bit is detected. When two stop bits are used, both
stop bits are checked for framing error. When a framing error is detected, the UCFE bit is set.”
So we can see from the UCFE bit whether or not the required number of stop bits was received.
But we don’t need to handle framing errors in our protocol. It has built-in error detection, e.g. a CRC check.
What we see is that data transmitted with one stop bit is not received correctly.
The first byte is received correctly, but the following ones are incorrect – in a way that could match a delayed, out-of-sync reception.
So it seems that the stop bit setting for the receiver not only influences the framing error function, but also controls when the receiver is open for a new start edge.
Our protocol specifies two stop bits for transmission. This is specified only to provide extra margin for re-synchronization at the next start edge in the receiver.
In the embedded devices we have used in the past (both TI’s and others), the receiver has always required only one stop bit, independent of the UART stop bit setting – e.g. in the USART peripheral of the MSP430x4xx family.
Because our devices interface with a wide range of other devices (both our own and third-party, both embedded and generic), some of them use only one stop bit (by mistake, or because it can’t be controlled in the specific device). We haven’t been able to control this – and didn’t want to, because it worked fine.
If our device transmits two stop bits and requires only one stop bit on RX, it can work with all other combinations of one and two stop bits.
If our device transmits two stop bits and requires two stop bits on RX, it can only work with counterparts that transmit two stop bits.
We know the “always one RX stop bit” is “text book incorrect”.
But in normal asynchronous communication, it can provide important flexibility and margin.
In our case, we could theoretically try to force all the other devices to follow the protocol to the letter.
But in reality, we often need the backward compatibility because our devices must work with existing devices or with devices that can’t be changed (e.g. if our customer has bought their own devices from a subcontractor that no longer exists).
Is there any way this could be changed in existing or upcoming TI devices or families?
We suggest one of the following solutions:
1) The UART RX only checks one stop bit, independent of the UART stop bit setting.
2) The UART could accept a new start edge after a maximum of one stop bit, independent of the UART stop bit setting.
3) The stop bit setting in the UART is split into separate settings for RX and TX.
On our current project, we have the option of using two separate eUSCIs for TX and RX, respectively, with different stop bit settings. But this means we will never be able to use the opposite half of each of these interfaces.
In other products we will need more interfaces with this feature than we can support with two eUSCIs per interface.
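For reference, the dual-eUSCI workaround can be sketched like this. It is only a configuration fragment, not a complete driver: register and bit names assume an MSP430 device with eUSCI_A0 used for TX and eUSCI_A1 for RX, and the clock source, baud-rate, and pin-mux setup are omitted.

```c
#include <msp430.h>

/* Sketch: TX on eUSCI_A0 with two stop bits, RX on eUSCI_A1 with one.
   Baud-rate and pin configuration omitted for brevity. */
void uart_split_stopbits_init(void)
{
    /* TX side: two stop bits (UCSPB = 1) */
    UCA0CTLW0 |= UCSWRST;    /* hold in reset while configuring */
    UCA0CTLW0 |= UCSPB;      /* two stop bits on the TX interface */
    UCA0CTLW0 &= ~UCSWRST;   /* release for operation */

    /* RX side: one stop bit (UCSPB = 0) */
    UCA1CTLW0 |= UCSWRST;
    UCA1CTLW0 &= ~UCSPB;     /* one stop bit on the RX interface */
    UCA1CTLW0 &= ~UCSWRST;
}
```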
We really hope for a positive answer!
/Mads