
Serial Port Break Clarification 2

Many years ago, when the serial port protocol was started, a break was defined as the time between bursts of bytes.  Bursts of bytes (one or more) are transmitted ... then there is a pause before the next burst of bytes is transmitted.  That pause is a break.

To help keep things simple, a byte of data begins with a start bit and ends with a stop bit.  This is followed by a second byte with a start bit and stop bit.  The start bit can occur immediately after the last stop bit ... or it can occur later. 

The stop bit is the opposite polarity of the start bit.  If the start bit is a zero, then the stop bit is a one.  In this case, when the signal goes to zero after the stop bit, it is interpreted as another start bit. 

TI design folks have decided that the signal must be zero to be a break.  This is an error in logic.  Multiple start bits (more than 10) are a framing error.  A break is when there are multiple stop bits ... which is standard RS232 communication.

It is unfortunate that some TI person decided to change this protocol.  The current TI micros require us to use a timer to determine when there is a break in the communication.  This consumes time and power.  If TI micros used multiple stop bits as a break, it would reduce time and power consumption.

It would be good if TI modified their micros to handle this correctly.  I doubt it would impact anyone ... it is unlikely anyone uses the TI break system, as it is unusable in its current form.

  • Clyde Eisenbeis said:
    Multiple start bits (more than 10) are a framing error.  A break is when there are multiple stop bits ... which is standard RS232 communication. 

    Is it?

    If so, where's the difference between sending a break and sending nothing?

    Old modems did have a dedicated command for sending a break as the 257th character.  Which seems superfluous if just sending nothing for some time serves the same purpose.  Why enter command mode, send the break command, then return to data mode, when all there is to do is just do nothing for some time?

    The TI break conforms with what a break has been all along, since the youth of telecommunication.

    Also, a framing error is not multiple start bits (after a start bit, you don't know whether the next bit is a start or data bit).
    A framing error is when the stop bit has the wrong polarity - except when the whole thing is detected to be a break, by the line still being low for 11 bits in a row.

    Or in other words: a break is a 0x00 byte with a low stop bit immediately followed by another start bit. :)

    If I start practicing again, I think I can still whistle a break with 300Bd. :)

    "standards are, when people try to do things different from the traditional way"
    and my favorite (supposedly a Microsoft dogma):
    "why should I care for my standards from yesterday"

  • A break is "sending nothing".  That lets the code know the string of information being transmitted is complete.  This is how computers, etc. work.  They send a string of bytes and then send nothing (break). 

    I know of no equipment that sends a bunch of start bits ... or one start bit with many bits that are the same voltage as the start bit ... which is the same as multiple start bits. This is a framing error.

    If TI micros detected "sending nothing" periods, it would reduce time and power consumption.

  • Give me a break!

  • I can't really follow your argumentation.  You can set up the micro in such a way that, when something is received, it wakes up, handles the byte being received, and goes to sleep until the next start bit occurs.  Where is the difference between sleeping between two bytes and sleeping because the micro received a break?

    Both times the micro must be ready to receive something new - so it's not like you can enter different low power modes because of the break. But maybe I'm misunderstanding something.
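
    A minimal sketch of that sleep-between-bytes pattern, assuming an MSP430 whose UART is clocked from SMCLK (so LPM0 is used rather than LPM3; register names are from the 2xx-series USCI and differ on other families):

        #include <msp430.h>

        int main(void)
        {
            WDTCTL = WDTPW | WDTHOLD;         // stop the watchdog
            /* ... clock and UART setup ... */
            IE2 |= UCA0RXIE;                  // interrupt on each received byte
            for (;;) {
                __bis_SR_register(LPM0_bits | GIE);  // sleep until an ISR wakes us
                /* handle the received byte(s) here, then sleep again; the RX ISR
                   must call __bic_SR_register_on_exit(LPM0_bits) to wake us */
            }
        }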

  • Typically I have an interrupt capturing all of the bytes ... with no notification that this is happening.  Only after all of the bytes have been received do I want to retrieve the bytes and interpret them. 

    This is far simpler than interpreting each byte as it is received. 
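
    As a sketch, that receive interrupt can be as small as this (2xx-series USCI names assumed; buffer size and names are illustrative):

        #include <msp430.h>

        #define RX_BUF_SIZE 64

        volatile unsigned char rx_buf[RX_BUF_SIZE];
        volatile unsigned int  rx_len = 0;

        // Fires once per received byte; nothing is interpreted here.
        #pragma vector = USCIAB0RX_VECTOR
        __interrupt void usci_rx_isr(void)
        {
            unsigned char c = UCA0RXBUF;      // reading RXBUF clears the flag
            if (rx_len < RX_BUF_SIZE)
                rx_buf[rx_len++] = c;         // main code interprets it later
        }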

     

  • Clyde Eisenbeis said:
    A break is "sending nothing".

    No. Sending nothing is sending nothing and a break is a break.  Asynchronous mode does not require that anything is sent at all.  If something is sent, it is sent, and if not, then not.

    Imagine you are typing to a serial terminal.  If you don't type at more than roughly 960 keystrokes per second (9600 Bd, ten bits per character), you'll send a break after each single character, which by your claim would mean that each character is a separate, unrelated transmission.
    Somehow this sounds stupid. :)

    That some software interprets a gap larger than a certain time as an intended end of transmission has nothing to do with the UART bus protocol. It is a high-level protocol to which both sides may have agreed. Just like they have to agree about what kind of data is sent: binary or ASCII number, 0- or CR-terminated strings, whatever.

    On synchronous transfer, things are different. There you have to say when there's nothing more to say, or the transfer will continue.
    Also, on RS485 or other protocols, all peers may agree to a convention that a gap of more than one byte time is considered an end of transmission and the bus is free for the next peer.  This has nothing to do with a 'break' in the sense of the original RS232/V.24 protocol.  And the MSP does not support half-duplex protocols like RS485 (though you may implement them by hand).

    Clyde Eisenbeis said:
    If TI micros detected "sending nothing" periods, it would reduce time and power consumption.

    And if they would detect complete command sequences and answer them the way you want, it would save even more.  Not likely to happen, though.
    But you can easily set up a 'timeout timer' that is reset on each received byte; when no more bytes are received for some time, it wakes your main thread.  Simple, and adjustable just the way you want it.
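
    A hedged sketch of that timer (2xx-series names, ACLK = 32768 Hz assumed; the buffer handling from above is omitted): the RX interrupt restarts Timer_A on every byte, so the CCR0 interrupt fires only once the line has been quiet for the whole timeout:

        #include <msp430.h>

        #define TIMEOUT_TICKS 3277u           // ~100 ms at ACLK = 32768 Hz

        volatile int frame_complete = 0;

        #pragma vector = USCIAB0RX_VECTOR
        __interrupt void usci_rx_isr(void)
        {
            unsigned char c = UCA0RXBUF;      // fetch (and store) the byte
            (void)c;
            TACCR0  = TIMEOUT_TICKS;
            TACCTL0 = CCIE;                   // interrupt at compare
            TACTL   = TASSEL_1 | TACLR | MC_1;  // ACLK, restart count, up mode
        }

        #pragma vector = TIMER0_A0_VECTOR
        __interrupt void timer_a0_isr(void)
        {
            TACTL = MC_0;                     // line was idle: stop the timer
            frame_complete = 1;               // the whole message has arrived
            __bic_SR_register_on_exit(LPM0_bits);  // wake the main thread
        }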

  • It can be done with a timer ... which is what I'm doing ... but would be simpler if it were done automatically (by enabling a feature like this).

    I know of no equipment that operates the way you describe ... other than something like HyperTerminal. 

  • Clyde Eisenbeis said:
    A break is "sending nothing".

    Wrong!

    Jens-Michael Gross said:
    No. Sending nothing is sending nothing and a break is a break.

    Correct!

    See Also: http://e2e.ti.com/support/microcontrollers/tms320c2000_32-bit_real-time_mcus/f/171/p/152248/554822.aspx#554822

     

  • Apparently there was a misinterpretation of a "break" more than 30 years ago.  Back then we viewed a pause as a break.  I know of no equipment that uses a break as described.

    It would be good if TI implemented a "pause detection" feature ... which would adjust automatically to handle the different baud rates.

  • Clyde Eisenbeis said:
    It would be good if TI implemented a "pause detection" feature

    It is there, for multiprocessor communication. But not for "freestyle" async data transfer.
    The description is, however, not as detailed as it could be.
    It looks like an idle event won't trigger an interrupt, though.
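
    For reference, a hedged sketch of that multiprocessor feature (2xx-series USCI names; UCMODE_1 selects idle-line multiprocessor mode). With UCDORM set, only a character preceded by an idle period of 10 or more bit times sets UCRXIFG:

        #include <msp430.h>

        void uart_idle_line_init(void)
        {
            UCA0CTL1 |= UCSWRST;      // hold the USCI in reset while configuring
            UCA0CTL0 |= UCMODE_1;     // idle-line multiprocessor mode
            /* ... baud-rate setup (UCA0BR0, UCA0BR1, UCA0MCTL) ... */
            UCA0CTL1 &= ~UCSWRST;
            UCA0CTL1 |= UCDORM;       // dormant: only post-idle characters received
            IE2 |= UCA0RXIE;          // RX interrupt enable (UCA0IE on 5xx parts)
        }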

    However, if you don't know how many bytes to expect, waiting for an idle time isn't the best method.  As I pointed out, imagine someone typing the message.  Or an interrupt on the sender delays sending the remainder of a message.
    It's better to either specify an end character (0 or CR) or precede each data block by the number of bytes.

    A break (and I mean break, not idle) can be used to resynchronize the message chain in case something goes wrong. (and this was the primary reason for introducing the break)

    But you can also say that after a complete message, a break has to be sent. All UARTs are able to generate a break, and the MSP will detect it and can trigger an interrupt (RX interrupt with a '0' byte in RXBUF and UCBRK set)
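
    A minimal sketch of that detection path (USCI names; note that UCBRKIE must be set for a received break to raise the RX interrupt at all):

        #include <msp430.h>

        void uart_break_detect_init(void)
        {
            UCA0CTL1 |= UCBRKIE;          // a received break sets UCRXIFG
        }

        #pragma vector = USCIAB0RX_VECTOR
        __interrupt void usci_rx_isr(void)
        {
            unsigned char status = UCA0STAT;  // read status before RXBUF
            unsigned char c = UCA0RXBUF;      // reading RXBUF clears the flags
            if (status & UCBRK) {
                // break received: message complete / resynchronize here
            } else {
                // ordinary data byte in c
                (void)c;
            }
        }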

  • Since much communication is via a computer, generating a break would be difficult.  Plus CRCs are often used ... the last char sent will not be the same.

    There are many standards that follow a specific protocol.  One that comes to mind is Modbus.  There are no breaks ... the number of bytes varies ... and the last bytes are a CRC.

  • Clyde Eisenbeis said:
    Since much communication is via a computer, generating a break would be difficult.

    No. Generating a break is part of the COM port API.  Just a seldom-used part.
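
    For example, on Windows the Win32 calls are SetCommBreak()/ClearCommBreak() (POSIX has tcsendbreak()); the 250 ms hold time below is just an illustrative value:

        #include <windows.h>

        void send_break(HANDLE hCom)
        {
            SetCommBreak(hCom);     // drive TX into the space (break) state
            Sleep(250);             // hold the line low for a while
            ClearCommBreak(hCom);   // return the line to idle (mark) state
        }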

    Clyde Eisenbeis said:
    One that comes to mind is Modbus.  There are no breaks ... the number of bytes varies ... and the last bytes are a CRC.

    But IIRC, the total size is part of the datagram.

    On those protocols I know where there is neither size info nor 'stop byte', there are no constraints about how quick the bytes must follow each other. Just the first byte is analyzed and then the number of bytes to follow is known.

    In my own code, I just start reading data from the input buffer, interpreting it.  When I need more data and it hasn't arrived, the read function will go to sleep until the data is there or a configurable timeout expires.  Straight, simple, and compatible.  There is no need to have all data available when you start working on it.  You only need to ensure that you don't work past the amount you have.

    However, if you have a CRC, then you must wait until all is there before you can act. I know of no protocol that says "data can have any length and you only know that's all when nothing more arrives"

    Even a file you write an arbitrary number of bytes to is closed by a close instruction, not by a write timeout.

    Anyway: the solution has been given: start a timer on byte arrival.  That's exactly what you wanted.  And if you insist that it should be a hardware function, set up a DMA that triggers on byte receive and configures the timer.  All in hardware then.  Just a few lines of setup.
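
    A hedged sketch of that DMA variant, assuming an MSP430F5xx-class device (DMA trigger numbers are device-specific - trigger 16 is UCA0RXIFG on some parts; check the datasheet). Each received byte triggers a one-word transfer that rewrites TA0CTL with TACLR set, restarting the gap timer without CPU involvement; the RX ISR that reads RXBUF re-arms the edge-sensitive trigger:

        #include <msp430.h>

        static const unsigned int ta0ctl_restart = TASSEL_1 | TACLR | MC_1;

        void dma_gap_timer_init(void)
        {
            DMACTL0 = DMA0TSEL_16;            // UCA0RXIFG trigger (device-specific!)
            __data16_write_addr((unsigned short)&DMA0SA,
                                (unsigned long)&ta0ctl_restart);
            __data16_write_addr((unsigned short)&DMA0DA,
                                (unsigned long)&TA0CTL);
            DMA0SZ  = 1;                      // one word per trigger
            DMA0CTL = DMADT_4 | DMAEN;        // repeated single transfer, enabled
        }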

  • Clyde Eisenbeis said:
    Since much communication is via a computer, generating a break would be difficult.

    Not at all!

    Break generation is a standard function of the UARTs commonly found in many computers.

     

    Clyde Eisenbeis said:
    Plus CRCs are often used

    CRCs protect "messages" - Break signals, where used, do not usually form part of a "message".

     

    Clyde Eisenbeis said:
    One that comes to mind is Modbus.  There are no breaks

    Indeed; MODBUS/RTU uses idle time between "messages" - not Break signals.
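
    For concreteness, the RTU inter-frame gap is 3.5 character times (11 bits per character), and the spec fixes it at 1750 µs above 19200 Bd - at 9600 Bd that works out to about 4 ms:

        // t3.5 in microseconds for a given baud rate (MODBUS/RTU framing gap)
        unsigned long modbus_t35_us(unsigned long baud)
        {
            if (baud > 19200UL)
                return 1750UL;                 // fixed by the spec at high rates
            return (3500000UL * 11UL) / baud;  // 3.5 chars * 11 bits each, in us
        }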

     
