I have an application that uses UCB1 as an SPI slave feeding DMA channel 0. UCB1 is initialized once at startup, and the UCSWRST bit is used to enable/disable the port thereafter. The behavior is the same in both 3-pin and 4-pin slave modes.
This has been working fine on the 'F5438 Rev L chip, but on the 'F5438 Rev B chip the UCB1 shift register does not reset between uses. It appears as though UCB1 'keeps' an extra bit following its first use, which then forces all further uses to be off by one bit.
1st: 138 bytes - all are ok.
2nd: 138 bytes - all bytes are shifted right by 1 bit. 0xFA becomes 0x7D; 0x8A becomes 0x45; 0x33 & 0x00 become 0x19 & 0x80, etc.
3rd, 4th, etc., are identical in form to 2nd usage.
Workaround:
If I initialize UCB1 to master mode and then switch back to slave mode between each usage, the shift register is reset correctly and the port works properly.
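For reference, a minimal sketch of that workaround in C (register and bit names per the standard msp430 headers; whether the mode flip must happen with UCSWRST released isn't stated here, so this version keeps the module in reset throughout):

#include <msp430.h>

// Sketch of the workaround described above: flip USCI_B1 to master
// mode and back to slave while held in reset, which appears to clear
// the stuck shift-register bit on the affected silicon.
static void ucb1_reset_shift_register(void)
{
    UCB1CTL1 |= UCSWRST;     // hold the module in software reset
    UCB1CTL0 |= UCMST;       // briefly configure as SPI master...
    UCB1CTL0 &= ~UCMST;      // ...then back to slave mode
    UCB1CTL1 &= ~UCSWRST;    // release for operation
}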
Further info:
The UCB1 and DMA0 are initialized in active mode, the CPU then enters LPM3 until awakened by an external interrupt indicating the end of reception.
Is this an unpublished errata? Has this problem been previously identified? Any idea if the workaround is sufficient?
Thanks for any help/confirmation
I checked the old errata sheets, and from version b on (I don't have version a) until the current version f, only the L revision of the 54xx is covered.
So I guess using any older silicon is not reliable. And as you already noticed, the problem does not appear on the 'L' revision.
If you have any malfunctioning older silicon, you can try to send it to TI for exchange, but I guess the warranty period has timed out already :)
But to be serious, I don't know what's wrong. Maybe the SWRST has influence on the settings for clock phase and this causes an initial bit to be detected the moment you release SWRST. But that's just a guess. After all, I have never seen a chip older than 'L'.
The title indicates that the chip in question is the MSP430F5438A; however, I omitted the 'A' within the body of the message. Just to be clear, the problem exists in the MSP430F5438A Rev A and MSP430F5438A Rev B.
My apologies for omitting the 'A' suffix within the body of my message.
Both of the available '5438A' revisions, Rev A and Rev B, suffer from the same problem. Limited testing shows that entering/leaving low-power mode plays some role in the bug, as it can be avoided by not entering LPM3; I've not tested the other LPM modes yet.
Ah, okay.
The reply window does not show the thread title, so I didn't see the 'A' :)
The errata sheet of the 'A' does not list the problem. Maybe you encountered a new silicon bug. This is likely, since your code did run on the non-A version.
It might have something to do with timing too, since I guess you're clocking the new processor at 25 MHz instead of the 16 MHz you had before. But that's just guessing.
Also, LPM3 disables some clocks if not in use. And the errata sheet notes that the source clock for the USCIs is not disabled in UART mode even if the hardware is idle. This implies that in SPI mode it is disabled, which might cause problems when it comes back up.
I can also think of possible problems with the different clock sources when MCLK and the USCI clock source have different origins. If so, it might be that one clock comes up faster than the other, causing an erroneous SPI clock impulse to be detected.
I don't have any experience with the 'A' series (yet), so I cannot be of further assistance here. But maybe one of my 'Wild Guesses' (TM) will lead you to a solution.
Matthew,
I am not aware of a specific errata that explains your problem.
However I have seen this behavior before from the USCI SPI module.
Sometimes when the pins are being configured (or reconfigured) by altering the PxSEL/PxDIR and PxOUT settings, it can cause an inadvertent transition on the SPI lines.
The SPI device on the other end looks at this transition as a single 'bit' of information - causing all following (valid) bits to be shifted by one.
Your workaround seems to do the job. Another one I would suggest is doing a SWRST of the module right before starting the second transaction (before loading TXBUF).
This should clear any extra phantom bits.
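A minimal sketch of that suggestion, assuming USCI_B1 as in the original post:

// Before the next transaction (and before loading TXBUF):
UCB1CTL1 |= UCSWRST;     // software reset should clear any phantom bit
UCB1CTL1 &= ~UCSWRST;    // release the module for operation
// ... now load UCB1TXBUF and start the transfer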
Hope this helps!
Regards,
Priya
I re-read your original post and wonder why the port is ever reinitialized. After all, you write that you use it in slave mode.
As an SPI slave, you should always be ready to receive data (3-pin slave mode) or start receiving synchronized with the transition of the STE line (4-pin slave mode). But the family datasheet only states that 'the shift operation is halted', not that the shift operation is canceled and all already-shifted bits are discarded. If so, this does not really conform to the usual SPI behaviour.
When using DMA in repeated single mode, you should get an IRQ each time a complete packet has been received/sent. It's not 100% clear who's getting the extra bit: the external master reading from the MSP, or the MSP getting stuffed by the master (I assume the former, based on the 'feeding through DMA' expression).
One drawback of the DMA method is that it is prone to failure if anything goes wrong. The SPI specification allows the master to prematurely end any transfer by releasing the CS line. In that case, you won't get the desired number of bytes through DMA and therefore won't get an IRQ. The next transmission will then fill up the DMA buffer from where it left off instead of starting at the beginning.
It would be best to connect the STE line externally to an IRQ-capable port pin and set up an ISR that handles the case where STE goes inactive before the DMA transfer has completed, as sketched below.
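A sketch of such an ISR, assuming (hypothetically) that STE is also wired to P1.3 and that P1IE/P1IES are configured elsewhere for the STE-inactive edge:

#include <msp430.h>

#define STE_PIN      BIT3   // hypothetical wiring: STE also routed to P1.3
#define MAX_RX_BYTES 256    // illustrative DMA block size

// If the DMA channel is still armed when STE goes inactive, the master
// ended the transfer early, so re-arm the channel from the buffer start.
#pragma vector = PORT1_VECTOR
__interrupt void port1_isr(void)
{
    if (P1IFG & STE_PIN)
    {
        P1IFG &= ~STE_PIN;              // clear the pin interrupt flag
        if (DMA0CTL & DMAEN)            // transfer did not complete
        {
            DMA0CTL &= ~DMAEN;          // stop the channel
            DMA0SZ   = MAX_RX_BYTES;    // restart the count from the top
            DMA0CTL |= DMAEN;           // re-arm for the next packet
        }
    }
}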
Also, you should re-check that the clock polarity and phase settings match the master. There are possible combinations where enabling the SPI with an inactive master will cause a clock pulse to be detected.
After all, the shift register only has 8 bits. There is no chance that it will send an 'extra bit'; there is no spare bit storage or such. :) If the master gets an additional bit, it is because 1) the slave does not respond to the first clock impulse (so all data sent appears shifted), 2) the slave did not send the last bit of the previous transmission or does not consider the last transmission (clock cycle) finished, or 3) the master takes the bit before the slave has actually shifted it out. All three could be explained by mismatched polarity/phase settings.
Also, since you're using the 'A' types now, I guess you also raised the core frequency from 16 to 25 MHz. This means the CPU works faster. And since the DMA transfer is finished when the last byte has been stuffed into TXBUF, and not when the last byte in TXBUF has been sent, it is possible that you're disabling the SPI too early; on the (slower-clocked) non-A it was late enough by coincidence, but now it isn't anymore. So the last bit still remains unsent in the shift register.
Thanks for your detailed reply.
While the SPI port is used to transmit and receive, the problem only happens when receiving. The 'extra' bit is actually the least significant bit of the previous character; that is, the entire received data stream is shifted by one bit. The first byte in the stream has a zero bit as its MSB, the second byte has the LSB of the first byte, the third byte has the LSB of the second byte and so forth.
The application always gets the correct number of bytes; only the bits are incorrect. Since the application does not know in advance how many bytes will be sent, the DMA is initialized to receive a maximum number of bytes and an external interrupt signals the end of the transmission. In practice, the DMA never triggers the interrupt; the DMA interrupt is used as a failsafe to prevent buffer overflow.
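For context, a minimal sketch of such a receive setup (assuming CCS/IAR intrinsics; trigger 22 should be USCI_B1 receive on this device per the datasheet's DMA trigger table, but verify against your device header):

#include <msp430.h>

#define MAX_RX_BYTES 256                 // illustrative failsafe buffer size
static unsigned char rx_buf[MAX_RX_BYTES];

static void dma0_arm_for_rx(void)
{
    DMACTL0 = DMA0TSEL_22;               // trigger: UCB1RXIFG (USCI_B1 receive)
    __data16_write_addr((unsigned short)&DMA0SA,
                        (unsigned long)&UCB1RXBUF);   // source: SPI RX register
    __data16_write_addr((unsigned short)&DMA0DA,
                        (unsigned long)rx_buf);       // destination: RAM buffer
    DMA0SZ  = MAX_RX_BYTES;              // failsafe maximum; the external IRQ
                                         // normally ends reception first
    DMA0CTL = DMADT_0 | DMADSTINCR_3 |   // single transfer, increment destination
              DMASRCBYTE | DMADSTBYTE |  // byte-wide source and destination
              DMAIE | DMAEN;             // IRQ serves only as overflow failsafe
}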
I've monitored the SPI lines with a logic analyzer and a DSO 'scope. All signal lines and timing are correct (SIMO, CLOCK, MISO & STE) with respect to both the slave and the master. The '5438A is running at 8MHz with the VCore set to 2 (much higher than required, but I was troubleshooting). I can only anticipate the approximate time of reception from the master, so the SPI port has to be initialized early. The STE line is connected to an interrupt which is used as a time transfer mechanism. It may be this interrupt that interferes with the SPI port, but I would expect the peripherals to operate on their own. The slave's clock is provided by the master, and the DMA is triggered by the slave's receive flag.
The slave's shift register must be detecting an extra initial bit that then puts the entire stream off by one. It is very consistent (it always happens), and it is always the first bit. Alternatively, the slave may trigger the DMA one bit early on the initial byte; this would also account for the behavior I'm seeing.
All is not lost, though: setting the slave to master mode and then immediately back to slave mode as part of the setup does eliminate the problem.
Normally, there should be no need to switch between master and slave mode. SPI should stay in slave mode and receive data properly. Once or many times. So there is still something wrong. Even if your workaround solves the problem for now, you might face the same problem when re-using your code for a different project or applying a different master. Or you might come into a situation when you cannot/may not reprogram the SPI without losing data.
One more thing: how do you know how many bytes have already been received? Unfortunately, the DMA controller does not have a counter you can read, only the latched maximum number. Of course, you might know from the data itself how many bytes have to follow (header byte or implicit command length knowledge), but what if the master interrupts or cancels a transfer? You'll never know. As much as I love the DMA capabilities, the lack of a 'transfers done' counter is a real showstopper for everything with a variable number of bytes. And it's completely useless for sending/receiving e.g. UART data from/to a ringbuffer.
Comparing the datasheets, there are significant differences between the 5438 SPI slave timing and the 5438A:
Parameter | 5438 | 5438A
STE lead time (STE low to clock) | 40 ns typ | 8 ns min
STE lag time (last clock to STE high) | 10 ns min | 3 ns min
STE access time | 40 ns typ | 30 ns max
STE disable time | 40 ns typ | 13 ns max
SIMO input data setup time | 15 ns min | 2 ns min
SIMO input data hold time | 10 ns min | 5 ns min
SOMI output data valid time | 50 ns max | 40 ns max
SOMI output data hold time | 0 ns min | 8 ns min
You see, every value is different, and depending on the clock speed used, a wrong phase and polarity setting may cause erroneous bit/clock detection. The 'A' processor responds much faster to everything.
Maybe your oscilloscope readings seem okay, but that just means the master (who's providing the clock) puts out things correctly. It does not show what the slave is detecting or interpreting. Since the STE disable time is much faster now, it might be that the end of the last master clock pulse isn't counted anymore and is carried over to the next transmission, while on the non-A the slave stays enabled long enough to detect the last edge and finish the last bit transfer, even if that last edge is no longer part of the master transmission and is caused by the master disabling its output, etc.
It shouldn't be a problem to test the different polarity/phase settings (after all, there are only four combinations, sketched below) to see whether things work without your workaround in a combination other than the one you're using now.
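For reference, the four combinations are just the UCCKPH/UCCKPL bits (a sketch; which one matches the master depends on its datasheet):

UCB1CTL1 |= UCSWRST;             // reconfigure only while in reset
UCB1CTL0 &= ~(UCCKPH | UCCKPL);  // combo 1: data changed on first edge, clock idles low
// UCB1CTL0 |= UCCKPH;           // combo 2: data captured on first edge, clock idles low
// UCB1CTL0 |= UCCKPL;           // combo 3: data changed on first edge, clock idles high
// UCB1CTL0 |= UCCKPH | UCCKPL;  // combo 4: data captured on first edge, clock idles high
UCB1CTL1 &= ~UCSWRST;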
I understand your point, Jens-Michael, there is something wrong; what is wrong is not so clear.
The master has been used successfully with the MSP430F2370 and with the MSP430F5438; it is with the transition to the MSP430F5438A that the problem occurs. The clock phase and polarity have been checked, double checked and triple checked. The SPI port is used to transmit packets and to receive packets, so the clock phase and polarity are easily observed. The master is an RF transceiver that bit-streams the data over the SPI port. The master does not know how many bits it is going to receive and ends the reception when its RF signal strength fades. The master always sends extra bits, between 16 and 20 on average. I only concern myself with full bytes transferred via the DMA channel and count on resetting the SPI state machine to dump the excess bits.
From my perspective, the SPI port should reset its state machine every time UCSWRST is asserted and should not need to be reinitialized. With all previous versions of the MSP430, this worked as expected. The '5438A version does not work as expected, and reinitializing the port was the first corrective measure taken. This proved to be insufficient, so I started to 'flip modes' to see if the state machine could be successfully reset. The first mode flip tried, from slave to master and then back to slave, did work. Not entering LPM3 also works, but that is not an acceptable solution.
By the way, the DMAxSZ register contains the count of characters remaining to be transferred. From that it is easy to calculate the count of characters that were actually transferred.
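In code, that is just (a sketch; MAX_RX_BYTES stands for whatever DMA0SZ was armed with):

// DMA0SZ counts down with each transfer, so the bytes actually
// received so far are the armed size minus the remaining count.
unsigned int received = MAX_RX_BYTES - DMA0SZ;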
You're right about the DMAxSZ register. I misinterpreted the documentation. While the address registers are copied to internal registers and stay constant as seen by the CPU (so the temporary internal registers can get reloaded from there after a complete cycle), the SZ register is rather backed up to an internal register and restored from there, with the CPU-visible register decrementing on every transfer. So indeed it is possible to use it for calculating the number of transfers remaining.
I had only transfers with a known number of bytes, so I didn't dig deeper into it. After some thought about the possibilities, I wish I had 20 DMA channels available :)
Back to the SPI: in 4-pin SPI mode, the input and output shift registers should synchronize with the STE line. When STE goes inactive, the input and output shift registers should be cleared, as well as the output buffer, without any reconfiguring of the SPI hardware. The diagram in the datasheet, however, does not show any reset/clear condition. It might have been omitted to avoid making the diagram too complex.
Anyway, the documentation only says 'shift operation is halted', not reset by STE. And SWRST is only documented to 'reset the UCRXIE, UCTXIE, UCRXIFG, UCOE and UCFE bits and set UCTXIFG'. While setting UCTXIFG seems to imply clearing of the TX buffer and resetting the output state machine, that is never mentioned anywhere.
You might try to use 4 pin master mode instead of slave mode, as this will automatically switch from master to slave mode when STE is pulled and back when released. It will, however, cause output direction conflicts if the master on the bus activates its outputs before pulling STE (or has them enabled all the time). But your workaround bears the same risk.
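A possible configuration for that suggestion (a sketch; UCMODE_2 assumes an active-low STE, and UCMSB is illustrative and must match your master):

UCB1CTL1 |= UCSWRST;                           // configure while in reset
UCB1CTL0  = UCMST | UCMODE_2 | UCSYNC | UCMSB; // 4-pin SPI master, STE active low
UCB1CTL1 &= ~UCSWRST;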
It is strange that LPM3 causes the problem. In LPM3, the ports are not affected.
Which clock is selected for sourcing the SPI module? In slave mode, it shouldn't need a clock, but maybe it is still needed for internal transitions? And in LPM3, SMCLK is disabled.
The 5438 errata sheet lists the following problem:
PORT14
Digital I/O Module, Port 1 and 2
Function
GPIO pins set to high impedance on exit from LPM2/3/4
Description
When automatic SVS control is enabled (SVSMHACE or SVSMLACE are set), all I/Os are set to high impedance on exit from LPM2/3/4 until the SVS circuitry has settled.
Workaround
None
On the 5438A, this is not listed anymore. It might affect SPI functionality and be the reason why you didn't see any problems on the 5438 (port pins are set to and remain in high impedance for a long time while you handle the interrupt). This could cause a clock transition to be detected and therefore the missing last bit transfer to complete internally. But then I wonder why it works with not entering LPM3 at all.
I'm out of ideas now. Perhaps with the hardware on my workbench I could figure out what's going on. But I really don't have the time to set up something unless I need it for my own projects.
P.S.: did you add a NOP instruction right after the LPM entry instruction? There are many errata listed for the CPU, and the general solution is to add a NOP right after setting the CPUOFF bit (or clearing the GIE bit) to avoid most of them (almost all other errata are related to strange PC manipulations).
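That is, something like this (intrinsic names per CCS/IAR):

__bis_SR_register(LPM3_bits | GIE);  // enter LPM3 with interrupts enabled
__no_operation();                    // NOP right after, per the CPU errata workaround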
Has anybody managed to get to the bottom of this problem in the last year?
I ask as I am working on an MSP430F5438A project that I believe exhibits the same (or a very similar) problem.
After much searching, this original post is the only reference I can find to this problem on these devices - so here goes!
In essence, we have an SPI data stream from another device acting as master on the bus. It periodically indicates a resync, which means that we need to resynchronise the data stream. To do this, we put the SPI into SWRST and back out. However, we then consistently receive only 7 bits of data in the first RXBUF read, with all subsequent data shifted by one bit. We are using DMA to read the data from UCB0RXBUF.
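In other words, something like this between packets (a sketch of the sequence described, assuming USCI_B0 as named above):

// On a resync indication from the master:
UCB0CTL1 |= UCSWRST;   // hold USCI_B0 in software reset
UCB0CTL1 &= ~UCSWRST;  // release; on this silicon the first byte then
                       // arrives bit-shifted, as described above
// (DMA channel re-armed for UCB0RXBUF afterwards)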
There are no erroneous clocks on the SPI bus when viewed with a DSO or logic analyzer, so this looks like an internal problem with the MSP430. If we purposefully hold the data line high, we receive 0x7F as the first byte, which gives the impression that the SPI runs in 7-bit mode for the first byte and then drops back to 8-bit mode. Alternatively, it is internally sampling the first bit, which always seems to be a zero.
Any latest thoughts on this would be much appreciated, as it looks like an undocumented silicon bug to me.
Thanks
Andrew
Andrew - I worked with Matt O. on this issue (he posted it over a year ago). We've experienced this issue with the '5438A Rev E device, reading the SPI RXBUF (in slave mode) via a DMA channel. The issue has recently surfaced again on another design of ours that uses the 5438A (we used the non-A version before and had no problems), although that design doesn't go into LPM3; it uses the DMA to transfer the data from the SPI port to a UART.
Anyway, I find it surprising that after a year hardly anyone has encountered this, at least not enough to bring it to TI's attention to pursue. It is a new device (the '5438A); maybe it just needs more field time. I agree with you that it appears to be an undocumented silicon bug. I did bring this to our local FAE and TI (Brandon Elliott). They looked at it and we answered some basic questions, but heard nothing back. I suppose they couldn't reproduce it in a timely manner.
From the previous posting, you'll see what we did for the workaround: basically disable the SPI port, change the mode of the machine to master and back to non-master, and that seems to reset the state machine; then it worked.
Matt K.
Just a day ago, I replied to a thread about weird behavior of the USCI I2C on the 'A' devices - sometimes the UCSTT and UCBUSY bits won't clear after the transmission is done. There, too, it looks like the last clock cycle isn't properly finished.
Maybe it's a silicon bug/race condition on the 'A' devices that isn't there (or is 'won' by the right side) on the non-A devices.