Other Parts Discussed in Thread: SYSCONFIG
Reading "CC13xx Long Range Modes", written by Sverre Helen and published in December 2018 [link], I had to stop at this part:
"There are several takeaways from Table 2. First of all, higher DSSS rates offer higher sensitivity gain. This, of course, comes at the expense of longer packet durations."
This statement does not make sense to me.
The whole concept behind direct sequence spread spectrum is to increase the chip rate while keeping the symbol duration the same.
This is a direct application of the Fourier transform and the core time-domain/frequency-domain relationship: a narrower pulse in the time domain produces a wider sinc-squared function in the frequency domain, and that is the basis of direct sequence spread spectrum.
What DSSS systems do is maintain the symbol duration while replacing the original symbol pulse with many short-duration chip pulses.
This spreads the spectrum, as noted above, because of the Fourier transform of the new signal.
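To make the picture I have in mind concrete, here is a quick numerical sketch of my reasoning (my own illustration with made-up numbers, not anything taken from the TI article): one symbol of fixed duration T is replaced by N chips occupying the same T, and the occupied bandwidth widens several-fold while the time duration does not change at all.

```python
import numpy as np

fs = 1_000_000               # sample rate in Hz (arbitrary, for the sketch)
T = 1e-3                     # symbol duration in s -- held constant throughout
n = int(fs * T)              # samples per symbol

symbol = np.ones(n)          # plain rectangular symbol pulse of duration T

# Replace the one pulse with N chips of duration T/N inside the *same* T.
chips = np.array([1., -1., 1., 1., -1., 1., -1., -1.])  # fixed example sequence
N = len(chips)
spread = np.repeat(chips, n // N)

def occupied_bw(x, fs, frac=0.90):
    """Smallest frequency below which `frac` of the one-sided power lies."""
    X = np.abs(np.fft.rfft(x, 8 * len(x))) ** 2
    freqs = np.fft.rfftfreq(8 * len(x), d=1 / fs)
    c = np.cumsum(X) / np.sum(X)
    return freqs[np.searchsorted(c, frac)]

bw_symbol = occupied_bw(symbol, fs)
bw_spread = occupied_bw(spread, fs)

print(f"same duration: {len(symbol) == len(spread)}")
print(f"90% bandwidth, plain symbol:  {bw_symbol:.0f} Hz")
print(f"90% bandwidth, spread symbol: {bw_spread:.0f} Hz")
```

Both waveforms are exactly 1 ms long, yet the spread one occupies a much wider band, which is the whole point of the spreading as I understand it.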
So, if I am right, it is not correct to say that increasing the spreading code length increases the packet duration.
I am wondering why the article says "this, of course, comes at the expense of longer packet durations."
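For my own bookkeeping, here is the packet air-time arithmetic under the two assumptions I can think of (a quick sketch with made-up numbers; neither the chip rate nor the packet length is taken from any CC13xx document): if the symbol duration is held fixed, as in my reading, the air time is independent of the spreading factor, whereas if the chip rate is held fixed, each symbol lasts more chips and the packet duration grows linearly with the spreading factor.

```python
K = 160            # symbols per packet -- made-up example value
T = 1e-3           # fixed symbol duration in s (assumption behind my reading)
chip_rate = 8_000  # fixed chip rate in chips/s -- also a made-up value

spreading_factors = (1, 2, 4, 8)

# Reading 1: symbol duration fixed, chips shrink -> air time never changes.
airtime_fixed_symbol = {sf: K * T for sf in spreading_factors}

# Reading 2: chip rate fixed -> a symbol of sf chips lasts sf / chip_rate.
airtime_fixed_chip = {sf: K * sf / chip_rate for sf in spreading_factors}

for sf in spreading_factors:
    print(f"SF={sf}: fixed-symbol {airtime_fixed_symbol[sf]*1e3:.1f} ms, "
          f"fixed-chip {airtime_fixed_chip[sf]*1e3:.1f} ms")
```

So the article's claim would only follow under the second assumption, and I would like to know which one actually applies here.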
Also, is there any data on the symbol duration for the different physical layer configurations? If so, where can I find that document?
Thanks.