I've found that the SDK limits the frame period to about 1300ms, while the documentation states that "typical" frame periods are 1ms to 1000ms.
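For what it's worth, here is a back-of-envelope guess at where the ~1300ms ceiling might come from. The 5ns tick size and the 28-bit range check below are purely my assumptions, not something I found in the documentation, but the arithmetic happens to land right around the limit I'm seeing:

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch of my guess (assumptions, not from the docs): if the frame
 * periodicity is programmed as a count of 5 ns ticks and the SDK
 * validates that count against a 28-bit range, the ceiling works out
 * to roughly the ~1300 ms limit I'm observing. */
int main(void)
{
    const double   tick_ns   = 5.0;              /* assumed LSB of the periodicity field */
    const uint32_t max_ticks = (1u << 28) - 1u;  /* assumed 28-bit validation limit */

    double max_period_ms = (double)max_ticks * tick_ns / 1e6;
    printf("max frame period = %.1f ms\n", max_period_ms);  /* prints ~1342.2 ms */
    return 0;
}
```

If that guess is wrong, I'd still like to know what the actual source of the limit is.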
I plan to try this soon, but it appears possible to achieve frame periods longer than 1000ms using external triggering, since the external trigger period can exceed the configured frame periodicity. It seems inconsistent to allow longer periods with external triggering but to limit the period when internal triggering is used.
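What I have in mind is roughly the sketch below. The pulse_sync_in() function is a hypothetical placeholder for whatever actually drives the sensor's hardware trigger input from my host; the 2500ms value is just an example of a period beyond the internal limit:

```c
#define _POSIX_C_SOURCE 199309L
#include <time.h>

/* Hypothetical hook: replace with whatever drives the sensor's hardware
 * trigger/sync input on the host (GPIO write, timer output, etc.). */
static void pulse_sync_in(void)
{
    /* placeholder */
}

int main(void)
{
    const long period_ms = 2500;  /* example period beyond the 1000 ms "typical" range */
    struct timespec period = {
        .tv_sec  = period_ms / 1000,
        .tv_nsec = (period_ms % 1000) * 1000000L
    };

    for (;;) {
        pulse_sync_in();           /* each pulse starts one frame */
        nanosleep(&period, NULL);  /* wait the desired frame period */
    }
    return 0;
}
```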
Can you explain what issues would arise from using a frame period greater than 1000ms? As far as I understand, this would reduce the rate at which the software produces detections, but each frame on its own should still yield the same results. Giving the hardware more downtime between frames would, if anything, help slightly by lowering the duty cycle and therefore the heat generated. A reduced detection update rate is only an issue when you are going to process the detections further, as in a tracking algorithm. Since the SDK does not provide any tracking algorithms, the performance of the system, as far as this API is concerned, shouldn't depend in any way on the frame period.
Perhaps the reasoning was to make the API easier to use and to prevent users from creating a configuration that doesn't work. However, people using this platform will want to customize it for many different use cases, so providing flexibility should be a priority.