The TAS1020B eval kit code v1.8 uses a default Endpoint 0 maximum packet size of 8 bytes. My application needs a 64-byte packet to support a large Feature report, so I've set the bMaxPacketSize0 field in the device descriptor to 64 by changing the #define of DEV_MAX_EP0_PKT in devDesc.h. This also sets the USBENGINE_EP0_SIZE macro (actually UsbRequest.ep0MaxPacket) to 64.
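For reference, the change amounts to something like this. DEV_MAX_EP0_PKT is the real symbol from devDesc.h; the surrounding descriptor table is paraphrased from the standard USB device descriptor layout, not copied from the kit source:

```c
/* devDesc.h (sketch) -- only DEV_MAX_EP0_PKT is the actual eval kit
 * symbol; the descriptor table below is illustrative. */
#define DEV_MAX_EP0_PKT  64      /* was 0x08 in the stock eval kit code */

/* bMaxPacketSize0 is byte 7 of the standard 18-byte device descriptor. */
static const unsigned char devDescriptor[] = {
    18,                 /* bLength */
    0x01,               /* bDescriptorType: DEVICE */
    0x10, 0x01,         /* bcdUSB 1.10, little-endian */
    0x00, 0x00, 0x00,   /* bDeviceClass / SubClass / Protocol */
    DEV_MAX_EP0_PKT,    /* bMaxPacketSize0 -- must be 8, 16, 32, or 64 */
    /* ... remaining device descriptor fields ... */
};
```

(64 is a legal value here, since the USB spec restricts bMaxPacketSize0 to 8, 16, 32, or 64.)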
But then things get confusing. The original eval kit code has a bunch of #defines in devmap.h. (Aside: the comments at the top of the file call it Mmap.h.) A comment there says, "The Ep0 temporary save buffer size is defined by the application. This buffer is not the same as the IN/OUT EP0 pipe buffers." Then there are two #defines:
// the size of EP0 temporary save buffers:
#define DEV_USB_INEP0_SIZE 0x08
#define DEV_USB_OUTEP0_SIZE 0x08
So what exactly are the "EP0 temporary save buffers?"
Then there's another #define:
#define USB_EP0_XFERDATA_SIZE 0x10
which is, of course, the combined size of the two temporary save buffers (0x08 + 0x08 = 0x10). Why? As far as I can tell, it's used to locate the start of endpoint 0 transfer data in the xdata-space RAM. The two "temporary save buffers" plus the transfer buffer are then used to locate the USB_APP_ADDR_START address, which is where one of the FIFOs for the streaming audio data starts.
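To make that layout concrete, here is my reading of the arithmetic, sketched with a made-up base address. Only the three size macros and USB_APP_ADDR_START are actual devmap.h symbols; EP0_BUFFERS_START and the value 0xF800 are hypothetical, just to show how the regions stack up:

```c
/* Sketch of the xdata memory map as I understand it.
 * Real symbols: the three size macros and USB_APP_ADDR_START.
 * Hypothetical: EP0_BUFFERS_START and its 0xF800 value. */
#define DEV_USB_INEP0_SIZE    0x08   /* EP0 IN temporary save buffer  */
#define DEV_USB_OUTEP0_SIZE   0x08   /* EP0 OUT temporary save buffer */
#define USB_EP0_XFERDATA_SIZE 0x10   /* EP0 transfer-data area        */

#define EP0_BUFFERS_START     0xF800 /* hypothetical base address */

/* The application area (audio FIFOs) begins after all three
 * EP0 regions, so the three sizes are simply summed: */
#define USB_APP_ADDR_START (EP0_BUFFERS_START     \
                            + DEV_USB_INEP0_SIZE  \
                            + DEV_USB_OUTEP0_SIZE \
                            + USB_EP0_XFERDATA_SIZE)
```

If that reading is right, enlarging any of the three regions to hold 64-byte packets pushes USB_APP_ADDR_START upward and shrinks the space left for the audio FIFOs, which would explain why the kit keeps them small by default.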
Now, since I only need to Set this large Feature report, and not Get it, my assumption is that the OUT endpoint buffer needs to be 64 bytes but the IN buffer can stay at 8. Is that correct? And why is USB_EP0_XFERDATA_SIZE set to 0x10 (16) above, and must I set it to 64 to accommodate my 64-byte EP0 max packet size?
This stuff is rather baffling ... any advice is appreciated.