CCS: DSS loadRaw function

Tool/software: Code Composer Studio

Hi,

From the loadRaw Doc:

public void loadRaw(int nPage, long nAddress, java.lang.String sFilename, int nTypeSize, boolean bByteSwap)
throws ScriptingException
Load a raw file from the host to target memory (specified by start address, page and length). The file size is automatically determined and the entire file is loaded.
Parameters:
nAddress - is the first address in the block.
nPage - the memory page. Use one of Memory.Page enumeration.
sFilename - specifies the name of the file that will store the target data.
nTypeSize - specifies the type size of the data. Affects how data is byte-swapped to account for differences in host and target endianness. For example: if the target is big endian (the host is always little endian) and you specify nTypeSize = 16, then the upper and lower bytes are swapped after loading the file - but before writing to target memory.
bByteSwap - Force a byte-swap of the data before writing to target memory. If host and target are of different endianness, this will effectively disable the automatic endianness conversion.
Throws:
ScriptingException
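Reading the nTypeSize description above, the swap rule can be sketched as follows. This is a host-side Python simulation of the documented behavior, not the real DSS code - the actual conversion happens inside the debug server:

```python
def swap_for_target(data: bytes, type_size_bits: int, target_big_endian: bool,
                    force_byte_swap: bool = False) -> bytes:
    """Reorder file bytes the way the loadRaw doc describes, before the write.

    type_size_bits mirrors nTypeSize: the element width in bits (8, 16, 32...).
    With 8 there is nothing to swap inside a one-byte element.
    force_byte_swap mirrors bByteSwap: it inverts whatever the automatic
    endianness handling decided.
    """
    elem = type_size_bits // 8
    swap = target_big_endian           # host is assumed little endian, per the doc
    if force_byte_swap:
        swap = not swap                # bByteSwap flips the automatic decision
    if not swap or elem <= 1:
        return bytes(data)
    out = bytearray()
    for i in range(0, len(data), elem):
        out += data[i:i + elem][::-1]  # reverse bytes within each element
    return bytes(out)

raw = bytes([0x11, 0x22, 0x33, 0x44])
print(swap_for_target(raw, 16, target_big_endian=True).hex())  # 22114433
print(swap_for_target(raw, 32, target_big_endian=True).hex())  # 44332211
# bByteSwap=True cancels the automatic conversion, so the data passes through:
print(swap_for_target(raw, 16, target_big_endian=True,
                      force_byte_swap=True).hex())             # 11223344
```

Note that nTypeSize only changes the byte order within each element, not how many bytes are written, so on its own it should not make the write longer than the file.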

Can you please explain the meaning and usage of nTypeSize? I could not figure out how it works and which values it accepts. When I loaded a file with it set to 8 (assuming this means the file is loaded byte by byte), more memory seemed to be written than the file length, while with it set to 32 the result looked correct - why?

Also, how does this relate to the bByteSwap parameter, and how is that parameter used?

Thanks

Guy

  • Hi Guy,

    nTypeSize and bByteSwap were used to support raw binary data transfers to account for endianness and word size differences for the target CPU that will access the data. This functionality was first implemented long ago to support binary loads of raw image data (the target was a dual core ARM and C54x - the binary transfer done while connected to the ARM but the processing of the data done by the C54x).

    What device are you working with?

    Thanks
    ki
  • Hi,
    I am currently working on TDA3xx.
but what I am asking is general: if these parameters are supposed to control endianness, how come setting nTypeSize to 8 caused more memory to be written than the actual file size? How do the values passed to these parameters actually affect the write?

    Thanks
    Guy
  • Sorry for the delayed response. What is the size of the extra bytes? I recall that the original implementation of this functionality required some "padding" with extra null bytes, depending on the data and how the data is buffered.
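The padding behavior described above can be sketched like this. The chunk size here is an illustrative assumption (the real buffering granularity inside DSS is not documented); the point is only that padding to a buffer boundary writes slightly more bytes than the file holds:

```python
def pad_to_chunk(data: bytes, chunk: int) -> bytes:
    """Pad data with null bytes up to the next multiple of chunk bytes."""
    rem = len(data) % chunk
    return data if rem == 0 else data + b"\x00" * (chunk - rem)

payload = b"\xAA" * 10              # a 10-byte file
padded = pad_to_chunk(payload, 4)   # hypothetical 4-byte buffering granularity
print(len(padded))                  # 12: two null bytes appended past the file end
```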
  • Hi,

    I did not check the size of the extra bytes (I had not painted the memory beforehand, as I was trying to load something into the middle of another data structure).

    Isn't there proper documentation that describes what it does?

    Thanks

    Guy

  • The API doc is the main documentation. What exact device are you working with?

    Thanks
    ki
  • Hi,

    I am currently using TDA3xx / TDA2xx

    I have already looked at the main document, and since the documentation there is very general and missing a lot (e.g. a description of what exactly each function parameter does), I posted the question.

    I hoped you would have a more detailed document and/or could help from your own knowledge.

    Thanks