Hello,
C functions that work correctly on the ARM CPU produce crazy values on the x49c CPU, even after adding the #pragma shown in the code below. Oddly, the call to strlen() is showing a length of 4 characters when only 2 exist in the bytes passed in the header. This is not debug related, since the function still fails to produce correct hex-to-decimal conversions outside of debug. I have also tested strtol() for the same conversions; it is not converting hexadecimal characters to base-16 integers as it should, seemingly related to this same issue as strlen().
I have also modified hex2dec() below several ways to get correct conversions, and the only near-correct answer requires multiplying the input hex char (base 16) by decimal base 10. Even then, 0xC8 (200 decimal) is converted to 238 decimal, not exactly the correct answer.
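For reference, what Hex2Dec() is supposed to do is the usual digit-by-digit conversion, multiplying the running value by 16; a minimal sketch (not my actual function) would be:

#include <stdio.h>

/* Sketch: convert a hex string such as "C8" to its integer value.
   Each step multiplies the running result by 16 and adds the next digit. */
static unsigned int hex_to_uint(const char *s)
{
    unsigned int value = 0;
    for (; *s != '\0'; s++) {
        unsigned int digit;
        if (*s >= '0' && *s <= '9')      digit = (unsigned int)(*s - '0');
        else if (*s >= 'A' && *s <= 'F') digit = (unsigned int)(*s - 'A' + 10);
        else if (*s >= 'a' && *s <= 'f') digit = (unsigned int)(*s - 'a' + 10);
        else break;                      /* stop at the first non-hex char */
        value = value * 16 + digit;
    }
    return value;
}

int main(void)
{
    printf("%u\n", hex_to_uint("C8"));   /* prints 200 */
    return 0;
}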
Oddly, debug stepping (F5, F6, F7) is turtle slow, while the TM4C129x Stellaris debug probe (8 MHz) speeds over these same C functions shown below. Why would the more advanced XDS110 debug probe behave worse than the XDS100v2 when it should behave better, even at 8.5 MHz?
Stepping into strlen() with the XDS110 probe takes a ridiculous amount of time in the do/while loop, CCS v12.2.
x49c: TI v21.6.1.LTS compiler
TM4C129 ARM Cortex M4: TI v20.2.6.LTS compiler
Hello,
I see two potential issues - one related to the compiler and one related to debugging.
For the debug one:
Oddly, debug stepping (F5, F6, F7) is turtle slow, while the TM4C129x Stellaris debug probe (8 MHz) speeds over these same C functions shown below. Why would the more advanced XDS110 debug probe behave worse than the XDS100v2 when it should behave better, even at 8.5 MHz?
Stepping into strlen() with the XDS110 probe takes a ridiculous amount of time in the do/while loop, CCS v12.2.
I will split the debugging part into another thread and leave this thread for the compiler one.
Thanks
ki
Hi Ki,
I mentioned in another post that the debug simulator shows 2 appended zeros (0x00C8) on the 1-byte int16_t from (char*) in the C library array[n]. However, the 2 extra zeros (0x00) are blanked from the C array[n] (argv, *argc[]) byte passed into the called function for processing. The zeros are still there but hidden by the debug simulator. The extra zeros shouldn't even be there according to clang, or strlen() and strtol() would work correctly.
I wanted to mention that on the TM4C1294 the XDS100v2 is 8 MHz JTAG via Stellaris XML. I had thought it had XDS110 firmware, yet the TM4C125 probe MCU is much smaller.
Thank you for your continued efforts.
I don't understand what the function HexToDec is supposed to do. Please provide a complete program, in a single C file, that demonstrates how HexToDec works. This program calls HexToDec with typical input, then prints out the result. I'll compile it on my laptop and try it out.
Thanks and regards,
-George
Hi George,
Hex2Dec() converts a hexadecimal value (0xC8) to decimal 200; you can see that in the debug simulator results above. The count being returned from strlen() is incorrect. Also, strtol() is not producing correct conversions of base-16 hexadecimal values into integers from C array cells defined as const char. Seemingly strlen() incorrectly counts 4 nibbles when it should count only 1 byte; perhaps uint16_t const char has defined 4 nibbles, 0x0000.
You can see the problem via strlen() returning a count of 4 bytes from the uint16_t passed via the C array[n]. It seems to be passing 0x00nn when only 1 byte (0xnn) is shown being passed to the Hex2Dec() function. For now I just subtract 3 from len in the for loop, but this seems related to (const char) adding 2 extra zeros to a single byte.
There are other issues with the uint16_t type as a C array[n]: 0x0000 causes bizarre returns into (argv, *argc) in the lower function passing data into Hex2Dec(). More specifically, the SCI RXFIFO randomly enters OE/OVF status flags as it can't handle the 2 bytes from uint16_t being copied into the C array[n], a buffer #defined const char 0x00. We can see the 2 bytes in CCS debug, where the leading byte is always 0x00 and the trailing byte is the real data 0x--nn, causing big problems.
A description of the problem, by words alone, is no match for code that actually runs and clearly demonstrates what happens. Please provide a complete program, in a single C file, which demonstrates this behavior. I want to compile it on my laptop and try it out.
Thanks and regards,
-George
Hi George,
I had put the function in a PM to you before you posted your last message. I also wanted to describe how a uint16_t is being appended with 2 extra leading zeros. The odd part is that strlen() seems to count 4 nibbles for one input 8-bit byte, which is not at all expected C behavior.
Hi! Can you send the function in a PM to me as well? I will take a look while George is unavailable for a few days. I appreciate it, thanks.
Hi Alan,
This is not a program issue; it is more about how the compiler or CPU is right-shifting the uint16_t when only 1 byte is being placed into the array cell buffer. The result of moving data into the buffer should be 0xC8, not 0x00C8.
The 2 zeros are being passed into the called function, but only 1 byte (0xC8) is real; the 2 zeros (0x00) are imaginary and do not exist. The 32-bit ARM Cortex CPU does not do that for the very same function, and strlen() works correctly for the same buffer and array cells.
Oddly enough, data leaving the same buffer to the SCI TXFIFO does not have 0x0000; it is only data being buffered via the RXFIFO into C array cells. There is only 1 line of code that inputs data from the RXFIFO into a (char), and it is shifting the data 8 bits right without removing the shifted placeholders (0x00). Hence the CPU believes the accumulator has 4 nibbles when only 1 byte is present in the buffer. Masking the char with &0xFF does not stop the extra nibbles from being appended.
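That receive line boils down to something like this (the register read is abstracted into a parameter; the names are illustrative, not my actual code):

#include <stdint.h>

/* Sketch: take a 16-bit SCI receive-register value and keep only the
   low 8 bits before it goes into the char buffer. */
static char byte_from_rxfifo(uint16_t rxbuf_value)
{
    return (char)(rxbuf_value & 0x00FFu);
}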
Thank you for sending a test case. This is what you shared:
void test(void)
{
    int OnebyteNot4 = 0xC8;
    char array[1];
    *array = OnebyteNot4;
    array[0] = OnebyteNot4;
    int len = 0;
    len = strlen((char*)array);
    printf(">> ByteLength %i", len);
}
The reason for the request was not only so that we could test strlen(), but also to see how the compiler was handling the call in your specific use case, including what code it was generating to handle the function arguments that might result in what you observe.
Oddly, the call to strlen() is showing a length of 4 characters when only 2 exist in the bytes passed in the header.
...
This is not a program issue; it is more about how the compiler or CPU is right-shifting the uint16_t when only 1 byte is being placed into the array cell buffer. The result of moving data into the buffer should be 0xC8, not 0x00C8.
The 2 zeros are being passed into the called function, but only 1 byte (0xC8) is real; the 2 zeros (0x00) are imaginary and do not exist. The 32-bit ARM Cortex CPU does not do that for the very same function, and strlen() works correctly for the same buffer and array cells.
Based on your test case, it seems that the compiler is not generating anything that would account for what you observe. You're passing a hex value via a pointer to a single char, and so strlen() ought to return one (byte), and it does in my reproduction for C2000. If there's something CPU-related that isn't being accounted for here, I'm not aware of what that could be.
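For illustration, a minimal sketch along the lines of my reproduction, with the buffer explicitly terminated so strlen() has a well-defined string to measure (not your exact code):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char array[2];
    array[0] = (char)0xC8;  /* the single "real" byte                */
    array[1] = '\0';        /* terminator: strlen() stops here       */
    printf(">> ByteLength-> %d\n", (int)strlen(array));  /* prints 1 */
    return 0;
}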
You also state in another thread:
Note: strtol() is not converting hex to decimal as it is supposed to do in any compiled object.
You haven't demonstrated in a test case how strtol() is being called, so I can't speak to that. The purpose of strtol() is to convert a string representation into a long integer (with a given base), not simply to "convert hex to decimal", so I would have to see how you're calling the routine.
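For reference, a typical call converting a hexadecimal string looks something like this (a minimal sketch, not code from your project):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *hex = "C8";              /* string representation of the value */
    char *end = NULL;
    long value = strtol(hex, &end, 16);  /* parse as base 16                   */
    printf("%s -> %ld\n", hex, value);   /* prints: C8 -> 200                  */
    return 0;
}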
That code snip above with strlen() would not compile; it was bugged. Below is a working snip with only 1 array cell. You would have to test on the x49c MCU to know if it is CPU related. Again, the C library function incorrectly mandates 2 array cells for (char) when it should only require one cell, e.g. array[0] versus array[1]. C library calls are not specifically compiler related in the parsing of Clang in functions. The passing function in debug stepping only shows one byte, not two, hence it should not need a 2-cell array to read the 0x00C8 that is not even showing in the debug step simulator.
The caller function debug
void test(void)
{
    int OnebyteNot4 = 0xC8;
    char array[0];
    *array = OnebyteNot4;
    array[0] = OnebyteNot4;
    int len = 0;
    len = strlen((char*)array);
    printf(">> ByteLength-> %d\n", len);
}
In the HexToDec() function, the debug hex folder shows only 1 byte was passed from the caller. However, 2 bytes were passed from the input array[8], which should only be 1 byte wide (char), not parsed as uint16_t. That is why I had no idea what was going on until I parsed hex[1] as a double byte versus properly being hex[0], as the debug simulator is showing hex[1].
Hence the debug simulator is bugged in a way that leads the programmer on a wild goose chase. We should be able to use strtol() to do the same thing as HexToDec(). Yet it requires strlen() to find the exact number of characters, which it obviously is not able to do, since it parsed a 1-byte array cell as a double-byte array cell.
strtol(const char *s, char **endp, int base)
strtol converts the prefix of s to long, ignoring leading white space; it stores a pointer to any unconverted suffix in *endp unless endp is NULL. If base is
between 2 and 36, conversion is done assuming that the input is written in that base. If base is zero, the base is 8, 10, or 16; a leading 0 implies octal and a leading 0x or 0X hexadecimal. Letters in either case represent digits from 10 to base-1; a leading 0x or 0X is permitted in base 16. If the answer would overflow, LONG_MAX or LONG_MIN is returned, depending on the sign of the result, and errno is set to ERANGE.
Again, the C library function incorrectly mandates 2 array cells for (char) when it should only require one cell, e.g. array[0] versus array[1].
I'm not sure I understand what you are doing. "char array[0];" constitutes a zero-length array, not a one-element array. Accessing elements of zero-length arrays is undefined.
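To make the distinction concrete, illustrative declarations only:

char a1[1];   /* one element: room for one char, but no terminator          */
char a2[2];   /* two elements: one char plus the '\0' that strlen() needs   */
/* char a0[0]; would declare zero elements; accessing it is undefined.      */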
I'm not sure I understand what you are doing. "char array[0];" constitutes a zero-length array, not a one-element array.
No, zero represents 1 cell of width (0xFF), or 2 nibbles, and a 2-cell array[1] is (0xFFFF) = {0.0}; e.g. uint32_t, or uint16_t for floating-point values, requires 2 cells, 0 and 1 = 2 cells. Zero is a real integer in CPU binary; the C array minimum width is 8 bits, or 1 cell. The x49c CPU is instantiating an 8-bit-wide cell as being 16 bits wide in C library functions that expect an exact number of characters, not (0x00FF) counting the MSB 8 bits (0x00--) as valid characters that don't exist. The odd part is that shifting the MSB << 8 for the cast 0x00FF does not fix the array cell width issue.
These x49c array cell issues do not occur on the TM4C1294 32-bit MCU class via the ARM compiler, so there is something to compare as to why. It's no problem to set the x49c for a 2-cell array; the called functions are casting 2 bytes (0x00FF). Yet there is only 1 byte; the debug simulator shows it cast 0xFF, but it really cast 0x00FF.
And retrieving data stored into several array cells via linear progression of the cell number is causing 0x00 zero data (blank) to be placed into cells >16. The 8-bit data is being shifted 8 bits right, or 0x00FF, casting it to other functions that expect 0xFF but were actually cast 0x00FF.
I was out for a few days. Thank you to Alan Phipps for helping out while I was gone.
I go back to this ...
Please provide a complete program, in a single C file, which demonstrates this behavior. I want to compile it on my laptop and try it out.
This has not happened. I think doing this exercise will be useful both for us and for you. Get this one-file program to work, as you intend, not on a C28x or Arm system, but on your host system. Then post the code, and show the output of that code when you run it on your host system.
Thanks and regards,
-George
Hi George,
Why not make sure the C library calls that convert array cell data remain consistent between the C2000 and ARM compilers? That seems to be why the C2000 compiler requires 2 array cells for an int16_t to retrieve 8-bit data from an array defined uint16_t, while ARM Cortex needs only a 1-cell array[0]. Other than float32_t, which mandates 2 cells, or array[1] = {0.0}, 1 row and 1 column is required.
If you change the snip to array[0], the LSB 8-bit data of the uint16_t is being truncated when it should convert uint16_t to uint8_t for the CPU to cast into char function headers. The caller is casting the full uint16_t to the called function defined as 1 row, no columns, or array[0].
Again, the debug simulator is incorrectly showing the CPU cast (0xC8/200) when the CPU did NOT convert uint16_t to uint8_t in the casting process. So the XDS110 debug simulator is showing what the CPU should have cast but did not, as it actually cast 0x00C8 into the function, not 0xC8 as debug shows.
Briefly, the uint16_t (char) was not converted to uint8_t when the CPU casts to any function as char array[0]. That complicates retrieval of 8-bit data in the linear progression of numbered array cells, as shown above in the yellow region of the debug output data. That also seems to violate C89-C99 rules for how strlen() and other C library functions process char as 8-bit character data from an array of 1 row with no columns, e.g. array[0]. Even if strlen() counted 2 characters from the uint16_t in array[1] (0x00C8), that might work in some cases of only 8-bit data, but not for two array cells being added together to form a uint16_t from 2 uint8_t characters from the outside world. It gets even dicier with uint32_t for 3 or more uint8_t characters input from the outside world.
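For reference, combining two 8-bit bytes into a uint16_t is normally written with explicit shifts and masks, something along these lines (a sketch, not my actual receive code):

#include <stdint.h>

/* Sketch: build a 16-bit value from two 8-bit bytes received separately.
   Masking with 0xFF keeps only the low 8 bits of each cell, regardless of
   how wide a char happens to be on the target. */
static uint16_t combine_bytes(uint16_t high_byte, uint16_t low_byte)
{
    return (uint16_t)(((high_byte & 0xFFu) << 8) | (low_byte & 0xFFu));
}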
void test(void)
{
    int OneByteNot4 = 0x00C8; // Replace with 0xC8
    char array[0]; // Replace both array[1] to see strlen() indicate 4 characters
    *array = OneByteNot4;
    array[0] = OneByteNot4;
    int len = 0;
    len = strlen((char*)array);
    printf(">> ByteLength-> %d\n", len);
}
I asked for a one file program. You didn't give me that. But that test function comes close. I made a few changes to create a single file C program that runs. Please visit this link to see one way to build and run that program. Note you can easily change which compiler is used. Many choices are available. As long as you select one of the x86 variants, the program is executed, and that printf shows you the result computed by strlen.
In my experiments, I saw many different results printed out. I saw 0, 2, and 5. I also saw many compilers issue diagnostics about the code.
Please learn why that call to strlen yields inconsistent results. And why those diagnostics are issued. None of that is because of errors in the compilers. Teaching these sorts of things is beyond the scope of what we do in this forum. Thus, I am unable to help you. I suggest you turn to other online communities that work with new programmers.
Thanks and regards,
-George
Hi George,
Please learn why that call to strlen yields inconsistent results.
So the CCS debug simulator is a liar, as it casts uint16_t to functions and shows a uint8_t char that does not exist in the C2000Ware stdint.h?
It seems the main issue is that the C2000Ware stdint.h is all highlighted and char has been defined as uint16_t (a wide char) by the compiler, when (char) has forever been a uint8_t. In the ARM compiler's stdint.h, (char) is uint8_t; it does not cast a (wide 16-bit char) into an 8-bit char as the x49c is doing. That is causing huge problems that the ARM compiler never had with the same code, which should be cross-compatible with any C89-C99 embedded CPU.
There is no way to define a uint8_t char, as the compiler has locked the char data type by a #define somewhere.
Again, I have yet another odd issue with the C library wide uint16_t char: the C2000 compiler truncates char strings after the first byte is cast, pointed to by (char *), in the quoted example:
array[0] = "This_is_8bit_text". The compiler or the CPU is not moving past the first char in the array in for loops, though it shows the full quoted string.
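For comparison, the usual ways to hold a quoted string in C look something like this (a sketch; buf and p are illustrative names):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[32];
    const char *p = "This_is_8bit_text";   /* pointer to the string literal */
    strcpy(buf, "This_is_8bit_text");      /* copy the characters into buf  */
    /* Writing array[0] = "This_is_8bit_text" does not copy the characters;
       it tries to assign the string's address to a single char, which
       compilers reject or warn about. */
    printf("%s / %s\n", p, buf);
    return 0;
}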
It's plain to see that C2000Ware has not been fully vetted on the x49c CPU and has many issues with basic C code that works perfectly on ST Arm Cortex-M4 through M7 CPUs and even TI Arm Cortex-M4 CPUs. They are supposed to be fully compatible, where C89-C99 code works universally on any embedded CPU, not restricted by hard #defines on char types.
Why has the C2000Ware stdint.h not been updated since 2002, or given a symbol __TMS320_F2800x with (char) defined as uint8_t?
/*****************************************************************************/
/* STDINT.H */
/* */
/* Copyright (c) 2002 Texas Instruments Incorporated */
/* http://www.ti.com/
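For what it's worth, the character width can be checked directly; a minimal sketch using CHAR_BIT from <limits.h>:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* sizeof(char) is 1 by definition; CHAR_BIT is how many bits that one
       "byte" holds. On C28x/x49c this prints 16, on Arm Cortex-M it prints 8. */
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    return 0;
}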
It seems the project preprocessor TI-Assembly built-in symbol (red box) is in part causing the char array cell length issues. Why (char) would be defined as 16 bits amazes me and seems to be partly the cause of the CPU casting uint16_t from C library (array.c) functions. So strlen() in this area of the debug code was producing odd lengths for 16-bit char array cells cast as uint16_t. This seems to break existing code when an external HID sends 8-bit bytes to the SCI FIFO and a #defined char buffer. Oddly, the ARM compiler has (_stdint.h) where it includes uint8_t, though that would not fix the C2000 compiler issue of char defined as 16 bits.
Yet the debug code simulator caught this casting error behind closed curtains, not visible to the human eye.
And the other part is that #include <stdint.h> in the ARM compiler project has the built-in compiler settings symbol __TMS470__, which is missing in the C2000 compiler. That missing symbol voids #include <stdint.h> for uint8_t. Several issues could easily be fixed by first updating stdint.h to include the newer C2000 MCU classes, and secondly changing char to 8 bits and adding a 16-bit wchar.
The odds were against an 8-bit-wide (char) being added via the user-supplied CDT list. Removing the TI Assembly predefined symbols macro list made no difference, as the compiler forcibly sets a 16-bit minimum word length. I have to wonder who at ISO came up with such a brain fart, as it backfired and wasted many hours of debugging; without fully reading the entire compiler guide, I had no idea the C2000 C library was an ISO pet project.
Further explanation: LAUNCHXL-F280049C: 16bit DataTypes with Clib functions - C2000 microcontrollers forum - TI E2E support forums