
Global variable problem

Hi, I've run into something strange: when I use a global variable in a function of my codec algorithm, my app takes much more time, but if I use a local variable instead, it runs fine. What's wrong?

Here is my code:
1. When I use a global variable:
#include <stdint.h>

int16_t g_var = 0;

int foo(int index, int16_t* tx_var, int16_t* rx_var, int len)
{
    int i = 0;
    for (i = 0; i < len; i++) {
        g_var = rx_var[i];  // store to the global on every iteration
        // other code
    }

    return 0;
}

2. When I use a local variable:
int foo(int index, int16_t* tx_var, int16_t* rx_var, int len)
{
    int i = 0;
    int16_t l_var = 0;
    for (i = 0; i < len; i++) {
        l_var = rx_var[i];  // the local can stay in a register
        // other code
    }

    return 0;
}

My board is the C6A816x.

  • If I understand you correctly, your application needs much more time for the same results when you use the global variable. That seems fairly natural to me: a local variable can easily be kept in the cache or even in a register, whereas for the global variable external memory has to be accessed. In your particular loop there is another effect: the compiler has to assume that the global could be accessed by someone else, for instance through rx_var, which might point at g_var itself (aliasing), so it is forced to write the global back to memory on every single iteration instead of keeping the value in a register. The global might be cached as well, but I am not sure how the compiler handles those cases, and it might also be a question of cache enabling/disabling. Unfortunately I haven't yet found a paper with clear instructions on how to get the desired behaviour; in many cases it seems to come down to trial and error.
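    One workaround that often helps in such cases is to work on a local copy inside the loop and write the result back to the global only once afterwards; then the compiler is free to keep the value in a register. A rough sketch based on your first version (untested, just to illustrate the idea):

    #include <stdint.h>

    int16_t g_var = 0;

    int foo(int index, int16_t* tx_var, int16_t* rx_var, int len)
    {
        int i;
        int16_t tmp = g_var;  // local copy can live in a register

        for (i = 0; i < len; i++) {
            tmp = rx_var[i];
            // other code
        }

        g_var = tmp;          // single store back to memory
        return 0;
    }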

    Maybe you'll find some more concrete hints behind the links on this page: Optimization Techniques for the TI C6000 Compiler.
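
    Another thing worth trying, if your code allows it: since the compiler has to assume that rx_var could point at the global itself, telling it that the pointers don't alias may already make a difference. The C99 restrict keyword does exactly that; again only a sketch, I haven't measured it on the C6A816x:

    int16_t g_var = 0;

    int foo(int index, int16_t* restrict tx_var, int16_t* restrict rx_var, int len)
    {
        int i;
        for (i = 0; i < len; i++) {
            // restrict promises that rx_var never points at g_var,
            // so the compiler may keep g_var in a register and
            // delay the store until after the loop
            g_var = rx_var[i];
        }
        return 0;
    }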

    Regards,
    Joern.