IRQ: IntMasterIRQEnable() vs. IntIRQEnable()



Hi,
Typically on the AM18x, when we need to enable IRQ interrupts we follow these steps:
IntMasterIRQEnable(); // Enable IRQ in CPSR.
void IntMasterIRQEnable(void)
{
    /* Enable IRQ in CPSR.*/
    CPUirqe();
}
/* Wrapper function for the IRQ enable function */
void CPUirqe(void)
{
    /* Enable IRQ in CPSR */
    asm("    mrs     r0, CPSR\n\t"
        "    bic     r0, r0, #0x80\n\t"
        "    msr     CPSR_c, r0");
}
 
IntIRQEnable();
/* Enables the host IRQ in the HIER register of the AINTC */
void IntIRQEnable(void)
{
    /* Enable IRQ Interrupts */
    HWREG(SOC_AINTC_0_REGS + AINTC_HIER) |= AINTC_HIER_IRQ;
}
IntGlobalEnable();
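
IntGlobalEnable() is not shown above; here is a minimal sketch of what it does, assuming the StarterWare AINTC_GER register offset (bit 0 of GER being the controller-wide enable is an assumption based on the AINTC register layout):

void IntGlobalEnable(void)
{
    /* Set the ENABLE bit (bit 0) of the AINTC Global Enable Register.
       AINTC_GER is an assumed offset name; unlike HIER, which gates one
       host interrupt line, GER gates the whole controller. */
    HWREG(SOC_AINTC_0_REGS + AINTC_GER) = 0x1;
}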

Essentially, IntMasterIRQEnable() clears bit 7 (the I bit) of the CPSR register to enable IRQ at the CPU core, while IntIRQEnable() sets the IRQ bit of the HIER register to enable the host IRQ interrupt in the AINTC.
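
For comparison, the disable path mirrors this at both levels. A sketch, assuming the disable counterparts follow the same pattern as the enable functions above (the names CPUirqd() and IntIRQDisable() simply mirror the enable side):

void CPUirqd(void)
{
    /* Disable IRQ at the CPU core by setting bit 7 (the I bit) of CPSR */
    asm("    mrs     r0, CPSR\n\t"
        "    orr     r0, r0, #0x80\n\t"
        "    msr     CPSR_c, r0");
}

void IntIRQDisable(void)
{
    /* Mask the host IRQ line at the AINTC by clearing the HIER IRQ bit */
    HWREG(SOC_AINTC_0_REGS + AINTC_HIER) &= ~AINTC_HIER_IRQ;
}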
There hasn’t been any problem using these standard subroutines in practice, and we have been able to use IRQ interrupts successfully. However, we would like to know: what is the difference between clearing bit 7 in the CPSR and setting the IRQ bit in HIER? What is the purpose of separating the two bits, and of calling one function a “master...enable” while the other looks like an ordinary “...enable”?
 
Paul