In the last post, we built the foundation for exception handling on AArch64 by focusing on synchronous exceptions like svc calls. This was an essential first step, but synchronous exceptions are only part of the picture. To fully embrace event-driven design — and move beyond polling — we now need to handle asynchronous exceptions, namely IRQs (interrupt requests). In this post, we’ll set up the infrastructure to catch and respond to hardware interrupts, starting with configuring the interrupt controller and writing a basic IRQ handler that ties into our existing exception framework.
GIC Architecture
To handle hardware interrupts in AArch64 systems, we rely on a dedicated peripheral called the Generic Interrupt Controller (GIC). The GIC serves as the central coordinator for interrupt requests — receiving signals from devices like timers or UARTs, prioritizing them, and forwarding them to the appropriate CPU core. In this operating system, we’ll use GICv3, a modern revision of the architecture designed to better support multi-core systems, virtualization, and more granular control over interrupt delivery. Conceptually, GICv3 breaks its responsibilities into three key areas:
- CPU-side interaction with the interrupt controller
- Global interrupt configuration and enable/disable logic
- Per-core delivery of interrupts
These roles are implemented through a set of hardware components within the GICv3 architecture. Although the specification includes advanced features such as the Interrupt Translation Service for PCIe/MSI support and secure-world interfaces for TrustZone or EL3-based firmware, these are optional and not required for a basic kernel running at EL1 in non-secure mode. For this operating system the focus is on just three core elements: the Distributor, the Redistributor, and the system registers that make up the CPU interface.
Distributor
The Distributor is the central unit responsible for managing all shared peripheral interrupts in the system — these are interrupts that originate from devices external to the CPU cores, such as timers, UARTs, or other peripherals. Since there is only one Distributor in the system, it acts as the global controller: configuring interrupt properties like trigger type (level or edge), priority, group (secure/non-secure), and routing information that determines which cores are eligible to receive each interrupt. While it doesn’t handle private or core-local interrupts, it is essential for enabling and routing any interrupt that comes from a device shared across cores. Before any interrupt can reach the CPU, the Distributor must be properly initialized and the specific interrupt ID must be enabled here.
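As a concrete illustration of how the Distributor addresses individual interrupts, the GICv3 specification spreads the SPI enable bits across a bank of GICD_ISENABLER<n> registers, one bit per interrupt ID. The sketch below (standalone, pure arithmetic; the helper name and the assumed base offset of 0x100 come from the spec, not from this kernel's code) computes which register and bit a given ID maps to:

```rust
/// Offset of GICD_ISENABLER0 within the Distributor frame (GICv3 spec).
const GICD_ISENABLER_BASE: usize = 0x100;

/// For a given interrupt ID, return (register offset, bit mask) in the
/// GICD_ISENABLER<n> bank: 32 enable bits per 32-bit register.
fn isenabler_slot(id: u32) -> (usize, u32) {
    let reg = (id / 32) as usize; // which ISENABLER register
    let bit = 1u32 << (id % 32);  // which bit within it
    (GICD_ISENABLER_BASE + reg * 4, bit)
}

fn main() {
    // SPI IDs start at 32, so IRQ 33 lands in GICD_ISENABLER1, bit 1.
    let (offset, mask) = isenabler_slot(33);
    println!("offset = {:#x}, mask = {:#x}", offset, mask);
}
```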
Redistributor
While the Distributor manages global interrupts across the system, each individual core has its own Redistributor, which handles interrupts that are private to that core. These include Software Generated Interrupts (SGIs), which allow one core to signal another, and Private Peripheral Interrupts (PPIs), which are typically generated by per-core hardware timers or local devices. The Redistributor also plays a key role in delivering global interrupts routed by the Distributor to the correct CPU. When a device interrupt targets a specific core, the Redistributor ensures it’s properly delivered — but only if that Redistributor is active and awake. By default, each Redistributor starts in a sleep state (as part of power-saving mechanisms), and must be explicitly woken up by the kernel before it can deliver any interrupts. Skipping this step will result in interrupts being silently dropped, even if everything else is configured correctly.
CPU Interface
Once an interrupt has been routed by the Distributor and accepted by a Redistributor, it still needs to be acknowledged and handled by the core itself. This is where the CPU interface comes into play. Unlike the Distributor and Redistributors, which are memory-mapped peripherals, the CPU interface is accessed through system registers (ICC_*) defined by GICv3. These registers allow EL1 software to interact directly with the interrupt controller: reading the interrupt ID (ICC_IAR1_EL1), signaling end-of-interrupt (ICC_EOIR1_EL1), setting the interrupt priority mask (ICC_PMR_EL1), and enabling or disabling interrupt groups (ICC_IGRPEN1_EL1). This interface replaces the memory-mapped CPU interface from GICv2 and is now the only way EL1 code can manage interrupts on modern AArch64 platforms. Without configuring these registers, no interrupt — even if routed correctly — will ever be acknowledged by the core.
With these three components — the Distributor, the Redistributor, and the CPU interface — the GICv3 interrupt flow becomes clear: interrupts are configured and routed by the Distributor, delivered per-core through each Redistributor, and finally acknowledged and processed via CPU system registers. The following diagram illustrates this flow:
+-----------------------------+
|     Peripheral Device       |
|    (e.g., UART, Timer)      |
+-------------+---------------+
              |
              | (Interrupt Signal - SPI)
              v
+-----------------------------------------+
|            GIC Distributor              |
|       (Shared Across All Cores)         |
|  - Enables/disables global interrupts   |
|  - Sets priorities, routing targets     |
+----------------+------------------------+
                 |
                 | (Routed to Target CPU)
                 v
+------------------------------+
| GIC Redistributor (per CPU)  |
|  - Handles PPIs, SGIs        |
|  - Wakes up for delivery     |
+--------------+---------------+
               |
               | (Delivers interrupt to CPU)
               v
+-------------------------------+
|        CPU Core (EL1)         |
| - Uses ICC_* system registers |
|   to:                         |
|    • Acknowledge IRQ          |
|    • Read IRQ ID              |
|    • Signal End-of-Interrupt  |
+-------------------------------+
Configuring the GIC
With a basic understanding of the GICv3 architecture, let’s look at how we configure it in this OS. The setup happens in two main parts: the Redistributor (per CPU) and the Distributor (shared across CPUs). Once both are configured, we enable interrupts using system registers.
The process begins with the Distributor, because it acts as the central authority for global interrupt configuration. In our kernel, the function init_gic_distributor is responsible for this setup. It writes to the GICD_CTLR register, enabling both Group 1 secure and non-secure interrupts.
#[unsafe(no_mangle)]
pub unsafe fn init_gic_distributor(dist_base: usize) {
    // Enable the interrupt groups in the Distributor control register.
    let mut ctlr_val = utilities::read_reg(dist_base as *mut u32, GICD_CTLR);
    ctlr_val |= GICD_ENABLE_GRPS;
    utilities::write_reg(dist_base as *mut u32, GICD_CTLR, ctlr_val, 0);
    asm!("dsb sy", options(nostack, nomem));
    // Read back the register to check whether Affinity Routing is active.
    let final_ctlr = utilities::read_reg(dist_base as *mut u32, GICD_CTLR);
    AFFINITY_ENABLED = (final_ctlr & GICD_CTLR_ARE_S) != 0;
    asm!("dsb sy", options(nostack, nomem));
}
Enabling these groups ensures that non-secure EL1 software (like our kernel) is allowed to receive and handle interrupts. Immediately after, the kernel checks whether Affinity Routing is enabled by examining the ARE_S bit in the same control register. Affinity routing determines whether interrupts are routed based on the target CPU’s affinity fields, rather than using legacy CPU ID mappings. This affects how interrupts are delivered and how redistributors handle them later on.
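The control-register bits involved here are easy to model in isolation. Below is a standalone sketch, assuming the secure-view bit layout from the GICv3 specification (the constants mirror what GICD_ENABLE_GRPS and GICD_CTLR_ARE_S would expand to; the helper names are hypothetical and not part of the kernel code above):

```rust
// GICD_CTLR bit positions (secure view), per the GICv3 specification.
const GICD_ENABLE_GRP1_NS: u32 = 1 << 1; // Group 1 non-secure
const GICD_ENABLE_GRP1_S: u32 = 1 << 2;  // Group 1 secure
const GICD_CTLR_ARE_S: u32 = 1 << 4;     // Affinity Routing Enable (secure)

/// Value OR-ed into GICD_CTLR to enable both Group 1 variants.
fn group_enable_bits() -> u32 {
    GICD_ENABLE_GRP1_NS | GICD_ENABLE_GRP1_S
}

/// True when a GICD_CTLR read-back reports Affinity Routing active.
fn affinity_routing_enabled(ctlr: u32) -> bool {
    ctlr & GICD_CTLR_ARE_S != 0
}

fn main() {
    let ctlr = group_enable_bits() | GICD_CTLR_ARE_S;
    println!("enable bits: {:#b}, ARE: {}", group_enable_bits(), affinity_routing_enabled(ctlr));
}
```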
Once the Distributor is configured, attention shifts to the Redistributor, which must be initialized for each CPU. By default, each Redistributor starts in a low-power state to conserve energy. To make it functional, the kernel must first wake it up by clearing the ProcessorSleep bit in the GICR_WAKER register.
But waking the Redistributor isn’t immediate. The hardware needs time to power up internal logic and bring associated components online. The kernel waits in a loop, polling the ChildrenAsleep bit until it clears — a hardware guarantee that the Redistributor is fully awake and ready to process interrupts. This step is essential. If the Redistributor is left in its default sleep state, any interrupts targeted at that core will be silently dropped, regardless of how well the rest of the system is configured.
#[unsafe(no_mangle)]
pub unsafe fn init_gic_redistributor(rd_base: usize) {
    // Clear ProcessorSleep to wake the Redistributor.
    let mut waker_val = utilities::read_reg(rd_base as *mut u32, GICR_WAKER);
    waker_val &= !GICR_WAKER_PSLEEP;
    utilities::write_reg(rd_base as *mut u32, GICR_WAKER, waker_val, 0);
    asm!("dsb sy", options(nostack, nomem));
    // Poll until ChildrenAsleep clears, confirming the wake-up completed.
    loop {
        let status = utilities::read_reg(rd_base as *mut u32, GICR_WAKER);
        if status & GICR_WAKER_CASLEEP == 0 {
            break;
        }
    }
}
With the core’s Redistributor active, the kernel can now begin configuring individual interrupts. This involves three key steps for each interrupt:
- Setting the Priority: Each interrupt is assigned an 8-bit priority value. Lower values represent higher priorities. The kernel writes this value to the GICR_IPRIORITYR register, indexed by interrupt ID.
#[unsafe(no_mangle)]
pub unsafe fn set_int_priority(rd_base: usize, id: u32, prio: u8) {
    // Each GICR_IPRIORITYR register packs four 8-bit priority fields.
    let reg_index = id / 4;
    let reg_offset = reg_index * 4;
    let byte_index = id % 4;
    let shift = byte_index * 8;
    let mut reg_val = utilities::read_reg((rd_base + GICR_SGI_BASE) as *mut u32, GICR_IPRIORITYR + reg_offset as usize);
    reg_val = (reg_val & !(0xFF << shift)) | ((prio as u32) << shift);
    utilities::write_reg((rd_base + GICR_SGI_BASE) as *mut u32, GICR_IPRIORITYR + reg_offset as usize, reg_val, 0);
    asm!("dsb sy", options(nostack, nomem));
}
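To sanity-check the indexing math, here is a standalone version of the same calculation (pure arithmetic, no hardware access; the helper name is hypothetical) applied to IRQ 30, the timer PPI we use later in this post:

```rust
/// Mirror of the index math in set_int_priority: four 8-bit priority
/// fields are packed into each 32-bit GICR_IPRIORITYR register.
fn priority_slot(id: u32) -> (u32, u32) {
    let reg_offset = (id / 4) * 4; // byte offset of the register
    let shift = (id % 4) * 8;      // bit shift of the field within it
    (reg_offset, shift)
}

fn main() {
    // IRQ 30 lands in IPRIORITYR7 (byte offset 28), third byte (shift 16).
    let (offset, shift) = priority_slot(30);
    println!("offset = {}, shift = {}", offset, shift);
}
```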
- Assigning the Interrupt Group: We assign the interrupt to Group 1, marking it as non-secure and EL1-handled, by setting the corresponding bit in GICR_IGROUPR0.
#[unsafe(no_mangle)]
pub unsafe fn set_int_grp(rd_base: usize, id: u32) {
    let mut reg_val = utilities::read_reg((rd_base + GICR_SGI_BASE) as *mut u32, GICR_IGROUPR0);
    reg_val |= 1 << id;
    utilities::write_reg((rd_base + GICR_SGI_BASE) as *mut u32, GICR_IGROUPR0, reg_val, 0);
    asm!("dsb sy", options(nostack, nomem));
}
- Enabling the Interrupt: Finally, we enable the interrupt by setting the appropriate bit in GICR_ISENABLER0.
#[unsafe(no_mangle)]
pub unsafe fn enable_int(rd_base: usize, id: u32) {
    let bit = 1 << id;
    utilities::write_reg((rd_base + GICR_SGI_BASE) as *mut u32, GICR_ISENABLER0, bit, !bit);
    asm!("dsb sy", options(nostack, nomem));
}
At this point, the hardware is ready to signal an interrupt to the CPU — but the CPU still won’t respond unless the CPU interface has been enabled.
The CPU interface is configured through a series of system registers (ICC_*), which are only accessible once we explicitly allow their use:
mrs x0, ICC_SRE_EL1
orr x0, x0, #1
msr ICC_SRE_EL1, x0
This enables the System Register Interface, allowing EL1 code to manage interrupts directly. From there, the kernel performs the following steps:
- Set the priority mask: This sets the minimum priority level for interrupts that will be taken by the CPU. Any interrupt with a lower priority (numerically higher value) will be masked out.
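The mask comparison itself is simple enough to model in plain code. Per the GICv3 specification, an interrupt is only signaled to the CPU when its priority value is numerically lower than the PMR (lower value means higher priority). A minimal sketch, with a hypothetical helper name:

```rust
/// An interrupt is taken only if its priority beats the mask:
/// numerically smaller values are higher priority (GICv3 convention).
fn passes_priority_mask(int_prio: u8, pmr: u8) -> bool {
    int_prio < pmr
}

fn main() {
    let pmr = 0x80;
    println!("{}", passes_priority_mask(0x40, pmr)); // higher priority: taken
    println!("{}", passes_priority_mask(0xC0, pmr)); // lower priority: masked
}
```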
#[unsafe(no_mangle)]
pub unsafe fn set_priority_mask(priority: u8) {
    asm!("msr ICC_PMR_EL1, {}", in(reg) priority as u64, options(nostack, nomem));
}
- Enable the GRP1 interrupts: This enables Group 1 interrupts, which are non-secure and typically handled at EL1. It sets bit 0 in ICC_IGRPEN1_EL1.
#[unsafe(no_mangle)]
pub unsafe fn enable_grp1_ints() {
    asm!(
        "mrs {tmp}, ICC_IGRPEN1_EL1",
        "orr {tmp}, {tmp}, #1",
        "msr ICC_IGRPEN1_EL1, {tmp}",
        "isb sy",
        tmp = out(reg) _,
        options(nostack, nomem)
    );
}
At this point, the GIC is fully configured: interrupts are routed by the Distributor, delivered per-core via Redistributors, and received by the CPU through the system register interface.
Finally, the kernel unmasks IRQs at the processor level:
# Unmask IRQs only
msr DAIFClr, #0b0010
Implementing the IRQ Handler
The final piece of the puzzle is the interrupt handler itself. This is the code that runs in response to an IRQ, and it's critical for processing events like timer ticks, device input, or inter-core signals.
In AArch64, IRQs are delivered through the exception mechanism, and specifically routed to the IRQ vector entry in our exception table — located at offset 0x280 from VBAR_EL1. This means we must set up the IRQ vector to jump into a handler routine. To do so we can modify the EVT we set up in the last post, by replacing the corresponding entry with something like:
.balign 0x80
irq_trampoline:
b irq_handler
Once the CPU branches to irq_handler, the first task is to identify the source of the interrupt. GICv3 provides system registers for this purpose. In particular, ICC_IAR1_EL1 (Interrupt Acknowledge Register) tells us which interrupt is pending, returning an interrupt ID in x0. Depending on the system’s configuration, this ID could correspond to a UART, timer, or another peripheral. With this ID in hand, we can dispatch the event to the appropriate Rust handler.
After handling the interrupt, we must notify the GIC that we're done by writing the interrupt ID back to ICC_EOIR1_EL1. This step is mandatory — if skipped, the GIC will continue to consider the interrupt active and won’t deliver subsequent ones from the same source.
Putting it all together, a basic irq_handler might look like this:
irq_handler:
    alloc_stack 256
    saveregs
    mrs x0, ICC_IAR1_EL1
    bl do_irq
    msr ICC_EOIR1_EL1, x0
    restoreregs
    dealloc_stack 256
    eret
For the IRQ dispatcher, I decided to write a Rust function that performs a predefined action based on the IRQ ID. So far it only handles IRQ 30 (the EL1 non-secure physical timer).
#[unsafe(no_mangle)]
pub fn do_irq(id: u32) -> u32 {
    match id {
        30 => {
            uart::print(b"Timer interrupt!\n");
            // Re-arm the timer for another full period. A scratch register
            // operand avoids silently clobbering x0 behind the compiler's back.
            unsafe {
                asm!(
                    "mrs {tmp}, CNTFRQ_EL0",
                    "msr CNTP_TVAL_EL0, {tmp}",
                    "isb",
                    tmp = out(reg) _,
                    options(nostack, nomem),
                );
            }
        }
        _ => {
            let mut buf = [0u8; 10];
            let id_str = u32_to_str(id, &mut buf);
            uart::print(b"Unhandled IRQ: ");
            uart::print(id_str);
            uart::print(b"\n");
        }
    }
    id // Return the ID, to be written back to ICC_EOIR1_EL1 by the handler
}
Enabling the AArch64 EL1 Timer
For testing purposes, we'll begin by enabling the AArch64 EL1 NS physical timer. This timer is ideal for initial interrupt testing because it's always-on, per-core, and accessible directly from EL1. It allows us to generate periodic interrupts without relying on any external hardware.
In AArch64 systems, this timer is exposed via system registers like CNTP_TVAL_EL0 (Timer Value), CNTP_CTL_EL0 (Control), and CNTFRQ_EL0 (Frequency). When enabled, the timer will count down and raise a Private Peripheral Interrupt (PPI) — which we can then catch through our IRQ handler.
The process of enabling this timer is straightforward and involves interacting with three key system registers, assuming our Interrupt Controller (GIC) is already configured to receive and forward the corresponding PPI.
The setup can be broken down into these conceptual steps:
- Discover the Clock Frequency: Before we can set a timer for one second, we need to know how many "ticks" the system clock performs per second. The hardware provides this value in a read-only register, CNTFRQ_EL0. By reading this register, the kernel learns the system's fundamental frequency (e.g., 100 MHz), which is essential for calibrating any time-based events.
- Set the Countdown Duration: Next, we need to tell the timer when to fire the interrupt. This is done by writing a value to the CNTP_TVAL_EL0 register. This value isn't an absolute timestamp, but rather the number of ticks from the current moment until the interrupt should be triggered. For our first test, we can simply use the frequency we read in the previous step to set a one-second timer.
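The tick arithmetic is worth writing down explicitly. Converting a desired interval into a CNTP_TVAL_EL0 countdown is just frequency times seconds; the hypothetical helper below works in milliseconds to stay in integer math:

```rust
/// Convert a millisecond interval into timer ticks for CNTP_TVAL_EL0,
/// given the counter frequency read from CNTFRQ_EL0.
fn ticks_for_ms(cntfrq_hz: u64, ms: u64) -> u64 {
    cntfrq_hz * ms / 1000
}

fn main() {
    // QEMU's virt board typically reports a 62.5 MHz counter, so a
    // one-second timer is simply the frequency itself.
    let freq = 62_500_000;
    println!("{}", ticks_for_ms(freq, 1000));
}
```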
- Enable the Timer: With the countdown value set, the final step is to activate the timer. This is controlled by the CNTP_CTL_EL0 register, which contains two important bits:
- ENABLE: This is the master switch. Setting this bit to 1 starts the timer countdown.
- IMASK: This bit masks the interrupt signal. To allow the interrupt to be sent to the GIC, this bit must be 0.
By writing a 1 to the ENABLE bit (and ensuring the IMASK bit is 0), the timer begins counting down. Once the internal counter reaches zero, the hardware automatically triggers the interrupt.
A crucial detail is that the timer does not stop firing on its own: the interrupt is level-sensitive, so once the counter passes the programmed deadline, the interrupt line stays asserted for as long as the timer remains enabled and unmasked. To generate periodic interrupts (and to stop the current one from immediately retriggering), our interrupt handler is responsible for "re-arming" the timer by writing a new countdown value to CNTP_TVAL_EL0 each time it executes, which moves the deadline forward and deasserts the interrupt. This simple mechanism forms the foundation of the system scheduler tick.
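The control-register logic can also be captured in a few lines. Per the architecture, CNTP_CTL_EL0 has ENABLE at bit 0, IMASK at bit 1, and a read-only ISTATUS at bit 2; this sketch (a hypothetical helper, no hardware access) builds the value we would write to arm the timer:

```rust
const CNTP_CTL_ENABLE: u64 = 1 << 0; // start the countdown
const CNTP_CTL_IMASK: u64 = 1 << 1;  // 1 = block the interrupt signal

/// Build a CNTP_CTL_EL0 value from the desired enable/mask state.
fn timer_ctl(enable: bool, mask_irq: bool) -> u64 {
    (if enable { CNTP_CTL_ENABLE } else { 0 })
        | (if mask_irq { CNTP_CTL_IMASK } else { 0 })
}

fn main() {
    // Armed and unmasked: ENABLE = 1, IMASK = 0.
    println!("{:#b}", timer_ctl(true, false));
}
```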
Below is a short recording of the output in QEMU demonstrating that the timer is working correctly.

Next steps
At this point, we've successfully brought our interrupt system to life. With a complete exception vector table, a properly configured GICv3, and a working handler for our first hardware interrupt from the AArch64 generic timer, our kernel can now react to hardware events asynchronously. This is a fundamental milestone for any modern operating system, moving us beyond simple, linear execution.
In the next post, we will transform our UART driver from this busy-wait model to a clean, efficient, interrupt-driven model. Instead of our CPU waiting on the UART, the UART will notify our CPU when it needs attention. We will configure the UART to fire an interrupt whenever a specific event occurs.
If you'd like to follow the development or explore the full source code, the project is available on GitHub. Join us next time as we dive into real-world device interrupt handling!