XDP: Kernel-Level Packet Magic in Rust
Introduction
Traditional packet handling in Linux is like your Nan climbing a ladder - slow and full of steps.
By the time a packet hits userspace, the window to stop it has passed.
eXpress Data Path (XDP) fixes that. It runs inside the NIC driver - before sockets, Netfilter, or even skb allocation - letting you drop, redirect, or mutate packets the instant they arrive.
It's like spotting who's heading to the pub before they've even found their shoes.
The Core Idea
XDP extends the kernel with programmable packet logic using eBPF bytecode.
Instead of pushing packets up the stack, you decide what happens at the driver layer.
Execution flow:
- Packet lands on NIC
- XDP hook fires
- eBPF program runs in-kernel
- Returns one of:
  - XDP_PASS → continue up the stack normally
  - XDP_DROP → discard immediately
  - XDP_TX → bounce back out the same NIC
  - XDP_REDIRECT → send elsewhere (another interface or an AF_XDP socket)
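To make the verdicts concrete, here is a userspace sketch in plain Rust - no kernel involved, and the Packet struct, port number, and policy are all invented for illustration. In a real program these variants map to the xdp_action return codes above:

```rust
// Toy model of the four XDP verdicts.
#[derive(Debug, PartialEq)]
#[allow(dead_code)]
enum Verdict {
    Pass,     // hand the packet on to the normal network stack
    Drop,     // discard immediately; no skb is ever allocated
    Tx,       // bounce back out the same NIC
    Redirect, // steer to another interface or an AF_XDP socket
}

// Illustrative packet summary; field names are made up for this sketch.
struct Packet {
    dst_port: u16,
    is_icmp_echo: bool,
}

fn verdict(p: &Packet) -> Verdict {
    if p.is_icmp_echo {
        return Verdict::Tx; // answer pings without touching the stack
    }
    if p.dst_port == 23 {
        return Verdict::Drop; // dodgy port: gone before any handshake
    }
    Verdict::Pass
}

fn main() {
    assert_eq!(verdict(&Packet { dst_port: 23, is_icmp_echo: false }), Verdict::Drop);
    assert_eq!(verdict(&Packet { dst_port: 443, is_icmp_echo: false }), Verdict::Pass);
    assert_eq!(verdict(&Packet { dst_port: 443, is_icmp_echo: true }), Verdict::Tx);
}
```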
Why It Matters
Performance:
XDP can process tens of millions of packets per second per core - enough to handle a 10 GbE link at line rate (about 14.8 Mpps with minimum-size frames) with headroom to spare.
Programmability:
Load/unload eBPF filters at runtime. No reboot, no downtime.
Security:
Block floods or dodgy ports before a TCP handshake even starts.
Observability:
Stream metadata straight from kernel space via ring buffers - zero pcap overhead.
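The security point is worth making concrete: the in-kernel side only has to count and compare. Here is a userspace sketch of per-source SYN-threshold logic - the limit of 100 is an invented number, and in real XDP the counts would live in an eBPF map rather than a std HashMap:

```rust
use std::collections::HashMap;
use std::net::Ipv4Addr;

const SYN_LIMIT: u64 = 100; // illustrative threshold, not a recommendation

// Record one SYN from src_ip; report whether this source is now over the limit
// and should be dropped.
fn record_syn(counts: &mut HashMap<u32, u64>, src_ip: u32) -> bool {
    let n = counts.entry(src_ip).or_insert(0);
    *n += 1;
    *n > SYN_LIMIT
}

fn main() {
    let mut counts = HashMap::new();
    let attacker = u32::from(Ipv4Addr::new(203, 0, 113, 7));
    for _ in 0..SYN_LIMIT {
        assert!(!record_syn(&mut counts, attacker)); // under the limit: pass
    }
    assert!(record_syn(&mut counts, attacker)); // one SYN past the limit: drop
}
```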
XDP in the Wild
Meta (Facebook) uses XDP in Katran, its load-balancer handling millions of connections per second.
Cloudflare runs XDP at the edge for real-time DDoS mitigation, dropping bad traffic in-kernel before it touches their proxies.
Both prove that XDP scales well beyond the lab - it's battle-tested, production-ready infrastructure.
Writing an XDP Program in Rust
With Aya, you can write eBPF/XDP entirely in Rust - no C toolchains or kernel headers.
// src/xdp.rs
#![no_std]
#![no_main]

use aya_bpf::{
    bindings::xdp_action,
    macros::{map, xdp},
    maps::HashMap,
    programs::XdpContext,
};
use core::mem;
// Header layouts come from the network-types crate; no C headers needed.
use network_types::{
    eth::{EthHdr, EtherType},
    ip::{IpProto, Ipv4Hdr},
    tcp::TcpHdr,
};

#[map(name = "SYN_COUNTER")]
static mut SYN_COUNTER: HashMap<u32, u64> =
    HashMap::<u32, u64>::with_max_entries(1024, 0);

// Bounds-checked access: the verifier rejects any read past data_end,
// so every header pointer goes through this check.
#[inline(always)]
fn ptr_at<T>(ctx: &XdpContext, offset: usize) -> Option<*const T> {
    let start = ctx.data();
    let end = ctx.data_end();
    if start + offset + mem::size_of::<T>() > end {
        return None;
    }
    Some((start + offset) as *const T)
}

#[xdp(name = "count_syns")]
pub fn count_syns(ctx: XdpContext) -> u32 {
    try_count_syns(&ctx).unwrap_or(xdp_action::XDP_PASS)
}

fn try_count_syns(ctx: &XdpContext) -> Option<u32> {
    let eth: *const EthHdr = ptr_at(ctx, 0)?;
    if unsafe { (*eth).ether_type } != EtherType::Ipv4 {
        return Some(xdp_action::XDP_PASS);
    }
    let ip: *const Ipv4Hdr = ptr_at(ctx, EthHdr::LEN)?;
    if unsafe { (*ip).proto } != IpProto::Tcp {
        return Some(xdp_action::XDP_PASS);
    }
    let tcp: *const TcpHdr = ptr_at(ctx, EthHdr::LEN + Ipv4Hdr::LEN)?;
    if unsafe { (*tcp).syn() } != 0 {
        // Count SYNs per source address.
        let src = u32::from_be(unsafe { (*ip).src_addr });
        unsafe {
            let n = SYN_COUNTER.get(&src).copied().unwrap_or(0);
            let _ = SYN_COUNTER.insert(&src, &(n + 1), 0);
        }
    }
    Some(xdp_action::XDP_PASS)
}

#[panic_handler]
fn panic(_: &core::panic::PanicInfo) -> ! {
    unsafe { core::hint::unreachable_unchecked() }
}
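The same header walk can be unit-tested in plain userspace Rust before you ever fight the verifier. This mirror works from raw byte offsets - Ethernet is 14 bytes, IPv4 is 20 bytes assuming no options (IHL == 5), and the SYN flag is bit 0x02 of the TCP flags byte:

```rust
const ETH_LEN: usize = 14;
const IPV4_LEN: usize = 20; // assumes IHL == 5 (no IP options)

// Same checks as the kernel program, on an ordinary byte slice.
fn is_tcp_syn(frame: &[u8]) -> bool {
    // Too short to hold eth + ipv4 + a minimal tcp header.
    if frame.len() < ETH_LEN + IPV4_LEN + 20 {
        return false;
    }
    // EtherType at bytes 12..14 must be IPv4 (0x0800).
    if frame[12] != 0x08 || frame[13] != 0x00 {
        return false;
    }
    // IPv4 protocol field (byte 9 of the IP header): 6 = TCP.
    if frame[ETH_LEN + 9] != 6 {
        return false;
    }
    // TCP flags byte (byte 13 of the TCP header): 0x02 = SYN.
    frame[ETH_LEN + IPV4_LEN + 13] & 0x02 != 0
}

fn main() {
    let mut frame = vec![0u8; 54];
    frame[12] = 0x08;
    frame[13] = 0x00; // IPv4
    frame[23] = 6;    // TCP
    frame[47] = 0x02; // SYN set
    assert!(is_tcp_syn(&frame));
    frame[47] = 0x10; // ACK only
    assert!(!is_tcp_syn(&frame));
}
```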
And the userspace loader:
// src/main.rs
use aya::{
    programs::{Xdp, XdpFlags},
    Bpf,
};
use std::{env, process, thread, time::Duration};

fn main() -> Result<(), anyhow::Error> {
    let iface = env::args().nth(1).unwrap_or_else(|| {
        eprintln!("Usage: cargo run -- <iface>");
        process::exit(1);
    });
    // Load the compiled eBPF object and attach it to the interface.
    let mut bpf = Bpf::load_file("target/bpfel-unknown-none/release/xdp-example")?;
    let prog: &mut Xdp = bpf.program_mut("count_syns").unwrap().try_into()?;
    prog.load()?;
    prog.attach(&iface, XdpFlags::default())?;
    println!("XDP program attached to {iface}");
    // Keep the process alive; the program detaches when we exit.
    loop {
        thread::sleep(Duration::from_secs(60));
    }
}
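However you key SYN_COUNTER, map entries come back to userspace as raw u32s. When a key is an IPv4 address in host byte order, std can render it directly - a small helper like this (the function name is ours) keeps the output readable:

```rust
use std::net::Ipv4Addr;

// Render a host-byte-order u32 map key as a dotted-quad address.
fn format_key(ip: u32) -> String {
    Ipv4Addr::from(ip).to_string()
}

fn main() {
    assert_eq!(format_key(0x0A000001), "10.0.0.1");
    assert_eq!(format_key(u32::from(Ipv4Addr::new(203, 0, 113, 7))), "203.0.113.7");
}
```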
Build and run:
cargo xtask build-ebpf
cargo run -- eth0
That's it - a complete XDP filter written in Rust, running in-kernel on every packet the NIC receives.
Real-World Uses
| Use case | What happens |
|---|---|
| DDoS filtering | Drop SYN floods before kernel allocates sockets |
| Inline firewall | Enforce ACLs at driver level |
| Telemetry | Export headers only, zero packet copies |
| Custom routing | Redirect traffic to AF_XDP sockets |
Challenges
- Not all NIC drivers support native XDP yet; generic (skb) mode works everywhere but loses much of the speed advantage
- The eBPF verifier enforces strict safety bounds - no unbounded loops, every memory access checked
- Debugging in-kernel code isn't fun - use bpf_printk and bpftool prog tracelog
- You do need to know what you're doing down there
Conclusion
XDP turns the Linux kernel into a programmable network processor.
For performance, telemetry, or on-the-fly defense, it's the sharpest tool you can embed in a modern stack.
Your Nan would approve - if she knew what nanoseconds were.
