
How Solana Shrugged Off a 6 Tbps DDoS

NullRabbit Research · 6 min read

And Why That's Not the End of the Story...

Over the last week, the Solana network was reportedly hit by what can only be described as an industrial-scale DDoS attack.

Not a meme transaction flood. Not a buggy smart contract. Not a consensus edge case.

An attack running at 6 terabits per second is expensive to mount. Solana co-founder Anatoly Yakovenko noted that the attackers were "spending as much revenue as the chain makes" just to sustain the traffic. Rough calculations put the cost of running it at around $50m.

By most reports, the attack ran from roughly December 9 to 16, peaked at around 6 terabits per second, and pushed billions of packets per second toward validator infrastructure. That puts it in the same class as some of the largest volumetric DDoS events discussed publicly in recent years.

And yet:

No outage. No halt. No visible degradation.

Solana, you survived it! You are my hero.

That matters. But why it stayed up -- and what was still exposed -- matters more.

1. What actually happened

This was a volumetric L3/L4 flood.

The goal wasn't to exploit application logic or confuse consensus. It was simpler: overwhelm links, saturate ingress, and force validator hosts to burn CPU cycles just acknowledging garbage.

  • High packet rates.
  • Mostly unauthenticated traffic.
  • Designed to hurt infrastructure, not protocol semantics.

This is what we've been banging on about for months now. Solana survived because its modern stack is materially better than it used to be. QUIC helped. Stake-weighted QoS helped. Professional validators with serious bandwidth helped.

But survival is not the same thing as control. The devs were sweating behind the scenes. Someone probably cried a little. Seven days in hell for the teams.

2. What Solana did right

This is worth stating clearly. Solana's resilience here is not an accident.

Several architectural decisions paid off:

  • Stake-weighted Quality of Service ensured validator-to-validator traffic remained prioritised even under extreme load.
  • QUIC adoption reduced per-packet CPU cost and enabled faster rejection of low-quality connections.
  • Validator professionalisation (better hardware, higher bandwidth, upstream mitigation) raised the baseline across the network.

The result: consensus held, slots kept advancing, and users didn't notice. That is real progress.

But it also highlights where responsibility ends.

Consensus survived. Hosts still had to endure the storm.

3. Why "we stayed up" still hides real risk

A 6 Tbps attack doesn't need to knock a chain offline to cause damage.

Even when consensus holds:

  • NICs still DMA packets
  • Interrupts still fire
  • Kernel memory gets touched
  • Conntrack tables can fill (a quick check is sketched after this list)
  • Non-consensus services remain exposed
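The conntrack point is the easiest of these to check on a live host. Below is a minimal sketch of our own (not Guard tooling) that reads the standard nf_conntrack sysctls and reports how full the table is; if that ratio climbs towards 100% during an event, the kernel starts dropping new flows long before anything looks "down".

```c
/* conntrack_headroom.c -- tiny check of conntrack table headroom.
 * Reads the standard nf_conntrack sysctls; if the module is not
 * loaded, the files simply will not exist. */
#include <stdio.h>

static long read_long(const char *path)
{
    FILE *f = fopen(path, "r");
    long v = -1;

    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    long count = read_long("/proc/sys/net/netfilter/nf_conntrack_count");
    long max   = read_long("/proc/sys/net/netfilter/nf_conntrack_max");

    if (count < 0 || max <= 0) {
        puts("nf_conntrack not loaded (or not readable)");
        return 1;
    }
    printf("conntrack: %ld / %ld entries (%.1f%% full)\n",
           count, max, 100.0 * count / max);
    return 0;
}
```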

And here's the uncomfortable part:

A significant portion of validator hosts still expose far more than consensus ports -- SSH, metrics, admin RPCs, exporters. Stake-weighted QoS does nothing for these.
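It is worth making that concrete on your own machines. The sketch below is a hypothetical audit helper (not Guard tooling): it reads /proc/net/tcp and /proc/net/tcp6 and prints every TCP port in LISTEN state. On a typical validator host, that output is exactly the SSH / metrics / admin-RPC surface this section is about.

```c
/* listen_audit.c -- print every TCP port in LISTEN state by parsing
 * /proc/net/tcp and /proc/net/tcp6.
 * Build with: cc -O2 -o listen_audit listen_audit.c */
#include <stdio.h>

static void dump_listeners(const char *path)
{
    FILE *f = fopen(path, "r");
    char line[512];

    if (!f)
        return;
    if (!fgets(line, sizeof line, f)) {   /* skip the header row */
        fclose(f);
        return;
    }
    while (fgets(line, sizeof line, f)) {
        unsigned int port, state;
        /* local_address is "HEXIP:HEXPORT"; socket state 0x0A is LISTEN */
        if (sscanf(line, "%*d: %*[0-9A-Fa-f]:%x %*[0-9A-Fa-f]:%*x %x",
                   &port, &state) == 2 && state == 0x0A)
            printf("%-18s listening on port %u\n", path, port);
    }
    fclose(f);
}

int main(void)
{
    dump_listeners("/proc/net/tcp");
    dump_listeners("/proc/net/tcp6");
    return 0;
}
```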

This is where "we were fine" quietly turns into:

  • wasted resources
  • increased blast radius
  • latent failure modes

4. How Guard would have treated this like water off a duck's back

Guard exists to make attacks like this boring.

  • Not heroic.
  • Not dramatic.
  • Just uneventful.

Guard runs at XDP / AF_XDP, at the NIC ingress. That means packets are dropped before they reach the kernel, before sockets exist, before QUIC sees a byte.

In a 6 Tbps scenario, Guard does not attempt to understand the attack.

It simply refuses to acknowledge traffic that does not belong.

  • No backpressure.
  • No slow degradation.
  • No "we survived but..."

Just silence. We shall not be sobbing into our computers, engineering team!
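For readers who want to see what "refusing to acknowledge" looks like at this layer, here is a deliberately minimal XDP sketch -- not Guard's code, and with made-up port numbers -- that default-drops everything except UDP traffic to an allowlisted pair of ports. It compiles with clang's BPF target and attaches with standard libbpf or iproute2 tooling.

```c
/* port_allowlist.bpf.c -- minimal default-drop sketch (illustrative only).
 * Only UDP packets to allowlisted destination ports ever reach the kernel. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Illustrative ports; a real deployment would load these from a BPF map. */
#define ALLOWED_PORT_A 8001
#define ALLOWED_PORT_B 8003

SEC("xdp")
int port_allowlist(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_DROP;                 /* runt frame */
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_DROP;                 /* default-deny; a real rule set would also admit ARP, etc. */

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end)
        return XDP_DROP;
    if (iph->ihl != 5)
        return XDP_DROP;                 /* structural invariant: no IP options */
    if (iph->protocol != IPPROTO_UDP)
        return XDP_DROP;                 /* this sketch only admits UDP */

    struct udphdr *udp = (void *)(iph + 1);
    if ((void *)(udp + 1) > data_end)
        return XDP_DROP;

    __u16 dport = bpf_ntohs(udp->dest);
    if (dport == ALLOWED_PORT_A || dport == ALLOWED_PORT_B)
        return XDP_PASS;                 /* traffic that belongs */

    return XDP_DROP;                     /* everything else never touches a socket */
}

char LICENSE[] SEC("license") = "GPL";
```

The point is structural rather than clever: the verdict is returned in the driver's receive path, before an sk_buff is allocated, so hostile packets never buy any kernel time at all.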

5. The attack, technically -- and why it still hurts even when you "win"

Volumetric floods exploit a basic truth: every packet costs something.

Even "cheap" packets cost:

  • memory touches
  • interrupts
  • cache pollution
  • scheduler pressure

Traditional mitigations often kick in after that cost is paid.

Guard flips the model.

Packets are filtered at line rate, at the NIC, using deterministic rules that cost nanoseconds -- not microseconds.
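A quick back-of-the-envelope shows why the budget really is nanoseconds. The packet sizes below are assumptions for illustration, not measurements from this attack:

```c
/* pps_budget.c -- rough numbers behind "nanoseconds, not microseconds". */
#include <stdio.h>

int main(void)
{
    /* Minimum-size Ethernet frame on the wire: 64B frame + 20B preamble/IFG. */
    const double min_wire_bits = 84.0 * 8.0;

    /* One 10 Gbps port at line rate with minimum-size frames. */
    double pps_10g    = 10e9 / min_wire_bits;   /* ~14.88 million packets/s   */
    double ns_per_pkt = 1e9 / pps_10g;          /* ~67 ns per packet per core */

    /* The reported flood, assuming ~500-byte packets on the wire. */
    double pps_attack = 6e12 / (500.0 * 8.0);   /* ~1.5 billion packets/s     */

    printf("10GbE line rate : %.2f Mpps -> %.0f ns per packet per core\n",
           pps_10g / 1e6, ns_per_pkt);
    printf("6 Tbps @ ~500B  : %.2f Gpps in aggregate\n", pps_attack / 1e9);
    return 0;
}
```

Roughly 67 ns per packet on a single 10GbE core leaves no room for a syscall, a context switch, or a trip through conntrack -- which is exactly why the decision has to be made at the driver.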

6. Guard, technically -- XDP without the fluff

Guard is a kernel-native inline defence layer.

  • Runs at XDP / AF_XDP
  • Stateless, deterministic filtering
  • No kernel networking stack
  • No userspace context switches
  • No per-packet allocation

It enforces:

  • strict port allowlists
  • protocol sanity checks
  • rate ceilings
  • structural invariants

This is not detection. It is enforcement.
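As a flavour of what a rate ceiling can look like at this layer, here is a hypothetical per-source sketch. It is not Guard's implementation -- the window, the limit, and the map size are invented for illustration -- but it shows the shape: a map lookup, a comparison, and a verdict, all before the kernel stack sees the packet.

```c
/* rate_ceiling.bpf.c -- per-source packet-rate ceiling sketch (illustrative).
 * Counts IPv4 packets per source address in one-second windows. */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define WINDOW_NS 1000000000ULL   /* one-second accounting window            */
#define MAX_PPS   20000           /* per-source ceiling, invented for sketch */

struct window {
    __u64 start_ns;   /* when the current window opened                  */
    __u64 packets;    /* packets seen from this source within the window */
};

struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);
    __uint(max_entries, 1 << 16);
    __type(key, __u32);            /* IPv4 source address */
    __type(value, struct window);
} per_src SEC(".maps");

SEC("xdp")
int rate_ceiling(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_DROP;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;                     /* non-IPv4 is out of scope here */

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end)
        return XDP_DROP;

    __u32 src = iph->saddr;
    __u64 now = bpf_ktime_get_ns();

    struct window *w = bpf_map_lookup_elem(&per_src, &src);
    if (!w) {
        struct window fresh = { .start_ns = now, .packets = 1 };

        bpf_map_update_elem(&per_src, &src, &fresh, BPF_ANY);
        return XDP_PASS;
    }
    if (now - w->start_ns > WINDOW_NS) {     /* open a new window */
        w->start_ns = now;
        w->packets  = 0;
    }
    if (++w->packets > MAX_PPS)
        return XDP_DROP;                     /* over the ceiling: drop at the NIC */

    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```

The non-atomic counters and fixed window keep the sketch readable; a production rule would be stricter about both, but the cost profile is the same -- nanoseconds of work per packet and no per-packet allocation.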

7. Where Guard sits in the stack (and why that matters)

Ordering is everything.

[Diagram: where Guard sits in the network stack -- filtering malicious traffic before it reaches the kernel]

Guard does not compete with QUIC or stake-weighted QoS. It ensures those layers only ever see traffic worth caring about.

Consensus should never be your first line of defence.

8. Why this is not Akamai, Cloudflare, or "just use a WAF"

Traditional DDoS products assume:

  • centralized routing
  • proxyable traffic
  • HTTP dominance
  • cloud-native control planes

Validators live in a different world:

  • bare metal
  • mixed providers
  • non-HTTP protocols
  • ports you cannot front with a CDN

Guard is designed for that reality.

It does not sit in front of your infrastructure. It is part of it.

9. Validator checklist (if you're running a node today)

If you operate validator infrastructure, ask yourself:

  • Do I know exactly which ports are exposed?
  • Are non-consensus services rate-limited or isolated?
  • What happens if upstream mitigation fails?
  • Can I drop junk traffic before the kernel sees it?
  • Do I have deterministic guarantees, or just hope?

If the answers are fuzzy, the risk is real.

10. What's next

We're preparing a video demo showing:

  • a live volumetric flood
  • packets dropped at XDP
  • zero kernel impact
  • real telemetry, not marketing graphs

Same class of attack. Different outcome.

We'll share it shortly.

Fin

If you operate Solana validator or infra-adjacent systems and want to quietly sanity-check assumptions around volumetric defence, kernel-level filtering, or host-side blast-radius reduction, we're open to low-key technical conversations.
