I built Spliff, a high-performance L7 sniffing and correlation engine in pure C23. The goal is a fully working, Linux-native EDR that isn't a resource-hogging black box.
The core innovation – "Golden Thread" correlation:
Most eBPF sniffers capture SSL data OR packets. Spliff correlates both:
  XDP (NIC)  →  sock_ops (socket cookies)  →  Uprobes (SSL buffers)
      ↓                    ↓                            ↓
   packets            TCP 5-tuple                decrypted data
      ↘                    ↓                           ↙
                  unified per-flow view
This maps raw decrypted TLS data back to the exact TCP flow, PID, and process, something commercial EDRs struggle with.
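The glue is a BPF map keyed by socket cookie that all three program types can reach. Here's a minimal sketch of the sock_ops side, with hypothetical names loosely based on the map names in the debug output further down (not the actual spliff source):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_endian.h>

  struct flow_key {
      __u32 saddr, daddr;
      __u16 sport, dport;
  };

  /* Shared with the XDP program and readable from userspace:
   * socket cookie -> TCP 4-tuple. */
  struct {
      __uint(type, BPF_MAP_TYPE_LRU_HASH);
      __uint(max_entries, 65536);
      __type(key, __u64);
      __type(value, struct flow_key);
  } flow_cookie_map SEC(".maps");

  SEC("sockops")
  int cache_cookie(struct bpf_sock_ops *skops)
  {
      /* On connection establishment, record cookie -> 4-tuple so that
       * SSL-buffer events (keyed by cookie) can be joined with packets
       * (keyed by 4-tuple in XDP). */
      if (skops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB ||
          skops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB) {
          __u64 cookie = bpf_get_socket_cookie(skops);
          struct flow_key k = {
              .saddr = skops->local_ip4,
              .daddr = skops->remote_ip4,
              .sport = (__u16)skops->local_port,             /* host order */
              .dport = (__u16)bpf_ntohl(skops->remote_port), /* network order in ctx */
          };
          bpf_map_update_elem(&flow_cookie_map, &cookie, &k, BPF_ANY);
      }
      return 1;
  }

  char LICENSE[] SEC("license") = "GPL";

Technical highlights: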
• XDP + sock_ops + Uprobes – Three BPF program types working together via shared maps
• Lock-free threading – Dispatcher/Worker model with Concurrency Kit SPSC queues
• Full HTTP/2 – HPACK decompression, stream multiplexing, request-response correlation
• No MITM – Hooks OpenSSL, GnuTLS, NSS, WolfSSL, BoringSSL directly via uprobes (see the attach sketch after this list)
• Static binary fingerprinting – Build ID matching for stripped binaries (Chrome)
• BPF-level filtering – AF_UNIX IPC filtered in kernel, not userspace
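On the uprobe side, the attach pattern with libbpf looks roughly like this. A minimal sketch, assuming a hypothetical BPF program named probe_ssl_write and a hardcoded libssl path; the real loader resolves library paths (and Build IDs for stripped binaries) dynamically:

  #include <errno.h>
  #include <stdio.h>
  #include <bpf/libbpf.h>

  /* Attach a uprobe to OpenSSL's SSL_write so plaintext buffers are
   * visible before encryption. Names and paths are illustrative. */
  int attach_ssl_hook(struct bpf_object *obj)
  {
      struct bpf_program *prog =
          bpf_object__find_program_by_name(obj, "probe_ssl_write");
      if (!prog)
          return -1;

      LIBBPF_OPTS(bpf_uprobe_opts, opts, .func_name = "SSL_write");

      /* pid = -1: all processes; offset 0 because func_name resolves it */
      struct bpf_link *link = bpf_program__attach_uprobe_opts(
          prog, -1, "/usr/lib64/libssl.so.3", 0, &opts);
      if (!link) {
          fprintf(stderr, "uprobe attach failed: %d\n", -errno);
          return -1;
      }
      return 0;
  }

With pid = -1 the probe fires for every process that maps the library, which is why no proxy or CA injection is needed to see plaintext.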
Current status: Working L7 visibility engine. Captures and correlates HTTPS traffic in real time.
What's next: Process behavior tracking, file/network anomaly detection, event streaming (NATS/Kafka), threat intel integration.
Linux-only – Requires kernel 5.x+ with BTF, XDP, libbpf.
---
The project is GPL-3.0 and we're inviting anyone interested to contribute—whether it's code, architecture feedback, security research, or ideas for EDR features that actually matter (not compliance theater).
GitHub: https://github.com/NoFear0411/spliff
Note: The codebase was written with Claude Opus. I provide the research and architecture decisions, and I review every line.
> "eBPF/XDP hardware offload to SmartNICs",
> So eBPF for a WAF isn't worth it?
The code has the infrastructure for XDP hardware offload:
- XDP_MODE_OFFLOAD enum exists in bpf_loader.h:61
- XDP_FLAGS_HW_MODE flag mapping in bpf_loader.c:789
But it's not usable in practice because:
1. No CLI option – There's no way to enable offload mode; it defaults to native with SKB fallback (see the sketch after this list)
2. BPF program isn't offload-compatible – The XDP program uses:
- Complex BPF maps (LRU hash, ring buffers)
- Helper functions not supported by most SmartNIC JITs
- The flow_cookie_map shared with sock_ops (can't be offloaded)
3. SmartNIC limitations – Hardware offload typically only supports simple packet filtering/forwarding, not the stateful flow tracking spliff does
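For context, the native-with-SKB-fallback behavior you can see in the debug log below follows roughly this libbpf pattern. A minimal sketch with hypothetical names of what the fallback chain plus an offload attempt could look like; spliff's actual loader currently starts at native mode:

  #include <bpf/libbpf.h>
  #include <linux/if_link.h>

  /* Try XDP modes from fastest to most compatible. HW offload only
   * works on SmartNICs whose driver and JIT accept the program. */
  int attach_xdp_with_fallback(int ifindex, int prog_fd)
  {
      const __u32 modes[] = {
          XDP_FLAGS_HW_MODE,   /* SmartNIC offload              */
          XDP_FLAGS_DRV_MODE,  /* native, in the NIC driver     */
          XDP_FLAGS_SKB_MODE,  /* generic, after skb allocation */
      };

      for (unsigned i = 0; i < sizeof(modes) / sizeof(modes[0]); i++) {
          if (bpf_xdp_attach(ifindex, prog_fd, modes[i], NULL) == 0)
              return (int)modes[i]; /* report which mode stuck */
      }
      return -1;
  }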
What would be needed for SmartNIC support:
- Split XDP program into offloadable (simple classification) and non-offloadable (stateful) parts
- Use SmartNIC-specific toolchains (Memory-1, Netronome SDK, etc.)
- Me having a device with a SmartNIC and full driver support to play with – so far I've done all my testing on Fedora 43 on my own machine
For now this could be a future roadmap item, but the current "Golden Thread" correlation architecture fundamentally requires userspace + kernel cooperation that can't be fully offloaded.
Here is sample debug output from running spliff -d while it detects all your NICs:
  [DEBUG] Loaded BPF program from build-release/spliff.bpf.o
  [XDP] Found program: xdp_flow_tracker
  [XDP] Found required maps: flow_states, session_registry, xdp_events
  [XDP] Found optional map: cookie_to_ssl
  [XDP] Found map: flow_cookie_map (for cookie caching)
  [XDP] Found optional map: xdp_stats_map
  [XDP] Initialization complete
  [XDP] Discovered interface: enp0s20f0u2u4u2 (idx=2, mtu=1500, UP, physical)
  [XDP] Discovered interface: wlp0s20f3 (idx=4, mtu=1500, UP, physical)
  [XDP] Discovered interface: enp0s31f6 (idx=3, mtu=1500, UP, physical)
  libbpf: Kernel error message: Underlying driver does not support XDP in native mode
  [XDP] native mode failed on enp0s20f0u2u4u2, falling back to SKB mode
  [XDP] Attached to enp0s20f0u2u4u2 (idx=2) in skb mode
  libbpf: Kernel error message: Underlying driver does not support XDP in native mode
  [XDP] native mode failed on wlp0s20f3, falling back to SKB mode
  [XDP] Attached to wlp0s20f3 (idx=4) in skb mode
  libbpf: Kernel error message: Underlying driver does not support XDP in native mode
  [XDP] native mode failed on enp0s31f6, falling back to SKB mode
  [XDP] Attached to enp0s31f6 (idx=3) in skb mode
  [XDP] Attached to 3 of 3 discovered interfaces
  XDP attached to 3 interfaces
  [SOCKOPS] Using cgroup: /sys/fs/cgroup
  [SOCKOPS] Attached socket cookie caching program
  sock_ops attached for cookie caching
  [XDP] Warm-up: Seeded 5 existing TCP connections
  [DEBUG] Warmed up 5 existing connections
edit: formatting is hard on my phone
Same. I have a Pi Pico with PIO, though
> but the current "Golden Thread" correlation architecture fundamentally requires userspace + kernel cooperation that can't be fully offloaded.
Hard limit, I guess.
(If you indent all lines of a block of text with two spaces (including blank newlines), HN will format it as monospace text and preserve line breaks.)
Thanks for the format tip.
/? TLS accelerators open: https://www.google.com/search?q=TLS+accelerators+open :
- "AsyncGBP+: Bridging SSL/TLS and Heterogeneous Computing with GPU-Based Providers" https://ieeexplore.ieee.org/document/10713226 .. https://news.ycombinator.com/item?id=46664295
/? XDP hardware offload to GPU: https://www.google.com/search?q=XDP+hardware+offload+to+a+GP... :
- eunomia-bpf/XDP-on-GPU: https://github.com/eunomia-bpf/XDP-on-GPU
Perhaps combining AsyncGBP+ and XDP-on-GPU would solve this.
The AsyncGBP+ article mentions support for post-quantum (PQ) crypto on GPU.
But then there's the question of process isolation on GPUs. And they removed support for vGPU unlock.
After reading the Loophole Labs post [0] a few months ago, I was hoping someone would cook on this for security research.
[0] https://docs.cilium.io/en/stable/operations/performance/tuni...
[1] https://isovalent.com/blog/post/cilium-netkit-a-new-containe...
Cilium is definitely the gold standard if you’re working with Kubernetes clusters and need a full CNI, but if you want to extend CNI functionality without replacing it, then this approach is the only option.
It works quite well because Cilium (and all CNIs that I’m aware of) don’t use XDP the way the blog post mentions; they use netkit instead, which is an alternative to veth designed for netfilter-like use cases.
This means XDP can work alongside Cilium (with enough tweaking), which is what we wanted to be able to do.
If you’re using pure containers and no CNI, then of course this provides a significant speedup, even beyond netkit devices.