Others mentioned Tailscale; it's cool and all, but you don't always need it.
As far as security goes, that's not even the consideration I had in mind. Sure, WireGuard is secure, but that's not why you should put VXLAN inside it; you should do so because that's the purpose of WireGuard: to connect networks securely across security/trust boundaries. It doesn't even matter if the other protocol is also WireGuard, or SSH, or whatever. If it is an option, WireGuard is always the outermost protocol; if not, then IPsec, OpenVPN, SoftEther, etc., whatever your choice of secure overlay network protocol is, gets to be the tunnel protocol.
How deranged would it be to have every nfs client establish a wireguard tunnel and only have nfs traffic go through the tunnel?
Sounds good to me. I have my Wireguard tunnel set up so that only traffic intended for hosts that are in the Wireguard network itself is routed over the Wireguard tunnel.
I mostly use it to ssh into different machines. The Wireguard server runs on a VPS on the Internet, and I can connect to it from anywhere (except from networks that filter Wireguard traffic), and that way ssh into my machines at home while I am away from home. Whereas all other normal traffic to other places is unaffected by and unrelated to the tunnel. So for example if I bring my laptop to a coffee shop and I have Wireguard running and I browse the web with a web browser, all my web browsing traffic still gets sent the same normal way that it would even if I didn’t have the tunnel running.
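In wg-quick terms the client end of that looks roughly like this (the addresses, endpoint name, and keys are made up; the important bit is that AllowedIPs only lists the WireGuard subnet itself, so everything else bypasses the tunnel):

    [Interface]
    # This machine's address inside the WireGuard network
    Address = 10.66.0.2/24
    PrivateKey = <client private key>

    [Peer]
    # The VPS acting as the hub
    PublicKey = <server public key>
    Endpoint = vps.example.com:51820
    # Only the WireGuard subnet is routed into the tunnel;
    # web browsing etc. goes out the normal way
    AllowedIPs = 10.66.0.0/24
    PersistentKeepalive = 25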
I rarely use NFS or SMB, but if I wanted to connect either of those I would be able to do that as well over this Wireguard setup I have.
Nowadays I would recommend using NFS4+TLS or Gluster+TLS if you need filesystem semantics. Better still would be a proper S3-style or custom REST API that can handle the particulars of whatever strange problem led to this architecture.
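On recent Linux (roughly 6.4+ with kTLS and the tlshd handshake daemon running on both ends) NFS over TLS can be as simple as a mount option; the server name and export path here are placeholders, so treat this as a sketch rather than a recipe:

    # NFS v4.2 with RPC-over-TLS; use xprtsec=mtls to also authenticate the client
    mount -t nfs -o vers=4.2,xprtsec=tls server.example.com:/export /mnt/data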
Instead you can create multiple Wireguard interfaces and use policy routing / ECMP / BGP / all the layer 3 tricks, that way you can achieve similar things to what vxlan could give you but at layer 3.
There's a performance benefit to doing it this way too: in some testing I found that a single WireGuard interface can be a bottleneck (there's various offload and multi-core support in Linux, but it still has some overhead).
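A rough sketch of what that can look like (the interface names and the 10.99.0.0/24 remote prefix are made up): two WireGuard tunnels to the same site, with an ECMP route spreading flows across them, which also spreads the crypto work across cores; or policy routing if you want specific sources pinned to one tunnel:

    # wg0 and wg1 are two separate tunnels to the same remote site
    ip route add 10.99.0.0/24 \
        nexthop dev wg0 weight 1 \
        nexthop dev wg1 weight 1

    # Alternatively, steer one source network onto its own tunnel
    ip rule add from 192.168.50.0/24 table 100
    ip route add 10.99.0.0/24 dev wg1 table 100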
You'd be surprised to know that this is especially popular in cloud! It's just abstracted away (:
Also IME EVPN is mostly deployed/pushed when clueless app developers expect to have arbitrary L2 reachability across any two points in a (cross-DC!) fabric [1], or when they want IP addresses that can follow them around the DC, or other dumb shit that they just assumed they could do.
[1] "What do you mean I can't just use UDP broadcast as a pub sub in my application? It works in the office, fix your network!" and the like.
Though there are definitely use cases where it is needed, and it is way easier to implement earlier than later.
IPSec over VXLAN is what I recommend if you are doing 10G or above. There is a much higher performance ceiling with IPsec on hardware firewalls than with WireGuard, which is comparatively quite slow performance-wise. And since Tailscale has been mentioned: it is slower still by a wide margin.
edit: I'm noticing that a lot of the other replies in this thread are not from network engineers. Among network engineers WireGuard is not very popular due to performance & absence of vendor support. Among software engineers, it is very popular due to ease of use.
Isn't this mainly because Tailscale relies on userspace WG (wireguard-go)? I'd imagine the perf ceiling is much higher for kernel WG, which I believe is what Netbird uses.
The fastest I am aware of is VPP (open-source) & Intel QAT [1], which, while it achieves impressive numbers for large packets (70 Gbps at 512-byte packets / 200 Gbps at 1420-byte packets on a $20k+ MSRP server), is still not comparable with commercial IPsec offerings [2][3][4] that can achieve 800Gbps+ on a single gateway (and come with the added benefit of relying on a commercial product with support).
[1] https://builders.intel.com/docs/networkbuilders/intel-qat-ac...
[2] https://www.juniper.net/content/dam/www/assets/datasheets/us...
[3] https://www.paloaltonetworks.com/apps/pan/public/downloadRes...
[4] https://www.fortinet.com/content/dam/fortinet/assets/data-sh...
[1] https://www.arista.com/assets/data/pdf/Whitepapers/EVPN-Data...
One wonders what WG perf would look like if it could leverage the same hardware offload.
I use Tinc as a daily driver (for personal things) and have yet to settle on a modern equivalent, though I probably should. Does VXLAN help here?
These days I lean towards WireGuard simply because it's built into Linux, but Tinc would be my second choice.
I considered dropping my root WireGuard and setting up just VXLAN and flannel, but as I need NAT hole punching I kind of need the WireGuard root, so that is why I ended up with it.
Going WireGuard inside the VXLAN (flannel) would likely be overkill in my case, unless I wanted my traffic between nodes in different regions to be separated from other peers on the network; I'm not sure where that would be useful. It is an easy way of blocking out a peer, however, but that could just as well be solved on the "root" WireGuard node.
There might also be some MTU issues with nested WireGuard networks.
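Rough overhead arithmetic for the nesting being discussed, assuming an IPv4 underlay and a 1500-byte physical MTU (wg-quick defaults to 1420 rather than 1440 so the same tunnel also fits over IPv6):

    1500  physical Ethernet MTU
    - 60  outer WireGuard (20 IP + 8 UDP + 32 WG headers/tag)
    = 1440 usable on the outer wg interface
    - 50  VXLAN (20 IP + 8 UDP + 8 VXLAN + 14 inner Ethernet)
    = 1390 usable inside the VXLAN
    - 60  a second, nested WireGuard layer
    = 1330 left for actual payload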
But it's not necessarily a bad idea. It depends on the circumstances, even when traversing a public network.
I achieve load balancing by running native WireGuard on a VPS at Hetzner. I've got a native WireGuard mesh where the peers are manually set up (I believe Talos can do the same), or it could be done via Tailscale etc. I then tell k3s that it should use the WireGuard interface for VXLAN, and boom, my Kubernetes mesh is now connected.
flannel-iface: "wg0" # Talos might have something similar.
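For k3s that ends up in /etc/rancher/k3s/config.yaml (or the equivalent --flannel-iface flag); the addresses below are placeholders for whatever the node has on wg0 and on its public interface:

    # /etc/rancher/k3s/config.yaml
    flannel-iface: "wg0"               # flannel builds its vxlan on top of the WireGuard interface
    node-ip: "10.66.0.3"               # advertise the node's WireGuard address to the cluster
    node-external-ip: "203.0.113.10"   # optional: the node's real public address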
I do use some node labels and affinities to make sure the right pods end up in the right spot. For example the MetalLB announcer always has to come from the Hetzner node. As mentioned in my reply below, it takes about 20ms round trip back to my homelab, so my sites can take a bit of time to load, but it works pretty well otherwise, sort of similar to how Cloudflare Tunnels would work, except not as polished.
My setup is here if it is of help
https://git.kjuulh.io/kjuulh/clank-homelab-flux/src/branch/m...
I used to run my K8S homelab through ZT as well. Latency is extremely bad.
What I wanted is more like meshed L2TPv3, but L2TPv3 is extremely hard to set up nowadays.
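For reference, the static (unmanaged) variant with iproute2 looks roughly like this per pair of peers, mirrored on the other side, which is part of why meshing it by hand gets painful; the tunnel/session IDs and addresses are made up:

    ip l2tp add tunnel tunnel_id 1000 peer_tunnel_id 2000 \
        encap udp local 192.0.2.1 remote 198.51.100.1 \
        udp_sport 5000 udp_dport 5000
    ip l2tp add session tunnel_id 1000 session_id 10 peer_session_id 20
    # The session shows up as an l2tpethN interface you can bridge or address;
    # remember to lower its MTU to account for the encapsulation overhead
    ip link set l2tpeth0 up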
L2 is a total waste of time.
The lack of support for hardware EVPN is one of the many reasons that Mikrotik is not considered for professional deployments.
People who think one size fits all are not professional.
With that said, I love Mikrotik for what it is: it is very approachable and it fills a niche. I believe it has added a lot of value to the industry and I'm excited to see their products mature.
VXLAN is L2-like transport over L3.
You can have EoIP over WG with any VLANs you like.
You can have a VXLAN over plain IP, over EoIP, over WG, or over IPsec. Only WG and IPsec (with a non-NULL cipher) provide any semblance of encryption in transit.
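A concrete sketch of the VXLAN-over-WG case (the VNI, addresses, and the peer's wg0 address are made up): the VXLAN underlay addresses are the WireGuard addresses, so everything the VXLAN carries is encrypted in transit:

    # Point-to-point VXLAN riding on an existing WireGuard tunnel (wg0)
    ip link add vxlan100 type vxlan id 100 \
        dev wg0 local 10.66.0.2 remote 10.66.0.1 dstport 4789
    ip link set vxlan100 up
    # Address it directly, or attach it to a bridge for the L2 segment
    ip addr add 172.16.100.2/24 dev vxlan100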
And the mandatory XY problem.
IPSec-equivalent, then VXLAN-equivalent, then IPSec-equivalent again.
Prevents any compromised layer from knowing too much about the traffic.
Andromeda https://research.google/pubs/andromeda-performance-isolation...
Orion https://research.google/pubs/orion-googles-software-defined-...