46 points by mlhpdx 25 days ago | 20 comments
  • volkadav25 days ago
    https://man.openbsd.org/vxlan.4#SECURITY seems unambiguous that it's intended for use in trusted environments (and all else being equal, I'd expect the openbsd man page authors to have reasonable opinions about network security), so it sounds like vxlan over ipsec/wg is probably the better route?
  • notepad0x9025 days ago
    For site-to-site overlay networks, use WireGuard; VXLAN, if used at all, should run inside it. Your "network" is connected by WireGuard, and it contains details like VXLAN. Even within your network, when crossing security boundaries over untrusted channels, you can use WireGuard.

    Others mentioned tailscale, it's cool and all but you don't always need it.

    As far as security goes, that's not even the consideration I had in mind. Sure, WireGuard is secure, but that's not why you should put VXLAN inside it. You should do so because that's the purpose of WireGuard: to connect networks securely across security/trust boundaries. It doesn't even matter if the other protocol is also WireGuard, or SSH, or whatever. If it's an option, WireGuard is always the outermost protocol; if not, then IPsec, OpenVPN, SoftEther, etc. - whatever your choice of secure overlay network protocol is gets to be the tunnel protocol.
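    A minimal sketch of that layering on Linux, assuming an existing wg0 tunnel between the sites (the interface names, VNI, and addresses here are all hypothetical):

```shell
# VXLAN in unicast mode, riding inside an existing WireGuard tunnel.
# 10.10.0.2 is the remote peer's WireGuard-internal address.
ip link add vxlan0 type vxlan id 42 remote 10.10.0.2 dstport 4789 dev wg0
ip link set vxlan0 mtu 1370   # wg0's typical 1420 minus ~50 bytes of VXLAN overhead
ip addr add 192.168.100.1/24 dev vxlan0
ip link set vxlan0 up
```

    The mirror-image commands run on the far side with the addresses swapped.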

    • p0w3n3d24 days ago
      There's also headscale - an OSS clone, if you're too paranoid to leave your keys to a company
  • q3k25 days ago
    Drop the VXLAN. There's almost never a good reason to stretch L2 over a WAN. Just route stuff across.
    • dgl25 days ago
      This.

      Instead you can create multiple Wireguard interfaces and use policy routing / ECMP / BGP / all the layer 3 tricks, that way you can achieve similar things to what vxlan could give you but at layer 3.

      There's a performance benefit to doing it this way too, in some testing I found the wireguard interface can be a bottleneck (there's various offload and multiple core support in Linux, but it still has some overhead).
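      For example, a routed equal-cost setup over two tunnels might look like this (interface names and the prefix are hypothetical):

```shell
# Reach the remote site's 10.20.0.0/16 via two WireGuard tunnels as ECMP paths,
# instead of stretching L2 across the WAN.
ip route add 10.20.0.0/16 \
    nexthop dev wg0 weight 1 \
    nexthop dev wg1 weight 1
```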

    • cjaackie25 days ago
      This is the correct answer; routing between subnets is how it's supposed to work. I think there are some edge cases like DR where stretching L2 might sound like a good idea, but in practice it gets messy fast.
      • formerly_proven25 days ago
        VXLAN makes sense in the original application, which is to create routable virtual LANs within data centers.
    • iscoelho25 days ago
      EVPN/VXLAN fabrics are becoming industry standard for new deployments. MACSEC/IPsec is industry standard for site-to-site.

      You'd be surprised to know that this is especially popular in cloud! It's just abstracted away (:

      • wmf25 days ago
        EVPN/VXLAN fabrics are becoming cargo culted. In most cases they aren't needed.
        • q3k25 days ago
          Agreed. They've also been extremely finicky in my experience - I've had cases where large EVPN deployments just blackholed some arbitrary destination MAC until GARPs were sent out of them.

          Also IME EVPN is mostly deployed/pushed when clueless app developers expect to have arbitrary L2 reachability across any two points in a (cross DC!) fabric [1], or when they want IP addresses that can follow them around the DC or other dumb shit that they just assumed they can do.

          [1] "What do you mean I can't just use UDP broadcast as a pub sub in my application? It works in the office, fix your network!" and the like.

          • iscoelho25 days ago
            VXLAN is used in cloud/virtualization networks commonly. VM HA/migration becomes trivial with VXLAN. It also replaces L3VPN/VRFs for private networks.
            • wmf25 days ago
              The good clouds don't support L2, they use a centralized control plane instead of brittle EVPN, and they virtualize in the hypervisor instead of in the switches. People are being sold EVPN as "we have cloud at home" and it's not really true.
              • iscoelho24 days ago
                AWS/GCE/Azure's network implementations pre-date EVPN and are proprietary to their cloud. EVPN is for on-premise. You don't exactly have the opportunity to use their implementation unless you are on their cloud, so I am not sure comparing the merits of either is productive.
          • hbogert20 days ago
            > Also IME EVPN is mostly deployed/pushed when clueless app developers expect to have arbitrary L2 reachability across any two points in a (cross DC!) fabric [1], or when they want IP addresses that can follow them around the DC or other dumb shit that they just assumed they can do.

            Sorry, but that's really reductive and backwards. It's usually pushed by requirements from the lower regions of the stack: operators don't want VMs to have downtime, so they live-migrate them to other places in the DC. It's not a weird requirement to let those VMs keep the same IP once migrated. I never had a developer ask me for L2 reachability.

        • iscoelho25 days ago
          I don't disagree (:

          Though there are definitely use cases where it is needed, and it is way easier to implement earlier than later.

  • iscoelho25 days ago
    VXLAN over WireGuard is acceptable if you require a shared L2 boundary.

    VXLAN over IPsec is what I recommend if you are doing 10G or above. IPsec has a much higher performance ceiling than WireGuard thanks to hardware firewalls; WireGuard is comparatively quite slow performance-wise. Note that Tailscale, since it has been mentioned, has comparatively extremely slow performance.

    edit: I'm noticing that a lot of the other replies in this thread are not from network engineers. Among network engineers WireGuard is not very popular due to performance & absence of vendor support. Among software engineers, it is very popular due to ease of use.

    • kosolam25 days ago
      How is IPSec performance better than wg? I never heard this before, it sounds intriguing.
      • iscoelho25 days ago
        At this time, there is no commercial offering for hardware/ASIC WireGuard implementations. The standard WireGuard implementation cannot reach 10G.

        The fastest I am aware of is VPP (open-source) with Intel QAT [1], which, while it achieves impressive numbers for large packets (70 Gbps @ 512 bytes / 200 Gbps @ 1420 bytes on a $20k+ MSRP server), is still not comparable to commercial IPsec offerings [2][3][4] that can achieve 800 Gbps+ on a single gateway (and come with the added benefit of a supported commercial product).

        [1] https://builders.intel.com/docs/networkbuilders/intel-qat-ac...

        [2] https://www.juniper.net/content/dam/www/assets/datasheets/us...

        [3] https://www.paloaltonetworks.com/apps/pan/public/downloadRes...

        [4] https://www.fortinet.com/content/dam/fortinet/assets/data-sh...

        • iscoelho25 days ago
          There are also solutions like Arista TunnelSec [1] that can achieve IPsec and VXLANsec at line-rate performance (21.6Tbps per chassis)! This is fairly new and fancy though.

          [1] https://www.arista.com/assets/data/pdf/Whitepapers/EVPN-Data...

        • mlhpdx24 days ago
          This lack of an ASIC is interesting to me. If it existed, it would very much change the game. And, given the simplicity of WG's encryption, it would be a comparatively small design (lower cost?).
      • hdgvhicv25 days ago
        If you have an edge device which implements hardware IPsec at 10g+ but pushes WireGuard to software on an underpowered cpu then sure.
        • rebewhd25 days ago
          While that's true, I'm not sure it's because of something inherent in IPsec vs WireGuard. It's more likely due to the fact that hardware accelerators have been designed to offload encryption routines that IPsec uses.

          One wonders what WG perf would look like if it could leverage the same hardware offload.

          • iscoelho25 days ago
            Exactly this. I would love to see a commercial product with a hardware implementation for WireGuard, but it does not yet exist. IPsec, however, is well supported.
            • kosolam24 days ago
              Thanks for your answers. I wonder though, from the perspective of a small user that doesn’t have requirements for such bandwidth, how does ipsec compare with wg on other metrics/features? Is it worth looking into?
              • iscoelho24 days ago
                I'd use WireGuard in that case. The main reason WireGuard is popular at all is because it is approachable. IPsec is much more complicated and is designed for network engineers, not users.
                • kosolam23 days ago
                  Well, yeah - so apart from being more complex and having hardware support, is there anything useful in IPsec? I meant a user in the general sense, not necessarily a clueless non-technical home user.
    • Cyph0n25 days ago
      > Noting Tailscale, since it has been mentioned, has comparatively extremely slow performance.

      Isn't this mainly because Tailscale relies on userspace WG (wireguard-go)? I'd imagine the perf ceiling is much higher for kernel WG, which I believe is what Netbird uses.

      • iscoelho24 days ago
        wireguard-go is indeed very slow. For example, the official WireGuard Mac client uses it, and performance on my M1 Max is CPU capped at 200Mbps. The kernel WireGuard implementation available for Linux is certainly faster, but I would not consider it fast.

        Tailscale, however, although it derives from WireGuard libraries and the protocol, is really not WireGuard at all - so comparing them is a bit apples to oranges. With that said, it is still entirely userspace and its performance is less than stellar.

        • Cyph0n24 days ago
          Well, according to this[1] bench, you can get ~10 Gbps with kernel WG.

          I'm interested in this because I'm working on a small hobby project to learn eBPF. The idea is to implement a "Tailscale-lite" that eliminates context switches by keeping both Wireguard and L3 and L4 policy handling in kernel space. To me, the bulk of Tailscale's overhead comes from the fact that the dataplane is running between user and kernel space.

          [1]: https://github.com/cyyself/wg-bench

          • iscoelho24 days ago
            That's a large packet benchmark, not mixed packet size, and it just barely hits it. If you need consistent 10Gbps for a business use case, I would not consider that sufficient.

            > "To me, the bulk of Tailscale's overhead comes from the fact that the dataplane is running between user and kernel space."

            Yes and no; it's more complicated than that. DPDK is the industry standard library for fast packet processing, and it is entirely in user space. The Linux kernel netstack is just not very fast.

            • Cyph0n24 days ago
              Sure, but who is going to ship a DPDK application to end users? And how exactly would that work for existing user applications that are not DPDK aware?

              I think kernel networking is the only option for Tailscale (or any similar mesh VPN solution). Given this key constraint, the best you can do is do more work in kernel space and reduce context switches.

  • denkmoon25 days ago
    Not super related to the OP, but since we're discussing network topologies: I've recently had the insane idea that NFS security sucks, NFS traversing firewalls sucks, Kerberos really sucks, and just wrapping it all in a WireGuard pipe is way better.

    How deranged would it be to have every nfs client establish a wireguard tunnel and only have nfs traffic go through the tunnel?

    • QuantumNomad_25 days ago
      > How deranged would it be to have every nfs client establish a wireguard tunnel and only have nfs traffic go through the tunnel?

      Sounds good to me. I have my Wireguard tunnel set up so that only traffic intended for hosts that are in the Wireguard network itself are routed over the Wireguard tunnel.

      I mostly use it to ssh into different machines. The Wireguard server runs on a VPS on the Internet, and I can connect to it from anywhere (except from networks that filter Wireguard traffic), and that way ssh into my machines at home while I am away from home. Whereas all other normal traffic to other places is unaffected by and unrelated to the tunnel. So for example if I bring my laptop to a coffee shop and I have Wireguard running and I browse the web with a web browser, all my web browsing traffic still gets sent the same normal way that it would even if I didn’t have the tunnel running.

      I rarely use NFS nor SMB, but if I wanted to connect either of those I would be able to that as well over this Wireguard setup I have.
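      For reference, a split-tunnel client config along these lines might look like the following (keys, addresses, and hostname are all placeholders); the narrow AllowedIPs is what keeps ordinary traffic out of the tunnel:

```shell
cat > /tmp/wg0.conf <<'EOF'
[Interface]
Address = 10.10.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
# Only the WireGuard subnet is routed into the tunnel; all other
# traffic (web browsing etc.) takes its normal path.
AllowedIPs = 10.10.0.0/24
PersistentKeepalive = 25
EOF
```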

    • throw0101c24 days ago
      > How deranged would it be to have every nfs client establish a wireguard tunnel and only have nfs traffic go through the tunnel?

      See perhaps NFS over TLS:

      * https://datatracker.ietf.org/doc/html/rfc9289

      * https://access.redhat.com/solutions/7079884

      * https://www.phoronix.com/news/Linux-6.4-NFSD-RPC-With-TLS

    • eugenekay25 days ago
      I built a NFS3-over-OpenVPN network for a startup about a decade ago; it worked “okay” for transiting an untrusted internal cloud provider network and even over the internet to other datacenters, but ran into mount issues when the outer tunnels dropped a connection during a write. They ran out of money before it had to scale past a few dozen nodes.

      Nowadays I would recommend using NFS4+TLS or Gluster+TLS if you need filesystem semantics. Better still would be a proper S3-style or custom REST API that can handle the particulars of whatever strange problem led to this architecture.

  • jrm425 days ago
    Whenever I see threads like this, I think it's related, but I'll be honest: my networking understanding might be limited.

    I use Tinc as a daily driver (for personal things) and have yet to come up with a new equivalent, given that I probably should. Does Vxlan help here?

    • imiric25 days ago
      Tinc is a fantastic piece of software. Very easy to use and configure, and it just works.

      These days I lean towards WireGuard simply because it's built into Linux, but Tinc would be my second choice.

    • iscoelho25 days ago
      VXLAN is for L2 between campuses. It is commonly used in enterprise and cloud networks.
  • solaris200725 days ago
    If a situation where production vxlan is going over Wireguard arises, then someone in leadership failed to plan and the underlying Wireguard tunnel is coping with that failure. No doubt, OP already knows this and all too well.

    The problem is no doubt a people problem. I have learned to overcome these people problems by adhering to specific kinds of communication patterns (familiar to Staff Engineers and SVPs).

    There is no reason that Wireguard over vxlan over Wireguard can't work, even with another layer (TLS) on top of Wireguard. Nonetheless it is very suboptimal and proprietary implementations of vxlan tend to behave poorly in unexpected conditions.

    We should remember that VXLAN is next-gen VLAN.

    The type of Wireguard traffic encapsulated within the vxlan that comes to mind first is Kubernetes intra/inter-cluster pod-to-pod traffic. But this Wireguard traffic could be between two legacy style VMs.

    If I were the operator told "you need to securely tunnel this vxlan traffic between two sites" I would reach for IPsec instead of Wireguard in an attempt to not lower the MTU of encapsulated packets too much. Wireguard is a layer 4 (udp) protocol intended to encapsulate layer 3 (ipv6 and legacy ip) packets.

    If I were the owner of the application I would bake mutual TLS authentication on QUIC with "encrypted hello" (both elliptic and PQ redundant) into the application. The applications would be implemented in Rust, or if not practicable to implement the application in Rust I would write into the Helm chart a sidecar that does such a mutual TLS auth part (in Rust of course).

    I would also aggressively "ping" in some manner through the innermost encapsulation layer. If I had tenancy on a classic VM doing Wireguard over the vxlan I would have "ping -i 2 $remote_inside_tunnel_ipv6" running indefinitely.

  • kjuulh25 days ago
    I use vxlan on top of wireguard in my hobby set up. Probably wouldn't recommend it for an actual production use-case. But that is more or less because of how my homelab is setup (Hetzner -> Home about 20ms latency roundtrip).

    I considered dropping my root wireguard and setting up just vxlan and flannel, but as I need NAT hole punching I kind of need the wireguard root so that is why i ended up with it.

    Going Wireguard inside the vxlan (flannel) in my case, would likely be overkill, unless I wanted my traffic between nodes between regions to be separated from other peers on the network, not sure where that would be useful. It is an easy way of blocking out a peer however, but that could just as well be solved on the "root" wireguard node.

    There might be some MTU things that would be messed up going nested wireguard networks.

  • tucnak25 days ago
    For traversing public networks, simply consider BGP over Wireguard. VXLAN is not worth it.
    • ghxst25 days ago
      I've used wireguard for a while, not sure why I never considered doing BGP over it, might make for a fun weekend project.
      • tucnak25 days ago
        BGP is vastly superior to any L2 make-believe trash you can imagine, and amazingly, it often has better hardware offloading support for forwarding and firewalls. For example, 100G switches (L3+) like MikroTik's CRS504 do not support IPv6 in hardware for VXLAN-encapsulated flows, but everything just works if you choose to go the BGP route.

        L2 is a total waste of time.

        • iscoelho25 days ago
          Any ASIC switch released in the last decade from Cisco/Juniper/Arista supports EVPN/VXLAN in hardware. EVPN is built on BGP. This has become the industry standard for new enterprise and cloud deployments.

          The lack of support for hardware EVPN is one of the many reasons that Mikrotik is not considered for professional deployments.

          • hdgvhicv25 days ago
            Mikrotik is used for professional deployments all over the world. Right tool for the right job.

            People who think one size fits all are not professional.

            • iscoelho25 days ago
              If I can source an enterprise Cisco/Juniper/Arista ASIC switch that is 1) rock-solid 2) full featured 3) cheaper - which I can - there is unfortunately no rationale where Mikrotik would be applicable in any professional project of mine.

              With that said, I love Mikrotik for what it is: it is very approachable and it fills a niche. I believe it has added a lot of value to the industry and I'm excited to see their products mature.

              • hdgvhicv24 days ago
                Based on the lldp messages I see across dozens of countries, the majority of business isps globally use mikrotiks at their edge.
                • iscoelho24 days ago
                  I'm curious what you classify as a business ISP?

                  Take a look at AMS-IX, one of the largest internet exchanges: https://bgp.tools/ixp/AMS-IX

                  21/1020 (2%) of all peers are Mikrotik. 15 (1.4%) of those are >=1000mbps. 7 (0.6%) of those are 10gbps. None are larger than 10gbps.

                  • tucnak24 days ago
                    You're referencing backbone, not edge. It has only been a few years since MikroTik offered a 100G solution, let alone became competitive in it. You won't find it in the backbone yet. However, many European ISPs have largely upgraded their distro and aggregation switches to MikroTik over the last five years. There's a sovereignty push, too. I would guess edge is similar, but there are too many cheap options there, so probably not that much.

                    If your impression is based on data circa ~2020, you should re-evaluate your priors with the recent releases in mind. See https://mikrotik.com/product/crs812_ddq

                    • iscoelho24 days ago
                      CE (Customer Edge) is what you are referring to. ISPs would be the PE (Provider Edge). I am aware it can be popular for SMB CE devices, however that is simply not the case for PE devices.

                      Service Provider ISPs cannot use Mikrotik - It is impossible. RouterOS supports none of the features required for a service provider. VRFs are even still unsupported in HW [1]. I am confused why this is even a discussion as anyone with experience working at an ISP/SP would come to the same conclusion.

                      [1] https://help.mikrotik.com/docs/spaces/ROS/pages/62390319/L3+...

                      • tucnak24 days ago
                        There are many ISP's that successfully run their networks on BGP, without VRF unless their customers specifically require it. It simply means that VRF-heavy architectures (like dense MPLS L3VPN etc.) would require additional hardware. Nobody says you have to use MikroTik for everything, and nobody says it's the ultimate solution to all ISP problems. I don't get it where this maximalist view comes from—all or nothing. The typical MPLS VPN scenario has to do with overlapping address spaces, and for customer separation most aggregation layer deployments use pure L3 routing with VLAN segmentation in the first place.

                        There's a famous use-case from 10 years ago (sic!) of using MikroTik for serving over 400 customers, see https://mum.mikrotik.com/presentations/ID16/presentation_340... proving you could do it on small scale many years ago. Needless to say, A LOT has improved since. MikroTik has become a serious, and affordable means to power a small-to-midsize ISP in the recent years. Of course there are "enterprise" features for some people to get knickers in a twist over, but they are well beyond necessity. It's often that people were taught certain techniques, a certain way to do things (which more often than not includes all this domain over-extension madness and all that it carries with it up to L7!) so they struggle to adapt to alternative architectures.

                        To say that it's "impossible" to provide ISP services with MikroTik is reaching.

                  • hdgvhicv24 days ago
                    Those selling end services to businesses.

                    I have a mix of equipment, from heavyweight Juniper MXs at peering points, to Arista DCS/CCS in large sites, to £50 MikroTiks in the smallest branch offices.

                    Right tool for the right job, mikrotik is often but not always the right tool.

                    • iscoelho24 days ago
                      Mikrotik can be popular for CE (Customer Edge) devices, that is correct. Those are not ISPs however, those are customers.
              • tucnak24 days ago
                You're delusional on price. I wouldn't touch severely overpriced and backdoored American switches with a 10-foot pole! Meanwhile, MikroTik just released a 400G switch in under two grand. To buy Cisco/Juniper/Arista with your own money in 2025 you have to be super rich and super stupid. And I say this as a guy that buys 100G stuff from Xilinx.
                • iscoelho24 days ago
                  I have not seen a case where I could not source a Juniper switch (for example) for lower $/port than Mikrotik, even at 400GE. It is unheard of to pay MSRP. YMMV.
  • justsomehnguy25 days ago
    WG is an L3 transport.

    VXLAN is an L2-like transport over L3.

    You can have EoIP over WG with any VLANs you like.

    You can have VXLAN over plain IP, over EoIP, over WG, over IPsec. Only WG and IPsec (with non-NULL security) provide any semblance of encryption in transit.

    And the obligatory XY problem applies.

  • inetknght25 days ago
    > Let’s agree going recursive (WireGuard inside VXLAN inside WireGuard) is a bad idea.

    But it's not necessarily a bad idea. It depends on the circumstances, even when traversing a public network.

  • stevefan199925 days ago
    Is there a WireGuard equivalent that does L2 instead of L3? I need this for a virtual mesh network for homelabbing. I have this exact setup - running VXLAN or GENEVE over a WireGuard tunnel using KubeSpan from Talos Linux - but I simply think having L2 access would make load balancing much easier.
    • kjuulh25 days ago
      You can see my reply below: https://news.ycombinator.com/item?id=46609044 I believe our setups are pretty equivalent.

      I achieve load balancing by running native WireGuard on a VPS at Hetzner; I've got a native WireGuard mesh where the peers are manually set up (I believe Talos can do the same, or via Tailscale etc.). I then tell k3s that it should use the WireGuard interface for VXLAN, and boom, my Kubernetes mesh is now connected.

      flannel-iface: "wg0" # Talos might have something similar.

      I do use some node labels and affinities to make sure the right pods end up in the right spot. For example, the MetalLB announcer always has to come from the Hetzner node. As mentioned in my reply below, it takes about 20ms roundtrip back to my homelab, so my sites can take a bit of time to load, but it works pretty well otherwise - sort of similar to how Cloudflare tunnels would work, except not as polished.

      My setup is here if it is of help

      https://git.kjuulh.io/kjuulh/clank-homelab-flux/src/branch/m...
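      Spelled out, the k3s side of this is just two lines of server config (`flannel-iface` and `flannel-backend` are real k3s options; the real file lives at /etc/rancher/k3s/config.yaml, but treat this as a sketch):

```shell
# k3s server config: run flannel's VXLAN over the WireGuard interface
# instead of the public NIC. Written to /tmp here for illustration.
cat > /tmp/k3s-config.yaml <<'EOF'
flannel-iface: wg0
flannel-backend: vxlan
EOF
```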

    • dietr1ch25 days ago
      • stevefan199925 days ago
        I used to like ZT, but they went BSL. Plus, unlike WireGuard, it doesn't run in the kernel, and memory usage is extremely high.

        I used to run my K8S homelab through ZT as well. Latency is extremely bad.

        What I wanted is more like meshed L2TPv3, but L2TPv3 is extremely hard to set up nowadays.

    • throw0101c24 days ago
      > I have this exact setup, running VXLAN or GENEVE […]

      I see VXLAN mentioned all over the place, but it seems that GENEVE isn't really deployed as much. Besides GENEVE perhaps being a newer protocol, is there a reason why, in your opinion? Where do you personally use each?

      • stevefan199924 days ago
        I'm a Kubernetes cloud engineer and I do self-hosting; I've used Flannel and Calico, and ended up with Cilium.
    • viraptor25 days ago
      ZeroTier does L2.
  • wmf25 days ago
    What problem is being solved here?
  • uberduper25 days ago
    What are your discovery mechanisms? I don't know what exists for automatic peer management with WG. If you're doing BGP EVPN for VXLAN endpoint discovery, then I'd think WG over VXLAN would be the easier-to-manage option.
    • uberduper25 days ago
      If you actually want to use vxlan ids to isolate l2 domains, like if you want multiple hypervisors separated by public networks to run groups of VMs on distinct l2 domains, then vxlan over WG seems like the way to go.
  • mbreese25 days ago
    What are you trying to do? Why are you trying to link networks across the public internet?
  • pjd725 days ago
    Tell us why you think so at least.
    • ronsor25 days ago
      Reduced MTU chopping off your maximum packet size from all the extra headers and other overhead you're adding?
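      The back-of-envelope numbers, assuming IPv4 outer headers and typical sizes (WireGuard's data-message overhead is 32 bytes plus its own IP/UDP):

```shell
wg=$((20 + 8 + 32))          # outer IPv4 + UDP + WireGuard header/counter/tag
vxlan=$((20 + 8 + 8 + 14))   # tunnel IPv4 + UDP + VXLAN header + inner Ethernet
echo $((1500 - wg - vxlan))  # usable inner MTU on a 1500-byte path: 1390
```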
  • DiabloD325 days ago
    I mean, ultimately, thats how Google routes internally.

    IPSec-equivalent, VXLAN-equivalent, IPSec-equivalent.

    Prevents any compromised layer from knowing too much about the traffic.

    • tucnak25 days ago
      What gave you that idea? Internally, Google uses GRE/GENEVE-like stuff but for reasons that have nothing to do with "preventing compromise" or whatever, but because they're carrying metadata (traces, latency budgets, billing ids.) That is to say, encapsulation is just transport. It's pretty much L3 semantics all the way down... In fact, this is more or less the point: L2 is intractable at scale, as broadcast/multicast doesn't work. However, it's hard to find comparisons to anything you're familiar with at Google scale. They have a myriad of proprietary solutions and custom protocols for routing, even though it's all L3 semantics. To learn more:

      Andromeda https://research.google/pubs/andromeda-performance-isolation...

      Orion https://research.google/pubs/orion-googles-software-defined-...

      • zeroxfe25 days ago
        The last time I was there, there were many layers of encap, including MPLS, GRE, PSP, with very tightly managed MTU. Traffic engineering was mostly SDN-managed L3, but holy hell was it complex. Considering that Google (at the time) carried more traffic than the rest of the Internet combined, maybe it was worth it.
      • DiabloD325 days ago
        What gave me that idea? Talks and research papers from Google network engineers over the past decade.
        • tucnak24 days ago
        Where are you getting VXLAN-equiv, IPsec-equiv, etc. specifically? ALTS/PSP is not "IPsec-equivalent".
    • pixl9725 days ago
      Internal is fine because you control things like MTU so you don't have to worry about packet fragmentation/partial loss.
    • als025 days ago
      That seems like an awful amount of overhead for questionable gain.
      • _bernd25 days ago
        Links between, and inside, data centers use so-called jumbo frames with an MTU of 9000 or more. Not joking.
        • xoa25 days ago
          Worth mentioning that links at home can use them too. Jumbo frame support was rare at one point, but now you can get it on really cheap basic switches if you're looking for it - even incredibly cheap $30 switches (literally, that's what a 5-port UniFi Flex Mini lists for direct) support them now. Not just an exotic thing for data centers anymore, and it can cut down on overhead within a LAN, particularly as you get into 10/25/40/100 Gbps to your own NAS/SAN or whatever.
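          Enabling and verifying them on a Linux host is short (interface and target address are hypothetical; every device on the path must support the larger MTU):

```shell
ip link set dev eth0 mtu 9000
# Path check: 8972 = 9000 - 20 (IPv4) - 8 (ICMP), with don't-fragment set.
ping -M do -s 8972 -c 3 192.168.1.10
```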
  • sciencesama25 days ago
    vxlan inception is fun ! said no one ever !
    • hbogert24 days ago
      Where is the inception?
  • H8crilA25 days ago
    Not sure I understand, but why not Tailscale?
    • sy2625 days ago
      In my case, Tailscale does not implement K8S CNI.