Or they are centrally/corporate-controlled and do not allow hole punching.
https://github.com/tbocek/qotp and https://github.com/qh-project/qh
The main idea is to have simple encryption (Ed25519 / ChaCha20-Poly1305) in the transport layer, and then qh on top of that, where certs are used for signing content.
With out of band key exchange, you can establish a connection after you successfully punched a hole.
However, it's not QUIC-compatible in any way (https://xkcd.com/927)
"Cannot" is a strong word:
> UDP hole punching will not work with symmetric NAT devices (also known as bi-directional NAT) which tend to be found in large corporate networks. In symmetric NAT, the NAT's mapping associated with the connection to the known STUN server is restricted to receiving data from the known server, and therefore the NAT mapping the known server sees is not useful information to the endpoint.
* https://en.wikipedia.org/wiki/UDP_hole_punching#Overview
I've also heard lots of people complain about how they're stuck behind CG-NAT and various P2P things do not work.
This link is 404.
QOTP looks really cool. Like what QUIC would be if DJB were in charge of it.
... how does that work when the network disallows UDP altogether?
If you're really really desperate you can send UDP packets with fake TCP headers (i.e. you aren't actually doing any congestion control or retransmission) but you have to control both ends of the connection for that.
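For illustration, a hedged sketch of what a "fake" TCP header looks like; the port numbers and flag choices are arbitrary assumptions, and actually sending it requires a raw socket (root / CAP_NET_RAW), which is omitted here:

```python
# Sketch: hand-building a 20-byte TCP header to wrap your own payload,
# as you would over a raw socket. No real TCP state machine exists on
# either side; both ends must agree to ignore seq/ack semantics.
import struct

def fake_tcp_header(src_port: int, dst_port: int, seq: int) -> bytes:
    offset_flags = (5 << 12) | 0x18  # data offset = 5 words, flags = PSH|ACK
    return struct.pack(
        "!HHIIHHHH",
        src_port,
        dst_port,
        seq,    # sequence number (we pick it; no handshake happened)
        0,      # ack number (faked; we do no retransmission)
        offset_flags,
        65535,  # advertised window (meaningless without real TCP state)
        0,      # checksum (left 0 in this sketch)
        0,      # urgent pointer
    )

header = fake_tcp_header(40000, 443, 1)
print(len(header))  # 20
```

Middleboxes doing deep inspection will still notice there was never a handshake, but plenty of stateless filters only look at the protocol number and ports.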
And there's ICMP.
> Traversal Using Relays around NAT (TURN): Relay Extensions to Session Traversal Utilities for NAT (STUN)
> Abstract
> If a host is located behind a NAT, then in certain situations it can be impossible for that host to communicate directly with other hosts (peers). In these situations, it is necessary for the host to use the services of an intermediate node that acts as a communication relay. This specification defines a protocol, called TURN (Traversal Using Relays around NAT), that allows the host to control the operation of the relay and to exchange packets with its peers using the relay.
As I understand it, most consumer devices will set up a port mapping which is completely independent of the destination's IP and port. It's just "incoming packet for $wanip:567 goes to $internal:123, outgoing packet from $internal:123 gets rewritten to appear from $wanip:567". This allows any packet towards $wanip:567 to reach the internal host - both the original server the client initiated the connection to, and any other random host on the internet. Do this on two clients, have the server tell them each other's mappings, and they can do P2P comms: basic hole punching. I believe this is usually called "Full Cone NAT".
However, nothing is stopping you from setting up a destination-dependent mapping, where it becomes "incoming packet from $server:443 to $wanip:456 goes to $internal:123, outgoing packet from $internal:123 to $server:443 gets rewritten to appear from $wanip:456". This would still work totally fine for regular client-to-server communication, but that mapping would only work for that specific server. A packet heading towards $wanip:456 from anywhere else would get dropped because the source isn't $server:443 - or it could even get forwarded to another host on the NATed network. This would block traditional hole punching. I believe this is called "Address Restricted Cone NAT" if it filters only on source IP, or "Port Restricted Cone NAT" if it filters on both source IP and source port.
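As a toy illustration of the punching step itself: both "peers" below live on localhost, so no real NAT is involved, and the addresses are exchanged directly instead of via the rendezvous server described above.

```python
# Each peer learns the other's mapped address (here, just local addrs),
# then both send first. On a full-cone NAT, the outgoing packet is what
# creates the $wanip:port mapping that lets the peer's packet come in.
import socket

a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
addr_a, addr_b = a.getsockname(), b.getsockname()

a.sendto(b"punch from A", addr_b)  # opens A's outbound mapping
b.sendto(b"punch from B", addr_a)  # opens B's outbound mapping

msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
print(msg_at_a, msg_at_b)
a.close()
b.close()
```

Against a restricted-cone NAT the same exchange still works, because each side's first outbound packet is addressed to exactly the source the inbound packet will carry; it's the symmetric case where this breaks down.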
And there are a lot of other considerations: chances are your NAT won't be happy if you send all those probe packets at once, and your user may not be either. It's probably only worth doing exhaustive probing if the connection is long-lived and proxying is expensive (in dollars because of bandwidth, or in latency).
[1] https://github.com/danderson/nat-birthday-paradox/tree/maste...
If you can manage to bump it up to 65536 probes without getting blocked, hitting a NAT limit, or causing the user to fall asleep waiting, then it should hit the same success rate :D. I'm not sure many would like to use that P2P service though; at that point, just pay for the TURN server.
If you need to send 64k probes to get p2p and you want to make a 15 minute call, it probably doesn't make sense, but it's probably worth trying a bit in case you catch an easy case. Not that p2p is always better than going through a relay, but it's often less expensive.
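For intuition, a rough back-of-the-envelope model, assuming the NAT assigns ports uniformly at random and probes are sampled with replacement (both idealizations; probing every one of the 65536 ports exactly once would of course guarantee a hit):

```python
# Probability that at least one of n random probes lands on one of the
# k mappings the other side has opened, out of a 65536-port space.
def hit_probability(k: int, n: int, port_space: int = 65536) -> float:
    miss_all = (1 - k / port_space) ** n  # chance every probe misses
    return 1 - miss_all

# A few hundred open mappings on one side plus a few hundred probes
# from the other already gives ~63% odds: the birthday-paradox effect.
print(round(hit_probability(256, 256), 3))
# Blind brute force, 65536 random probes against a single mapping,
# lands in the same ballpark (1 - 1/e):
print(round(hit_probability(1, 65536), 3))
```

Which is why the birthday-paradox trick opens many mappings on both sides instead of probing harder from one side.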
Try doing it over a network that only allows connections through a SOCKS/Squid proxy, or on a network that uses CG-NAT (i.e., double-NAT).
See also:
> UDP hole punching will not work with symmetric NAT devices (also known as bi-directional NAT) which tend to be found in large corporate networks. In symmetric NAT, the NAT's mapping associated with the connection to the known STUN server is restricted to receiving data from the known server, and therefore the NAT mapping the known server sees is not useful information to the endpoint.
"Unfortunately, no matter how hard you try, there is a certain percentage of nodes for whom hole punching will never work. This is because their NAT behaves in an unpredictable way. While most NATs are well-behaved, some aren’t. This is one of the sad facts of life that network engineers have to deal with."
In this scenario, the article goes on to describe a conventional relay-based approach.
I would guess that most consumer routers are very cooperative as far as hole punching goes, because it's pretty critical functionality for bittorrent and many online games. Corporate firewalls wouldn't be as motivated to care about those use cases, or may want to actively block them.
I think the parent's point is a bit like "you can't disallow lock picking": the term "hole punching" describes techniques that intentionally bypass whatever others (particularly corporations) try to put in the way, sometimes for good reasons and sometimes for kind of shit reasons.
Carrier peering using the UDP hashes for encrypting network traffic from a WAN to serve a Tier 1 network.
It might be helpful to cite the percentage
It's relatively small
A default policy that relays traffic through a third party is asinine
For the small percentage, the third parties will always be there if they need them. The internet has an enormous supply of middlemen, like Google
For everyone else, the third parties, i.e. the middlemen, can be avoided
It seemed like there was such a good exciting start, but the spec has been dormant for years. https://github.com/w3c/p2p-webtransport
Unlike WebSockets, you can supply a certificate hash (the serverCertificateHashes option), which makes it possible for the browser to establish a TLS connection with a peer that doesn't have a certificate signed by a traditional PKI provider, or even a domain name. This property is immensely useful because it lets browsers establish connections to any known non-browser node on the internet, including from secure contexts (i.e. from an https page, where you can't establish a ws:// connection; only wss:// is allowed, and you need a 'real' TLS cert for that).
https://caniuse.com/webtransport
However, there have been some recent pull requests indicating gradual progress:
https://github.com/WebKit/WebKit/pulls?q=is%3Apr+is%3Aclosed...
Webtransport as a protocol certainly could be used for p2p, but the browser APIs aren't there: hence p2p-webtransport was created, to allow its use beyond traditional server<->client.
For TCP based protocols it's very hard since there is no reliable way to hole punch NATs and stateful firewalls with TCP.
First time I've heard about this, so I went looking for more. Came across https://news.ycombinator.com/item?id=5969030 (95 points - July 1, 2013 - 49 comments), which had a bunch of background info and useful discussion.
Again, maybe packet forging is needed for some routers/middleboxes/firewalls, since careful inspection would show that the conns are technically independent. If you have any details about this, please let me know! (Networking is difficult to test.)
No. QUIC requires TLS. TLS just provides a way to move certificates, but doesn't care what a "certificate" actually is. A JPEG of your 10m swimming certificate from school? Sure, that's fine.
The endpoints get to decide which certificates to accept. In practice, in a web browser and many other modern programs, that will be some sort of X.509 certificate more or less following PKIX, and on the public Internet usually the Web PKI, a PKI operated on behalf of the Relying Parties (literally everybody) by the Trust Stores (in practice the OS vendors, plus Mozilla for the free Unix systems). But none of that is defined by QUIC.
https://yggdrasil-network.github.io/documentation.html
I'm currently working on creating a managed Yggdrasil relay node service. A feature I hope they implement is QUIC multistream support.