So you have to ship new code to every 'network element' to support IPv4x. Just like with IPv6.
So you have to update DNS to create new resource record types ("A" is hard-coded to 32-bits) to support the new longer addresses, and have all user-land code start asking for, using, and understanding the new record replies. Just like with IPv6. (And their DNS idea won't work—or won't work differently than IPv6: a lot of legacy code did not have room in data structures for multiple reply types: sure you'd get the "A" but unless you updated the code to get the "AX" address (for IPv4x addresses) you could never get to the longer address… just like IPv6 needed code updates to recognize AAAA, otherwise you were A-only.)
You need to update socket APIs to hold new data structures for longer addresses so your app can tell the kernel to send packets to the new addresses. Just like with IPv6.
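To make that concrete, here's a rough Python sketch of the family-agnostic lookup style that code had to move to for IPv6 (and would equally need for a hypothetical IPv4x): getaddrinfo returns a per-family sockaddr for each candidate, while legacy gethostbyname-style code only ever sees 32-bit A-record addresses.

```python
import socket

# getaddrinfo is the family-agnostic replacement for gethostbyname,
# which is hard-wired to 32-bit A-record addresses. Each result tuple
# carries its own address family and its own sockaddr shape.
for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        "localhost", 80, type=socket.SOCK_STREAM):
    # AF_INET's sockaddr is (ip, port); AF_INET6's is
    # (ip, port, flowinfo, scope_id) -- genuinely different structures.
    print(socket.AddressFamily(family).name, sockaddr)
```

Any app still calling the old A-only APIs never learns the longer addresses exist, no matter what the kernel supports.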
A single residential connection that gets a single IPv4 address also gets to use all the /96 'behind it' with this IPv4x proposal? People complain about the "wastefulness" of /64s now, and this is even more so (to the tune of 32 bits). You'd probably be better served with pushing the new bits to the other end… like…
* https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresse...
IPv6 adoption has been linear for the last two decades. Currently, 48% of Google traffic is IPv6.[1] It was 30% in 2020. That's low, because Google is blocked in China. Google sees China as 6% IPv6, but China is really around 77%.
Sometimes it takes a long time to convert infrastructure. Half the Northeast Corridor track is still on 25Hz. There's still some 25Hz power around Niagara Falls. San Francisco got rid of the last PG&E DC service a few years ago. It took from 1948 to 1994 to convert all US freight rail stock to roller bearings.[2] European freight rail is still using couplers obsolete and illegal in the US since 1900. (There's an effort underway to fix this. Hopefully it will go better than Eurocoupler from the 1980s. Passenger rail uses completely different couplers, and doesn't uncouple much.)[3]
[1] https://www.google.com/intl/en/ipv6/statistics.html
[2] https://www.youtube.com/watch?v=R-1EZ6K7bpQ
[3] https://rail-research.europa.eu/european-dac-delivery-progra...
It just so happens that, unlike for v6, v4 and v4x have some "implicit bridges" built-in (i.e. between everything in v4 and everything in v4x that happens to have the last 96 bits unset). Not sure if that actually makes anything better or just kicks the can down the road in an even more messy way.
That's pretty much identical to 6in4 and similar proposals.
The Internet really needs a variant of the "So, you have an anti spam proposal" meme that used to be popular. Yes, it sometimes nips fresh ideas in the bud, but it also helps establish a cultural baseline for what constitutes constructive discussion.
Nobody needs to hear about the same old ideas that were subsumed by IPv6 because they required a flag day, delayed address exhaustion only about six months, or exploded routing tables to impossible sizes.
If you have new ideas, let's hear them, but the discussion around v6 has been on constant repeat since before it was finalized and that's not useful to anyone.
—Sent from my IPv6 phone
For those unfamiliar:
There are a ton of weird coins around, sure, but no-one is using them as money.
I still have to stump up actual dollars backed by a government if I want to buy a coffee.
See perhaps:
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
* https://en.wikipedia.org/wiki/6to4
Have an IPv4 address? Congratulations! You get an entire IPv6 /48 for free.
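For anyone who wants to see the arithmetic behind the 6to4 mapping quoted above, a small Python sketch (the helper name is mine):

```python
import ipaddress

def to_6to4_prefix(v4: str) -> ipaddress.IPv6Network:
    # RFC 3056: append the 32-bit IPv4 address to 2002::/16,
    # yielding a /48 -- bits 0-15 are 0x2002, bits 16-47 are the
    # IPv4 address, and the rest is yours to subnet.
    v4_int = int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4_int << 80), 48))

print(to_6to4_prefix("192.0.2.4"))  # 2002:c000:204::/48
```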
Yes, but the compatibility is very easy to support for hardware vendors, software authors, sysadmins, etc. Some things might need a gentle nudge (mostly just enlarging a single bitfield), but after that everything just works: hardware, software, websites, operators.
A protocol is a social problem, and ipv6 fails exactly there.
Even still. The rollout is still progressing, and new systems like Matter are IPv6 only.
I think the biggest barrier to IPv6 adoption is that this is just categorically untrue, and people keep insisting that it isn't, which reduces the chance that I'd make a conscious effort to try to grok it.
I've had dozens of weird network issues in the last few years that have all been solved by simply turning off IPv6. From hosts taking 20 seconds to respond, to things not connecting 40% of the time, DHCP leases not working, devices not able to find the printer on the network, everything simply works better on IPv4, and I don't think it's just me. I don't think these sort of issues should be happening for a protocol that has had 30 years to mature. At a certain point we have to look and wonder if the design itself is just too complicated and contributes to its own failure to thrive, instead of blaming lazy humans.
For me just disabling IPv6 has given the biggest payoff. Life is too short to waste time debugging obscure IPv6 problems that still routinely pop up after over 30 years of development.
Ever since OpenVPN silently routed IPv6 over clearnet I've just disabled it whenever I can.
Now I'm sure I can fix DNSmasq to do something sensible here, but the defaults didn't even break - they worked in the most annoying way possible where had I just disabled IPv6 that would've fixed the entire problem right away.
Dual stack has some incredibly stupid defaults.
If an ISP uses an MPLS core, every POP establishes a tunnel to every other POP. IP routing happens only at the source POP as it chooses which pre-established tunnel to use.
If an ISP is very new, it likely has an IPv6-only core, and IPv4 packets are tunneled through it. If an ISP is very old, with an IPv4-only core, it can do the reverse and tunnel IPv6 packets through IPv4. It can even use private addresses for the intermediate nodes as they won't be seen outside the network.
So let's say your internet provider owns x.x.x.x; it receives a packet directed to you at x.x.x.x.y.y…, forwards it to your network, but your local router has old software and treats all packets to x.x.x.x.* as directed to itself. You never receive any messages directed to you, even though your computer would recognise IPv4x.
It would be a clusterfuck.
Your home router that sits on the end of a single IPv4 address would need to know about IPv4x, but in this parallel world you'd buy a router that does.
Your computer knows it’s connected to an old router because dhcp gave it x.x.x.x address and not x.x.x.x... so it knows it’s running in old v4 mode.
And it can still send outbound to a v4x address that it knows about.
Software updates scale _very well_ - once author updates, all users get the latest version. The important part is sysadmin time and config files - _those_ don't scale at all, and someone needs to invest effort in every single system out there.
That's where IPv6 really dropped the ball by making dual-stack the default. In IPv4x, there is no dual-stack.
I upgrade my OS, and suddenly I can use IPv4x addresses... but I don't have to - all my configs are still valid, and if my router is not compatible, all devices still fall back to IPv4-compatible short addresses, but are using IPv4x stack.
I upgrade the home router and suddenly some devices get IPv4x address... but it is all transparent to me - my router's NAT takes care of that if my upstream (ISP) or a client device are not IPv4x-capable.
I have my small office network, which is on a mix of IPv4 and IPv4x addresses. Most Windows/Linux machines are on IPv4x, but that old network printer and security controller still have IPv4 addresses (with the router translating responses). It all still works together. There is only one firewall rule set, there is only one monitoring tool, etc. My ACL list on the NAS server has a mix of IPv4 and IPv4x in the same list...
So this is a very stark contrast to the IPv6 mess, where you have to bring up a whole parallel network, set up a second router config, set up a separate firewall rule set, make a second parallel set of addresses, basically set up a whole separate network, just to be able to bring up a single IPv6 device.
(Funny enough, I bet one _could_ accelerate IPv6 deployment a lot by having a standard that _requires_ 6to4/4to6/NAT64 technology in each IPv6 network... but instead the IPv6 supporters went with an all-or-nothing approach)
With IPv6 the router needs to send out RAs. That's it. There's no need to do anything else with IPv6. "Automatic configuration of hosts and routers" was a requirement for IPng:
* https://datatracker.ietf.org/doc/html/rfc1726#section-5.8
When I was with my last ISP I turned on IPv6 on my Asus router, it got an IPv6 WAN connection and a prefix delegation from my ISP, and my devices (including my Brother printer) started getting IPv6 addresses. The Asus had a default-deny firewall, so all incoming IPv6 connections were blocked. I had to do zero configuration on any of the devices (laptops, phones, IoT, etc).
> I upgrade my OS, and suddenly I can use IPv4x addresses... but I don't have to - all my configs are still valid, and if my router is not compatible, all devices still fall back to IPv4-compatible short addresses, but are using IPv4x stack.
So if you cannot connect via >32b addresses you fall back to 32b addresses?
* https://en.wikipedia.org/wiki/Happy_Eyeballs
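The core of RFC 8305 is exactly that fallback: resolve both families, interleave the candidate addresses, then race staggered connection attempts. A toy sketch of the interleaving step in Python (the helper name is mine; real stacks, and asyncio's `happy_eyeballs_delay` parameter, do this for you):

```python
import itertools

def interleave_by_family(addrs):
    # RFC 8305 section 4: alternate address families in the candidate
    # list, so a broken family only costs one attempt's worth of delay
    # instead of a full timeout per address.
    v6 = [a for a in addrs if ":" in a]
    v4 = [a for a in addrs if ":" not in a]
    first, second = (v6, v4) if addrs and ":" in addrs[0] else (v4, v6)
    out = []
    for a, b in itertools.zip_longest(first, second):
        out.extend(x for x in (a, b) if x is not None)
    return out

print(interleave_by_family(["2001:db8::1", "2001:db8::2", "192.0.2.1"]))
# ['2001:db8::1', '192.0.2.1', '2001:db8::2']
```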
> I upgrade the home router and suddenly some devices get IPv4x address... but it is all transparent to me - my router's NAT takes care of that if my upstream (ISP) or a client device are not IPv4x-capable.
* https://en.wikipedia.org/wiki/IPv6_rapid_deployment
A French ISP deployed this across their network of four million subscribers in five months (2007-11 to 2008-03).
> There is only one firewall rule set, there is only one monitoring tool, etc. My ACL list on the NAS server has a mix of IPv4 and IPv4x in the same list...
If an (e.g.) public web server has public address (say) 2.3.4.5 to support legacy IPv4-only devices, but also has 2.3.4.5.6.7.8.9 to support IPv4x devices, how can you have only one firewall rule set?
> So this is a very stark contrast to the IPv6 mess, where you have to bring up a whole parallel network, set up a second router config, set up a separate firewall rule set, make a second parallel set of addresses, basically set up a whole separate network, just to be able to bring up a single IPv6 device.
Having 10.11.12.13 on your PC as well as 10.11.12.13.14.15.16 as per IPv4x is "a second parallel set of addresses".
It is running a whole separate network, because your system has the addresses 10.11.12.13 and 10.11.12.13.14.15.16. You are running dual-stack because you support connections from 32-bit-only, un-updated legacy devices and >32b updated devices. This is no different than having 10.11.12.13 and 2001:db8:dead:beef:10:11:12:13.
Contrast this with IPv6, which is a completely new system and thus has a chicken-and-egg problem.
At some point IPv4 addresses will cost too much.
Today it seems most ISPs support it but have it behind an off by default toggle.
The reason there's an IPv4 address shortage is because ISPs assign every user a unique IPv4 address. In this alternative timeline, ISPs would have to give users less-than-an-IPv4 address, which probably means a single IPv4x address if we're being realistic and assuming that ISPs are taking the path of least resistance.
Something that just uses IPv4 won’t work without making the extra layer visible. That may not have been apparent then but it is now.
So the folks that just happen to get in early on the IPv4 address land rush (US, Western world) now also get to grab all this new address space?
What about any new players? This particular aspect idea seems to reward incumbents. Unlike IPv6, where new players (and countries and continents) that weren't online early get a chance to get equal footing in the expanded address space.
From where?
All then-existing IPv4 addresses would get all the bits behind them. There would, at the time, still be IPv4 addresses available that could be given out, and as people got them they would also get the extended "IPv4x" addresses associated with them.
But at some point IPv4 addresses would all be allocated… along with all the extended addresses 'behind' them.
Then what?
The extended IPv4x addresses are attached to the legacy IPv4 addresses they are 'prefixed' by, so once the legacy bits are assigned, so are the new bits. If someone comes along post-legacy-IPv4 exhaustion, where do new addresses come from?
You're in the exact same situation as we are now: legacy code is stuck with 32-bit-only addresses, new code is >32-bits… just like with IPv6. Great you managed to purchase/rent a legacy address range… but you still need a translation box for non-updated code… like with CG-NAT and IPv6.
This IPv4x thing is bullshit but we should be accurate about how it would play out.
But right now you can get an IPv4 /24 (as you say), but you can get an IPv6 allocation 'for free' as we speak.
In both cases legacy code cannot use the new address space; you have to:
* update the IP stack (like with IPv6)
* tell applications about new DNS records (like IPv6)
* set up translation layers for legacy-only code to reach extended-only destination (like IPv6 with DNS64/NAT64, CLAT, etc)
You're updating the exact same code paths in both the IPv4x and IPv6 scenarios: dual-stack, DNS, socket address structures, dealing with legacy-only code that is never touched to deal with the larger address space.
And at the same time the address format and IP header is extended, effectively still splitting one network into two (one of which is a superset of the other)?
A fundamentally breaking change remains a breaking change, whether you have the guts to bump your version number or not.
That's cool and all, but end-user edge routers are absolutely going to have to be updated to handle "IPv4x". Why? Because the entire point of IPvNext is to address address space exhaustion, their ISP will stop giving them IPv4 addresses.
This means that the ISP is also going to have to update significant parts of their systems to handle "IPv4x" packets, because they're going to have to handle customer site address management. The only thing that doesn't have to change is the easiest part of the system to get changed... the core routers and associated infrastructure.
No. The router in your home would need to support IPv4x, or you would get no Internet connection. Why? Because IPv4x extends the address space "under" each IPv4 address -thus- competing with it for space. ISPs in areas with serious address pressure sure as fuck aren't going to be giving you IPv4 addresses anymore.
As I mentioned, similarly, ISPs will need to update their systems to handle IPv4x, because they are -at minimum- going to be doing IPv4x address management for their customers. They're probably going to -themselves- be working from IPv4x allocations. Maybe each ISP gets knocked down from several v4 /16s or maybe a couple of /20s to a handful of v4 /32s to carve up for v4x customer sites.
Your scheme has the adoption problems of IPv6, but even worse because it relies on reclaiming and repurposing IPv4 address space that's currently in use.
Is that really the easy bit to change? ISPs spend years trialling new hardware and software in their core. You go through numerous cheapo home routers over the lifetime of one of their chassis. You'll use whatever no-name box they send you, and you'll accept their regular OTA updates too, else you're on your own.
When you're adding support for a new Internet address protocol that's widely agreed to be the new one, it absolutely is. Compared to what end-users get, ISPs buy very high quality gear. The rate of gear change may be lower than at end-user sites but because they're paying far, far more for the equipment, it's very likely to have support for the new addressing protocol.
Consumer gear is often cheap-as-possible garbage that has had as little effort put into it as possible. [0] I know that long after 2012, you could find consumer-grade networking equipment that did not support (or actively broke) IPv6. [1] And how often do we hear complaints of "my ISP-provided router is just unreliable trash, I hate it", or stories of people saving lots of money by refusing to rent their edge router from their ISP? The equipment ISPs give you can also be bottom-of-the-barrel crap that folks actively avoid using. [2]
So, yeah, the stuff at the very edge is often bottom-of-the-barrel trash and is often infrequently updated. That's why it's harder to update the equipment at edge than the equipment in the core. It is way more expensive to update the core stuff, but it's always getting updated, and you're paying enough to get much better quality than the stuff at the edge.
[0] OpenWRT is so, so popular for a reason, after all.
[1] This was true even for "prosumer" gear. I know that even in the mid 2010s, Ubiquiti's UniFi APs broke IPv6 for attached clients if you were using VLANs. So, yeah, not even SOHO gear is expensive enough to ensure that this stuff gets done right.
[2] You do have something of a point in the implied claim that ISPs will update their customer rental hardware with IPv6 support once they start providing IPv6 to their customer. But. Way back when I was so foolish as to rent my cable modem, I learned that I'd been getting a small fraction of the speed available to me for years because my cable modem was significantly out of date. It required a lucky realization during a support call to get that update done. So, equipment upgrades sometimes totally fall through the cracks even with major ISPs.
I entirely disagree, due to a combination of ISPs sticking with what they know and refusing to update (because of the huge time/cost of validating it), and vendors minimising their workload/risk exposure and only updating what they "have to". The vendors have a lot of power here, and these big new protocols are just more work.
In addition, smaller ISPs have virtually no say in what software/features they get. They can ask all they want, they have little power. It takes a big customer to move the needle and get new features into these expensive boxes. It really only happens when there's another vendor offering something new, and therefore a business requirement to maintain feature parity else lose big-customer revenue. So yeh, if a new protocol magically becomes standard, only then would anyone bother implementing and supporting it.
I think it's much easier to update consumer edge equipment. The ISP dictates all aspects of this relationship, the boxes are cheap, and just plug and play. They're relatively simple and easy to validate for 99% of usecases. If your internet stops working (because you didn't get the new hw/sw), they ship you a replacement, 2 days later it's fixed.
But I will just say, and slightly off topic of this thread, the lack of multiple extension headers in this proposed protocol instantly makes it more attractive to implement compared to v6.
You misunderstand me, though the misunderstanding is quite understandable given how I phrased some of the things.
I expect the updating usually occurs when buying new kit, rather than on kit that's deployed... and that that purchasing happens regularly, but infrequently. I'm a very, very big proponent of "If it's working fine, don't update its software load unless it fixes a security issue that's actually a concern.". New software often brings new trouble, and that's why cautious folks do extensive validation of new software.
My commentary presupposed that
[Y]ou're adding support for a new Internet address protocol that's widely agreed to be *the* new one
which I'd say counts as something that a vendor "has to" implement.

> I think it's much easier to update consumer edge equipment. The ISP dictates all aspects of this relationship...
I expect enough people don't use the ISP-rented equipment that it's -in aggregate- actually not much easier to update edge equipment. That's what I was trying to get at with talking about "ISP-provided routers & etc are crap and not worth the expense".
You must have had much better experiences with firmware update policies for embedded consumer devices than me.
Sure. On the other hand, companies going "Is this a security problem that's going to cost us lots of money if we don't fix it? No? Why the fuck should I spend money fixing it for free, then? It can be a headline feature in the new model." means that -in practice- they aren't so easily updated.
If everyone in the consumer space made OpenWRT-compatible routers, switches, and APs, then that problem would be solved. But -for some reason- they do not and we still get shit like [0].
All endpoints need to upgrade to IPv4x before anyone can reasonably use it. If I have servers on IPv4x, clients can reach my network fine, but they then can't reach individual servers. Clients need to know IPv4x to reach IPv4x servers.
Similarly, IPv4x clients talking to IPv4 servers do what? Send an IPv4x packet with the remaining IPv4x address bits zeroed out? Nope, a v4 server won't understand it. So they're sending an IPv4 packet, and the response gets back to your network but doesn't know how to get the last mile back to the IPv4x client?
I desperately wish there was a way to have "one stack to rule them all", whether that is IPv4x or IPv4 mapped into a portion of IPv6. But there doesn't seem to be an actually workable solution to it.
I see many ISPs deploying IPv6 but still following the same design principles they used for IPv4. In reality, IPv6 should be treated as a new protocol with different capabilities and assumptions.
For example, dynamic IP addresses are common with IPv4, but with IPv6 every user should ideally receive a stable /64 prefix, with the ability to request additional prefixes through prefix delegation (PD) if needed.
Another example is bring-your-own IP space. This is practically impossible for normal users with IPv4, but IPv6 makes it much more feasible. However, almost no ISPs offer this. It would be great if ISPs allowed technically inclined users to announce their own address space and move it with them when switching providers.
I personally feel that IPv6 is one of the clearest cases of second system syndrome. What we needed was more address bits. What we got was a nearly total redesign-by-committee with many elegant features but difficult backwards compatibility.
IPv6 gets a lot of hate for all the bells and whistles, but on closer examination, the only one that really matters is always “it’s a second network and needs me to touch all my hosts and networking stack”.
Don’t like SLAAC? Don’t use it! Want to keep using DHCP instead? Use DHCPv6! Love manual address configuration? Go right ahead! It even makes the addresses much shorter. None of that stuff is essential to IPv6.
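As a concrete illustration, opting out of SLAAC on a network is only a couple of lines in a router advertisement daemon's config. A radvd sketch (the interface name and prefix are placeholders):

```
interface eth0 {
    AdvSendAdvert on;
    AdvManagedFlag on;       # M flag: clients get addresses via DHCPv6
    AdvOtherConfigFlag on;   # O flag: clients get DNS etc. via DHCPv6
    prefix 2001:db8:1::/64 {
        AdvAutonomous off;   # no SLAAC addresses from this prefix
    };
};
```

With AdvAutonomous off and the M flag set, hosts that honor the RA flags will use DHCPv6 for addressing rather than self-assigning from the prefix.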
In fact, in my view TFA makes a very poor case for a counterfactual IPv4+ world. The only thing it really simplifies is address space assignment.
Simplifying address space assignment is a huge deal. IPv4+ allows the leaves of the network to adopt IPv4+ when it makes sense for them. They don't lose any investment in IPv4 address space, they don't have to upgrade to all IPv6 supporting hardware, there's no parallel configuration. You just support IPv4 on the terminals that want or need it, and on the network hardware when you upgrade. It's basically better NAT that eventually disappears and just becomes "routing".
What investment? IP addresses used to be free until we started running out, and I don't think anything of value would be lost for humanity as a whole if they became non-scarce again.
> they don't have to upgrade to all IPv6 supporting hardware
But they do, unless you're fine with maintaining an implicitly hierarchical network (or really two) forever.
> It's basically better NAT
How is it better? It also still requires NAT for every 4x host trying to reach a 4 only one, so it's exactly NAT.
> that eventually disappears
Driven by what mechanism?
> What investment? IP addresses used to be free
Well they're not now, so it's an investment. Any entity that has IP addresses doesn't want its competition to get IP addresses, even when this leads to bad outcomes overall.
It doesn't work like this. SLAAC is a standard compliant way of distributing addresses, so you MUST support it unless you're running a very specific isolated setup.
Most people using Android will come to your home and ask "do you have WiFi here?"
The Android implementation of IPv6 completely boggles my mind. They have completely refused to implement DHCPv6 since 2012:
* https://issuetracker.google.com/issues/36949085
But months after client-side DHCP-PD was made an RFC they're implementing that?
* https://android-developers.googleblog.com/2025/09/simplifyin...
In what universe does implementing DHCP-PD but not 'regular' DHCPv6 make any kind of sense?
Their policy makes a lot of sense. It's hindering ipv6 deployment, but it is preventing ISPs from allocating less than /64 to customers. It has nothing to do with standards actually.
Dhcp-pd makes a lot of sense though, because if an isp is willing to give you a prefix, they are by default nice guys.
Why should my Pixel 10 send out DHCP-PD packets when it connects to Wifi, but not DHCPv6?
Which is fine, I guess, but still doesn’t explain their refusal to implement regular DHCPv6 for so long.
The removal of arp and removal of broadcast, the enforcement of multicast
The almost-required removal of NAT and the quasi-religious dislike from many network people. Instead of simply src-natting your traffic behind ISP1 or ISP2, you are supposed to have multiple public IPs and somehow make your end devices choose the best routing rather than your router.
All of these were choices made in addition to simply expanding the address scope.
Only use the real one then (unless you happen to be implementing ND or something)!
> The removal of arp and removal of broadcast, the enforcement of multicast
ARP was effectively only replaced by ND, no? Maybe there are many disadvantages I'm not familiar with, but is there a fundamental problem with it?
> The almost-required removal of NAT
Don't like that part? Don't use it, and do use NAT66. It works great, I use it sometimes!
(i.e. anything other than the decision to make a breaking change to address formats and accordingly require adapters)
I (and I expect the fellow you're replying to) believe that if you're going to have to rework ARP to support 128-bit addresses, you might as well come up with a new protocol that fixes things you think are bad about ARP.

And if the fellow you're replying to doesn't know that broadcast is another name for "all-hosts multicast", then he needs to read a bit more.
[0] Several purity-minded fools wanted to pretend that IPv6 NAT wasn't going to exist. That doesn't mean that IPv6 doesn't support NAT... NAT is and has always been a function of the packet mangling done by a router that sits between you and your conversation partner.
There was only 1 mistake, but it was huge and all backwards compatibility problems come from it. The IPv4 32-bit address space should have been included in the IPv6 address space, instead of having 2 separate address spaces.
IPv6 added very few features, but it mostly removed or simplified the IPv4 features that were useless.
Like
> Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. A previous format, called "IPv4-compatible IPv6 address", was ::192.0.2.128; however, this method is deprecated.[5]
* https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresse...
?
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
The entire IPv4 address space is included in the IPv6 address space, in fact it's included multiple times depending on what you want to do with it. There's one copy for representing IPv4 addresses in a dual-stack implementation, another copy for NAT64, a different copy for a different tunneling mechanism, etc.
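Those embeddings are easy to see with Python's ipaddress module (the address is the documentation example; the NAT64 line uses the 64:ff9b::/96 well-known prefix from RFC 6052):

```python
import ipaddress

v4 = ipaddress.IPv4Address("192.0.2.4")

# Dual-stack sockets: IPv4-mapped addresses in ::ffff:0:0/96
mapped = ipaddress.IPv6Address("::ffff:" + str(v4))
print(mapped.ipv4_mapped)  # 192.0.2.4

# 6to4 tunneling: 2002::/16 plus the IPv4 address
sixto4 = ipaddress.IPv6Network(((0x2002 << 112) | (int(v4) << 80), 48))
print(sixto4)  # 2002:c000:204::/48

# NAT64 well-known prefix: 64:ff9b::/96 plus the IPv4 address
nat64 = ipaddress.IPv6Address((0x64ff9b << 96) | int(v4))
print(nat64)  # 64:ff9b::c000:204
```

Same 32 bits, three different corners of the IPv6 space, each with its own translation semantics.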
IPv6 added IPSEC which was backported to IPv4.
IPv6 tried to add easy renumbering, which didn't work and had to be discarded.
IPv6 added scoped addresses which are halfbaked and limited. Site-scoped addresses never worked and were discarded; link-scoped addresses are mostly used for autoconfiguration.
IPv6 added new autoconfiguration protocols instead of reusing bootp/DHCP.
That's ... exactly how IPv6 works?
Look at the default prefix table at https://en.wikipedia.org/wiki/IPv6_address#Default_address_s... .
Or did you mean something else? You still need a dual stack configuration though, there's nothing getting around that when you change the address space. Hence "happy eyeballs" and all that.
Yes there is, at least outside of the machine. All you need to do is have an internal network (100.64/16, 169.254/16, wherever) local to the machine. If your machine is on say 2001::1, then when an application attempts to listen on an IPv4 address it opens a socket listening on 2001::1 instead, and when an application writes a packet to 1.0.0.1, your OS translates it to ::ffff:100:1. This can be even more hidden than things like internal Docker networks.
Your network then has a route to ::ffff:0:0/96 via a gateway (typically just the default router), with a source of 2001::1
When the packet arrives at a router with v6 and v4 on (assume your v4 address is 2.2.2.2), that does a 6:4 translation, just like a router does v4:v4 nat
The packet then runs over the v4 network until it reaches 1.0.0.1 with a source of 2.2.2.2, and a response is sent back to 2.2.2.2, where it is de-natted to a destination of 2001::1 and a source of ::ffff:100:1
That way you don't need to change any application unless you want to reach IPv6-only devices, you don't need to run separate IPv4 and IPv6 stacks on your routers, and you can migrate easily, with no more overhead than a typical 4:4 NAT for RFC 1918 devices.
Likewise you can serve on your ipv6 only devices by listening on 2001::1 port 80, and having a nat which port forwards traffic coming to 2.2.2.2:80 to 2001::1 port 80 with a source of ::ffff:(whatever)
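A minimal sketch of the two address-mapping steps in this scheme, assuming the host-side OS shim and a 6-to-4 translator at the router (function names are mine, not a real API):

```python
import ipaddress

def app_dest_to_wire(dst_v4: str) -> ipaddress.IPv6Address:
    # Host-side shim: the app's IPv4 destination becomes an
    # IPv4-mapped IPv6 destination, caught by the ::ffff:0:0/96 route.
    return ipaddress.IPv6Address("::ffff:" + dst_v4)

def translator_extract_v4(dst_v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    # Router-side step: recover the embedded IPv4 destination before
    # rewriting the packet onto the v4 network (like a v4:v4 NAT,
    # but changing families).
    v4 = dst_v6.ipv4_mapped
    if v4 is None:
        raise ValueError("not an IPv4-mapped address")
    return v4

wire = app_dest_to_wire("1.0.0.1")
print(translator_extract_v4(wire))  # 1.0.0.1
```

The mapping is stateless and reversible, which is what lets the router un-NAT the response on the way back.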
(using colons as a deliminator wasn't great either, you end up with http://[2001::1]:80/ which is horrible)
That is horrible, but you no longer have any possibility of confusion between an IP address and a hostname/domain-name/whatever-it's-called. So, yeah, benefits and detriments.
> Your network then has a route to ::ffff:0:0/96 via a gateway...
I keep forgetting about IPv4-mapped addresses. Thanks for reminding me of them with this writeup. I should really get around to playing with them some day soon.
Could have used 2001~1001~~1 instead of 2001:1001::1, which looks weird today, but wouldn't have done if that had been chosen all those years ago.
(unless : as an ipv6 separator predates its use as a separator for tcp/udp ports, in which case tcp/udp should have used ~. Other symbols are available)
> If your machine is on say 2001::1, then when an application attempts to listen on an IPv4 address it opens a socket listening on 2001::1 instead, and when an application writes a packet to 1.0.0.1, your OS translates it to ::ffff:100:1. ...
> Your network then has a route to ::ffff:0:0/96 via a gateway (typically just the default router), with a source of 2001::1
What's the name of this translation mechanism that you're talking about? It seems to be the important part of the system.
I ask because when I visit [0] in Firefox on a Linux system with both globally-routable IPv6 and locally-routable IPv4 addresses configured, I see a TCP conversation with the remote IPv4 address 192.168.2.2. When I remove the IPv4 address (and the IPv4 default route) from the local host, I get immediate failures... neither v4 nor v6 traffic is made.
When I add the route it looks like you suggested I add
ip route add ::ffff:0:0/96 dev eth0 via <$DEFAULT_IPV6_GATEWAY_IP>
I see the route in my routing table, but I get exactly the same results... no IPv4 or IPv6 traffic. Based on my testing, it looks like this is only a way to represent IPv4 addresses as IPv6 addresses, as ::ffff:192.168.2.2 gets translated into ::ffff:c0a8:202, but the OS uses that to create IPv4 traffic. If your system doesn't have an IPv4 address configured on it, then this doesn't seem to help you at all. What am I missing?
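The mapping itself is easy to see with Python's stdlib `ipaddress` module: ::ffff:0:0/96 is a purely syntactic embedding, with the v4 address sitting in the low 32 bits.

```python
import ipaddress

# ::ffff:0:0/96 just embeds a 32-bit v4 address in the low bits.
mapped = ipaddress.IPv6Address("::ffff:192.168.2.2")
print(mapped.ipv4_mapped)  # 192.168.2.2
print(mapped.compressed)   # ::ffff:c0a8:202
```

When you hand such an address to an AF_INET6 socket, the kernel emits ordinary IPv4 traffic, which is consistent with what you observed: without a configured v4 address, nothing goes out.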
You make nat64 part of the typical router.
> I ask because when I visit [0] in Firefox on a Linux system with both globally-routable IPv6 and locally-routable IPv4 addresses configured, I see a TCP conversation with the remote IPv4 address 192.168.2.2. When I remove the IPv4 address (and the IPv4 default route) from the local host, I get immediate failures... neither v4 nor v6 traffic is made.
Yes, that's the failure of ipv6 deployment.
Imagine you have two vlans, one ipv4 only, one ipv6 only. There's a router sitting across both vlans.
VLAN1 - ipv6 only
Router 2001::1
Device A 2001::1234
VLAN2 - ipv4 only
Router 192.168.1.1
Device B 192.168.1.2
Device A pings 192.168.1.2, the OS converts that transparently to ::ffff:192.168.1.2, it sends it to its default router 2001::1
That router does a 6>4 translation, converting the destination to 192.168.1.2 and the source to 192.168.1.1 (or however it's configured)
It maintains the protocol/port/address in its state as any ipv4 natting router would do, and the response is "unnatted" as an "established connection" (with connection also applying for icmp/udp as v4 nat does today)
An application on Device A has no need to be ipv6 aware. The A record in DNS which resolves to 192.168.1.2 is reachable from device A despite it not having a V4 address. The hard coded IP database in it works fine.
Now if Device B wants to reach Device A, it uses traditional port forwarding on the router, where 192.168.1.1:80 is forwarded to [2001::1234]:80, with source of ::ffff:192.168.1.2
With this in place, there is no need to update any applications, and certainly no need for dual stack.
The missing bits are the lack of common NAT64/NAT46 -- I don't believe it's built into the normal Linux network chain the way v4 NAT is -- and the lack of transparent upgrading of v4 handling at the OS level.
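In userspace terms, the state machine being described above is tiny. Here is a toy sketch (illustrative only, with the addresses from the scenario above; this is not the real kernel fast path):

```python
import ipaddress
import itertools

# Toy stateful NAT64 sketch. v6 hosts address v4 targets as ::ffff:a.b.c.d,
# and the router rewrites them to its own v4 address (192.168.1.1 here).
PUBLIC_V4 = ipaddress.IPv4Address("192.168.1.1")
_ports = itertools.count(40000)
_state = {}    # (v6 src, sport, v4 dst, dport) -> public source port
_reverse = {}  # (public source port, v4 dst, dport) -> (v6 src, sport)

def outbound(v6_src, sport, v6_dst, dport):
    """6-to-4: strip the mapped prefix, NAT the source, remember the flow."""
    v4_dst = ipaddress.IPv6Address(v6_dst).ipv4_mapped
    key = (v6_src, sport, v4_dst, dport)
    if key not in _state:
        _state[key] = next(_ports)
        _reverse[(_state[key], v4_dst, dport)] = (v6_src, sport)
    return (PUBLIC_V4, _state[key], v4_dst, dport)

def inbound(v4_src, sport, dport):
    """4-to-6: un-NAT an 'established' reply back onto the v6 side."""
    v6_dst, orig_sport = _reverse[(dport, ipaddress.IPv4Address(v4_src), sport)]
    return (ipaddress.IPv6Address("::ffff:" + v4_src), sport, v6_dst, orig_sport)
```

So Device A at 2001::1234 pinging 192.168.1.2 becomes an `outbound()` rewrite on the router, and the reply is matched by `inbound()` exactly the way conntrack matches v4 NAT flows today.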
If so, that was not at all clear from your original comment.
[0] <https://docs.fortinet.com/document/fortigate/7.6.1/administr...>
[1] <https://docs.fortinet.com/document/fortigate/7.6.1/administr...>
(Note that using the servers provided by nat64.net is equivalent to using an open proxy, so you probably don't want it for general-purpose use. You would probably want either your ISP to run the NAT64 (equivalent to CGNAT), or to run it on your own router (equivalent to NAT).)
The standard prefix for NAT64 is 64:ff9b::/96, although you can pick any unused prefix for it. ::ffff:0:0/96 is the prefix for a completely different compatibility mechanism that's specifically just for allowing an application to talk to the kernel's v4 stack over AF_INET6 sockets (as you figured out). It was a confusing choice of prefix to use to describe NAT64.
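The RFC 6052 embedding used by NAT64 is just bit-packing the v4 address into the low 32 bits of the prefix, for example:

```python
import ipaddress

# NAT64 well-known prefix (RFC 6052): v4 address in the low 32 bits.
def nat64_embed(v4, prefix="64:ff9b::"):
    base = int(ipaddress.IPv6Address(prefix))
    return ipaddress.IPv6Address(base | int(ipaddress.IPv4Address(v4)))

print(nat64_embed("192.0.2.4"))  # 64:ff9b::c000:204
```

This is what a DNS64 resolver synthesizes AAAA records from when a name only has an A record.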
Note though that I'm not proposing IPv4x as something we should work towards now. Indeed, I come down on the side of being happy that we're in the IPv6 world instead of this alternative history.
Which has been discussed previously: https://hn.algolia.com/?q=The+IPv6+mess
Apparently the practical problems were related to people haplessly firewalling it out (ref. https://labs.ripe.net/author/emileaben/6to4-why-is-it-so-bad...)
Huh:
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
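That construction is mechanical enough to express in a few lines (Python here, just to illustrate the bit layout):

```python
import ipaddress

# 6to4 (RFC 3056): append the 32-bit v4 address to 2002::/16, yielding a /48.
def six_to_four_prefix(v4):
    # 128 - 16 (2002:: prefix) - 32 (v4 address) = shift of 80 bits.
    net = int(ipaddress.IPv6Address("2002::")) | (int(ipaddress.IPv4Address(v4)) << 80)
    return ipaddress.IPv6Network((net, 48))

print(six_to_four_prefix("192.0.2.4"))  # 2002:c000:204::/48 (= 2002:c000:0204::/48)
```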
And NAT needed zero software changes. That's why it won: it delivered the benefits of any extension protocol using IPv4's existing mechanisms.
IPv6 isn't an alternative to IPv4, it's an alternative to all IPv4xes.
Are you sure about that? Until a few years ago my residential ISP was IPv4 only. I definitely couldn't connect to an IPv6-only service back then.
Motivation for retiring IPv4 completely would NOT be to make the world a better, more route-able place. It would be to deliberately obsolete old products in order to sell new ones.
The most likely alternative would have been 64-bit. That's big enough that could have worked for a long time.
- NAT gateways are inherently stateful (per connection) and IP networks are stateless (per host, disregarding routing information). So even if you only look at the individual connection level, disregarding the host/connection layering violation, the analogy breaks.
- NAT gateways don't actually route/translate by (IP, port) as you imply, but rather by (source IP, source port, destination IP, destination port), as otherwise there simply would not be enough ports in many cases.
Until you have a stateful firewall, which any modern end network is going to have
> NAT gateways don't actually route/translate by (IP, port) as you imply, but rather by (source IP, source port, destination IP, destination port), as otherwise there simply would not be enough ports in many cases.
If 192.168.0.1 and 0.2 both hide behind 2.2.2.2 and talk to 1.1.1.1:80 then they'll get private source IPs and source ports hidden behind different public source ports.
Unless your application requires the source port to not be changed, or indeed embeds the IP address in higher layers (active mode ftp, sip, generally things that have terrible security implications), it's not really a problem until you get to 50k concurrent connections per public ipv4 address.
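A minimal illustration of that 4-tuple keying (toy code, with sequential port allocation for determinism):

```python
import itertools

# Toy NAPT keyed on the full 4-tuple: both private hosts can reuse the same
# source port toward the same destination behind one public address (2.2.2.2).
ports = itertools.count(1024)
table = {}

def translate(src, sport, dst, dport):
    key = (src, sport, dst, dport)
    if key not in table:
        table[key] = next(ports)
    return ("2.2.2.2", table[key], dst, dport)

a = translate("192.168.0.1", 5000, "1.1.1.1", 80)
b = translate("192.168.0.2", 5000, "1.1.1.1", 80)
print(a[1] != b[1])  # True: same destination, distinct public source ports
```

Because the key includes the destination, the ~64k source ports are a per-destination limit, not a global one, which is why a single public address stretches as far as it does.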
In practice NAT isn't a problem. Most people complaining about NAT are actually complaining about stateful firewalls.
Yes, but it's importantly still a choice. Also, a firewall I administer, I can control. One imposed onto me by my ISP I can’t.
> not really a problem until you get to 50k concurrent connections per public ipv4 address.
So it is in fact a big problem for CG-NATs.
> In practice NAT isn't a problem. Most people complaining about NAT are actually complaining about stateful firewalls.
No, I know what I'm complaining about. Stateful firewall traversal via hole punching is trivial on v6 without port translation, but completely implementation dependent on v4 with NAT in the mix, to just name one thing. (Explicit "TCP hole punching" would also be trivial to specify; it's a real shame we haven't already, since it would take about a decade or two for mediocre CPE firewalls to get the memo anyway.)
Having global addressing is also just useful in and of itself, even without global reachability.
Only if they're under-provisioned. If my home really needed tens of thousands, I'd provision another ipv4 address, but it doesn't -- at the moment I have a mere 121 active connections in my firewall.
The cost of a firewall is far more than the cost of an ipv4 address, which are available for about $20 each.
> Having global addressing is also just useful in and of itself, even without global reachability.
Except that doesn't happen, as most locations will not be BGP peering and advertising their own /48 (routing tables would melt)
Instead if you change your ISP, you change your IP address. Unless you use private ips in the fc00:: range, which is no different to using rfc1918 addresses for the vast majority of users
There were never 64-78 unconstrained bits in the IPv4 header with which to extend IPv4 in place, even if you accepted the CGNAT-like compromise of routing through IPv4 "super-routers" on the way to 128-bit addresses. Extending the address size was always going to need a version change.
I've rarely seen it used in practice, but it's in theory doable.
A major website sees over 46 percent of its traffic over ipv6. A major mobile operator has a network that runs entirely over ipv6.
This is not “waiting for adoption” so I stopped reading there.
https://www.google.com/intl/en/ipv6/statistics.html
https://www.internetsociety.org/deploy360/2014/case-study-t-...
There are plenty of other places doing the same thing, but these examples alone should be sufficient to disprove "no-one is willing to turn v4 off".
To be less glib: IPv6 is well-adopted. It's not universally adopted.
Thanks though. Your comment really cheered me up.
Too sneaky, apparently. I suggest putting something at the top mentioning it ... then even folks with very short attention spans will see it.
Does the "advice" boil down to "You should NEVER use ULAs and ALWAYS use GUAs!" and is given by the same very, very loud subset of people who seemed to feel very strongly that IPv6 implementations should explicitly make it impossible to do NAT?
The router in a coffee shop gives you an ULA, and NATs everything to a single globally routable public ipv6 address.
Things that are fucked up can also be simple, understandable, and straightforward.
Unless you're claiming that DHCPv6 is not simple, understandable, and straightforward... in which case:
DHCPv4 is "Give me an IP address, please.". DHCPv6 is "Give me an IP address, please. And also give me what I need for all of my directly-connected friends to have one, too, if you don't mind.".
Because of Google's continued (deliberate?) misunderstanding of what DHCPv6 is for, Android clients don't do anything sane with it. That doesn't mean that DHCPv6 isn't simple.
Again, DHCPv6 is "Please give me an IP address, and maybe also what my directly-attached friends need to get IP addresses.". Simple, straightforward, and easy to understand. Even if it were relevant, Google's chronic rectocranial insertion doesn't change that.
To answer your question: Who knows? Perhaps you have a shitlord ISP that only provides you with a /128 (such as that one "cloud provider" whose name escapes me). [0] It's a nice tool to have in your toolbox, should you find that you need to use it.
[0] Yes, I'm aware that a "cloud provider" is not strictly an ISP. They are providing your VMs with access to the Internet, so I think the definition fits with only a little stretching-induced damage.
It's conceivable that OSes could support some sort of traffic steering mechanism where the network distributes policy in some sort of dynamic way? But that also sounds fragile and you (i.e. the network operator) still have to cope with the long tail of devices that will never support such a mechanism.
I don't think that's true. I haven't had reason to do edge router failover, but I am familiar with the concepts and also with anycast/multihoming... so do make sure to cross-check what I'm saying here with known-good information.
My claim is that the scenario you describe is superior in the non-NATted IPv6 world to that of the NATted IPv4 world. Let's consider the scenario you describe in the IPv4-only world. Assume you're providing a typical "one global IP shared with a number of LAN hosts via IPv4 NAT". When one uplink dies, the following happens:
* You fail over to your backup link
* This changes your external IP address
* Because you're doing NAT, and DHCP generally has no way to talk back to hosts after the initial negotiation you have no way to alert hosts of the change in external IP address
* Depending on your NAT box configuration, existing client connections either die a slow and painful death, or -ideally- they get abruptly RESET and the hosts reestablish them
Now consider the situation with IPv6. When one uplink dies:
* You fail over to your backup link
* This changes your external prefix
* Your router announces the prefix change by announcing the new prefix and also that the now-dead one's valid lifetime is 0 seconds [0]
* Hosts react to the change by reconfiguring via SLAAC and/or DHCPv6, depending on the settings in the RA
* Existing client connections are still dead, [1] but the host gets to know that their global IP address has changed and has a chance to take action, rather than being entirely unaware
Assuming that I haven't screwed up any of the details, I think that's what happens. Of course, if you have provider-independent addresses [2] assigned to your site, then maybe none of that matters and you "just" fail over without much trouble?
[0] I think this is known as "deprecating" the prefix
[1] I think whether they die slow or fast depends on how the router is configured
[2] ...whether IPv4 or IPv6...
This is the linchpin of the workflow you've outlined. Anecdotal experience in this area suggests it's not broadly effective enough in practice, not least because of this:
> * Existing client connections are still dead, [1] but the host gets to know that their global IP address has changed and has a chance to take action, rather than being entirely unaware
The old IP addresses (afaiu/ime) will not be removed before any dependent connections are removed. In other words, the application (not the host/OS) is driving just as much as the OS is. Imo, this is one of the core problems with the scenario, that the OS APIs for this stuff just aren't descriptive enough to describe the network reconfiguration event. Because of that, things will ~always be leaky.
> [1] I think whether they die slow or fast depends on how the router is configured
Yeah, and that configuration will presumably be sensitive to what caused the failover. This could manifest differently based on whether upstream A simply has some bad packet loss or whether it went down altogether (e.g. a physical fault).
In any case, this vision of the world misses on at least two things, in my view:
1. Administrative load balancing (e.g. lightly utilizing upstream B even when upstream A is still up)
2. The long tail of devices that don't respond well to the flow you outlined. It's not enough to think of well-behaved servers that one has total control over; need to think also of random devices with network stacks of...varying quality (e.g. IOT devices)
I have two reactions to this.
1) Duh? I'm discussing a failover situation where your router has unexpectedly lost its connection to the outside world. You'd hope that your existing connections would fail quickly. The existence of the deprecated IP shouldn't be relevant, because the OS isn't supposed to use it for any new connections.
2) If you're suggesting that network-management infrastructure running on the host will be unable to delete a deprecated address from an interface because existing connections haven't closed, that doesn't match my experience at all. I don't think you're suggesting this, but I'm bringing it up to be thorough.
> ...the OS APIs for this stuff just aren't descriptive enough to describe the network reconfiguration event.
I know that Linux has a system (netlink?) that's descriptive enough for daemons [0] to actively nearly-instantaneously start and stop listening on newly added/removed addresses. I'd be a little surprised if you couldn't use that mechanism to subscribe to "an address has become deprecated" events. I'd also be somewhat surprised if no one had built a nice little library over top of whatever mechanism that is. IDK about other OS's, but I'd be surprised if there weren't equivalents in the BSDs, Mac OS, and Windows.
> In any case, this vision of the world misses on at least two things, in my view:
> 1. Administrative load balancing...
I deliberately didn't talk about load balancing. I expect that if you don't do that at a layer below IP, then you're either stuck with something obscenely complicated or you're doing something like using special IP stacks on both ends... regardless of what version of IP your clients are using.
> 2. The long tail of devices that don't respond well to the flow you outlined.
Do they respond worse than in the IPv4 NAT world? This and other commentary throughout indicates that you missed the point I was making. That point was that -unlike in the NATted world- the OS and the applications running in it have a way to plausibly be informed of the network addressing change. In the NAT case, they can only infer that shit went bad.
[0] ...like BIND and NTPd...
Well failover is an administrative decision that can result from unexpectedly losing connection. But it can also be more ambiguous packet loss too, something that wouldn't necessarily manifest in broken connections--just degraded ones.
If upstream A is still passing traffic that simply gets lost further down the line, then there's no particular guarantee that the connection will fail quickly. If upstream A deliberately starts rejecting TCP traffic with RST, then sure, that'll be fine. But UDP and other traffic, no such luck. Whereas QUIC would fare just fine with NAT thanks to its roaming capabilities.
> I know that Linux has a system (netlink?) that's descriptive enough for daemons [0] to actively nearly-instantaneously start and stop listening on newly added/removed addresses. I'd be a little surprised if you couldn't use that mechanism to subscribe to "an address has become deprecated" events. I'd also be somewhat surprised if noone had built a nice little library over top of whatever mechanism that is. IDK about other OS's, but I'd be surprised if there weren't equivalents in the BSDs, Mac OS, and Windows.
Idk, I'll have to take your word for it. Instinctively though, this feels like a situation where the lowest common denominator wins. In other words, average applications aren't going to do any legwork here. The best thing to hope for is for language standard libraries to make this as built-in as possible. But if that exists, I'm extremely unaware of it.
> I deliberately didn't talk about load balancing. I expect that if you don't do that at a layer below IP, then you're either stuck with something obscenely complicated or you're doing something like using special IP stacks on both ends... regardless of what version of IP your clients are using.
I presume you meant a layer above IP? But no, I don't see why this is challenging in a NAT world. At least, I've worked with routers that support this, and it always seemed to Just Work™. I'd naively assume that the router is just modding the hash of the layer 3 addresses or something though.
> Do they respond worse than in the IPv4 NAT world?
I've basically only ever had good experiences in the IPv4 NAT world.
> That point was that -unlike in the NATted world- the OS and the applications running in it have a way to plausibly be informed of the network addressing change. In the NAT case, they can only infer that shit went bad.
I'm certainly sympathetic to this point. And, all things being equal, of course that seems better! If NAT66 were to not offer sufficient practical benefits, then I'd be convinced.
But please bear in mind that this was the original comment I responded to (not yours). Responding to this is where I'm coming from:
> Why would IPv6 ever need NAT?
"ping 1.1.1.1"
it doesn't work.
If stacks had moved to ipv6 only, and the OS and network library do the translation of existing ipv4, I think things would have moved faster. Every few months I try out my ipv6 only network and inevitably something fails and I'm back to my ipv4 only network (as I don't see the benefit of dual-stack, just the headaches)
Sure, you'd need a NAT64 gateway, but that can be the same device that currently does your v4-to-v4 NAT.
The main limitation is software that only supports IPv4. This would affect your proposed solution of doing the translation in the stack. There is no way to fix an IPv4-only software that has 32-bit address field.
> There are lots of places that have IPv6-only networks and access IPv4 through NAT64
I've just deployed a new mostly internal network, and this was my plan.
The network itself worked, but the applications wouldn't. Most required applications could cope, but not all, so I need to deploy ipv4 anyway; and then there's no point in deploying ipv6 as well as ipv4, it just increases the maintenance and security burden for no business benefit.
Issue for CLAT in systemd-networkd: https://github.com/systemd/systemd/issues/23674
I'd rather see this at a lower level than network manager and bodging in with bpf, so it's just a default part of any device running a linux network stack, but I don't know enough about kernel development and choices to know how possible that is in practice.
This should have been supported in the kernel 25 years ago though if the goal was to help ipv6 migration
The first main issue is that most often we waste an entire IPv4 for things that just have a single service, usually HTTPS and also an HTTP redirector that just replies with a redirect to the HTTPS variant. This doesn't require an entire IPv4, just a single port or two.
We could have solved the largest issue with address exhaustion simply by extending DNS to have results that included a port number as well as an IP address, or if browsers had adopted the SRV DNS records, then a typical ISP could share a single IPv4 across hundreds of customers.
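A hypothetical sketch of that proposal (names and addresses made up): if browsers resolved services via SRV-style records -- (priority, weight, port, target) tuples, per RFC 2782 -- an ISP could park many customers behind one shared v4 address, each reachable on their own port.

```python
# Records for one customer's hypothetical service name, e.g.
# _https._tcp.customer-a.example, all parked on the ISP's shared 198.51.100.7.
def pick_endpoint(records):
    # Lowest priority wins; ties broken by highest weight. (Real SRV uses
    # weighted randomness among ties; deterministic here for illustration.)
    prio, weight, port, target = min(records, key=lambda r: (r[0], -r[1]))
    return (target, port)

records = [
    (10, 60, 8443, "198.51.100.7"),  # primary slot on the shared address
    (10, 40, 8444, "198.51.100.7"),  # secondary slot
    (20, 0, 443, "203.0.113.9"),     # off-net backup
]
print(pick_endpoint(records))  # ('198.51.100.7', 8443)
```

The key point is that the port comes out of DNS rather than being fixed at 443, so the 16 port bits effectively become extra address bits.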
The second massive waste of IPv4 space is BGP being limited to /24. In the days of older routers when memory was expensive and space was limited, limiting to /24 makes sense. Now, even the most naive way of routing - having a byte per IP address specifying what the next hop is - would fit in 4GB of RAM. Sure, there is still a lot of legacy hardware out there, but if we'd said 10 years ago that the smallest BGP announcements would reduce from /24 to /32, 1 bit per year, so giving operators time to upgrade their kit, then we'd already be there by now. They've already spent the money on getting IPv6 kit which can handle prefixes larger than this, so it would have been entirely possible.
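For what it's worth, the arithmetic behind the "byte per IP address" claim above checks out:

```python
# A flat next-hop table with one byte per possible IPv4 address:
entries = 2 ** 32          # every possible IPv4 address
table_bytes = entries * 1  # a one-byte next-hop index per slot
print(table_bytes == 4 * 2 ** 30)  # True: exactly 4 GiB
```

Real routers use longest-prefix-match tries rather than flat tables, so the actual memory needed for /32 routes would be far smaller than this worst case.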
And following on from the BGP thing is that often this is used to provide anycast, so that a single IPv4 can be routed to the geographically closest server. And usually, this requires an entire /24, even though often it's only a single port on a single IPv4 that's actually being used.
Arguably, we don't even need BGP for anycast anyway. Again, going back to DNS, if the SRV record was extended to include an approximate location (maybe even just continent, region of continent, country, city) where each city is allocated a hierarchical location field split up roughly like ITU did for phone numbers, then the DNS could return multiple results and the browser can simply choose the one(s) that's closest, and gracefully fall back to other regions if they're not available. Alternatively, the client could specify their geo location during the request.
So, basically, all of that can be done with IPv4 as it currently exists, just using DNS more effectively.
We also have massive areas of IPv4 that's currently wasted. Over 8% of the space is in the 240.0.0.0/4 range that's marked as "reserved for future use" and which many software vendors (e.g. Microsoft) have made the OS return errors if it's used. Why? This is crazy. We could, and should, make use of this space, and specifically for usages where ports are better used, so that companies can share a single IPv4 at the ISP level.
Another 8% is reserved for multicast, but nowadays almost nothing on the public IPv4 uses it and multicast is only supported on private networks. But in any case, 225.0.0.0/8-231.0.0.0/8 and 234.0.0.0/8-238.0.0.0/8 (collectively 12 /8s, or 75% of the multicast block) is reserved and should never have been used for any purpose. This too could be re-purposed for alleviating pressure on IPv4 space.
Finally, there are still many IPv4 /24s or larger that are effectively being hoarded by companies knowing they can make good money from renting them out or selling them later. Rather than being considered an asset, we should be charging an annual fee to keep hold of these ranges and turn them into a liability instead, as that would encourage companies with a large allocation that they don't need to release them back.
The other main argument against IPv4 is NAT, but actually I see that as a feature. If services actually had port number discovery via DNS, then forwarding specific ports to the server than deals with them is an obvious thing to do, not something exceptional. The majority of machines don't even want incoming connections from a security point of view, and most firewalls will block incoming IPv6 traffic apart from to designated servers anyway. The "global routing" promised by IPv6 isn't actually desired for the most part, the only benefit is when it is wanted you have the same address for the service everywhere. The logical conclusion from that is that IPv4 needs a sensible way of allocating a range of ports to individual machines rather than stopping just at the IP address.
When you then look at IPv6 space, it initially looks vast and inexhaustible, but once you realise that the smallest prefix routable with BGP is a /48, it should be apparent that it suffers from essentially the same constraints as IPv4. All of "the global internet" is in 2001::/16, which effectively gives 32 bits of assignable space. Exactly the same as IPv4. Even more, IPv6 space is usually given out in /44 or /40 chunks, which means it's going to be exhausted at almost the same rate as IPv4 given out in /24 chunks. So much additional complexity for little extra gain, although I will concede that as 2003::/16 to 3ffe::/16 isn't currently allocated there is room to expand, as long as routers aren't baking in the assumption that all routable prefixes are in 2001::/16 as specified.
TLDR: browsers should use SRV to look up ports as well as addresses, and SRV should return geo information so clients can choose the closest server to them. If we did that, the IPv4 space is perfectly large enough, because a single IPv4 address can support hundreds or thousands of customers that use the same ISP. Effectively a /32 IPv4 address is no different to a /40 IPv6 prefix, as the additional bits considered part of the address in IPv6 could be encoded in the port number for IPv4.
It seemed strange that the need for CGNAT wasn't mentioned until after the MIT story. The "Nothing broke" claim in that story seems unlikely; I was on a public IP at University at the end of the 90s and if I'd suddenly been put behind NAT, some things I did would have broken until the workarounds were worked out.
What's the difference between that and dual stack v4/v6, though? Other than not needing v6 address range assignments, of course.
To replace something, you embrace it and extend it so the old version can be effectively phased out.
Who's arguing for that? That would be completely non-viable even today, and even with NAT64 it would be annoying.
> Dual-stack fails miserably when the newer stack is incompatible with the older one.
Does it? All my clients and servers are dual stack.
> With a stack that extends the old stack, you always have something to fallback to.
Yes, v4/v6 dual stack is indeed great!
> To replace something, you embrace it and extend it so the old version can be effectively phased out.
Some changes unfortunately really are breaking. Sometimes you can do a flag day, sometimes you drag out the migration over years or decades, sometimes you get something in between.
We'll probably be done in a few more decades, hopefully sooner. I don't see how else it could have realistically worked, other than maybe through top-down decree, which might just have wasted more resources than the transition we ended up with.
I don't see IPv4 going away within the next fifty years. I'd not be surprised for it to last for the next hundred+ years. I expect to see more and more residential ISPs provide their customers with globally-routable IPv6 service and put their customers behind IPv4 CGNs (or whatever the reasonable "Give the customer's edge router a not-globally-routable IPv4 address, but serve its traffic with IPv6 infrastructure" mechanism to use is). That IPv4 space will get freed up to use in IPv4-only publicly-facing services in datacenters.
There's IPv4-only software out there, and I expect that it will outlive everyone who's reading this site today. That's fine. What matters is getting proper IPv6 service to every Internet-connected site on (and off) the planet.
They’re already not. For example, I believe you won’t get an iOS app approved for distribution by Apple these days if it doesn’t work on v6-only clients.
That's not what I said. I said that having a globally-routable IPv4 address assigned to a LAN's edge router will stop being a thing. Things like CGN (or some other sort of translation system) will be the norm for all residential users.
> ...but servers (or at least load balancers) will absolutely not stay v4-reachable only.
Some absolutely will. There's a lot of software and hardware out there that's chugging along doing exactly what the entity that deployed it needs it to do... but - for one of a handful of reasons - will never, ever be updated ever again. This is fine. The absolute best thing any programmer can do is to create a system that one never has to touch ever again.
That's still what I would call a v6-only (with translation mechanisms) client deployment. Sorry for being imprecise on the "with translation mechanisms" part.
> Some absolutely will.
Very few, in my prediction. We're already seeing massive v6 + CG-NAT-only deployments these days, and the NAT part is starting to have worse performance characteristics: Higher latency because the NATs aren't as geographically distributed as the v6 gateway routers, shorter-lived TCP connections because IP/port tuples are adding a tighter resource constraint than connection tracking memory alone etc.
This, and top-down mandates like Apple's "all apps must work on v6 only phones", is pushing most big services to become v6 reachable.
At some point, some ISP is going to decide that v6 only (i.e. without translation mechanisms) Internet is "enough" for their users. Hackers will complain, call it "not real Internet" (and have a point, just like I don't consider NATted v4 "real Internet"!), but most profit-oriented companies will react by quickly providing rudimentary v6 connectivity via assigning a v6 address to their load balancer and setting an AAAA record.
I agree that v4 only servers will stick around for decades, just like there are still many non-Internet networks out there, but v4 only reachability will become a non-starter for anything that humans/eyeballs will want to access. And at some point, the fraction of v4-only eyeballs will become so small that it'll start becoming feasible to serve content on v6 only. At that point, v4 will be finally considered "not the real Internet" too.
Sure, I agree. I'm not sure how you got the notion that I thought a large percentage of systems out there will never get IPv6 support. There's a lot of solid systems out there that just fucking run. They're a small percentage of all of the deployed machines in the world.
> That's still what I would call a v6-only (with translation mechanisms) client deployment.
When people say "IPv6 only", they mean "Cannot connect to IPv4 systems". IMO, claiming it means anything else is watering down the definition into meaninglessness. Consider it in the context of what someone means when they envision a future where the Internet is "IPv6 only", so we don't need to deal with the "trouble" and "headache" of running both v4 and v6.
> We're already seeing massive v6 + CG-NAT-only deployments these days...
Yeah, it's my understanding that that's been the situation for a great many folks in the Asia/Pacific part of the world for a while now. Lots and lots of hosts, but not much IPv4 space allocated.
Honestly, this backwards compatibility thing seems even worse than IPv6 because it would be so confusing. At least IPv6 is distinctive on the network.
Rather than looking down on IPv4, we should admire how incredible its design was. Its elegance, intuitiveness, and resourcefulness have all led to it outlasting every prediction of its demise.
What is described here is basically just CIDR plus NAT which is...what we actually have.
At the time IPv6 was being defined (I was there), CIDR was just being introduced and NAT hadn't been deployed widely. If someone had raised their hand and said "I think if we force people to use NAT and push down on the route table size with CIDR, it'll be ok" (nobody said that, iirc), they would have been rejected, because sentiment was heavily against creating a two-level network. After all, having uniform addressing was pretty much the reason internet protocols took off.