It's pretty good - their provided router is locked down to hell and they're on a CGNAT, but not having to deal with Comcast's 1.2TB data cap is well worth it. Checking Comcast's site now, it seems they offer "unlimited" data. Interesting, that option wasn't there 6 months ago.
~100 customers seems too small for the amount of effort they have put in so far. They've been working along all the roads near me for about a year, and they're out there running fiber conduit every day. The houses out here are far apart. Hopefully, they can make it work.
This sounds like mine. I'm guessing yours doesn't support IPv6 because most fiber providers don't.
For the router, I already build firewalls, so that part's covered. I pay $10/mo to escape their CGNAT.
I've also alerted them to expect regular haranguing from me about deploying IPv6. Especially since bgp.he.net shows they have a /40 allocated to themselves; it doesn't seem to be used.
Some people will say monitoring is all you need, but I don't agree. There are a million different little issues that can and do occur on physical networks in the real world, and there's no way monitoring will have a 99% chance of detecting all of them. When incidents like the partial Microsoft network outage that hit certain peering points occurred, I had to route around the damage by tweaking route filtering on the core routers to prefer a working transit connection over the lower-cost peering point. It's that kind of oddball issue that active users catch and report, which doesn't happen for barely used services like IPv6.
How many ask for IPv4? I understand your situation; it's a lot of work for something that many won't notice. It's just that saying there's no demand, because the average consumer (who doesn't know what IPv4 is either) isn't asking for it, is the mentality that keeps IPv6 from being deployed.
On the funnier side of things, we've also sometimes run into the opposite problem that we can't reproduce an issue, because it's only on IPv4 and 95% of the time everything we do is IPv6. But we're also not serving home users.
Now apply that to IPv6 and you can see the point that (I think) GP is making.
Side note: the claim that it's not widely used doesn't hold up. How many people use Google or Facebook? More than half of that traffic is over IPv6.
https://circleid.com/posts/ipv6-usage-in-the-u.s-surpasses-5...
It's not widely used in my customer base, nor among a number of other small independent ISPs I know of. A significant number of the CPE we have deployed today don't support IPv6, so a significant number of our end users won't be using IPv6, resulting in only partial usage. That is what I'm referring to: an incomplete IPv6 deployment across a customer base, made without the customers' knowledge and without their ability to identify IPv6-specific issues, will cause support calls, which cost money and time.
Because of how IPv6 is provisioned, going from a no-IPv6 state to a fully deployed state will encounter transition issues. It's not provisioned the same way as IPv4, and that's the whole bloody problem!!! Sure, I can turn on handing out IPv6 addresses from our edge routers via PPPoE today, but it's not going to "Just Work" for all customers without CPE config changes (assuming the CPE even supports IPv6 in the first place, which a bunch of them don't!). It's a complete pain that has less than zero benefit for a small provider today, since everything is available over IPv4 and nothing is IPv6-only. Heck, another ISP I know had a customer ask for IPv6 support, and then the customer didn't even bother to provision it on their own router!
Another example: take Ubiquiti's AirCube product line. It doesn't support IPv6. To deploy IPv6 it would be necessary to reconfigure things to make the ONU for the customer act as the router for the customer's home network and place the AirCube in bridged mode. That's assuming that the ONU even supports IPv6 yet as it did not when I looked into it on the order of 6 years ago. And yes, there are a few dozen customers on AirCubes that fall under this case. This means moving into a territory where multiple different CPE deployment configs are now required for less than zero benefit.
That there are gaps in IPv6 support has been a problem, is a problem, and will continue to be a problem. The industry isn't 100% IPv6, and I don't see that changing in the next 5 years or even the next 10 years. Consumer gear isn't there yet. That was the problem 5 years ago, that was the problem 10 years ago, that was the problem 15 years ago, and that was the problem 20 years ago... What's different today?
It's easy to deploy IPv6 in data centers, where you control all the infrastructure and where operating system IPv6 support on servers was mature decades ago. It's not so easy as a small ISP dealing with the pile of random consumer gear that customers bring to the table, where IPv6 support remains hit and miss.
And then you say `Nobody asks for IPv4` - so nobody asks for IPv4 and 0.5% ask for IPv6?
So, yeah, I don't see IPv6 being relevant as a small ISP today.
Big, evil, hated Comcast has full IPv6, and I doubt any of its customers asked for it either. Instead people complain they're only getting a /60.
CPE support for IPv6 has generally been garbage with it taking 15-20 years before the bare minimum was supported by mainstream router vendors. Even today there are still vendors that assume only IPv4 support. In my opinion the IETF really screwed up when they made IPv6 more complicated than just IPv4 with more address bits. The incumbent in my area generally uses PPPoE in their access network, but routers that supported PPPoE and prefix delegation basically didn't exist in 2010, and only started being available circa 2015 (in part due to the required bits not existing in OpenWRT and the hardware vendors' software development kits for their chipsets). Sure, we're 10 years further on now, but there remain a number of vendors that only support IPv4 for management of devices (cough Ubiquiti cough) in parts of their product line.
That said, there are features of IPv6 that are absolutely awesome for carriers. The next-header mechanism, which pretty much eliminates the need for MPLS in an IPv6 transport network, is one such item that makes building transport networks so much cleaner with IPv6 than with IPv4. No more header insertion or rewriting: just update one field (and IPv6 doesn't even have a header checksum to patch up). These features just aren't really applicable for smaller networks.
I do think NAT64/464XLAT is a pretty good architecture for new ISPs that can't get their hands on IPv4 space, though. Or even MAP-T, but CPE support isn't really there yet.
The people who attempt to fill these gaps are commonly rural telephone companies, electric cooperatives, tribal entities, or mom and pop shops where the owner grew up on a Ditch Witch and only knows as much IP networking as essential to light up the fiber and get the packets flowing upstream.
They are enormously resource constrained in ways you might not expect, too, eg operations can grind to a halt because everyone is out with a chainsaw after a storm, or because the Guy that Knew Stuff about their network died suddenly.
They are very, very unlikely to decide to run an IPv6 network just because. There's no upside that makes the juice worth the squeeze for them.
In an IPv4-only CGNAT setup, all the traffic has to flow through the CGNAT gateways, and that gear is stupidly expensive. Having IPv6 in the mix means that anything that supports IPv6 (such as most streaming services) won't hit the CGNAT gateway and can just be routed natively. This can really save money on CGNAT hardware.
For implementation, you can use NAT64/DNS64 for your CGNAT setup and implement 464XLAT on the CPE. This keeps your whole edge network IPv6-only so you don't have the complication of maintaining two parallel configurations on the edge.
There is also MAP-T, which is even lighter on infrastructure since it pushes all the state into the CPE and avoids the complication of stateful CGNAT. But unfortunately CPE support for it is pretty limited at the moment.
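The NAT64/DNS64 scheme above hinges on a fixed address mapping (RFC 6052): the IPv4 destination gets embedded in the low 32 bits of a /96 prefix, usually the well-known 64:ff9b::/96, which is what DNS64 hands out in its fabricated AAAA records. A minimal Python sketch of that mapping:

```python
import ipaddress

def synthesize_nat64(v4_literal, nat64_prefix="64:ff9b::"):
    """Embed an IPv4 address in the low 32 bits of a NAT64 /96 prefix,
    the way DNS64 synthesizes AAAA records (RFC 6052 mapping)."""
    v4 = int(ipaddress.IPv4Address(v4_literal))
    base = int(ipaddress.IPv6Address(nat64_prefix))
    return ipaddress.IPv6Address(base | v4)

print(synthesize_nat64("192.0.2.1"))  # 64:ff9b::c000:201
```

The CLAT on the CPE (the 464XLAT piece) just runs this translation in reverse for IPv4-only applications, so the access network itself never has to carry native IPv4.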
No state at the carrier; an IPv6-native network.
If you buy a full IPv4 address, you get one to yourself, but I expect it's still delivered through the same tunneling machinery to keep the network IPv6-only.
Saline is less than 10 miles from Ann Arbor.
> The people who attempt to fill these gaps are commonly rural telephone companies, electric cooperatives, tribal entities, or mom and pop shops...
That's fair, but at some point, you need to recognize you are competing with a major ISP. No one appreciates it when you come in, tear up the roads, and then pull out once the incumbent ISPs bump up their speeds ever so slightly. (Looking at you, Google.)
> They are very, very unlikely to decide to run an IPv6 network just because.
No one deploys IPv6 "just because," and yet more than half of the traffic to major sites is IPv6.
I live half an hour from a state capital and my only option is cable... the coaxial cable they laid bare on my flower bed decades ago. I dig it up about every other year when planting. It's not even in a conduit!
I have spent most of my career under the thumb of fucking cable and I'd sooner slam a car door on my nuts than go back to paying so much money for such garbage service.
Yeah, what's up with that? I just got switched on to fiber and the CGNAT for IPv4 doesn't shock me much, but what's with the no IPv6 in 2025?
I know enough to deal with it, but what's the deal? Is there something systematic here?
How about not having to pay for (as) beefy CG-NAT hardware, because people who go to YouTube, Netflix, MetaFace, TikTok, etc., can connect directly via IPv6.
Even a small number of devices/services not supporting IPv6 can have huge costs:
> Our [American Indian] tribal network started out IPv6, but soon learned we had to somehow support IPv4 only traffic. It took almost 11 months in order to get a small amount of IPv4 addresses allocated for this use. In fact there were only enough addresses to cover maybe 1% of population. So we were forced to create a very expensive proxy/translation server in order to support this traffic.
> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from ROKU devices. 9% coming from DishNetwork & DirectTV satellite tuners, 11% from HomeSecurity cameras and systems, and remaining 9% we replaced extremely outdated Point of Sale(POS) equipment. So we cut ROKU some slack three years ago by spending a little over $300k just to support their devices.
* https://community.roku.com/t5/Features-settings-updates/It-s...
* Discussion: https://news.ycombinator.com/item?id=35047624
Sadly the post is now behind a login. What happened later: Apple donated a bunch of Apple TV devices to the tribal ISP, that cut their IPv4 usage by an order of magnitude (or some similarly ridiculous number), and there were major savings. The ISP then recommended Apple TV to all of their customers to get the best viewing experience (because of the latency/overhead of CG-NAT when streaming video).
So the more you move over the more headroom you have for the broken IPv4-only systems. AIUI, the rollout of MAP-T/E has helped in that things are more stateless, and more work is done at the CPE, but there's still overhead.
I get the impression that they are still learning to run an ISP, both technically and on the customer-facing side. It's weird: I learned more about them from this article than from actually living here with them.
Some applications want to open ports and don't have the server-side infrastructure to punch a hole through NAT. Especially P2P apps and some games.
Sometimes I want to run a small, low-traffic web server from home.
Sometimes I'm connecting to my network from a machine that I don't control and can't install Tailscale on.
It's been there since they announced the data cap. I thought the unlimited bundled with leasing their higher end hardware came first, but the email from 2016 announcing that our plan was getting the cap mentions being able to pay for unlimited.
https://arstechnica.com/tech-policy/2025/06/stung-by-custome...
A Comcast customer always had the option to pay for unlimited data. I get that part. What is the second part? "Started offering it as standard" means what?
There's barely any competition here. You can pretty much choose between Comcast Business and Xfinity, both of which are just Comcast, because of a free market with free as in not in jail.
If Starlink ever gets more capacity, I'll probably switch. Right now I think the only way to get gigabit down on Starlink is with four or five accounts and manually bonding the dishes together. As soon as that obstacle goes away, Comcast will have competition in my area and I intend to take advantage of that.
If you live in a region where they have no meaningful competition (which is still fairly common in a lot of places in the US), well, bend over and lube up.
They will happily let you pay for years, for services that no longer exist, no longer connected to any of their networks. They'll take you to court rather than pay anything back; they know they are receiving extra money, and there's a significant amount that comes in, but "oh, it's so confusing, and there are so many legacy systems, we can't possibly catch every mistake."
The money they shuffle back and forth between each other, daily, reeks of book cooking - you might have a stretch of 20 miles of trunk in which there are 20 separate owners - not concurrent riding separate fiber lines, but in sequence, each paying rent to or getting rent paid by the adjacent rider, even though only a single company actually services the entire span.
It's funny how construction companies and ISPs get these rackets going, and then when people come along like these PrimeOne guys and offer a reasonable rate on a decent product, it's somehow vastly disruptive and threatening.
They'll expand, and be encouraged and allowed to expand, and after 5 or 6 years, the big ISPs will start circling, and eventually buy them out, and they'll retire happy. AT&T or Lumen will own their network inside of 10 years, and they'll claim it's modernized and upgraded infrastructure. People with shitty oversold undermaintained cable internet will be left alone until the money stops.
Starlink to phones is great, if only it didn't make ISPs so much money handling the base stations on the ground.
There's fiber all over the US just hanging there, unused, unmaintained, because merger after merger after merger left giant piles of assets under the ownership of companies like comcast and centurylink and at&t, who left infrastructure to rot, often built with public funding, and maintained their local monopolies and shitty service.
Whatever it is we're doing to regulate the industry at a federal level isn't working, but I imagine that's where a lot of the money goes.
Working with the director there, IIRC we traced down the Verizon and Comcast box as actually being connected to the AT&T box. After some checking of circuit IDs on boxes it seemed that the LEC for both the Verizon and Comcast circuits was AT&T, and AT&T was the actual owner of that single physical fiber going out there.
I wish I could sell the same thing 3 times, lol. Contracts were signed on all these circuits so there was no getting out of it.
However I still applaud these guys. There needs to be more competition.
In the end, the only thing their salespeople could do was offer TV bundles, but even that wasn't cost-competitive. Not sure what their offerings are now, but it was such an easy decision to switch.
Isn't this standard competitive practice? Charge what the market will bear.
I don’t know if I’d call that “exploitation”. If there’s one gas station 90 miles from every other gas station in the Nevada desert, they’re gonna charge more, aren’t they?
https://arstechnica.com/tech-policy/2025/06/stung-by-custome...
Correct. It was a very calculated decision. They were squeezing out profits by trying to move heavy users to the next tier of service. But this only works if they have a monopoly.
So not actually better than Comcast, just bad in a different way.
Everyone here who has started a company to challenge an entrenched monopoly, raise your hand please.
I understand the tech deeply and that does not translate to the practical needs of trying to run a successful business.
I raise a toast to these guys. Well done.
From the article, it sounds like the "default" option is for the customer to supply their own router, which I appreciate:
> Prime-One provides a modem and the ONT, plus a Wi-Fi router if the customer prefers not to use their own router.
My fiber installer referred to the Adtran 632V ONT he installed as the "modem".
He installed two other junction boxes (one outside the house near/under where the fiber attaches to the wall of the house, one inside near the ONT) but they're just passive optical couplers allowing them to swap out fiber segments in the event of fiber damage without re-running the entire install.
As far as I know, nobody uses separate boxes for the modem and router; that kind of thinking died when Wi-Fi became more widespread and started being included by default with ISP plans.
Definitely splitting hairs here though on terminology.
This article passed through my inbox no more than 6 weeks or so ago, so it is a very recent change.
This indicates that their local and state governments aren't (at this time) captured by the incumbent cable provider.
A captured state gov will pass laws to thwart new infra deployment, commonly written by ISP interests. A captured local gov will never approve deployment or slow-walk permitting in an attempt to bankrupt the upstart.
More explainers: new suburban fiber infrastructure means either trenching or pole hanging. The local gov issues permits for both, but poles also require the cooperation of the pole owners. That last part adds the PSC to the mix.
Recalcitrant pole owners are known to stall and kill infrastructure deployment - especially where going underground isn't an option. Some PSCs mandate that pole owners cooperate. Some PSCs abdicate that responsibility and are examples of regulatory capture.
Why isn't the Bay Area a hotbed of fiber deployment? You think Comcast in Philly has more pull with Cupertino and Mountain View than Google and Apple? No! Internet in the Bay Area is shit for the same reason all the infrastructure in the Bay Area is shit: the government makes it slow and difficult to build anything.
Comcast installed fiber to my house back in 2018 or so. The permitting took months. And this was to run Comcast fiber on poles where Comcast already had their own cable lines. And my county is actually pretty efficient with permitting. It’s just that American municipalities absolutely hate it when anyone builds anything.
I live in a blue state that actively encourages municipal and cooperative fiber deployment: https://mdbc.us. It's had approximately zero impact outside some rural parts of the state.
Network performance in that last mile can differ by block and even by season. An otherwise functional run of coax might have intermittent ingress but only shortly after it's rained while cold out.
This isn't even counting all of the flaky performance anecdotes that really boil down to overcrowded Wi-Fi or poorly configured consumer gear or anything else that isn't strictly the fault or problem of Comcast.
What you don't get often is fiber-to-home, or great upload speeds. But most people aren't running big home servers.
Now, if they're doing CGNAT and also don't have IPv6, that's just crap. They shouldn't be allowed to call themselves an ISP at that point.
Pro poles / open air:
- very, VERY cheap and fast to build out with GPON. That's how you got 1/1 GBit fiber in some piss poor village in the rural ditches of Romania.
- easy to get access when you need to do maintenance
Con poles / open air:
- it looks fucking ugly. Many a nice photo from Romania has some sort of half-assed fiber cable in it.
- it's easy for drunk drivers, vandals (for the Americans: idiots shooting birds that rest on aboveground lines [1][2]), sabotage agents or moronic cable thieves to access and damage infrastructure
Pro trench digging:
- it's incredibly resilient. To take out buried power and communications, you need a natural disaster on the scale of the infamous Ahrtal floods, which ripped through bridges carrying cables and outright submerged (and thus ruined) district distribution rooms; even the heaviest hailstorm doesn't give a fuck about cable that's buried. Drunk drivers are no concern, and neither are cable thieves or terrorists.
- it looks way better, especially when local governments go and re-surface the roads afterwards
Con trench digging:
- it's expensive, machinery and qualified staff are rare
- you usually need a lot more bureaucracy: permits, traffic planning, or whatever else is needed to dig a trench
- when something does happen below ground, it can be ... challenging to access the fault.
- in urban or even moderately settled areas, space below ground can be absurdly congested with existing infrastructure that necessitates a lot of manual excavation instead of machinery. Gas, water, sewers, long decommissioned pipe postal service lines, subways, low voltage power, high voltage power, other fiber providers, cable TV...
[1] https://www.usgs.gov/news/national-news-release/illegal-shoo...
For anyone starting out today, I would strongly recommend having a planned legal / regulatory strategy to fall back on in the event that excessive delays occur by parties you cannot avoid dealing with.
When I got this far, I literally thought you were making a joke about Poland.
Manufacturers should print on the sleeve "fibre optic only. no copper". :)
> [Buried pros:] Drunk drivers are no concern, and so are cable thieves or terrorists.
Except for that one old lady who took the country of Armenia off the air:
* https://www.theguardian.com/world/2011/apr/06/georgian-woman...
Doesn't work, sadly. They just DGAF, cut, rip and go.
Frontier installed fiber in my area using this method. Relatively quick, and no damage that needs to be "aggressively" paved over.
Yeah you can directionally drill across a front yard typically; but actual urban areas are just too filled with infrastructure.
Optimum had their entire service area bought out by Comcast the day after I switched. Comcast has since broken every major utility at least twice and my fiber connection three times by working on the old infrastructure. I think Optimum won that trade. I can't imagine many residents are going to prefer Comcast over $80/m for no-bullshit internet, especially after the water main break they caused last week.
These FTTP providers have the game solved in Texas. I've seen them do 500-1000 homes in <30 days. Their directional drilling expertise and aggressive disregard for 811 seem to get things done very quickly. There are some areas with competing fiber providers now. I've got 5Gbps symmetric for $110/m and I live in the woods. Trees go through power lines and the fiber infra is completely unaffected. The only utility left to bury is the electricity, and they're actively working on that in some areas now.
Texas regulations are quite something. My friend told me that the closest thing to regulation and zoning they have is... an HOA. This was the first time in my life I heard anything positive about HOAs.
AT&T put a fiber optic cable at my curb 10 years ago (most likely due to imminent competition from Google Fiber), but then never lit it (most likely because Google dropped their effort due to complications with cities)…
VPNs: Business IT usually uses some weird IP range like 172.16.x.x (from 172.16.0.0/12) to avoid conflicting with the popular 192.168.1.x or 10.x.y.z. When two companies merge, there is now an IP range overlap and a renumbering has to be done.
P2P file transfers: No port forwarding needed.
Self-hosted servers: No port forwarding needed.
Video chat/VoIP: One reason video chat still suffers from bad quality is that video is proxied through cloud servers to deal with NAT. With more IPv6, video chat services can use more direct connectivity, lowering costs and improving quality.
Avoiding CGNAT: if anyone sharing your CGNAT address group at your ISP gets banned from a service, you get banned too. Many Internet services don't have IPv6, and they often cite IPv4-based reputation as the reason why they won't deploy IPv6.
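The VPN renumbering pain above is easy to demonstrate with Python's standard `ipaddress` module: two organizations that independently carved space out of the same RFC 1918 block collide on merge, while independently generated IPv6 ULA prefixes essentially never do. (The networks here are made-up examples.)

```python
import ipaddress

# Two merging companies that both carved space out of 10.0.0.0/8:
site_a = ipaddress.ip_network("10.1.0.0/16")
site_b = ipaddress.ip_network("10.0.0.0/8")
print(site_a.overlaps(site_b))  # True: someone has to renumber

# Two independently generated IPv6 ULA /48s, by contrast,
# collide only with vanishing probability (40 random Global ID bits):
ula_a = ipaddress.ip_network("fd12:3456:789a::/48")
ula_b = ipaddress.ip_network("fdab:cdef:123::/48")
print(ula_a.overlaps(ula_b))  # False
```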
That has nothing to do with OP's question about consumer internet concerns. However, I've been through several corporate acquisitions. The thing that took the longest to work out? Getting the new company's name on the paychecks.
The thing that took the second longest to work out? Renumbering networks and consolidating and decommissioning hardware when there just wasn't enough RFC 1918 space available. Every. Damn. Time. The process dragged on and on and on, when IT could have saved 6 to 18+ months of future work [0] by generating a /48 ULA and using that for every internal thing except the stuff that was IPv4-only.
[0] ...and god only knows how many dollars in labor costs...
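Generating that /48 ULA is trivial: RFC 4193 asks for the fd00::/8 prefix plus 40 pseudo-random "Global ID" bits. (The RFC suggests deriving the Global ID from a timestamp/EUI hash; using a CSPRNG, as sketched here, gives the same collision properties.)

```python
import secrets
import ipaddress

def random_ula_prefix():
    """RFC 4193-style ULA: the fd00::/8 prefix plus a 40-bit random
    Global ID, yielding a /48 with 65536 /64 subnets to renumber into."""
    global_id = secrets.randbits(40)
    prefix_int = (0xFD << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

net = random_ula_prefix()
print(net)  # e.g. fd5c:1a2b:3c4d::/48
```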
Another feature that I find to be pretty stupid (but that some folks seem to really like) is IPv6 "privacy" addresses. Because each host is usually assigned an IPv6 address in a subnet that's 64 bits wide, most mainstream OSes have configured their IPv6 autoconfiguration to set one stable, "permanent" address, plus a parade of periodically changing "temporary" addresses. The OS is usually configured to prefer the permanent address when software asks for a socket to listen on (and for sockets that handle replies to that listening socket), while the temporary addresses are preferred for sockets that initiate outbound traffic. The idea is that this is supposed to confuse tracking, but I'm very skeptical of its efficacy in the real world.
Finally, a customer can also usually get enough IP space to make globally reachable subnets on their LAN. Depending on how the ISP has configured things, a customer can get between four and 256 subnets. These are handy for providing networks with globally reachable IP addresses that can still be easily logically isolated from the rest of the LAN by the router.
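Those subnet counts fall straight out of the delegated prefix length: each LAN subnet is a /64, so a /62 delegation yields four subnets and a /56 yields 256. A one-liner makes the arithmetic explicit:

```python
def delegated_subnets(pd_prefix_len, lan_prefix_len=64):
    """Number of /64 LAN subnets that fit in a delegated prefix."""
    return 2 ** (lan_prefix_len - pd_prefix_len)

print(delegated_subnets(62))  # 4: a stingy delegation
print(delegated_subnets(56))  # 256: a common residential delegation
print(delegated_subnets(48))  # 65536: a typical business delegation
```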
IMO, something like what's described in RFC 7217 [1] (changing the interface identifier used for "permanent" addresses from the interface's MAC address to something that mixes in the advertised prefix) is a much better way to address the concerns described in section 2.3 of RFC 4941.
[0] <https://datatracker.ietf.org/doc/html/rfc4941#section-1.2>
[1] <https://datatracker.ietf.org/doc/html/rfc7217#section-4>
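To make the RFC 7217 idea concrete: the interface identifier is a keyed hash over the prefix plus per-host inputs, so the address is stable within a given network but unlinkable across networks. The RFC leaves the PRF and exact inputs to the implementation; SHA-256 and the particular inputs below are just an illustrative sketch, not the normative construction.

```python
import hashlib
import ipaddress

def stable_opaque_iid(prefix, net_iface, secret_key, dad_counter=0):
    """RFC 7217-flavored stable, opaque interface identifier: hash the
    prefix together with per-host inputs instead of embedding the MAC.
    Illustrative only; the RFC leaves the PRF up to the implementation."""
    data = f"{prefix}|{net_iface}|{dad_counter}".encode() + secret_key
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

prefix = ipaddress.IPv6Network("2001:db8:1234:5678::/64")
iid = stable_opaque_iid(prefix, "eth0", b"per-host secret")
addr = ipaddress.IPv6Address(int(prefix.network_address) | iid)
# Same inputs always yield the same address on this network;
# a different prefix yields an unrelated address.
```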
For me personally, I work at a networking startup, so I'd like to be able to run an IPv6 stack on my home network to test things.
Also the Sail support is really good, fast responses (even the CEO sometimes answers tickets).
I have been a customer for 14 years now. Would love to move to higher bandwidth.
I recently moved to Menlo Park and had no problem getting 2.5Gbps from AT&T fiber.
There were things that made the ISP I worked at special, one of them being that we pretty much defaulted to having customers hook up their own DSL, which meant spending a lot of call time helping people who have no idea what an RJ11 jack is install plugs and adapters.
I've also spent a lot of time on "the password I use for my email doesn't work on my Facebook" and "my USB printer doesn't work". People don't know who to call for tech support so they try their ISP. There was also the occasional "the internet is broken" whenever the user's home page had a different theme or design as well, those usually came in waves.
Once the modem and/or router is installed, most internet services Just Work. There are outages and bad modems and the occasional bad software update to deal with, but they're a relatively low call volume compared to what customers call about.
"Maam, if your business is that important, surely you as a responsible business owner have gone and purchased a business class internet service with 24/7 SLA. It says here, you are on our cheapest, residential VDSL service"
This leads to fun tech support calls if you use your own equipment where you're basically proving to the support underling that you know how to run your equipment for the first 20-30 minutes before they take your issue seriously (yes, the modem light is green, yes, I've already power-cycled, yes, I'm testing on a wired connection, etc)
I usually speedrun this by telling them something like: I am hardwired to the modem and seeing T4s in the log.
> Please wait a moment while I check on some things on your account.
> Thank you for your patience. Can you please confirm for me that you see a green light on the top of the device? Can you tell me whether the light is blinking or is solid?
https://www.xfinity.com/support/articles/disable-xfinity-wif...
Now I just use my own cable modem.
For analyzing support burden, I think the relevant question here is why have you even had the experience of calling tech support for a non-working connection - and that falls squarely on the non-reliability of Comcast's network.
Called them to ask why, and they said it was a planned outage. When was it planned, I asked? 17 minutes ago.
I don't really want any outage credits.
I think my conception of basic tech illiteracy among the general public is vastly wrong. I generally like to believe most people are competent enough to handle these sorts of things.
But really, internet (and digital TV) services are pervasive enough that they are no longer just for technologically inclined and resourceful people. All aspects of society are now using the internet, even the homeless, impoverished, disabled, and institutionalized.
Took another call from an irate dialup customer who demanded a refund - he didn't know he needed a computer to use the internet -- and had driven himself mad dialing up our modem bank with his telephone and waiting for the training tones to subside so he could begin to navigate the internet as he imagined it to work: press 1 for email, 2 for news, 3 for weather...
Despite the proliferation of smart phones and greater prevalence of home networks, I don't think the situation has changed much for a segment of the population once you get down to troubleshooting why something isn't working. The skills and the willingness to just try to fix the problem aren't there.
Back when I still had ISPs that provided the modem + router, every single issue I think I ever had fell into one of two categories: a modem and/or router power cycle fixed it, or it was a broader network issue that had nothing to do with me or my particular internet situation (this is omitting the most common third issue: terrible customer service problems, but that's a separate thing)
Also I could save them a bunch of money getting rid of services they don't use, like moving their landlines to VOIP.
If you want a landline to call emergency services, I'd expect a real landline to have higher uptime than one that depends on your router. For example, if you subscribe to Verizon FiOS voice, the technician will disconnect your copper phone lines and connect them to VoIP termination on your ONT.
That's what I did for pocket money as a kid in high school (in the mid-2000s).
You call an electrician or a handyman or somebody and tell them you have some low voltage work.
The ISP provides a cable box and modem to most homes in the same way that the electric company sticks a meter on your wall.
In the US, most do. This is a standard part of "in home installation" when first subscribing to service for all of the major providers in the US.
Example: https://forums.xfinity.com/conversations/customer-service/sc...
Also, as the other commenter pointed out, ISPs don't terminate their service at the edge of your premises. Basically all of them today will connect one of your devices to confirm installation.
In San Jose, if you see evidence that your house's main drain has backed up and you have a cleanout within 5' of the sidewalk, you're better off calling the city first before calling a plumber -- the sewer department will snake the "lateral" pipe between the cleanout and the main sewer line under the street for free.
The one time we used this the response time was very quick (in line with the 30 minute response time they cite on their website).
With careful selection of the customer ONU/ONT, support calls are rare enough that weeks can pass between customer issues on smaller networks. These days my biggest support headache is in-home wireless coverage. It's also the one part of internet service that most people are unwilling to invest even small amounts of money to improve. The worst are the folks who install outdoor wireless security cameras without thinking ahead to putting them on a dedicated network, which drives up airtime usage and congests the main wireless AP.
For fiber, customer-side issues are almost all wifi related, to the point that some operators will offer in-home managed wireless options.
I used to provide support in an area where a provider had purchased a VDSL network in order to convert those customers to fibre. 20% of customers remained on VDSL for various reasons, 10% had been moved to a dodgy hybrid fibre/last-mile ethernet solution, and the remainder were all on fibre.
70% of support issues related to the VDSL customers, 20% to the ethernet customers, and the remainder were almost all wifi or power related.
They had a policy of charging customers $1000 or so to convert them over to fibre. Eventually they sold the business to a larger entity. After four weeks of VDSL complaints, the new owners gave everyone remaining on copper a free fibre upgrade.
Actually, it was only technically VDSL. What they did was drop a fibre NTD into the old VDSL node, commission each port for a different customer, and then run an Ethernet/VDSL converter over the old lead-in. The "upgrade" was just using the copper as a draw wire for the fibre cable. Nothing over 100 meters.
With fiber, the ISP can see that everything is good up to the GPON terminal, and probably the router too, as most customers will just use the ISP-provided one. So that leaves the ethernet interface / wifi card as the only thing that could fail and have to be ascertained over the phone, and with a local ISP it's probably more cost effective to cut out all the abstractions and just have a tech stop by to check it out.
On the other side, customers have become a lot more used to self-help. For example, their email isn't even hosted with the ISP any more! I would think most people are aware that if a device works well close to the router but poorly far away, the issue is wifi range. If they're still calling the ISP, you can direct them toward wifi extenders. Or if device A doesn't work but device B does, it's not a problem to call the ISP about. And so on.
Of course this is my idyllic view not having worked ISP tech support in a few decades...
There's a huge gap between "had the idea" and "had all the technical skills, the $millions in capital, and the managerial ability to actually build it". Then there's the barrier of "and succeed". If you read between the article's lines a bit: these guys had the first three in spades, yet they're still losing loads of money every month.
But, bigger picture, you have a good point. These articles are obviously cherry-picked stories, with an extremely optimistic "... and the little guy wins!" spin. Ars is writing for an audience of techies who are frustrated with crappy ISPs.
Yeah, obviously these guys' long prior experience pulling fiber for other ISPs was another critical cornerstone of their ability to go from idea to build-out.
Crazy generalization.
>And that's such an obvious legal minefield that no networking nerd wants to do it.
Honestly half the fun.
The worst part appears to be the physical wiring. If your government has implemented loop unbundling, you're already set (probably need to do some bureaucracy and pay some affordable-at-a-stretch fees to get access to it). Otherwise, or if the loops are just crap, you have to figure out how to physically get a cable to everywhere, a task that is fundamentally laborious and legally fraught, not nerdy at all (unless lawyers are nerds) so nobody wants to do it.
Wireless ISPs are popular largely because of this. Wireless service is always worse, but you only have to install plant (physical infrastructure) at the customer's house and one central location, not all the places leading up to the customer's house. This makes it a whole lot more amenable to individual-nerd or handful-of-nerds setup.
I encourage everyone to at least think about how they would do it.
In a rural environment, yeah, sure. Based on what I'm seeing in San Francisco, in an urban environment you're going to be negotiating for roof space for many transceivers on many separate roofs. (I do absolutely agree that even that annoying task is way less work than dealing with a local or state government that wants it to be impossible to run fiber through or along streets and sidewalks.)
Look, wireless service is almost guaranteed to be worse, but that has more to do with dodgy operators than with the technology itself. The technology is fantastic, and when engineered correctly it's largely undetectable.
That said, in my time, I can count on one hand the number of installations where I was allowed to engineer the service correctly. And I can count on all the hands in a small city the number of times I have been called to rescue something extremely stupid, like shooting a link across a construction site.
When engineered correctly, people tend to have absolutely no idea wireless is involved in their connection. Largely it's a self-inflicted branding issue. You see wireless being sold as "Fibre Extension" way too often for this reason.
There are also factors that fibre people never consider, like mean time to restore a service. Even if you have a team of 24/7 engineers ready to mobilise, a fibre break will often take significantly longer to restore than a wireless outage.
And it's weird that I can remember building a 10-gig path of Siklu radios just a few years ago. A dodgy product in terms of failover, but it delivered the goods.
* 2020 NANOG talk: https://www.youtube.com/watch?v=ASXJgvy3mEg
Hell if there's a way to invest in Prime-One, these guys seem to have their stuff together...
Those are all telecom providers. It makes sense that they'd love wireless because they already have cellular infrastructure.
> Comcast seems to have noticed, Herman said. "They've been calling our clients nonstop to try to come back to their service, offer them discounted rates for a five-year contract and so on," he said.
Go figure. Their monopoly/duopoly has ended, profits are dropping like a rock in the area, and now they want to compete.
Only billionaires and people fooled by Peter Thiel think competition is evil.
I have had a fibre cable poking out of the footpath in front of my apartment for a year or two now, waiting for ODF or whoever to come and install it into the building.
IIRC, Init7 explained that basically the internet wants to be fast, but ISPs want to spend lots of money on special equipment to slow down your internet so that they can sell speed tiers and data caps.
The US is much more captured than Switzerland, of course. Providers in the US get billions of dollars to expand their networks and provide service, and then don't provide the service, and nothing happens.
And my drives aren't fast enough for 25.
My parents live in a small, countryside village. They have fiber at the same prices (including 4Gbit symmetric, though they are happy with a cheap 200Mbit subscription).
And FWIW, I own my house in the east bay — I am the landlady ;)
I'm aware. And it's my understanding that -sadly- much of the Google "Fiber" deployment in the area is a WISP, just like Monkeybrains is. Quite a while back, Google Fiber bought Webpass and continued doing WISP deployment in the SFBA under the Google Fiber brand. (Because it's politically dreadfully hard to run fiber optic cables in the area.)
If you haven't contacted Monkeybrains for a minimum and expected speed quote at your site in a year or five, it's worth doing it again. It's my understanding that they aperiodically upgrade the hardware in their core network as well as the sort of hardware that they deploy at customer sites.
Monkeybrains' down-to 100/100 service is -on paper- far, far slower than the up-to 1400/40 service I was getting from Comcast, but the actual, delivered speed that I'm seeing from Monkeybrains varies between 300mbit and ~1000mbit (sustained) depending on what other folks are doing on their network. [0] I'm in a fifty-apartment building, so it's possible that they've installed faster gear on my roof than they install in smaller (or single-family) buildings. Reports on the Internet seem to be somewhat mixed, with some single-family buildings reporting ~1gbit service, and others reporting ~45mbit.
[0] Typical prime-time speed is something like 400mbit. Off-hours speed is frequently very close to 1gbit. The only time I've seen the minimum speed was when I had a poorly-crimped Ethernet cable between my router and the rest of my LAN that would intermittently only link up at 100mbit.
I wish it was a bit cheaper, but someone has to fund that trip to Mars.
It's disgusting that big telecom has been able to monopolize so much of the US for so long.
But I'm a realist so I'll take what I can get.
As your remote resources get faster and faster you start using them more like local resources, which can change (often for the better) how you do things.
If all you're doing is watching netflix then 25mbps per user is probably fine.
If you're working or creating or earning, then you want your connection to be as fast as possible, your distant end to be as fast as possible, and the hardware in between to be fast as possible.
I'm certain that one could make a sound argument that 300 Mbps is not necessary for that four-person family, and they could make do with a much slower connection. Back in the day, folks would be asking if it's necessary for your Internet connection to be always on. After all, it's no hassle at all to plug the modem into the house phone line and unplug it when you're done!
For me, switching from a 1400/40mbps cable connection to a symmetric-but-variable 300-1000mbps Ethernet connection meant that I was doing the same sorts of things, but often spending much less time waiting for them to complete. Related to that, it also made "content creation"-esque things [0] much, much easier.
[0] Which I'm declaring is a category that includes uploading and downloading huge files while working from home as a programmer/"DevOps" guy.
It's not a committed rate. Your individual line is a gigabit, but the upstream from your whole block is 10 gigabit, so you can't all use it at once. Your guaranteed rate is probably more like 20-50 Mbps, if that's what's confusing you. But it's extremely rare that everyone tries to use their gigabit all at once.
If it's a Passive Optical Network, you might be sharing a gigabit download with your block - you all share the same fiber - and you get substantially less than a gigabit upload due to the need for timeslotting. Gigabit PON is obsolete though; now you'd get at least 10G PON.
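A back-of-the-envelope sketch of that sharing, with made-up numbers (the uplink size and subscriber count here are illustrative, not from the thread):

```python
# Hypothetical: 300 gigabit subscribers behind a 10 Gb/s shared uplink.
uplink_mbps = 10_000
subscribers = 300
sold_mbps = 1_000

# If literally everyone maxed out at once, each would get ~33 Mb/s...
worst_case_mbps = uplink_mbps / subscribers

# ...which is a 30:1 oversubscription ratio. Residential traffic patterns
# make that essentially invisible in practice.
oversub_ratio = subscribers * sold_mbps / uplink_mbps

print(f"worst case {worst_case_mbps:.1f} Mb/s, ratio {oversub_ratio:.0f}:1")
```

That worst-case number is where the "guaranteed 20-50 Mbps" intuition comes from, even though you'll almost never observe it.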
The scarcity of upload speed is why I learned how to set up OpenWrt's SQM feature, and upgraded to the highest package my ISP offers.
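For anyone curious, OpenWrt's SQM config lives in `/etc/config/sqm`; a sketch with placeholder interface name and rates (the usual advice is to shape slightly below what the line actually delivers so the router owns the queue):

```
config queue 'wan'
	option enabled '1'
	option interface 'wan'
	option download '450000'
	option upload '450000'
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
```

Rates are in kbit/s; `cake` with `piece_of_cake.qos` is the commonly recommended starting point.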
I myself have fiber, and speeds are much less than 500 Mbps, but symmetric (with likely maximum two people using video calling at a time), so haven't faced that issue.
> I also occasionally rsync large directories to/from cloud storage and that can also saturate
Just offering some advice if you aren't aware. If you are, feel free to ignore. (And if you have advice in return, I'd love to hear it!) For convenience, the rclone tool is nice for most cloud storage like Google and the others that make rsync annoying[0]
rsync also offers compression[1], and you might want to tune it depending on whether you want to be CPU bound or IO bound. You can pick the compression algorithm and level, with more options than just the `-z` flag. You can also speed things up by skipping the checksum, or by running without checksums and then running again later with them. Or use some interval scheme, like daily backups without checksums and monthly runs with them.
If you tar your files up first I have a function that is essentially `tar cf - "${@:2}" | xz -9 --threads $NTHREADS --verbose > "${1}"` which uses the maximum `xz` compression level. I like to heavily compress things upstream because it also makes downloads faster and decompression is much easier than compression. I usually prefer being compute bound.
Also, a systemd timer is always nice and offers more flexibility than cron. It's what's helped me most with the whack-a-mole game. I like to use calendar events (e.g. daily, weekly) and add a random delay. It's also nice that if the event was missed because the machine was off, it'll run the job once the machine is back on (I usually make it wait at least 15 minutes after the machine comes online).
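As a sketch, a systemd timer with those properties (unit names hypothetical; it pairs with a matching `backup.service`) might look like:

```ini
# backup.timer - daily run with a randomized start time
[Unit]
Description=Daily backup

[Timer]
OnCalendar=daily
RandomizedDelaySec=30min
# Run a missed activation once the machine is back up:
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now backup.timer`; `Persistent=true` is what gives you the catch-up-after-downtime behavior.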
Great tips! I'll definitely be using your tar command
Let's take video streaming. I have a pretty compressed version of Arrival that's at 2GB and is a 4k movie ~2hrs long (the original file was ~2x the size). To stream that we need to do 2000Mb / (3600s * 2) = 277.8Mb/s. This also doesn't account for any buffering. This is one of my smaller 4k videos and more typical is going to be 3Gb-5Gb (e.g. Oppenheimer vs Children of Men). Arrival is pretty dark and a slow movie so great for compression.
Now, there's probably some trickery going on that can get better savings and you'll see used with things like degrading the quality. You could probably drop this down to 1.5Gb and have no major visual hits or you can do a variable streaming and drop this even more. On many screens you might not notice a huge difference between 1440 and 4k, and depending on the video, maybe even 1080p and 4k[0].
For comparison, I loaded up a 4k YouTube video (which uses vp9 encoding) and monitored the bandwidth. It is very spiky, but frequently jumped between 150kbps and 200Mbps. You could probably do 2 people on this. I think it'd get bogged down with 4 people. And remember, this is all highly variable. Games, downloads, and many other things can greatly impact all this. It also highly depends on the stability of your network connection. You're paying for *UP TO* 300Mbps, not a fixed rate of 300Mbps. Most people want a bit of headroom.
[0] Any person will 100% be able to differentiate 1080p and 4k when head to head, but in the wild? We're just too used to spotty connections and variable resolutions. It also depends on the screen you're viewing from, most importantly the screen size (e.g. phone).
First, if it was 2GB * 2 for the source of your recompressed copy, that's 4GB * 8 bits per byte = 32 Gigabits (Gb), or 32,000Mb. Two hours in seconds is 60 * 60 * 2 = 7,200 seconds.
32,000 / 7,200 is 4.444Mb/s. Streaming your 2 hour long 4GB movie could be done with ~5Mbit. A 1Gb/s connection could handle streaming ~200 of these movies.
Going back to Blu-rays as a source, an Ultra HD Blu-ray maxes out at 144Mbit but in reality most movies are encoded at a much lower bitrate. Most movies will cap out around 40-50Mbit. You could do 20 of these straight Blu-ray movies on a 1Gb connection.
https://en.wikipedia.org/wiki/Ultra_HD_Blu-ray#Specification...
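The corrected arithmetic, spelled out in decimal units as above:

```python
# 4 GB file (the pre-recompression original), 2-hour runtime.
size_bits = 4 * 8 * 1000**3      # 4 GB -> 32 Gb
duration_s = 2 * 60 * 60         # 7,200 s

bitrate_mbps = size_bits / duration_s / 1e6   # average megabits per second
streams_on_gigabit = 1000 / bitrate_mbps

print(f"{bitrate_mbps:.2f} Mb/s, ~{streams_on_gigabit:.0f} streams per 1 Gb/s")
```

The original post's mistake was mixing up bytes and bits (and slipping three orders of magnitude): the average rate is ~4.4 Mb/s, not ~278 Mb/s.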
Your math is WAY off.
2 gigabytes / 2 hours is only about 2.22 megabits/sec.
2000Mb / 7200s = 0.278 Mb/s, or 277.8Kb/s
> It is very spiky, but frequently jumped between 150kbps and 200Mbps. […] I think it’d get bogged down with 4 people
That’s just burst buffering as fast as it can, you didn’t capture the average. It doesn’t suggest it would slow down with 4 people. 4K on YouTube takes 20Mbps, so to parent’s point, you’ll have plenty of bandwidth to spare if 4 people do this at the same time on a 300Mbps line.
Whoa, they are laying cable just like that, in the open?
It's not a fair comparison; competition can drive price down, but I pessimistically just see two guys who'll inevitably join the Comcast billionaires club. That's just where these "small guys" end up.