The other bit of advice buried in there, which no one wants to hear for residences, is that the best way to speed up your Wi-Fi is to not use it. You might think it's convenient to have your TV connect to Netflix via Wi-Fi, and it is, but it's going to make everything else that really needs the Wi-Fi slower. Hooking up everything you possibly can on Ethernet is a much better answer than the more traveled route of more channels and more congestion with mesh Wi-Fi.
Absolutely. Everything other than cell phones and laptops-not-at-a-desk should be on Ethernet.
I had wires run in 2020 when I started doing even more video calls. Huge improvement in usability.
(We do have one internet-connected device which permanently lives about an inch away from one of the ethernet sockets, but it is, ironically, a wifi-only device with no RJ45 port.)
You can get skinny Ethernet cables that bend easily. If you get some that match your paint, and route them in straight lines, those can be unobtrusive. Use tricks like running the cables along baseboards and other trim pieces. If you really want to minimize the visual impact you can use cable runners and paint over them. The cables are not attention-grabbing compared to furniture or art on the wall.
If you’re willing to drill holes (if you terminate the cable yourself, the hole can be narrow), you can pass the cables through walls. If you don’t want to drill, you can go under a door.
If you’ve got fourteen outlets, it seems like there ought to be some solution to get cables everywhere you need.
I think I've done only one house where the owner wanted to be able to put speakers in every corner of every room on every floor with multiple possible locations for his stereo.
Then he wanted multiple cable tv connections per room, multiple sockets for landlines, Ethernet everywhere.
The speaker tube was left empty and a few short distance sockets didn't have wires in them.
It seemed excessive even to me, but it isn't actually a lot of work to run 5 tubes instead of 1. You might add 1-2% to the renovation bill. Even less for a new house.
The end result was wonderful. He could do his chores with music all over the house, and move his TV, sofa bed, or desk wherever he wanted.
Doing this after the house is finished is more expensive, it takes a lot more work and the result is inferior.
I think nowadays we should have a USB socket next to each power outlet that provides both internet and extra-fast charging. In reality I've never even seen such a socket.
With a few small updates Android could switch off wifi and mobile networking and seamlessly switch to calling over wired internet when you plug in the charging cable.
Who knows, maybe the mobile phone could even be a first class citizen in the landline network.
I've seen power outlets with embedded USB power adapters. I think I've seen USB Ethernet adapters with embedded USB power for things like Chromecasts and similar. But not both smooshed into the same outlet. It might be problematic because nobody wants to mix low voltage and high voltage together in the wall. But it's technically feasible.
> With a few small updates Android could switch off wifi and mobile networking and seamlessly switch to calling over wired internet when you plug in the charging cable.
I'm not sure you need updates. I think if the adapter exposes itself as USB CDC-Ethernet that would likely work out of the box, and there may be drivers for specific USB NICs available as well; I haven't checked, but this is a thing that is used by Chromecast devices and AndroidTV devices, so it should also work on Android. Seamlessness is maybe up in the air, but if it's seamless from wifi to cellular, it should be better going from wired to something else, because wired has an unambiguous and timely disconnect signal.
> Who knows, maybe the mobile phone could even be a first class citizen in the landline network.
IMHO there's less value here; the landline network has degraded and there aren't really any first-class citizens anymore. Few people retain landlines, and those that remain tend to be ATAs in the home; if you care to use that with an Android, there are likely better options than interfacing with the analog side.
Power with network is less common since nobody wants to mix high and low voltage runs.
https://www.amazon.ca/TOPGREENER-Ultra-High-Speed-Receptacle...
Another benefit is that I can cram 4 of them inside a single cable runner at the one spot I have to (no space for a switch). Where it's just one cable you run them bare and they look very clean.
The old ones I have are still CAT5e, the newer ones they sell are CAT6 at the same thinness. All unshielded (UTP).
10/10 would buy again.
Horizontal cabling, from the panel to the jack, is spec'd at up to 90 m (about 295') of solid core twisted pair.
5e gets you gigabit if it’s done right end to end.
Here is a link for others who want to know how thin:
https://store.ui.com/us/en/category/accessories-cables-dacs/...
If you own, you should replace and/or move them. Might sound scary if you've never done this before, but it is much easier than you'd think. If you want to make your future life easier I suggest running a PVC pipe (at minimum in the drop-down portion) and leaving a pull string in it. Replacing or adding new cabling will be much easier if you do this, so it's totally worth the few extra bucks and few extra minutes of work. The cables will also be less likely to be accidentally damaged (stepping on them, rodents, water damage, etc). I seriously cannot understand why this is not more common practice. You might save a few bucks but you sacrifice a lot more than you're saving... (chasing pennies with pounds)
If rental, you could put in an extender. If you're less concerned about aesthetics you can pop the wall plate off and directly tie into the existing cable OR run a new one in parallel. If you're willing to donate the replacement wire and don't have access to the attic, but do have access to both ends of the existing cable, then you can use one to pull the other through. You could coil the excess wire behind the plate when you reinstall it. But that definitely runs the risk of losing the cable, since it might be navigating a hard corner. If you go that route I'd suggest just asking your landlord. They'd probably be chill about it and might even pay for it.
Of course, some thin raceways can be seen somewhere along the baseboard. It does not look terrible, and is barely noticeable.
But the slope is slippery. If you’re doing fibre, you might as well do 10gbe.
You clearly don’t live in an earthquake-prone area.
I do. But given how cheapskate New Zealand is, I’m 100% sure that we would build in stone and brick if it was cheaper.
It's incredible how people do not understand boots theory... which seems to be something most people know but don't employ in practice.
I’m trying to understand how removing an entire sheet of gypsum (or cutting a 6” by 8’ channel) and installing an empty PVC raceway is ‘a few extra minutes of work’. Installing the PVC might be, but you’re looking at hours of work over multiple days to replace the drywall and refinish the wall.
Raceways are unnecessary in stick built houses if you have a fish stick and fish tape. If you’re building a new house, then sure, install 1” EMT as raceway for Cat6A before putting up the drywall.
> I’m trying to understand how removing an entire sheet of gypsum
This is a fixed cost, required whether you install the conduit or not: you have to cut the wall to make the port either way. If you have the port open, you can just use a slightly longer conduit and brace it where you can reach. And oh no, you need an extra 2" of cable?

> Raceways are unnecessary in stick built houses
Your mental model is too naïve. Have you done this before? Have you then replaced it or added additional lines? The conduit makes all that easier, and provides the additional protection that I discussed. By having a conduit you're far less likely to get snagged on something while fishing the lines. You can stop hard corners that strip your cables while pulling on them. It's also a million times easier to see while you're chasing those cables. Sure, your house is framed with wood, but you still have insulation, and who likes icy hands?
Really, think about it. What is the cost now compared to the future?
Is an additional 10%, or let's even say a crazy 50%, additional work now really that costly when you have to do the whole thing again in the future? And multiple times? It's a no-brainer lol. Definition of chasing pennies with pounds. Just be nice to your future self. Be lazy long term, not lazy short term, because lazy short term requires more work.
I sell and run electrical work for a living (including low-voltage cabling), I have thought about how cables get pulled into existing walls in virtually any scenario you can contrive. Block, steel stud, brick, wood stud, precast tip-up; both drop ceiling and hardlid.
Cutting open walls to install low-voltage raceway is very uncommon because it’s substantially more expensive (or just way more work) than cutting two small holes (or using an attic/basement for access) and using a fish tape.
Non-professionals overestimate how many cables they’ll pull into existing low-voltage raceways in the future. Pull in an extra cable the first time and you’re future proofed.
I’m in an old stone house and currently have flat cables snaked around until I can piggyback on the workers putting in conduit for other things.
I had a similar situation a few years back. It was a rental so I didn't have access to the attic let alone permission to do my own drops. It'll depend a _lot_ on your exact setup, but we had reasonably good results with some ethernet-over-power adapters.
A better solution is repurposing unused 75Ω coaxial cable with MoCA 2.5 adapters, which will actually give you 1+ Gbps symmetrical. The latency is a very consistent 3-4ms, which is negligible. I use Screenbeam (formerly Actiontec) ECB6250 adapters, though they now make a new model, ECB7250, which is identical to the ECB6250 except with 2.5GBASE-T ports instead of 1000BASE-T ports.
I'll second this. MoCA works. You can get MoCA adapters off eBay or whatnot for cheap: look for Frontier-branded FCA252. ~90 MB/s with a 1000BASE-T switch in the loop. I see ~3 ms of added latency. I've made point-to-point links exclusively, as opposed to using splitters and putting >2 MoCA adapters on a shared medium, but that is supported as well.
We had an unused coax run (which we disconnected from the outside world) and used MoCA adapters (Actiontec), and it's been consistently great/stable. No issues ever... for years.
It’s worked well!
You do need to be a bit careful as coax signal can be shared with neighbors and others sometimes.
You can buy them online for around $10 and they install without tools.
Besides neighbors, you may also need a point-of-entry ("PoE") filter if you have certain types of cable modem.
In general they do suck, but they can be pretty decent if you stick them all on one phase, even better if all on the same breaker.
The endpoint in my living room also has a wifi AP so signal is pretty good for laptops and whatnot.
In NYC every channel is congested, I can see like 25 access points at any time and half are poorly configured. Any wired medium is better than the air, I could probably propagate a signal through the drywall that's more reliable than wifi here.
So having something I can just plug into the wall is pretty nice compared to running cables even if it's a fraction of gigE standards.
If you have newer clients that support it, Wifi 6E/7/802.11ax (or whatever it's called) uses the 6GHz spectrum that isn't as heavily used (yet). I've had good success with it in my multi-unit apartment condo (feels as clean as 5GHz did ~2010). Some higher end APs can also use multi-antenna beams that can help, too.
So they would have to do quite a bit of work to run cable. Also, people living in apartments can't just start drilling through walls.
I'd say most people use wifi because they have to, not out of pure convenience.
At the 1914 house, I used ethernet-over-powerline adapters so I could have a second router running in access point mode. The alternative was punching holes in the outside walls since there was no way to feasibly run cabling inside lath-and-plaster walls.
I don't know how 2025 houses are built but I would be surprised if they didn't have an ethernet jack in every room to a wiring closet of some sort. Not sure about coax.
My son has ethernet in his dorm with an ethernet switch so he can connect his video game consoles and TV. I think that's pretty common.
Speaking from a US standpoint, it's still not common for Ethernet to be deployed in new-construction houses. I'm not sure why. It seems like a no-brainer.
Coax is still usually reserved for a couple of jacks -- usually in the living room and master bedrooms.
Aye.
Cat5/6/whatever-ish cabling has been both the present and the future for something on the order of 25 years now. It's as much of a no-brainer to build network wiring into a home today as it once was to build telephone and TV wiring into a home. Networking should be part of all new home builds.
And yet: Here in 2025, I'm presently working on a new custom home, wherein we're installing some vaguely-elaborate audio-visual stuff. The company in charge of the LAN/WAN end of things had intended to have the ISP bring fiber WAN into a utility area of the basement (yay fiber!), and put a singular Eeros router/mesh node there, and have that be that.
The rest of the house? More mesh nodes, just wirelessly connected to each other. No other installed network wires at all -- in a nicely-finished and fairly opulent house that is owned by a very successful local doctor.
They didn't even understand why we were planning to cable up the televisions and other AV gear that would otherwise be scooping up finite wireless bandwidth from their fixed, hard-mounted locations.
In terms of surprise: Nothing surprises me now.
(In terms of cost: We wound up volunteering to run wiring for the mesh nodes. It will cost us ~nothing on the scale that we're operating at, and we're already installing cabling... and not doing it this way just seems so profoundly dumb.)
They (the homeowner) were getting dedicated custom-built single-purpose wall-mounted shelving for each of these Eeros devices, along with dedicated 120V outlets for each of them to provide power.
Now they're still getting that, plus the Ethernet jack that I will be installing on the wall at these locations because that's the extent to which I am empowered to inject sanity.
(Maybe someone down the road will look at it and go "Yeah, that just needs to be a wall-mounted access point with PoE," and remove even more stupid from the things.
Or... not: People are unpredictable, and it seems like many home buyers' first task is to rip out and erase as much current-millennium technology as possible, reducing the home to bare walls under a roof, with a kitchen, a shitter, and some light switches and HVAC.)
For some reason the cable service entry is on the third floor in the laundry room. Ethernet and the TV signal cable runs from there to exactly one place, where the TV is expected to be mounted. Nothing in the nice office area on the other side of the wall.
My guess is that the thinking these days is that everyone's on laptops with wifi and hardwired network connections are only of interest for video streaming. Probably right for 99% of purchasers.
No matter how fancy and directive the antenna arrangement may be at the access point end, the other devices that use this access point will be using whatever they have for antennas.
The access point may be able to produce and/or receive one or many signals with arbitrarily-aimed, laser-like precision, but the client devices will still tend to radiate mostly omnidirectionally -- to the access point, to each other, and to the rest of the world around them.
The client devices will still hear each other just fine and will back off when another one nearby is transmitting. The access point cannot help with this, no matter how fanciful it may be.
(Waiting for a clear-enough channel before transmitting is part of the 802.11 specification. That's the Carrier Sense part of CSMA/CA.)
But if you can pull it off (or even better, move your router closest to the most annoying thing and work from there!), excellent.
In an apartment I once had, I ran some cat5-ish cable through the back wall of one closet and into another.
In between those closets was a bathroom, with a bathtub.
I fished the cable through the void of the bathtub's internals.
Spanning a space like this is not too hard to do with a tape measure, some cheap fiberglass rods, a metal coat hanger, and an apt helper.
Or these days, a person can replace the helper by plugging a $20 endoscope camera into their pocket supercomputer. They usually come with a hook that can be attached, or different hooks can be fashioned and taped on. It takes patience, but it can go pretty quickly. In my experience, most of the time is spent just trying to wrap one's brain around working in 3 dimensions while seeing through a 2-dimensional endoscope camera that doesn't know which way is up, which is a bit of a mindfuck at first.
Anyway, just use the camera to grab the rod or the ball of string pushed in with the rod or whatever. Worst-case: If a single tiny thread can make it from A to B, then that thread can pull in a somewhat-larger string, and that string can finally pull in a cable.
(Situations vary, but I never heard a word about these little holes in the closets that I left behind when I moved out, just as I also didn't hear anything about any of the other little holes I'd left from things like hanging up artwork or office garb.)
It’s what you do with that cable that matters :)
Even the telco provided router/ap combo units usually have a built in switch, so you don’t even need another device in most cases.
Got powerlines? Well then you can get gbit+ to a few outlets in your house.
Got old CATV cables? Then you can use them at multiple gbit with MoCA.
Got old phone lines? Then it's possible to run ethernet over them with SPE and maybe get a gbit.
And frankly, just calling someone who wires houses and getting a quote will tell you if it's true. The vast majority of houses aren't that hard, even old ones. Attic drops through the walls, cables below in the crawlspace, behind the baseboards. Hell, just about every house in the USA had cable/dish at one point, and all they did was nail it to the soffit and punch it right through the walls.
Most people don't need a drop every 6 feet: one near the TV, one in a study, maybe a couple in a closet/ceiling/etc. Then those drops get used to put a little PoE 8-port switch in place and drive an AP, TV, whatever.
Depending on the age of the house, there's a chance that the phone lines are 4-pair, and you can probably run 1G on 4-pair wire. It's probably at least cat3 if it's 4-pair, and quality cat3 that's not a max-length run in dense conduit is likely to do gigE just fine. If it's only two-pair, you can still run 100, but you'll want to either run a managed switch that you can force to 100M or find an unmanaged switch that can't do 1G... otherwise you're likely to negotiate to 1G, which will fail because of the missing pairs.
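On a Linux box you can also pin this at the NIC instead of the switch; a minimal sketch with ethtool, assuming the interface is named eth0:

    # Keep autonegotiation but only advertise 10/100 modes (0x00F =
    # 10baseT half/full + 100baseT half/full), so 1000BASE-T is never chosen
    sudo ethtool -s eth0 advertise 0x00F

    # Or force the link outright -- the far end must then be set to match
    sudo ethtool -s eth0 speed 100 duplex full autoneg off

The advertise variant is the gentler option: negotiation still completes normally, the far end just never sees gigabit on offer.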
Either may "work" with cat3, but that's by no means a certainty. The twists are simply not very twisty with cat3 compared to any of its successors...and this does make a difference.
But at least: If gigabit is flaky over a given span of whatever wire, then the connection can be forced to be not-gigabit by eliminating the brown and blue pairs. Neither end will get stuck trying to make a 1000BASE-T connection with only the orange and green pairs being continuous.
I think I even still have a couple of factory-made cat5-ish patch cords kicking around that feature only 2 pairs; the grey patch cord that came with the OG Xbox is one such contrivance. Putting one of these in at either end brings the link down to no more than 100BASE-TX without any additional work.
(Scare quotes intentional, but it may be worth trying if the wire is already there.
Disclaimers: I've made many thousands of terminations of cat3 -- it's nice and fast to work with using things like 66 blocks. I've also spent waaaaay too much time trying to troubleshoot Ethernet networks that had been made with in-situ wiring that wasn't quite cutting the mustard.)
They can get stuck, because negotiation happens on the two original pairs (at 1Mbps), and to-spec negotiation advertises the NIC capabilities and selects the best mutually supported option. Advertising fewer capabilities for retries is not within the spec, but obviously helps a lot with wiring problems.
The key thing with the ethernet wiring requirements is that most of the specs are for 100m of cabling with the bulk of that in a dense conduit with all the other cables running ethernet or similar. Most houses don't have 100m of cabling, and if you're reusing phone cabling, it's almost certainly low density, so you get a lot of margin from that. I wouldn't pull new cat3 for anything (and largely, nobody has since the 90s; my current house was built in 2001, it has cat5e for ethernet and cat5e in blue sheaths for phone), but wire in the wall is worth trying.
My intent wasn't to dissuade anyone from trying to make existing cat 3 wire work (which I've never encountered in any home, but I've not been everywhere), but to try to set reasonable expectations and offer some workarounds.
If a person has a house that is still full of old 2- or 4-pair wire, and that wire is actually cat3, and is actually home-run (or at least features spans that can be usefully intercepted), then they should absolutely give it a fair shot.
I agree that, as a practical matter, the specifications are more guidelines than anything else.
I've also gone beyond 100 meters with fast ethernet (when that was still the most commonly-encountered) and achieved proven-good results: The customer understood the problem very well and wanted to try it, so we did try it, and it was reliable for years and years (until that building got destroyed in a flood).
If the wiring is already present and convenient, then there's no downside other than some time and some small materials cost to giving it a go. Decent-enough termination tools are cheap these days. :)
(Most of the cat3 I've run has been for controls and voice, not data. Think stuff like jails, with passive, analog intercom stations in every cell, and doors from Southern Steel that operate on relay logic... because that was the style at the time when it was constructed. Cat3, punch blocks, and a sea of cross-connect wire still provides a flexible way to deal with that kind of thing in an existing and rather-impervious building -- especially when that building's infrastructure already terminates on 25-pair Amphenols. I'll do it again if I have to, but IP has been the way forward even in that stodgy slow-moving space for a good bit now.)
Yes, it’s better if your cable and clips and wall all match, but it still looks bad.
When I was younger I went and bought a new modem so I could play halo on my Xbox in another room than where my parents had the original modem. Found out then I’d need to pay for each modem.
When I was younger and before WiFi was a thing I naively thought I’d just plug in a new modem.
https://en-us.support.motorola.com/app/answers/detail/a_id/1... will give you some additional info.
My house had quite old (likely 1980s) coax home runs and it worked flawlessly. All I did was change out the entry (root) splitter for one that had a point-of-entry filter. I’m not sure that was even needed, but it seemed sensible and was not expensive or difficult.
Usually those can be found in the wall boxes behind the plate - but not always!
These used to be a bane on cable modem installs for apartment complexes, but the situation should generally be better 25 years later...
So true!
Other tips I’ve found useful:
Separate 2.4ghz network for only IoT devices. They tend to have terrible WiFi chipsets and use older WiFi standards. Slower speed = more airtime used for the same amount of data. This way the “slow” IoT devices don’t interfere with your faster devices which…
Faster devices such as laptops and phones belong on a 5ghz-only network, if you’re able to get enough coverage. Prefer wired backhaul and more access points, as you’re better off with a device talking on another channel to an AP closer to it rather than tying up airtime with lots of retries to a far away AP (which impacts all the other clients also trying to talk to that AP).
WiFi is super solid at our house, but it took some tweaking and wiring everything that doesn’t move. (A config sketch of the IoT-SSID idea follows below.)
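For anyone rolling their own APs with hostapd, a minimal sketch of the dedicated 2.4GHz IoT SSID; the interface name, SSID, and passphrase are all placeholder assumptions:

    # hostapd.conf -- 2.4GHz-only SSID for IoT devices on its own radio
    interface=wlan1          # radio dedicated to 2.4GHz (assumed name)
    ssid=iot-2g              # separate SSID so fast clients never join it
    hw_mode=g                # 2.4GHz band
    channel=6                # stick to 1, 6, or 11
    ieee80211n=1             # 802.11n at the default 20MHz width
    wpa=2
    wpa_key_mgmt=WPA-PSK
    rsn_pairwise=CCMP
    wpa_passphrase=change-me

The point is just that the slow devices get their own BSS, so the fast 5GHz SSID never waits behind an 802.11n-era smart plug's airtime.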
The only devices on wifi should be cell phones and laptops if they can't be plugged in. Everything else, including TVs, should be ethernet.
When I moved into my last house with roommates their network was gaaarbage cuz everything was running off the same router. The 2.4ghz congestion slowed the 5ghz connections because the router was having to deal with so much 2.4ghz noise.
A good way of thinking about it is that every 2.4ghz device you add onto a network will slow all the other devices by a small amount. This compounds as you add more devices. So those smart lights? Yeaaahh
I don't know why you're saying that; a 2.4 GHz device should not interfere with 5 GHz channels unless it somehow emits some harmonics, which would most definitely make it noncompliant with various FCC standards. Or do you mean the modem was so crappy it couldn't deal with processing noisy 2.4 GHz channels at the same time as 5 GHz ones? That might be true, but I would assume the modems would run completely different DSP chains on different ASICs, so this would be surprising.
Your assumption is sometimes incorrect, as cheap devices can share some of the RF front end. Resource contention can also occur due to CPU, thermal, and memory issues.
https://chatgpt.com/share/68e9d2ee-01a4-8004-b27b-01e9083f7e... (Note that Prof is one "character" I have defined in the prompt customisation)
Or:
Please allow me to proffer the following retort: The answer to having a shitty, incapable router is to use one that is not shitty, and is capable.
(The routing-bits have no clue what RF spectrum is being utilized, and never have. They just deal with packets. The packets are all shaped the same way regardless of the physical interface on which they arrive, or which they are destined for.)
cycomanic knows stuff but their answer was basically contradicting chrneu, which nobody likes. It is counterintuitive to me (and I'm guessing cycomanic too) that the different bands should interact so much.
The AI answers passed my shit-detector... and I think it's the same as when people tried to be helpful by providing a search link in the past. Other HN users can make their own decision about reading the prompt or the reply (although using links does make me wonder about cross-account tracking and doxing myself).
It's all quite well-worded, and yet is still completely unrelated to what is being discussed.
Real people: "Hey, let's talk about networks!"
Eventually: "Cool, I like networks! Did you know that down is actually up, and up is actually down? In fact, I asked a sycophant bot to demonstrate this fiction with its wily words, and it did so with wonderful articulation. Here's a link!"
Having tolerance towards this kind of make-believe anti-truth is not something that I would consider to be a healthy human function. Especially when this nonsense has been deflected through a third party that is completely absent from the discourse and isolated from the context, such as a sycophant bot, and particularly so when there's an implied appeal to authority for that absent third party.
(I have no intention of considering whether this kind of action is deliberate or not. I simply recognize this move for how consistently successful it is at poisoning a discussion amongst a group of people.)
---
If you were to ask me, a person, the following question:
> "What is the most likely reason that a cheap router/AP would slow down servicing clients on 5GHz when also servicing clients on a congested 2.4GHz spectrum"
...then I would not have responded to that question with a single confidently-stated and presumptive answer, but instead by opening a dialogue.
And I would begin this dialogue by asking about the reasons that lead you to believe that this would ever be true in the first place.
(But that's not the path that was chosen here.)
A measured compromise would entail the meticulous profiling of the TV’s network traffic, followed by the imposition of complete blocking at the DNS level (via Pi-hole, NextDNS, and the like) first, whilst blacklisting the outgoing CIDRs on the router itself at the same time.
This course of action shall not eliminate the privacy invasion risk in its entirety – for a mere firmware update may well redirect the TV traffic to novel hosts – yet it shall transform a reckless exposure into a calculated and therefore manageable risk.
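A sketch of the DNS-level half: Pi-hole is dnsmasq underneath, so a drop-in like the following does the null-routing (the domains here are made-up examples; substitute whatever the traffic profiling turns up):

    # /etc/dnsmasq.d/tv-block.conf -- null-route observed TV telemetry hosts
    address=/telemetry.tv-vendor.example/0.0.0.0
    address=/ads.tv-vendor.example/0.0.0.0

And as noted above, re-profile after firmware updates, since the host list can change underneath you.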
As for the extracting of data, yes that happens on a massive scale. In free products that no one is forced to use. And I would argue that, by now, almost everyone should know that comes at a price, it's just not monetary to the user. At that point it's a choice people make and should be allowed to make.
If something is free, you're the product. But if it isn't free, you're paying to be the product.
As a broad concept: Ever since my last Sonos device [that they didn't deliberately brick] died, I don't have any even vaguely bandwidth-intensive devices left in my world that are 2.4GHz-only.
Whatever laptop I have this year prefers the 5GHz network, and has for 20 years. My phone, whatever it is today, does as well and has for 15 years. My CCwGTV Chromecast would also prefer hanging out on the 5GHz network if it weren't plugged into the $12 ethernet switch behind the TV.
Even things like the Google Home Mini speakers that I buy on the used market for $10 or $15 seem to prefer using 5GHz 802.11ac, and do so at a reasonably-quick (read: low-airtime) modulation rate.
The only time I spend with my phone or tablet or whatever on the singular 2.4GHz network I have is when I'm at the edge of what I can reach with my access points -- like, when I visit the neighbors or something, where range is more important than speed and 2.4GHz tends to go a wee bit further.
So the only things I have left in normal use that require a 2.4GHz network are IoT things like smart plugs and light bulbs and other small stuff like my own little ESP/Pi Zero W projects that require so little bandwidth that the contention doesn't matter. (I mean... ye olde Wii console and PSP handheld only do 2.4GHz, but they don't have much to talk about on the network anymore and never really did even in the best of times.)
It's difficult to imagine that others' wifi devices aren't in similar form, because there's just not much stuff left out there in the world that's both not IoT and that can't talk at 5GHz.
I can see some merit to having a separate IoT VLAN with its own SSID where that's appropriate (just to prevent their little IoT fingers from ever reaching out to the rest of the stuff on my LAN and discovering how insecure it may be), but that's a side-trip from your suggestion wherein the impetus is just logical isolation -- not spectral isolation.
So yes, of course: Build out a robust wireless network. Make it awesome -- and use it for stuff.
But unless I'm missing something, it sounds like building two separate-but-parallel 2.4GHz networks is just an exercise in solving a problem that hasn't really existed for a number of years.
My dev laptop is about 10 m (30 ft) away from the wifi access point, but goes through about 6 walls diagonally, due to some weird layout, and 2.4 GHz is way faster.
The house has some thick walls.
Same with phones. As soon as I'm in a different room, 2.4 GHz is faster. So I just keep things on 2.4.
Yeah, I've been planning to wire the house with Cat-6 into every room and add some access points. It's been on the backlog for 6 years..
My last house, which was rather small (by midwestern American standards, anyway) had some interior walls that were very good at blocking 5GHz transmissions. (I never took them apart to look, but I suspect that some of them had plaster with metal lath as one or more layers.)
I started with one access point downstairs at the front (because that's where the cable modem lived) but it didn't work so well upstairs, at the back (diagonally) in the room I was using as an office.
So I added another access point upstairs at the back and that fixed it: Wifi became solid-enough both upstairs and down, and also covered the entire back yard, and also worked great for the neighbors when they asked if they could borrow a cup of Internet. It took some literal gymnastics in some very weird normally-unseen spaces to accomplish that run, but it got done. :)
As an aside: It's interesting that being blocked by walls is also part of what makes 5GHz wifi so speedy indoors (in addition to having a lot more spectrum to use), for many [not all] people. By being attenuated so well by walls, the co-channel interference from the neighbors is reduced rather dramatically. With neighbors nearby, the RF environment tends to be a lot quieter at 5GHz than at 2.4GHz.
---
Present-day house is a bit lucky: All of the thirsty tech is on the first floor, and it's very simple to get ethernet cables routed 'round in the basement (it's all utility space). I was able to find enough pre-existing holes in the floor (from old cable TV installs and also floor-mounted outlets that have been removed and covered) that getting ethernet to every useful area of every first-floor room with tech in it was a very simple ordeal that did not require a drill. (Yeah, that means that there's a wire poking up through the floor behind the desk I'm sitting at right now instead of a tidy RJ45 receptacle on a wall plate with a nice port designation label. I'm over it; it works perfectly and inertia is a hell of a drug.)
But I'm not completely "lucky." The present house has aluminum siding and low-E windows. It's a great house that is amazingly inexpensive to heat and cool for how old it is, but it has aluminum siding and low-E windows and approximates a somewhat-leaky Faraday cage.
Thus, my cell phone barely works indoors, but it works great outside. And wifi barely works outside on the porch (front or back, doesn't matter), and really not at all beyond the porch (but things like my phone think that it should work, which is problematic).
I worked around that well-enough for the detached garage and back yard area by adding another access point in the garage, configured as a wireless repeater. Its advantage is that it has antennas that are optimized to work well, instead of some that are optimized to be very small (like those inside my phone, or my laptop). It's identical to the one inside the house and gets OK signal to/from the main AP, which it has a visual line-of-sight to through a couple of windows.
As an impromptu solution made from stuff I already had leftover from the last place, it works. I'm not winning any speed records with that remote access point... but it seems to be reliable, and reliability is good.
(Maybe some day I'll actually get around to upgrading the electricity to the garage to support some easy-to-access rooftop solar and/or car charging and/or welding and/or something, and when that trenching happens I'll also drop in some single-mode fiber. A single run of pre-terminated fiber is very cheap to buy, the "optics" at the endpoints are very inexpensive, and it is very safe with its essentially-absolute electrical isolation. It feels like overkill, but it's also once and done.)
As I understand, low-E reflects solar thermal infrared radiation (3-8 microns, 37-100 THz), while letting through visible light. I don't think it affects 5 GHz radio waves very much.
But yeah, it would be very satisfying to finally wire the house with ethernet.
A few things come to mind...
- You can buy ethernet adapters... for iPhone/ipad/etc. Operations are so much faster, especially large downloads like offline maps.
- many consumer devices suck wrt wifi. For example, there seem to be ZERO soundbars with wired subwoofers. They all incorporate wifi.
- also, if anyone has lived in a really dense urban environment, wifi is a liability in just about every way.
- What's worse is how promiscuous many devices are. Why do Macs show all the neighbors' televisions in the AirPlay menu?
- and you can't really turn off wifi on a mac without turning off sip. (in settings, wifi OFF toggle is stuck on but greyed out)
That's a feature that can be configured on the TV/AirPlay receiver. They've configured it to allow streaming from "Anyone", which is probably the default. They could disable this setting and limit it to only clients on their home network. And you can't actually stream without entering a confirmation code shown on the TV.
When you stream to an AirPlay device this way it sets up an ad hoc device-to-device wireless connection, which usually performs much better than using a wifi network/router and is why screen sharing can be so snappy. It's part of the 'Apple Wireless Direct Link' proprietary secret sauce also used by AirDrop. You can sniff the awdl0 or llw0 interfaces to see the traffic. Open AirDrop and then run `ping6 ff02::1%awdl0` to see all the Apple devices your Mac is in contact with (not necessarily on your wifi network).
> and you can't really turn off wifi on a mac without turning off sip.
Just `sudo ifconfig en0 down` doesn't work? You can also do `networksetup -setairportpower en0 off`. Never had issues turning off wifi.
Sonos has its issues, but I do need to point out that their subs (and the rest) all have Ethernet ports in addition to WiFi.
In software-land, they even solved latency inequalities well enough to keep things properly in phase at 20 kHz between different devices, to allow stereo imaging to work correctly betwixt two wirelessly-connected speakers. (This seems very passé in these modern enlightened times of seemingly-independent wireless Bluetooth earbuds, but it was a tough nut for them to crack back in 2002[!].)
It wasn't all smiles and rainbows, of course, because the world never properly settled on one, true, universal implementation of something like Spanning Tree Protocol and agreed on how to use it. It was very possible for a person to really hose up their network by connecting Sonos gear the "wrong" way -- by connecting "too much" of it directly to the LAN.
But those potential problems were broadly mitigable by picking exactly one Sonos device to bridge the wireless SonosNet into the home's LAN: Ideally, a Sonos Bridge would -- uh -- provide that bridge, but any random Sonos speaker (or subwoofer!) would do just as well. This worked, but it involved some aspect of wifi.
And yeah, the problems could also be mitigated in other ways if they showed up: A person could certainly plug in their Sonos sub, sound bar, and surround speakers into Ethernet -- which was really quite neat and tidy if it worked, and it often worked. But it was a pickle if it didn't work because STP implementations can be an unadjustable boondoggle in the consumer space.
They had a really neat and rather unique thing going for quite a long time before the market shifted to make their products apparently be fickle, outdated, inferior, and expensive. ("What, no Bluetooth?" people once said, even though, being an independent network-based streamer, it doesn't have Bluetooth problems like a person walking to the other side of the house with their phone where everyone but them can hear it noisily glitch out until they wander back.)
Nowadays, SonosNet seems to be mostly dead, and the STP problems died with it. Common home wifi has also grown up a lot since 2002. So a person can hard-wire their Sonos sub, soundbar, and surround speakers into the LAN without fear of badness -- or use one or more of those wirelessly, instead. All without problems.
It was pretty neat. It's still pretty neat today.
Eh, I just had to go through and disconnect all ethernet from a bunch of Sonos devices in my house a couple months ago due to issues. It's on my list to go through and connect everything to the LAN when I get the time to make another couple ethernet drops - but mixing wifi/ethernet connected Sonos devices is not a great experience even in 2025.
Are you still on S1?
For my IoT network I just block most every device's access to the internet. That cuts down on a lot of their background chatter and gives me some minor protection.
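A minimal sketch of that blocking on a Linux router with nftables, assuming the IoT devices live on 192.168.20.0/24 (adjust the subnet to your own layout):

    # Drop anything from the IoT subnet that isn't bound for RFC1918 space,
    # i.e. it can talk locally but never reach the internet
    nft add table inet fw
    nft 'add chain inet fw forward { type filter hook forward priority 0; policy accept; }'
    nft 'add rule inet fw forward ip saddr 192.168.20.0/24 ip daddr != { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 } drop'

Most decent router UIs (and certainly anything running OpenWrt) can express the same rule without touching a shell.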
Also honestly, I feel the majority of wifi problems could be fixed by having proper coverage (more access points), using hardwired access points (no meshing), and getting better equipment. I like Ubiquiti/Unifi stuff but other good options out there. Avoid TP-Link and anything provided by an ISP. If you do go meshing, insist on a 6ghz backhaul, though that hurts the range.
Certainly this is the brute-force way to do it and can work if you can run enough UTP everywhere. As a counterexample, I went all-in on WiFi and have 5 access points with dedicated backhauls. This is in SF too, so neighbors are right up against us. I have ~60 devices on the WiFi and have no issues, with fast roaming handoff, low jitter, and ~500Mbit up/down. I built this on UniFi, but I suspect Eero PoE gear could get you pretty close too, given how well even their mesh backhaul gear performs.
I'm glad it works but lol that's just hilarious.
5 aps for 60 devices is hilarious. I have over 120 devices running off 2 APs without issue. lol
I'm just curious – I'm a relatively techy person and I have maybe 15 devices on my whole home network.
I pretty much just deploy WiFi as a "line of sight" technology these days in a major city. Wherever you use the wifi you need to be able to visually see the AP. Run them in low power mode so they become effectively single-room access points.
Obviously for IoT 2.4ghz stuff sitting in closets or whatever it's still fine, but with 6ghz becoming standard the "AP in every room" model is becoming more and more relevant.
I’ve connected a switch and a second access point with mine.
Also, I think they work best if there are fewer of them on the same circuit. But not sure. Check first.
It's literally wifi just over an even worse medium.
The "wired" connection is an absolute hack.
Now put an access point into every room and wire them to the router, and things start looking very differently.
People say this until it takes 3 days to restore a fibre cut, when the wireless guys just work around the problem with replacement radios etc.
Issue with Wireless is usually the wireless operator. And most of them do work hard to give wireless a bad rep.
I am aware of a datacentre whose principal fibre bundle transits a fast-tracked development area where there's always construction and always fibre cuts.
I am also aware of a wireless backhaul path with close to 2 weeks of battery backup, running entirely off of solar. They only truck-roll if they get consistent bad weather.
I used to maintain an absolutely perfect 25km link that only went offline due to wind twisting the mast the radio was mounted on.
I have also maintained an absolute dog's breakfast of a network where customers frequently lost connection. Like daily.
I had one fibre link supporting 1000 customers or so, that the provider admitted had so many joins they could scarcely maintain it. And to add insult to that injury, they mislaid the service id, and would always take an adjacent service offline while troubleshooting it.
The technology is rarely the problem, it's the implementation.
Proliferation of consumer hardware that lacks ethernet ports is probably a contributing factor
IMHO, the greatest utility of wifi is wireless keyboards and monitors, not wireless internet access
The ability to remotely control multiple computers not on the same network from the same keyboard, for example
But I've always had a bias for using a (mechanical) external keyboard over built-in laptop keyboards, even before there were wireless keyboards
Sometimes DFS certification comes after general device approval, but I'm not aware of any that just flat out doesn't support it. It supported it 10+ years ago.
I'd guess OP might be trying to use 160mhz channel width on 5ghz band, which will only work on DFS channels though. I wouldn't recommend 160mhz channel width unless you have a very quiet RF environment and peak speed is very important to you. Also I've found it hard to get clients to actually use the full 160mhz width on a network configured this way.
There is other stuff to watch - like uhd bluray backups and those need more than the crappy 100mbps lan port can deliver.
TV streaming seems like a bad example, since it's usually much lower average bandwidth than e.g. a burst of mobile app updates installing with equal priority on the network as soon as a phone is plugged in for charging, or starting a cloud photo backup.
That's true of any client with older and crappier WiFi chips though, but TVs are such a race to the bottom when it comes to performance in so many other things.
We've gone from 100 Mbps being standard consumer level to 2.5 or 10 Gbps being standard now. That sounds substantial to me.
It is bizarre that they are putting 100mbps Ethernet ports on TVs though.
It's not that bizarre. About the only media one might have access to that is above 100mbps is 4k blu-ray rips which can hit peaks above 100m; but TVs don't really cater to that. They're really trying to be your conduit to commercial streaming services which do not encode at that high of a bitrate (and even if they did, would gracefully degrade to 100Mbps). And then you can save on transformers for the two pairs that are unused for 100base-tx.
It's a few pennies cheaper, and I'm sure they have some data showing 70%+ will just use WiFi. TCL in particular doesn't even have very good/stable drivers for their 10/100 NIC; there's a ton of people on the Home Assistant forums who have noticed that their Android-powered smart TV will just... stop working / responding on the network until it's rebooted.
The only way I've managed to convince any Wifi 7 client to exceed 1gbps is by freshly connecting to it over 6ghz while standing physically within arm's reach of the AP. That's it. That's the only time it can exceed 1gbps.
In all other scenarios it's well under 1gbps, often more like 300-500mbps. Which is great for wifi, but still quite below the cheapest ethernet ports around. And 6ghz client behavior across OS's (Windows, MacOS, iOS, and Android) is so bad at roaming that I actually end up just disabling it entirely. The only thing it can do is generate bragging rights screenshots, in actual use it's basically entirely DOA.
And that's ignoring that ~$200 N150 NUCs come with 2.5gbps ethernet now.
You can find a 2 port 10gbe+4 port 2.5gbe switch for just over $30 on Amazon.
If the run isn’t too long this can all run over cat5. Handily beats wifi especially for reliability but Thunderbolt is fastest if you only have 2 machines to link.
I could go to 10gbit but the Thunderbolt adapters for those all have fans.
I think this market is driven by content creators. Lots of prosumers shoot terabytes of video on a weekly basis. Local NAS are essential and multi-gig local networks dramatically improve the editing experience.
I have Firewalla Wi-Fi 7 APs connected via 10Gb Ethernet to my router. They're brilliant, very expensive, very high quality devices. I use them only for devices which I can't hardwire, because even 1Gb Ethernet smokes them in actual real-world use.
I see that you have never tried this. By the way, Mac Migration Assistant doesn't need Wi-Fi infrastructure at all.
Running over Wi-Fi dragged on interminably and we gave up several hours in. When we scrounged up a couple of USB Ethernet dongles and started over, it took about an hour.
So yeah, my own personal experience confirms exactly what I'd expect: Wi-Fi is slow and high-latency compared to Ethernet, and you should always use hardwired connections when you care about stability and performance more than portability. By all means, use Wi-Fi for routine laptop mobility. If you have the option, definitely run a cable to your stationary desktop computers, game consoles, set-top boxes, NASes, and everything else within reach of a switch.
Now let's talk about my actual “old mac” and “new mac”: a mid-2012 MBP and my M3 Pro. The 2012 can only do 802.11n, so no gigabit speeds. It does have gigabit ethernet, however.
Even if I was going M3 Pro to M3 Pro, I'm only getting full WiFi 6E speeds if I actually have a router that makes use of 160MHz channels. My router can't. It is hard to even glean router offerings to see which are offering proper WiFi 6, because there are dozens of SKUs, with different stores even getting slightly different SKUs from the same brand. Afaik my Mac does not support 160MHz WiFi 6 either.
Wifi is garbage. This person has no idea what they're talking about. It sounds like they read a blog post like 5 years ago and stuck with it cuz it's an edgy take.
Your take is really weird and doesn't represent the real world. What blog did you read this on and why haven't you bothered to attack that obviously wrong stance?
But if you actually want your Ethernet to be similar speed to your SSD, you don't need to spend that much. Get some used gear.
32 port 40GbE switch (Dell S6000) $210 used
Dual port 40GbE NIC (Mellanox MCX354A-FCCT) $31 used
40GbE DAC 1 meter new price $22 or 40GbE optics from FS.com (QSFP-SR4-40G) $43 new + MMF fiber cable
Of course, that's probably not going to be very power efficient for home use - 32 port switch and probably only connecting a handful of devices at most.
If you want to spend a really long time optimizing your wifi, this is the resource: https://www.wiisfi.com/
If you are experiencing problems, this might give you an angle to think about that you hadn't otherwise, if you just naively assume Wifi is as good as a dedicated wire. Modern Wifi has an awful lot of resources, though. I only notice degradation of any kind when I have one computer doing a full-speed transfer for quite a while to another, but that's a pretty exceptional case and not one I'm going to run any more wires around for for something that happens less than once a month.
Also that's an amazing resource, thanks for linking.
Add another idiot sitting on channel 8 or 9 and the other half of the bandwidth is also polluted, now even your mediocre IoT devices that cannot be on 5GHz are going to struggle for signal and instead of the theoretical 70/70mbps you could get off a well placed 20MHz channel you are lucky to get 30.
Add another 4 people and you cannot make a FaceTime call without disabling wifi or forcing 5GHz.
I just now reduced it to 20MHz, and though there is a (slight) perceptible drop in latency, those 5 extra dB of signal-to-noise I gained have given me wifi in the bedroom again.
* In a Gaussian white noise environment, which WiFi usually isn't in.
The best resource out there. Period.
Their `networkQuality` implementation is on the CLI for any recently-updated Mac. It's pretty interesting, and I've found it to be very good at predicting which networks will be theoretically fast but feel unreliable and laggy, and which ones will feel snappy and fast. It measures Round-trips Per Minute under idle and load conditions. It's a much better predictor of how fast casual browsing will be than a speed test.
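For anyone who hasn't tried it (it ships with macOS Monterey and later):

    # Run the built-in responsiveness test; -v also prints the RPM figures
    networkQuality -v

    # -s tests uplink and downlink sequentially instead of in parallel
    networkQuality -s

The RPM-under-load number is the one that correlates with that "fast but laggy" feeling; it's effectively a bufferbloat measurement.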
My house is old and has stone walls up to 120cm thick, including the inner walls, so I have to have access points in nearly all rooms.
I never had a truly seamless roaming experience. Today I have TP-Link Omada, and it works better than previous solutions, but it is still not as good as DECT phones, for example.
For example, if I watch a Twitch stream in my room and go to the kitchen to grab something with my tablet or my phone, I get a freeze about 30% of the time, but not a very long one. Before, I sometimes had to turn the wifi off and on on my device for it to roam.
I followed all the Omada and general WiFi best practices I could find about frequency, overlap... But it is still not fully seamless, yes.
Most people place wifi repeaters incorrectly, or invest in crappy repeater/mesh devices that do not have multiple radios. A WiFi repeater or mesh device with a single radio by definition cuts your throughput in half for every hop (two such hops and you're down to a quarter of the base rate).
I run an ISP. Customers always cheap out when it comes to their in home wireless networks while failing to understand the consequences of their choices (even when carefully explained to them).
The design of roaming being largely client initiated means roaming doesn't really work how people intuitively think it should, because at least every device I've ever seen seems to be programmed to aggressively cling to a single AP.
Of course with 3 APs instead of 2 the best layout is different.
People are almost always better off with more cheap access points than fewer expensive access points, but that's not how most regular folk reason.
"The basement"
"Uh, i can send someone out to install some repeaters for $$$"
"No just make internet good now"
I assume you have hardwired all the APs; otherwise that would be the first step. Make sure they're on different channels and have narrow channel widths (20MHz for 2.4GHz, 40MHz for 5GHz) selected; see the sketch below.
Only use 1, 6, and 11 for 2.4GHz, and don't use the DFS channels on 5GHz as they will regularly hang everything.
Afterwards you can try reducing the 5GHz transmission power so there is no/less overlap in the far rooms.
Unfortunately you probably need the 2.4GHz (at least I do), but as the range is so much greater it might make sense to deactivate it on some APs to prevent overlaps.
Doing this basically eliminated the issues for me.
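For the hostapd crowd, a sketch of what one AP's 5GHz radio looks like under that plan; the channel assignments are just one workable layout, not the only one:

    # 2.4GHz radios across three APs: channels 1, 6, 11 (all 20MHz)
    # 5GHz radio on this AP -- 40MHz on a non-DFS channel:
    interface=wlan0
    ssid=home-5g
    hw_mode=a
    channel=36             # 36/40/44/48 are the non-DFS low band (US)
    ieee80211n=1
    ht_capab=[HT40+]       # 40MHz: bonds channels 36+40
    ieee80211ac=1
    vht_oper_chwidth=0     # 0 = stay at 20/40MHz, don't widen to 80MHz

Give the next AP channel 44 with [HT40+] (bonding 44+48) and so on, so neighboring radios never sit on overlapping 40MHz blocks.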
I have worked with networks for many years, and users blaming all sorts of issues on the network is a classic, so of course in their minds they need more speed and more bandwidth. But improvements only makes sense up to some point. After that it is just psychological.
Is that actually a thing? Why would any ISP intentionally add unnecessary load to their network?
So they're not really increasing their network load a measurable amount since the data never actually leaves their internal network. My ISP's network admin explained this to me one day when I asked about it. He said they don't really notice any difference.
(at least as per my understanding)
* https://en.wikipedia.org/wiki/IEEE_802.11bn
So other factors are being considered.
While the two are not the same, they are not exactly separable.
You will not get good Internet speed out of a flaky network, because the interrupted flow of acknowledgements, and the need to retransmit lost segments, will not only itself impact the performance directly, but also trigger congestion-reducing algorithms.
Most users are not aware whether they are getting good speed most of the time, if they are only browsing the web, because of the latencies of the load times of complex pages. Individual video streams are not enough to stress the system either. You have to be running downloads (e.g. torrents) to have a better sense of that.
The flakiness of web page loads and insufficient load caused by streams can conceal both: some good amount of unreliability and poor throughput.
Wifi 8 will probably be another standard homes can skip. Like wifi 6, it is going to bring little that they need to utilise their fibre home connections well across their home.
> Many ISPs, device manufacturers, and consumers automate periodic, high-intensity speed tests that negatively impact the consumer internet experience as demonstrated
But there's no support for this claim presented, and frankly I am skeptical. What WiFi devices are regularly conducting speed tests without being asked?

ISP-provided routers do; at least Xfinity's does. I've gotten emails from them (before I ripped out their equipment and put my own in) saying "Great news, you're getting more than your plan's promised speeds", with speedtest results in the email, because they ran speed tests at like 3AM.
I wouldn't be surprised if it's happening often across all the residential ISPs, most likely for marketing purposes.
Really? DOCSIS has been the bottleneck out of Wi-Fi, DOCSIS, and wider Internet every time I've had the misfortune of having to use it in an apartment.
Especially the tiny uplink frequency slice of DOCSIS 3 and below is pathetic.
There's far more bandwidth within the DOCSIS network than can enter and exit it, which is why running communication tests between DOCSIS devices has no effect on the usefulness of the network.
Fun fact: The current workaround to the slowness of DOCSIS modems is to put more modems in your modem and trunk them together, so you can get gigabit speeds with 100+ megabit upstream, by simply having a half dozen or more concurrent connections.
What does that even mean? DOCSIS is a point-to-multipoint network of one CMTS and several modems. All traffic happens between modems and the CMTS. Where would the purported “far more bandwidth” be hiding?
> The current workaround to the slowness of DOCSIS modems is to put more modems in your modem
How does that help at all with the peak capacity of a given physical network segment? That’s like saying “the key to increasing the total capacity of a road is to add more cars to utilize all lanes”.
If you use all physical uplink bandwidth of DOCSIS yourself by hypothetically “using more modems”, nobody else on that segment gets anything.
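To put rough numbers on that (assumed DOCSIS 3.0 ballpark figures: roughly 38 Mbps usable per 6 MHz QAM-256 downstream channel, ~27 Mbps per 6.4 MHz upstream, and a 32-downstream/4-upstream segment as a common-ish build):

    # Ballpark DOCSIS 3.0 segment arithmetic -- figures assumed, see above
    down_per_ch = 38            # Mbps usable, one 6 MHz QAM-256 downstream
    up_per_ch = 27              # Mbps usable, one 6.4 MHz ATDMA upstream

    segment_down = 32 * down_per_ch   # ~1216 Mbps shared by the segment
    segment_up = 4 * up_per_ch        # ~108 Mbps shared upstream

    # An 8-channel modem can bond ~304 Mbps of that; a "modem of modems"
    # can bond more -- but the segment totals above don't change; every
    # extra channel you occupy is capacity your neighbors no longer get.
    print(segment_down, segment_up, 8 * down_per_ch)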
I used to run a Docker container that ran a speed test every hour and graphed the results, but I haven't done that in a while now.
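If anyone wants to recreate that, a minimal sketch using the speedtest-cli Python package, run hourly from cron (API as per its README, but verify the details before trusting them):

    # pip install speedtest-cli
    import csv, datetime, speedtest

    st = speedtest.Speedtest()
    st.get_best_server()
    down = st.download() / 1e6      # bits/s -> Mbit/s
    up = st.upload() / 1e6

    # Append one row per run; graph the CSV with anything you like.
    with open("speed_log.csv", "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(timespec="seconds"),
             round(down, 1), round(up, 1)])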
For people who don't follow WiFi closely: while WiFi 8, 7, and 6 each had the intended features for their release, those features were either not mandated or didn't work as well as they should. In effect, every release has been the fully refined execution of the previous version. So the best of what WiFi 6 originally promised (OFDMA) only arrived in WiFi 7. And current WiFi 7 features like Multi-Link Operation will likely work better in WiFi 8. So if you want a fully working WiFi 8 as marketed, you'd better wait for WiFi 9.
But WiFi has come a long way. Not only has it exceeded 1 Gbps in real-world performance, it is coming close to 2.5 Gbps, maxing out 2.5 Gbps Ethernet. And we are now working on making WiFi more efficient and reliable.
There are these cool new features like MLO, but maybe devices could mostly use narrow channels and only use more RF bandwidth when they actually need it.
IEEE 802.11ah: 900 MHz-ish.
IEEE 802.11ax (WiFi 6): traditional channels can be subdivided into resource units of between 26 and 2x996 tones according to need (effectively a 2 MHz channel at the low end). This means multiple devices can be transmitted to within the same transmit opportunity.
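To sanity-check that "effectively 2 MHz" figure: 11ax OFDMA uses 78.125 kHz tone spacing, so an RU's width is roughly tones × spacing (a simplification that ignores guard and pilot tones):

    TONE_SPACING_KHZ = 78.125  # 802.11ax OFDMA subcarrier spacing

    # Standard 11ax resource-unit sizes, in tones
    for tones in (26, 52, 106, 242, 484, 996):
        print(f"RU-{tones}: ~{tones * TONE_SPACING_KHZ / 1000:.2f} MHz")
    # RU-26 -> ~2.03 MHz, the "2 MHz channel" at the low end;
    # RU-242 -> ~18.9 MHz, i.e. roughly one whole 20 MHz channel.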
> How about some modulations designed to work at low received power and/or low SNR?
802.11(og), 1 & 2 Mbps.
> 802.11(og), 1 & 2 Mbps
I’m a little vague on the details, but those are rather old and I don’t think there is anything that low-rate in the current MCS table. Do they actually work well? Do they support modern advancements like LDPC?
They're the original, phase shift keyed modulations.
> Do they actually work well?
They work great, if your problem is SNR, and if you value range more than data rate.
They are, of course, horribly spectrally inefficient, which means they work better than OFDM near the guard bands. OFDM has a much flatter power level over frequency, so you have to limit TX power whenever the shoulder of the signal nears the guard band. IIRC, some standard supports individually adjusting the resource unit transmit power, which would solve this as well. PSK modulation solves this somewhat accidentally. Guard bands especially suck since there are only 3 non-overlapping 2.4 GHz channels.
> I don’t think there is anything that low-rate in the current MCS table.
> Do they support modern advancements like LDPC?
Dunno! Generally though, each MCS index will specify both a modulation scheme (BPSK, QPSK, 16-QAM, ...) and a coding rate. All of the newer specs still let you go almost as slow if you want to, usually 6-7 Mbps-ish, and this is done with the same BPSK modulation, just a bit faster and with newer coding.
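For intuition, those low-end rates fall straight out of the OFDM arithmetic. A sketch using 802.11n 20 MHz parameters (52 data subcarriers, 4 µs symbols; preamble and MAC overhead ignored):

    def phy_rate_mbps(subcarriers, bits_per_sc, coding_rate, symbol_us):
        # Idealized OFDM data rate: tones x bits x code rate per symbol
        return subcarriers * bits_per_sc * coding_rate / symbol_us

    # 802.11n MCS0: BPSK (1 bit/subcarrier) at rate-1/2 coding
    print(phy_rate_mbps(52, 1, 1/2, 4.0))    # -> 6.5 Mbps
    # 802.11n MCS7: 64-QAM (6 bits) at rate-5/6 coding
    print(phy_rate_mbps(52, 6, 5/6, 4.0))    # -> 65.0 Mbps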
> do those ax resource units work, in practice, in a way that allows two APs that are moderately close to each other to coexist efficiently within the same 20MHz channel?
Yes and no. It doesn't improve RF coexistence directly, but in many cases it allows much more efficient use of the available airtime. Before, every outgoing packet to a different station consumed a guard interval and the entire channel bandwidth; now, for a single guard interval, you can pack as many stations' data as will fit.
Also MIMO.
And I don't think it's relevant to compare what to do in a large space with what one should do at home. The requirements are entirely different.
In a large space with many users I'd use small channels and many access points. I want it to work well enough for everyone to have calls, and to have good aggregate throughput.
In a two bed home I'd use large channels and probably only one AP. Peak single device speed is MUCH more important than aggregate speed.
And in a home it matters much more which channels are being kept busy by neighbors.
For latency, of course, there is only wired. Even with few devices.
I wonder how many of those could be wired.
The only thing that makes wifi in a large condo building viable is the 6 GHz channels available with WiFi 6E.
I have 50+ ESP-based devices on WiFi, and while they're low-bandwidth (and on their own SSID), I really wish there were affordable options to "wire" them for comms. They mostly control mains appliances, but the rules and considerations for mixing data and mains in one package make that prohibitively expensive.
So yeah, I do think speed is more important.
Responsiveness doesn’t matter that often and when it does, plugging in Ethernet takes it out of the equation.
I don't see a way to change that setting and I don't see a way to see what it's currently set to.
Use a dedicated 2.4 GHz AP for all IoT devices. Firewall this network and only allow the traffic those devices need. This greatly reduces congestion.
Use 5 GHz for phones/laptops and keep IoT off that network.
That's really about it. If you have special circumstances there are other solutions, but generally the solution to bad wifi is to not use the wifi, lol.
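A sketch of the firewall half of that, in nftables syntax, with hypothetical interface names (iot0 = IoT AP/bridge, lan0 = trusted LAN, wan0 = uplink); adapt to whatever your router actually runs:

    # /etc/nftables.conf fragment: keep IoT off the LAN, allow internet
    table inet filter {
        chain forward {
            type filter hook forward priority 0; policy drop;

            # Replies to already-established flows are fine
            ct state established,related accept

            # IoT may reach the internet, but never the trusted LAN
            iifname "iot0" oifname "wan0" accept
            iifname "iot0" oifname "lan0" drop

            # The trusted LAN may initiate connections to IoT devices
            iifname "lan0" accept
        }
    }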
I operate a large enterprise wireless network with 80 MHz 5 GHz channels and 160 MHz 6 GHz channels. It is possible if your environment allows.
Router and extenders (multi-floor house): 1-4
Chromecast|Sonos|Apple speaker/Chromecast|google|firestick|roku|apple TV/smart speaker/hifi receiver/eavesdropping devices: 2-10
Smart doorbell/light switch/temperature sensor/weather station/co2|co detector/flood detector/bulb/led strip/led light/nanoleaf/garage door: 4-16
Some cars: 0-2
Some smart watches speak wifi: 0-4
Computers.. maybe the desktops are wired (likely still support wifi), all laptops, chromebooks, and tablets : 3-8
All game consoles, many TVs, some computer monitors: 3-8
Some smart appliances: 0-4 (based on recent news of ads, best to aim for 0)
The biggest factor in your count, and I think it is the one with the highest ceiling, is smart devices. Trouble is, even by sources like https://www.consumeraffairs.com/homeowners/average-number-of..., around half of all households still have zero, and the average household has only 2.6 people.
In this thread (from its root), we have various users defending the reasonableness of the numbers, some providing numbers in their own houses: 10, 11, 14, 17, 19, 23, 28, 34, over 50, 60+. Averaging, I’ll say, about 27, and that’s with two pretty big outliers—if you excluded them (maybe reasonable, maybe not), you’d be down to 19.5. And these sorts of users are already likely to be above-average, it’s the nature of HN, compounded by them being the ones commenting (confirmation bias). Yet already (with the fiddling of removing what I’m calling outliers) they’re under the claimed average. And for each one of them, there’s another household with zero smart home devices; and the 20% of the population with no broadband are, I imagine, effectively using zero wifi devices, though discounting in this way is a little too simplistic. However you look at it, the average will drop quite a bit. In fact, if you return to the original 27 and simplify the portion of the population without smart home devices to a 30% zero rate (mildly arbitrary, but I think reasonable enough as a starting point) and let the other 70% be average… your 27 has dropped to about 19. In order to reach the 21 across the population, you’d need to establish these HN users, defenders of high wifi device counts, to be below average users of wifi devices, which is implausible.
If the number was 10, I’d consider it plausible, though honestly I’d still expect the number to be lower. But I think my reasoning backs up my initial feeling that 21 is pretty outlandish for your national average. I’d like to see Deloitte Insights’ methodology; I reckon it’s a furphy. I bet it’s come from some grossly misleading survey data, or from sales figures of devices that are wifi-capable even though half of them never get used that way, or from terrible sampling bias (surveys are notorious for that), or something like that. Wouldn’t be the first wildly wrong or grossly misleading result one of those sorts of companies has published.
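Spelling out the arithmetic above (the counts are the ones quoted from this thread; the 30% zero rate is my assumption):

    from statistics import mean

    counts = [10, 11, 14, 17, 19, 23, 28, 34, 50, 60]  # reported in thread
    hn_avg = mean(counts)          # ~26.6 -> "about 27"
    trimmed = mean(counts[:-2])    # drop the two outliers -> 19.5

    zero_rate = 0.30               # assumed share of zero-wifi-device homes
    population_avg = (1 - zero_rate) * hn_avg   # ~18.6 -> "about 19"
    print(round(hn_avg, 1), trimmed, round(population_avg, 1))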
I had probably 20 prior to swapping out some smart light bulbs and switches for Zigbee.
21 for an average household isn’t nuts.
We have 2 phones, a tablet for the kids, a couple of Google Homes, a Chromecast, 2 Yoto players, a printer, a smart TV, 2 laptops, a Raspberry Pi, a solar power inverter, an Oculus Quest, and a couple of things that have random hostnames.
It adds up.
Add a few wifi security cameras and other IoT devices and 30+ is probably pretty common.
I currently have 23; my parents' house has 19.
People have all kinds of stuff on wifi these days - cameras, light bulbs, dishwashers, irrigation, solar, hifi..
On 2.4 GHz I've got:
Wireless temperature monitor
Sync module for some Blink cameras
2 smart plugs
Roomba
5 smart lights
RPi 3
3 of the smart lights I currently don't need, so they aren't actually connected. That leaves 8 connected 2.4 GHz devices.
On 5 GHz I've got 16 devices:
Amazon Fire Stick
iPad
Printer
Echo Show
Apple Watch
Surface Pro 4
iMac
Nintendo Switch
EV charger
Mac Studio
A smart plug
Google Home Mini
Echo Dot
RPi 4
Kindle
iPhone
The iMac and the Surface Pro 4 are almost never turned on, and the printer is also off most of the time. That leaves 13 regularly connected 5 GHz devices.
That's a total of 21 devices usually connected on my WiFi, right at what the article says is average. :-)
I guess I am less "connected" than the average American. Can't say I feel like I am missing out, though.