The OpenWrt wiki on Homenet suggests the project might be dead: https://openwrt.org/docs/guide-user/network/zeroconfig/hncp_...
Anyone familiar with HNCP? Are there any concerns of conflicts if HNCP becomes "a thing"? I have to say, .home.arpa doesn't exactly roll off the tongue like .internal. Some macOS users seem to have issues with .home.arpa too: https://www.reddit.com/r/MacOS/comments/1bu62do/homearpa_is_...
In my native language (Finnish) it's even worse, or better, depending on personal preference - it translates directly to .mildew.lottery-ticket.
home.arpa is for HNCP.
Use .internal.
> A network-wide zone is appended to all single labels or unqualified zones in order to qualify them. ".home" is the default; [...]
If I were going to do a bunch of extra work messing with configs I'd be far more inclined to switch all my personal stuff over to GNS for security and privacy reasons.
The ".localhost" TLD has traditionally been statically defined in
host DNS implementations as having an A record pointing to the
loop back IP address and is reserved for such use
The RFC 8375 suggestion (*.home.arpa) allows for more than a single host in the domain, not just in name/feeling but under the strictest readings [and adherence] too.

This can be addressed by hijacking an existing TLD for private use, e.g. mything.bb :^)
If you want to be sure, use mything./ : the . at the end makes sure no further domains are appended during DNS lookup, and the / makes the browser try to access the resource without Googling it.
That's hardly the only example of annoying MONOBAR behavior.
This problem could have been avoided if we had different widgets for doing different things. Someone should have thought of that.
<virtual>.<physical-host>.internal
So for example phpbb.mtndew.internal
And I’d probably still add phpbb.localhost
To /etc/hosts on that host like OP does.

Ref: https://www.icann.org/en/board-activities-and-meetings/mater...
> As of March 7, 2025, the domain has not been standardized by the Internet Engineering Task Force (IETF), though an Internet-Draft describing the TLD has been submitted.
https://www.icann.org/en/board-activities-and-meetings/mater...
> Resolved (2024.07.29.06), the Board reserves .INTERNAL from delegation in the DNS root zone permanently to provide for its use in private-use applications.
https://developer.mozilla.org/en-US/docs/Web/Security/Secure...
So you don't need self-signed certs for HTTPS on local if you want to, for example, have a backend API and a frontend SPA running at the same time talking to each other on your machine (authentication, for example, requires a secure context if doing OAuth2).
Won't `localhost:3000` and `localhost:3001` also both be secure contexts? Just starting a random vite project, which opens `localhost:3000`, `window.isSecureContext` returns true.
Usually you'd have a reverse proxy running on port 80 that forwards traffic to the appropriate service, and an entry in /etc/hosts for each domain, or a catch-all in dnsmasq.
Example: a docker compose setup using traefik as a reverse proxy can have all internal services running on the same port (e.g. 3000) but have a different domain. The reverse proxy will then forward traffic based on Host. As long as the host is set up properly, you could have any number of backends and frontends started like this, via docker compose scaling, or by starting the services of another project. Ports won't conflict with each other as they're only exposed internally.
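A minimal sketch of that shape (image names and hostnames here are invented; the proxy is assumed to publish port 80):

services:
  traefik:
    image: traefik:v3.0
    command: --providers.docker
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  backend:
    image: my-backend    # hypothetical image, listens on 3000 internally
    labels:
      - "traefik.http.routers.backend.rule=Host(`api.myapp.localhost`)"
      - "traefik.http.services.backend.loadbalancer.server.port=3000"
  frontend:
    image: my-frontend   # hypothetical image, also listens on 3000
    labels:
      - "traefik.http.routers.frontend.rule=Host(`app.myapp.localhost`)"
      - "traefik.http.services.frontend.loadbalancer.server.port=3000"

Both apps bind the same internal port without conflict; only the proxy's port 80 is published on the host.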
Now, whether you have a use for such a setup or not is up to you.
1. not all browsers are the same
2. there is no official standard
3. even if there was, standards are often ignored
4. what is true today can be false tomorrow
5. this is mitigation, not security
> 1. not all browsers are the same

they are all aiming to implement the same html spec
> 2. there is no official standard
there literally is
> A context is considered secure when it meets certain minimum standards of authentication and confidentiality defined in the Secure Contexts specification
https://w3c.github.io/webappsec-secure-contexts/
> 3. even if there was, standards are often ignored
major browsers wouldn't be major browsers if this was the case
> 4. what is true today can be false tomorrow
standards take a long time to become standard and an even longer time to be phased out. this wouldn't sneak up on anyone
> 5. this is mitigation, not security
this is a spec that provides a feature called "secure context". this is a security feature. it's in the name. it's in the spec.
> 5.1. Incomplete Isolation
>
> The secure context definition in this document does not completely isolate a "secure" view on an origin from a "non-secure" view on the same origin. Exfiltration will still be possible via increasingly esoteric mechanisms such as the contents of localStorage/sessionStorage, storage events, BroadcastChannel, and others.
> 5.2. localhost
>
> Section 6.3 of [RFC6761] lays out the resolution of localhost. and names falling within .localhost. as special, and suggests that local resolvers SHOULD/MAY treat them specially. For better or worse, resolvers often ignore these suggestions, and will send localhost to the network for resolution in a number of circumstances.
>
> Given that uncertainty, user agents MAY treat localhost names as having potentially trustworthy origins if and only if they also adhere to the localhost name resolution rules spelled out in [let-localhost-be-localhost] (which boil down to ensuring that localhost never resolves to a non-loopback address).
> 6. Privacy Considerations
>
> The secure context definition in this document does not in itself have any privacy impact. It does, however, enable other features which do have interesting privacy implications to lock themselves into contexts which ensures that specific guarantees can be made regarding integrity, authenticity, and confidentiality.
>
> From a privacy perspective, specification authors are encouraged to consider requiring secure contexts for the features they define.
This does not qualify as the "this" in my original comment.
# Proxy to a backend server based on the hostname.
if (-d vhosts/$host) {
    proxy_pass http://unix:vhosts/$host/server.sock;
    break;
}
Your local dev servers must listen on a unix domain socket, and you must drop a symlink to them at e.g. /var/lib/nginx/vhosts/inclouds.localhost/server.sock.

Not a single command, and you still have to add hostname resolution. But you don't have to programmatically edit config files or restart the proxy to stand up a new dev server!
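Concretely, standing up a new dev server looks something like this (paths and the server command are hypothetical, and nginx's prefix is assumed to be /var/lib/nginx):

# 1. start the dev server on a unix domain socket (tool-specific flag)
$ my-dev-server --listen unix:/tmp/inclouds.sock

# 2. point the proxy at it under the hostname you want
$ mkdir -p /var/lib/nginx/vhosts/inclouds.localhost
$ ln -sf /tmp/inclouds.sock /var/lib/nginx/vhosts/inclouds.localhost/server.sock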
Check it out and let me know what you think! (Free, MIT-licensed, single-binary install)
Basically, it wraps up the instructions in this blogpost and makes everything easy for you and your team.
- If Caddy has not already generated a local root certificate:
- Generate a local root certificate to sign TLS certificates
- Install the local root certificate to the system's trust stores, and the Firefox certificate store if it exists and can be accessed.
So yes. I had written about how I do this directly with Caddy over here: https://automagic.blog/posts/custom-domains-with-https-for-y...

The certs and keys live in the localias application state directory on your machine:
$ tree /Users/pd/Library/Application\ Support/localias/caddy/pki/authorities/local/
/Users/pd/Library/Application Support/localias/caddy/pki/authorities/local/
├── intermediate.crt
├── intermediate.key
├── root.crt
└── root.key
The whole nicety of localias is that you can create domain aliases for any domain you can think of, not just ".localhost". For instance, on my machine right now, the aliases are:

$ localias list
cryptoperps.local -> 3000
frontend.test -> 3000
backend.test -> 8080
I really wish there was a safer way to do this, i.e. a way to tag a trusted CA as "valid for localhost use only". The article mentions this in passing
> The sudo version of the above command with the -d flag also works but it adds the certificate to the System keychain for all users. I like to limit privileges wherever possible.
But this is a clear case of https://xkcd.com/1200/.
Maybe this could be done using the name constraint extension marked as critical?
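Sketching what that might look like as an OpenSSL config fragment (a hypothetical ca.cnf; client support for honoring name constraints has historically been uneven, so treat this as an experiment):

# ca.cnf fragment: a root CA constrained to localhost names.
# Per RFC 5280, a DNS name constraint also covers all subdomains,
# so "localhost" permits both localhost and *.localhost.
[ v3_ca ]
basicConstraints = critical, CA:TRUE
keyUsage         = critical, keyCertSign, cRLSign
nameConstraints  = critical, permitted;DNS:localhost

You'd then mint the root with something like openssl req -x509 -new -config ca.cnf -extensions v3_ca and trust only that.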
Of note, it doesn't work on macOS. I recall having delivered a coding assignment for a job interview long ago, and the reviewer said it didn't work for them, although the code all seemed correct to them.
It turned out on macOS, you need to explicitly add any subdomains of .localhost to /etc/hosts.
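i.e. a line per name (and per address family); these entries are examples:

# /etc/hosts
127.0.0.1  myapp.localhost
::1        myapp.localhost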
I'm still surprised by this; I always thought that localhost was a highly standard thing covered in the RFC long long ago… apparently it isn't, and macOS still doesn't handle this TLD.
No, not here.
I've never seen this not-work on any distro, must be a niche thing.
$ ping hello.localhost
PING hello.localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.057 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms
$ ping hello.localhost
ping: cannot resolve hello.localhost: Unknown host
That said, I do recommend the use of the internal. zone for any such setup, as others have commented. This article provides some good reasons why (at least for .local) you should aim to use a standards-compliant internal zone: https://community.veeam.com/blogs-and-podcasts-57/why-using-...
Not so different from you, but without even registering the vanity domain. Why is this such a bad idea?
It's better to use a domain you control.
I'm a fan of buying whatever's cheapest to renew (like .ovh, great value) and using real Let's Encrypt (via DNS challenge) to issue certs for any subdomain/wildcard. That way any device will show a "green padlock" for a totally local service.
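With lego, for example, the wildcard issuance is roughly one command (domain invented; the OVH API credentials go in OVH_* environment variables per lego's docs):

$ lego --email you@example.com --dns ovh --domains '*.lan.example.ovh' run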
https://krebsonsecurity.com/2020/02/dangerous-domain-corp-co...
Honestly for USD5/year why don't you just buy yourself a domain and never have to deal with the problem?
I guess the lesson is to deploy a self-signed root ca in your infra early.
I suggested using a domain given they already have Caddy set up and it's inexpensive to acquire a cheap domain. It's also less of a headache in my experience.
Actually, now that I've linked the docs, it seems they use smallstep internally as well haha
[0] https://caddyserver.com/docs/automatic-https#local-https
Though you could register a name like ch.ch, get a wildcard certificate for *.ch.ch, insert local.ch.ch in the hosts file, and use the certificate in the proxy; that would even work on the go.
Is that a new thing? I heard previously that if you wanted to do DNS/domains for your local network you had to expose the name list externally.
Yes, this also works under macOS, but I remember there used to be a need to explicitly add these addresses to the loopback interface. Under Linux and (IIRC) Windows these work out of the box.
I previously used differing 127.0.0.0/8 addresses for each local service I ran on my machine. It worked fine for quite a while but this was in pre-Docker days.
Later on I started using Docker containers. Things got more complicated when I wanted to access an HTTP service both from my host machine and from other Docker containers. Instead of having your services exposed differently inside a Docker network and outside of it, you can consistently use the IPs and ports you expose/map.
If you're using 127.0.0.0/8 addresses then this won't work. The local loopback addresses aren't routed to the host computer when sent from a Docker container; they're routed to the container. In other words, 127.0.0.1 inside Docker means "this container" not "this machine".
For that reason I picked some other unused IP block [0] and assigned that block to the local loopback interface. Now I use those IPs for assigning to my docker containers.
I wouldn't recommend using the RFC 1918 IP blocks since those are frequently used in LANs and within Docker itself. You can use something like the link-local IP block (169.254.0.0/16) which I've never seen used outside of the AWS EC2 metadata service. Or you can use the carrier-grade NAT IP block (100.64.0.0/10). Or even some IP block that's assigned for public use, but is never used, although that can be risky.
I use Debian Bookworm. I can bind 100.64.0.0/16 to my local loopback interface by creating a file under /etc/network/interfaces.d/ with the following
auto lo:1
iface lo:1 inet static
    address 100.64.0.1
    netmask 255.255.0.0
    # no gateway needed: this is a local-only alias on the loopback
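For a quick test without editing config files, the non-persistent equivalent is a single iproute2 command:

$ sudo ip addr add 100.64.0.1/16 dev lo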
Once that's set up I can expose the port of one Docker container at 100.64.0.2:80, another at 100.64.0.3:80, etc.

$ resolvectl query foo.localhost
foo.localhost: 127.0.0.1 -- link: lo
::1 -- link: lo
Another benefit is being able to block CSRF using the reverse proxy.

https://www.man7.org/linux/man-pages/man8/libnss_myhostname....
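For reference, enabling nss-myhostname is a one-line edit (exact module order varies by distro; this is just the shape):

# /etc/nsswitch.conf -- myhostname resolves "localhost", the machine's own
# hostname, and anything ending in .localhost, with no /etc/hosts entries
hosts: files myhostname dns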
(There are lots of other useful NSS modules, too. I like the libvirt ones. Not sure if there's any good way to use these alongside systemd-resolved.)
The ability to add host entries via an environment variable turned out to be more useful than I'd expected, though mostly for MITM(proxy) and troubleshooting.
That is of course unless you really intend to send an email to someone at test@gmail.com.
Each service is then exposed via '<service>.local.<domain>'.
This has been working flawlessly for me for some time.
I use it extensively on my LAN with great success, but I have Macs and Linux machines with Avahi. People who don't shouldn't mess with it...
Practically speaking, HTTPS on LAN is essentially useless, so I don't see the benefits. If anything, the current situation allows the user to apply TOFU to local devices by adding their unsigned certs to the trust store.
The existing exception mechanisms already work for this, all you need to do is click the "continue anyway" button.
Public wifi isn't a thing? Nobody wants to admin the router on a wifi network where there might be untrusted machines running around?
In practice, you probably want an authorized network for management, and an open network with the management interface locked out, just in case there's a vulnerability in the management interface allowing auth bypass (which has happened more often than anyone would like).
I agree on the latter, but that means your IoT devices are accessible through both networks and have to discriminate which requests come from the insecure interface and which come from the secure admin one, which isn't practical for lay users to configure either. I mean, a router admin screen can handle that, but what about other devices?
I know it seems pedantic, but this UI problem is one of many reasons why everything goes through the Cloud instead of our own devices living on our own networks, and I don't like that controlling most IoT devices (except router admin screens) involves going out to the Internet and then back to my own network. It's insecure and stupid and violates basic privacy sensibilities.
Ideally I want end users to be able to buy a consumer device, plug it into their router, assign it a name and admin-user credentials (or notify it about their credential server if they've got one), and it's ready and secure without having to do elaborate network topology stuff or having to install a cert onto literally every LAN client who wants to access its public interface.
* It's reserved so it's not going to be used on the public internet.
* It is shorter than .local or .localhost.
* On QWERTY keyboards "test" is easy to type with one hand.
That said, I do use mDNS/Bonjour to resolve .local addresses (which is probably what breaks .local if you're using it as a placeholder for a real domain). Using .local as an imaginary LAN domain is a terrible idea. These days, .internal is reserved for that.
I have a more in depth write up here: https://www.silvanocerza.com/posts/my-home-network-setup/
Yes, it does require a cert for TLS, and that cert will not be trusted by default. I have found that with OpenSSL and a proper script you can spin up a cert chain on the fly, and you can make these certs trusted in both Windows and Linux with an additional script. A script cannot make the certs trusted in Safari on macOS, though.
I figured all this out in a prior personal app. In my current web server app I just don’t bother with trust. I create the certs and just let the browser display its page about high risk with the accept the consequences button. It’s a one time choice.
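If anyone wants the no-trust version, a single self-signed cert is one command these days (OpenSSL 1.1.1+ for -addext; the hostname is an example):

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout key.pem -out cert.pem \
    -subj "/CN=myapp.localhost" \
    -addext "subjectAltName=DNS:myapp.localhost"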
I used the .z domain bc it's quick to type and it looks "unusual" on purpose. The dream was to set up a web UI so you wouldn't need to configure it in the terminal and could see which apps are up and running.
Then I stopped working the job where I had to remember 4 different port numbers for local dev and stopped needing it lol.
Ironically, for once it's easier to set this kind of thing up on macOS than on Linux, bc configuring a local DNS resolver on Linux is a mess (cf. this Tailscale blog post, "The Sisyphean Task Of DNS Client Config on Linux": https://tailscale.com/blog/sisyphean-dns-client-linux). Whereas on Mac it's a couple of commands.
I think Tailscale should just add this to their product, they already do all the complicated DNS setup with their Magic DNS, they could sprinkle in port forwarding and be done. It'd be a real treat.
A possible disadvantage is that specifying a single ip to listen on means the http server won't listen on your LAN ip address, which you might want.
If you are using other systems then you can set this up fairly easily in your network DNS resolver. If you use dnsmasq (used by pihole) then the following config works:
address=/localhost/127.0.0.1
address=/localhost/::1
There are similar configs for unbound or whatever you use.
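For unbound it should be something along these lines (a sketch; the redirect zone type answers every name under the zone with the zone's own records):

server:
    local-zone: "localhost." redirect
    local-data: "localhost. A 127.0.0.1"
    local-data: "localhost. AAAA ::1"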
I have a ready-to-go docker-compose setup using Traefik here: https://github.com/georgek/traefik-local

Rather than do all this manually each time and worry about port numbers, you just add labels to docker containers. No ports, just names (at least for http stuff).
That's not redirection per se, a word that's needlessly overloaded to the point of confusion. It's a smart use of a reverse proxy.
It would be nice if you all reserved the word "redirect" for something like HTTP 3xx behavior.
I've had nothing but trouble with .local and .localhost. Specifically, .local is intended for other purposes (multicast DNS) and .localhost has a nasty habit of turning into just "localhost" and in some cases, resolvers don't like to allow that to point to anything other than 127.0.0.1.
More recently, I've stuck to following the advice of RFC 6762 and use an actual registered domain for internal use, and then sub-domain from there. I don't use my "production" domain, but some other, unrelated one. For example, if my company is named FOO and our corporate homepage is at foo.com, I'll have a separate domain like bar.com that I'll use for app development (and sub-domain as dev.bar.com, qa.bar.com, and maybe otherqa.bar.com as needed for different environments). That helps avoid any weirdness around .localhost, works well in larger dev/qa environments that aren't running on 127.0.0.1, and allows me to do things like have an internal CA for TLS instead of using self-signed certs with all of their UX warts.
For "local network" stuff, I stick to ".internal" because that's now what IANA recommends. But I would distinguish between how I use ".internal" to mean "things on my local network" from "this is where my team deploys our QA environment", because we likely aren't on the same network as where that QA environment is located and my ".internal" might overlap with your ".internal", but the QA environment is shared.
No daemons, and the only piece of configuration is adding a file to /etc/resolver: https://github.com/djanowski/hostel
I've been using it for myself so it's lacking documentation and features. For example, it expects to run each project using `npm run dev`, but I want to add Procfile support.
Hopefully other people find it useful. Contributions very much welcome!
That’s right: I invented a fictitious subdomain under one that my ISP controlled, and I never registered it or deployed public DNS for it. It worked great, for my dumb local purposes.
Example:
aten.mysub.isp.net.
porta.mysub.isp.net.
smartphone.mysub.isp.net.
Thus it was easy to remember, easy to add new entries, and was guaranteed to stay out of the way of any future deployments, as long as my ISP never chose to also use the unique subdomain tag I invented...

https://weblogs.asp.net/owscott/introducing-testing-domain-l...
BTW, there seems to be some confusion about *.localhost. It's been defined to mean localhost since at least 2013: https://datatracker.ietf.org/doc/html/rfc6761#section-6.3
myapp.localhost {
    tls internal

    # Serve /api from localhost:3000 (your API)
    @api path /api/*
    handle @api {
        # Remove the leading "/api" portion of the path
        uri strip_prefix /api
        reverse_proxy 127.0.0.1:3000
    }

    # Fallback: proxy everything else to Vite's dev server on 5173
    handle {
        reverse_proxy 127.0.0.1:5173
    }
}

You're welcome.
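To try it, save that as Caddyfile and run Caddy from the same directory (it reads ./Caddyfile by default; tls internal generates and installs Caddy's local CA on first run):

$ caddy run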
This is cool, but it only seems to work on the host that has the /etc/hosts loopback redirect to *.localhost. I run my app on a home server and access it from multiple PCs on the LAN. I have several apps, each associated with a different port number. Right now, I rely on a start page (https://github.com/dh1011/start-ichi) to keep track of all those ports. I’m wondering if there’s an easy way to resolve custom domains to each of those apps?
Whenever a host requests a DHCP lease it receives its assigned IP, which matches the unbound record, so I can always access it by hostname.
I'm not entirely sure how I feel about it, but at least it's on a completely separate domain.
When I add a new site to my local setup, I just define a CNAME in Cloudflare and add an entry in Nginx proxy manager. It handles SSL via wildcard cert.
It's a neat trick, but it comes with some caveats. For instance, `localhost` often resolves to both 127.0.0.1 and ::1, but `.localhost` is described in RFC2606 as "traditionally been statically defined in host DNS implementations as having an A record pointing to the loop back IP address and is reserved for such use". In other words, your server may be binding to ::1 but your browser may be resolving 127.0.0.1. I'm sure later RFCs rectify the lack of IPv6 addressing, but I wouldn't assume everyone has updated to support those.
Another neat trick to combine with .localhost is using 127.0.0.0/8. There's nothing preventing you from binding server/containers to 127.0.0.2, 127.1.2.3, or 127.254.254.1. Quite useful if you want to run multiple different web servers together.
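Quick demonstration with Python's built-in server (any server that accepts a bind address works):

$ python3 -m http.server 8080 --bind 127.0.0.2
# in another terminal:
$ curl http://127.0.0.2:8080/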
Of course, in the early internet, the difference between a TLD and a host name wasn't quite as clear as it is right now.
I cannot ping xyz.localhost because it doesn't resolve.

EDIT: I'm on Linux and don't use launchd, so I'd still need the port number
Self sign a certificate and add it to your trusted certificate list.
Or - use https://pinggy.io
They also show having the webserver do the TLS; that might be helpful.
Forgot to add .local I see
What it does is it has a private network for the containers and itself (all containers get their own unique IP, so there's no port mapping needed).
http://orb.local simply lists all running containers.
The host-names are automatically derived from the containername / directoryname / compose project, but you can add other hostnames as well by adding docker labels.
It works really well.
Would be good to have the config all somewhere in my user's dir too.
Per-user subdomains for their apps on localhost.
It works really well and means no setup on our developers' machines
Then we just have an entry for local.example.com in our vhosts and bam everything works as expected. No need to mess with /etc/hosts
Seriously though, one of the first things I did when I was hired as the sysadmin for a small company was to eliminate the need for memorizing/bookmarking ip-port combos. I moved everything to standard ports and DNS names.
Any services running on the same machine that needed the same ports were put behind a reverse proxy with virtual hosts to route to the right service. Each IP address was assigned an easy-to-remember DNS name. And each service was setup with TLS/SSL instead of the bare HTTP they had previously.
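A typical virtual host in that setup looked something like this (names and paths are illustrative, not the actual config):

server {
    listen 443 ssl;
    server_name wiki.corp.internal;

    ssl_certificate     /etc/nginx/tls/wiki.crt;
    ssl_certificate_key /etc/nginx/tls/wiki.key;

    location / {
        # route the standard port to the service's real, non-standard port
        proxy_pass http://127.0.0.1:8080;
    }
}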
Rather big caveat IMO. As a side note, your domain doesn't seem to have an AAAA record (which [.]localhost binds to by default on most of my machines, at least).
At this point a lot of TLD changes are going to step on someone's project or home/business/private network. I think .local is a good name for mDNS. I appreciate why you maybe aren't happy with it, but don't share your concern.
There's no reason .mdns or .mdns.arpa couldn't have just been added to the default domain search list (the list of suffixes tried for non-FQDN lookups). Given it ISN'T a nice, obvious human word to append, it wouldn't have conflicted with anyone who already had a .local at the time, nor with anyone in the future who assumes an obvious phrase like .local isn't in use by some other resolver system.
.local also works fine, of course, if you enable mDNS and don't try to use normal DNS.