https://www.engie-solutions.com/fr/references/chaleur-fatale...
No private or public entity will grant access to valuable proprietary hardware: unacceptable risks come not only from building owners but from anyone entering the premises.
Also, managing remote nodes spread evenly across all areas would be costly. Think of armies of techs permanently on the road, dealing with access problems, dogs, pest barriers, and so on.
One way to solve this would be to allocate a planned, properly secured space per block everywhere, available and accessible to all utility organizations: electric, ISP, water, phone, data, etc. Heat, power, mini data centers, and the like could then serve all buildings on a block.
Then other problems emerge: getting the utilities to plan and use these spaces together. It would only work if all services belonged to the same entity.
A way around that, of course, would be for individuals to set up servers they would own and rent to data brokers, as the Holo project once planned.
Risks: Co-mingling your home's ISP with the basement rack seems like a surefire way to get your personal devices blocked if external basement rack users are running a VPN through it and doing heinous stuff. Annoying, maybe solvable with an ISP device reboot. But that particular risk is worse depending on whether the host's jurisdiction allows the assumption of identity based on IP. Risks around general liability. Risks around tax implications when internal revenue folks see the opportunity to collect capital gains tax on your income generating property. So many risks!
The only encounters I've had with companies trying to incentivize this type of setup are Storj and Sia - both pay their host operators in cryptocurrency, which is just another risk IMO. Even though my own Storj node generates enough income to offset about 25% of my monthly energy bill, whatever implementation wins out and gains wide traction has a lot of groundwork to lay for those utility contracts, risks, and incentives.
That said, my Plex server for my friends is on a UPS, I'm on 1 Gb fiber, and I have better uptime than AWS.
https://news.infomaniak.com/en/infomaniak-inaugurates-a-revo...
1. It depends on what part of the world you are in, but many homes have cooling needs for at least part of the year. The need to remove excess heat goes up if you add more heat -- and it is less efficient to do this at the scale of an individual home than at DC scale.
2. Power requirements: while many homelabs have UPS systems, they often lack backup generators and redundant A+B power infrastructure, and don't have the power density required for higher-powered servers.
3. Connectivity requirements: most homes don't have access to the connectivity that data centers do.
4. Security requirements: homes simply can't meet the security requirements of most data protection regulations -- things like barriers, access control systems, surveillance, fire protection -- are anywhere from intrusive to completely impossible in a home.
5. Access requirements: homes aren't conducive to a technician responding to an outage at 3am
And those are just the big ones.
2. If servers are distributed then downtime is distributed, you can virtually guarantee that some of the servers over the world will be online so you can get effectively 100% uptime, something that is not possible in a data center
3. To serve tokens you need very little bandwidth, it's just text in and out
4. All of this is down to the HW and the SW itself, not the building. That is, the box that's being deployed.
5. Just switch to a different server until the problem is resolved, in this model there is no urgency. You just need redundancy which you can afford with how much cheaper this would be.
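The uptime claim in point 2 can be sketched with basic probability: if each home node is independently up with probability p, then at least one of n replicas is up with probability 1 - (1-p)^n. A toy illustration in Python (the 95% per-node uptime figure is an assumption for the example, not a measurement):

```python
def combined_availability(p: float, n: int) -> float:
    """Probability that at least one of n independent replicas is up,
    given each replica is up with probability p."""
    return 1 - (1 - p) ** n

# Assumed 95% uptime per home node:
for n in (1, 3, 5):
    print(n, combined_availability(0.95, n))
```

Three flaky nodes already beat 99.98% combined availability under this (idealized, independence-assuming) model, which is the intuition behind the redundancy argument.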
And data centers also exist in cold places. But if you put 8kw of extra heat in someone's home that previously didn't need cooling, it might need it now.
> 2. If servers are distributed then downtime is distributed, you can virtually guarantee that some of the servers over the world will be online so you can get effectively 100% uptime,
You can! But running more servers with worse uptime is less efficient and requires more capital expense than running fewer servers with better uptime.
> something that is not possible in a data center
This is not only possible, this is how the large clouds are architected. This is what availability zones are for.
> 3. To serve tokens you need very little bandwidth, it's just text in and out
bandwidth is only one of the many connectivity advantages that datacenters provide... and LLMs are a bad choice to run residentially for other reasons, particularly power density
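For what it's worth, the bandwidth claim itself does survive back-of-the-envelope arithmetic (token rate and bytes per token below are assumed round numbers):

```python
tokens_per_second = 100   # assumed generation speed for one client
bytes_per_token = 4       # assumed average; a token is a few characters
bits_per_second = tokens_per_second * bytes_per_token * 8
print(bits_per_second)    # 3200 -> a few kilobits per second, negligible
```

So the dispute is really about latency, reliability, and the other connectivity properties, not raw throughput.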
> 4. All of this is down to the HW and the SW itself, not the building.
Absolutely not -- basically all industry data protection standards have physical security standards. At least, any of the ones that matter.
> 5. Just switch to a different server until the problem is resolved, in this model there is no urgency.
That is true, there are data centers without 24/7 access. They tend to struggle to compete, though.
> You just need redundancy which you can afford with how much cheaper this would be.
Is it? Residential power and cooling costs more -- and that's the majority of the cost to colocate servers
That's the entire point of being in a cold place: you don't need active cooling. Just open the window.
> 2. But running more servers with worse uptime is less efficient and requires more capital expense than running fewer servers with better uptime.
Even if the cooling is free? Not even free, the cost is negative since it saves heating cost.
> 3. and LLMs are a bad choice to run residentially for other reasons, particularly power density
Can you explain the connection of LLMs to power density? This point makes no sense.
> 4. Absolutely not -- basically all industry data protection standards have physical security standards. At least, any of the ones that matter.
You can lock a box physically
> 5. That is true, there are data centers without 24/7 access. They tend to struggle to compete, though.
Why though if redundancy exists, like you said? Would they still struggle to compete if the cooling cost was effectively negative?
> 6. Is it? Residential power and cooling costs more -- and that's the majority of the cost to colocate servers
You can make cooling cost negative, if that's the majority of the cost, then that's great! And you can also place your servers in residential areas with the cheapest power.
> Even if the cooling is free? Not even free, the cost is negative since it saves heating cost.
Again, having cold air outside is not unique to residential homes. Locating somewhere cold is a strategy for cooling data centers as well. But it doesn't make environmental management free: you still need to control humidity and move heat. You can't just run a server outside. And cooling isn't the only concern for hosting a compute workload.
> Can you explain the connection of LLMs to power density? This point makes no sense.
A single server capable of running a frontier workload would overwhelm the power capacity of practically any residential electrical system.
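Rough arithmetic on that point (the 10 kW figure is an assumption for a typical 8-GPU box; the circuit numbers follow common US wiring conventions):

```python
server_draw_w = 10_000             # assumed draw of an 8-GPU frontier-class server
circuit_w = 120 * 20 * 0.8         # one 20 A / 120 V branch circuit, derated by the 80% continuous-load rule
print(circuit_w)                   # usable watts per circuit (~1920)
print(server_draw_w / circuit_w)   # ~5.2 dedicated circuits' worth of continuous load
```

That is before cooling, and before the rest of the house draws anything at all.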
> You can lock a box physically
If only ISO 27000/SOC/NIST SP 1800/PCI DSS/ etc were all that easy lol
> Why though if redundancy exists, like you said? Would they still struggle to compete if the cooling cost was effectively negative?
Because of the additional capital costs associated with buying more servers, the additional operational costs of inconveniencing your employees, and the additional operational costs of powering/housing servers that are down.
> You can make cooling cost negative, if that's the majority of the cost, then that's great!
It isn't; most power in a data center is spent on compute. Even if you do harness waste energy (which some data centers do), it is at best 100% efficient. Residential heat pumps already have effective efficiencies better than this.
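A minimal sketch of that heat pump comparison, assuming a COP of 3 and a $0.20/kWh residential rate (both figures are assumptions for illustration): server waste heat is at best resistive heating, i.e. 1 kWh of heat per kWh of electricity.

```python
price_per_kwh = 0.20   # assumed residential electricity rate, $/kWh

# Cost to deliver 1 kWh of heat into the home:
cost_waste_heat = 1.0 / 1.0 * price_per_kwh   # server waste heat, COP at most 1
cost_heat_pump = 1.0 / 3.0 * price_per_kwh    # heat pump, assumed COP of 3

print(round(cost_waste_heat, 3))  # 0.2   -> 20 cents per kWh of heat
print(round(cost_heat_pump, 3))   # 0.067 -> about 6.7 cents per kWh of heat
```

So "negative cooling cost" only offsets heating at the resistive rate; a heat pump already delivers the same heat for roughly a third of the electricity.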
> And you can also place your servers in residential areas with the cheapest power.
And in those places, industrial rates are typically even lower.
Scale is always cheaper.
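The rate gap, in rough numbers: here is the annual cost of a continuous 8 kW load at each tier (both rates are assumed ballpark US figures, not quotes):

```python
load_kw = 8
hours_per_year = 24 * 365              # 8760
annual_kwh = load_kw * hours_per_year  # 70,080 kWh/year

residential_rate = 0.16  # assumed $/kWh
industrial_rate = 0.08   # assumed $/kWh

print(annual_kwh * residential_rate)   # roughly $11,200/year
print(annual_kwh * industrial_rate)    # roughly $5,600/year
```

Even before cooling and redundancy, the rate difference alone is thousands of dollars per server per year.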
Then you likely need to manage humidity.
Without a truly zero-trust compute platform it's going to be difficult to get anyone to trust their workloads to a random compute resource that isn't carefully guarded.
Why don't we all have small farms on our properties, turning lawns into vegetable producing land for each household?
Why don't we have small datacenters on the property of each business, so the business users and IT folks can keep track of their own servers and data and applications?
These are often called server/network closets, and they're pretty common, but the trend has been to move away from it because they are a PITA to manage and it is cheaper and easier to manage at DC scale.
People often think of the large cloud providers when they think of data centers -- but their data centers are typically mediocre in terms of redundancy and uptime. Their strategy is generally to have less infrastructure redundancy and rely on software failover... e.g. failover to another AZ
That's not really the only issue: the electricity bill would be astronomical for a household, and have you heard the noise they make?
Issues with them being distributed range from data protection to insurance against damage and connectivity problems. Noble, maybe, but it's wildly unrealistic.
Data protection is an issue, but maybe this is something that SGX and family can provide eventually.
A scheme like this makes a lot of sense for distributed redundant backups.
The real problem is bandwidth. Most home users still don’t have decent symmetrical bandwidth. If you could solve this, then home servers could provide a handful of edge services to others in the area. I’m not sure where this makes sense versus local colo though.
I have home servers, designed for the home, and they are not too bad; I can turn them off when sleeping, for example. It's very different with a 20U server running and spinning nonstop. Not many people have the soundproofing to simply not hear it at night.
I don't know, I wouldn't see it working, but I'm just one.
A house older than 30 years typically has 100 A 120/240 V split-phase service, which can supply about 24,000 watts (you wouldn't ever want to fully load it...)
And an 8000 watt space heater will definitely be noisy, and produce too much heat for pretty much any house.
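The service-capacity arithmetic behind those two lines (standard US split-phase numbers):

```python
service_amps = 100
service_volts = 240                        # split-phase: 120 V per leg, 240 V leg to leg
service_w = service_amps * service_volts   # 24,000 W nominal service capacity

heater_w = 8_000                           # the hypothetical server-as-space-heater
print(heater_w / service_w)                # one third of the entire service, continuously
```

And that third runs 24/7, unlike a stove or dryer.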
Smaller servers distributed more widely don’t come with the same requirements. They can’t handle all use cases, but something like a Tinybox can run consumer LLM tasks just fine, a SAN with a small server can provide backup storage or storage for CDNs, etc. No need to turn the house into a full data center.
The key would be to build highly efficient small servers that can work as an appliance. It would need to be very easy to swap them out when one fails.
Again, I’m not sure this has much of a benefit except for providing geographical dispersion. Data centers would still be more cost effective. Maybe it would be helpful for providing local services in small remote areas like islands.
Everything about doing productive computing tasks in houses is more complicated than that! At least it is more profitable, I think?
(I wonder what a rough profit per watt figure is for a datacenter. Very much "it depends" I'm sure.)
Without this widespread ignorance, with IPv6 and a global address per host, and without the disappearance and massive price hikes of RAM and storage, we could all have a home server running, for example, a family's personal services:
- contacts (Radicale/Baïkal/Davis/*)
- photos (Immich, PhotoPrism, ...)
- video (Jellyfin and the like, even Stash for those who fancy it)
- files in general (e.g. SyncThing)
- email (fetched via OfflineIMAP and similar, served via Dovecot+webmail for those who want it, etc.)
- federated XMPP/Matrix for family and friends
- ...
And even for the State, a national blockchain for digital identity (NFTs), contracts (e.g. property sales, etc.), and money, with a node for every family and consensus to regulate it, for maximum resilience and reliability, thus also enabling electronic voting.
But well, given the widespread IT ignorance, it's just a pipe dream.
In this model, we would have de facto teleworking, and therefore de-urbanisation, with houses and sheds with solar panels on the roof, batteries, and activities shifted to maximise self-consumption—thereby electrifying without loading the national grid, in a true and substantial transition that would otherwise be unrealistic.
Personally, that's how my house is, national blockchain aside (but with a personal lightning node anyway), and it works beautifully; it would work brilliantly up to ~45° latitude across the EU and slightly less (I think, I haven't checked the PV maps) for North America. Simply doing this would kill off the aforementioned kleptocrats. No cities means:
- no end to private property for the majority
- no dependence on private collective transport in the hands of a tiny few
- no fast tech/fast fashion with very low costs for the vendor but high costs for the customer and nature due to the piles of waste
- no ready-meal deliveries with tons of packaging
A resilient, renewable society (including the built environment) that can evolve but doesn't have the majority enslaved to a tiny few. This is why it isn't happening.
The problem is DNS and access to the IP network
So if you can figure out how to build reliable DNS access/approvals with Cloudflare etc., then it would work.
The biggest challenge at the largest scale is political, because then you're gonna be fighting all of the ISPs and the giant technology companies, and ultimately they're never going to allow this.
They'll either take it over on their own by offering a service people would sign up for,
or they'll just pressure every ISP or certificate authority not to recognize routes that don't go through "allowed" data centers.
Most likely you'd end up with a series of state bills or even a federal regulation that prevents data routing for public consumption unless it meets some kind of "security standard" or whatever bullshit they come up with.