At what cost? People usually exclude the ongoing operational cost of DIY-style hosting, which is usually the most expensive part. Providing 24x7 support for the stuff you've home-grown is alone probably going to make a large dent in any savings you got by not outsourcing that to Amazon.
> $24,000 annual bill felt disproportionate
That's around 1-2 months of time for a decent devops freelancer. If you underpay your devs, about 1/3rd of an FTE per year. And you are not going to get 24x7 support with such a budget.
This still could make sense. But you aren't telling the full story here. And I bet it's a lot less glamorous when you factor in development time for this.
Don't get me wrong; I'm actually considering making a similar move, but more for business reasons (some of our German customers really don't like US hosting companies) than for cost savings. But this will raise cost and hassle for us, and I'll probably need some reinforcements on my team. As the CTO, my time is a very scarce commodity, so the absolute worst use of my time would be doing this myself. My focus should be making our company and product better. Your tech stack is fine. Been there, done that. IMHO Terraform is overkill for small setups like this; it fits solidly in the YAGNI category. But I like Ansible.
I don’t understand why people keep propagating this myth which is mostly pushed by the marketing department of Azure, AWS and GCP.
The truth is that cloud providers don't actually provide 24/7 support for your app. They only ensure that their infrastructure is mostly running, for a very loose definition of 24/7.
You still need an expert on board to ensure you are using them correctly and are not going to be billed a ton of money. You still need people to ensure that your integration with them doesn’t break on you and that’s the part which contains your logic and is more likely to break anyway.
The idea that your cloud bill is your TCO is a complete fabrication and that’s despite said bill often being extremely costly for what it is.
But the idea that AWS provides some sort of white glove 24/7 support is laughable for anyone that's ever run into issues with one of their products...
You will definitely get support reasonably fast if something breaks because of them but that’s not where breakage happens most of the time. The issue will nearly always be with how you use the services. To fix that, you need someone who understands both the tech you use and how it’s offered by your cloud provider. At which point, you have an expert on board anyway so what’s the point of the huge bill?
A hosting provider will cost you less for most of the benefits. They already offer most of the building blocks required for easy scalability.
1. Dumping a backup every so often?
2. Exporting its performance via Prometheus, and displaying in a dashboard?
3. Machine disk usage via Prometheus?
4. An Ansible playbook for recovery? Maybe kicking that into effect with an alert triggered from bullets 2 and 3.
5. Restoring the database that you backed up into your staging env, so you get a recurring, frequent check of its integrity.
This would be around 100 to 500 lines of code, which an LLM can write for you.
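As a rough sketch of bullets 1-3 (hostnames, paths and the bucket are made up; it assumes pg_dump, s3cmd and the stock Debian/Ubuntu exporter packages):

  # Bullet 1: nightly logical backup shipped to object storage (illustrative values)
  - name: Dump the database nightly and push it to a bucket
    ansible.builtin.cron:
      name: "pg_dump nightly"
      hour: "2"
      minute: "0"
      user: postgres
      job: 'pg_dump -Fc mydb | gzip > /var/backups/mydb.dump.gz && s3cmd put /var/backups/mydb.dump.gz s3://my-backup-bucket/'

  # Bullets 2 and 3: exporters scraped by Prometheus cover DB and disk metrics
  # (package names vary slightly between distros)
  - name: Install node and postgres exporters
    ansible.builtin.package:
      name:
        - prometheus-node-exporter
        - prometheus-postgres-exporter
      state: present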
What am I missing?
Besides this we also use:
- ECS to autoscale the app layer
- S3 + Athena to store and query logs
- Systems Manager to avoid managing SSH keys
- IAM and SSO to control access to the cloud
- IoT to control our fleet of devices
I’ve never seen how people operate complex infrastructures outside of a cloud. I imagine that using VPSes I would have a dedicated devops acting as a gatekeeper to the infrastructure, or I’d get a poorly integrated and insecure mess. With the cloud I have teams rapidly iterating on the infrastructure without waiting on any approvals or reviews. Real-life scenario:
1. Let's use DMS + PG with partitioned tables + Athena
2. A few months later: let's just use Aurora read replicas
3. A few months later: let's use DMS + Redshift
4. A few months later: Zero-ETL + Redshift
I imagine a devops would be quite annoyed by such back and forth. Plus he is busy keeping all the software up to date.
That’s your issue. If all you have is a hammer, everything looks like a nail.
I have the same issue with the juniors we hire nowadays. They have been so brainwashed with the idea that the cloud is the solution and that they can't manage without it that they have no idea what to do other than reaching for it.
> I imagine that using VPSes I would have a dedicated devops acting as a gatekeeper to the infrastructure, or I’d get a poorly integrated and insecure mess.
And then you go on to describe having a real mess anyway.
> I imagine a devops would be quite annoyed by such back and forth.
I would be quite annoyed by such back and forth even on the cloud. I don’t even want to think about the costs of changing so often.
While I admit a lack of experience at scale, I have had my share of Linux admin experience, enough to understand how it could be done. My point is that building a comparable environment without the cloud would be much more than just 500 LoC. If you have relevant experience, please share.
>I would be quite annoyed by such back and forth even on the cloud. I don’t even want to think about the costs of changing so often.
In the cloud it took 1-2 weeks per iteration, with several months in between during which we were using the solution. One person did it all; nobody on the team even noticed. Being able to iterate like this is valuable.
This is not the case. The reason for the iteration is the search for a solution in a space we don't know well enough. In this particular case the cloud made iteration cheap enough to be practical.
I asked you to think about what it would take to build a well-integrated suite of tools (PG + backups + snapshots + Prometheus + logs + autoscaling for DB and API + SSH key management + SSO into everything). It is a good exercise; if you have ever built and maintained such a suite with uptime and ease of use comparable to AWS, I would genuinely like to hear about it.
It's their first "leadership principle" (their sort of corporate religion, straight from the lips of Jeff himself).
Also, the idea that using VPS or non-hyperscaler clouds means “poorly integrated and insecure mess” feels like AWS marketing talking. Good ops doesn’t mean gatekeepers — it means understanding your system so you don’t need to swap out components every quarter because the last choice didn’t scale as promised.
I’d rather spend time building something stable that aligns with my compliance and revenue goals, than chasing the latest AWS feature set. And by the way, someone still has to keep all that AWS software up to date — you’ve just outsourced it and locked yourself into their way of doing it.
So no more Microsoft software then?
The EU isn't willing to pay for that. They'll just throw the ICC under the bus, just like they'll throw any EU company that the US sanctions under the bus. That costs less. The EU has a nice name for throwing people under the bus like this: it's called "the peace dividend".
I guess a lot depends on size, diversity and dynamics of the demand. Not every nail benefits from contact with the biggest hammer in the toolbox.
You are correct, but I think you're missing the point: my 80% and your 80% don't overlap completely.
>> That's around 1-2 months of time for a decent devops freelancer. If you underpay your devs, about 1/3rd of an FTE per year. And you are not going to get 24x7 support with such a budget.
In terms of absolute savings, we’re talking about 90% of 24k, that’s about 21.6k saved per year. A good amount, but you cannot hire an SRE/DevOps Engineer for that price; even in Europe, such engineers are paid north of 70k per year.
I personally think the TCO (total cost of ownership) will be higher in the long run, because now every little bit of the software stack has to be managed by their infra team/person, and things are getting more and more complex over time, with updates and breaking changes to come. But I wish them well.
In my experience, in the long run, this "managed AWS saved us because we didn't need people" line always feels like the typical argument made by SaaS salespeople. In reality, many services/SaaS are really expensive, and you probably only need a few features, which you can sometimes roll out yourself.
The initial investment might be higher, but in the long run I think it's worth it. It's a lot like Heroku vs AWS: super expensive, but it allows you, with little knowledge, to push a POC into production. In this case, it's AWS vs self-hosted or whatever.
Finally, can we quantify the cost of data/information? This company seems to be really "using" this strategy (= everything home-made, you're safe with us) for sales purposes. And it might work, although the final customer might end up paying a higher price, which ultimately pays for the additional devops needed to maintain the system. So who cares?
How important is for companies to not be subject to CLOUD act or funny stuff like that?
Unless by Europe you mean the Apple feature availability special of UK/Germany/France/Spain/Italy
You can easily find a decent devops here (not Google-level, no) for much less than 70k I would say, especially if they're under 30 or so.
I’m an SWE with a background in maths and CS in Croatia, and my annual comp is less than what you claim here. Not drastically, but comparing my comp to the rest of the EU it’s disappointing, although I am very well paid compared to my fellow citizens. My SRE/devops friends are in a similar situation.
I am always surprised to see such a lack of understanding of economic differences between countries. Looking through Indeed, a McDonald’s manager in the US makes noticeably more than anyone in software in southeast Europe.
Being able to stay compliant and protect revenue is worth far more than quibbling over which cloud costs a little less, or over how much a monthly salary for an employee is in various countries.
The real ratio to look at is cloud spend vs. the revenue.
For me, switching from AWS to European providers wasn’t just about saving on cloud bills (though that was a nice bonus). It was about reducing risk and enabling revenue. Relying on U.S. hyperscalers in Europe is becoming too risky — what happens if Safe Harbor doesn’t get renewed? Or if Schrems III (or whatever comes next) finally forces regulators to act?
If you want to win big enterprise and governmental deals, then you have to do whatever it takes, and being compliant and in charge is a huge part of that.
I am curious why you think AWS services are more hands-off than a series of VPSs configured with Ansible and Terraform? Especially if you are under ISO 27001 and need to document upgrades anyway.
My point was that AWS is not hands-off. You still have to set it up, you have to keep a close eye on expenses, and Amazon holds your hand less than many people seem to expect.
Presumably they are in Europe, so labour is a few times cheaper here.
> Providing 24x7 support
They are not maintaining the hardware itself, and it's not like Amazon is providing devops for free. Unless you are using mainly serverless stuff, the difference might not be that significant.
The systems you design when you have reliable queues, durable storage, etc. are fundamentally different. When you go this path you’re choosing to solve problems that are “solved” for 99.99% of business problems and own those solutions.
Also, any company with strict uptime requirements will have proper risk analysis in place, outlining the costs of the chosen strategy in case of downtime; these decisions require proper TCO evaluation and risk analysis, they aren't made in a vacuum.
For example, you'd be hard pressed to find a team building AWS services who is not using SQS and S3 extensively.
Everyone is capable of rolling their own version of SQS. Spin up an API, write a message to an in memory queue, read the message. The hard part is making this system immediately interpretable and getting "put a message in, get a message out" while making the complexities opaque to the consumer.
There's nothing about rolling your own version that will make you better able to plan this out -- many of these lessons are things you only pick up at scale. If you want your time to be spent learning these, that's great. I want my time to be spent building features my customers want and robust systems.
I design and develop products that rely on queuing systems and object storage; whether it's SQS or S3 is an implementation detail (although S3 is also a de-facto standard). Some of those products may rely on millions of very small objects/messages; some of them may rely on fixed-size multi-MB blocks. Knowing the workload, you can often optimize it in a non-trivial way, instead of just using what the provider has.
> The hard part is making this system immediately interpretable and getting "put a message in, get a message out" while making the complexities opaque to the consumer.
Not really, no. As you said, it's already a solved problem. AWS obviously has different scale requirements than my reality, but by having ownership I also have only a fraction of the problems.
> There's nothing about rolling your own version that will make you better able to plan this out -- many of these lessons are things you only pick up at scale.
I cannot agree with you on this. As an example, can you tell me which isolation level is guaranteed on an Aurora instance? And what if it is a multi-zone cluster? (If you can, kudos!) Next question: are developers aware of this?
If you have done any cursory solution design, you will know the importance of mastering the above questions for your development workflow.
When you rely heavily on U.S. hyperscalers in Europe, you’re exposed to potential disruptions — what if data transfer agreements break down or new rulings force major changes? The value of cloud spend, in my view, isn’t just in engineering convenience, but in how it helps sustain the business and unlock growth. That’s why I prioritized compliance and risk reduction — even if that means stepping a little outside the comfort of the big providers’ managed services.
> “Software is ten times easier to write than it was ten years ago, and ten times as hard to write as it will be ten years from now.”
Ansible, Hetzner, Prometheus and object storage will give you RDS if you prompt an LLM, or at least give you the parts of RDS that you need for your use case for a fraction of the cost.
There will be a new AWS European Sovereign Cloud[1] with the goal of being completely US independent and 100% compliant with EU law and regulations.
[1]: https://www.aboutamazon.eu/news/aws/aws-plans-to-invest-7-8-...
The idea that anything branded AWS can possibly be US independent when push comes to shove is of course pure fantasy.
If Amazon partnered with an actually independent European company, provided the software and training, and the independent company set it up and ran it; in case of dispute, Amazon could pull the branding and future software updates, but they wouldn't be able to access customer data without consent and assistance of the other company and the other company would be unlikely to provide that for requests that were contrary to European law. It would still be branded AWS for Europe, and nobody would doubt its independence.
This setup, where it's all subsidiaries of Amazon, can't be trusted though.
The US clearly states that extraterritoriality is fine with them. Depending on the company, one gag order is enough to sabotage a whole company.
It is. And China has been the only one intelligent enough to have understood this a very long time ago. They also show that while entire independence at their scale may be a pipe dream, getting close to it is feasible.
The ICC move by MS made hospitals shift into an even higher gear to prepare off-ramp plans, ranging from private Azure cloud to “let’s get out”.
Monitoring and persistence layers are cross-cutting and are already an abstraction with an impedance mismatch.
You don't need a full-blown SOA2 system, just minimal scaffolding to build on later.
Even if you stick to AWS for the remainder of time, that scaffolding will help when you grow, AWS services change, or you need a multi cloud strategy.
As a CTO, you need to also de-risk in the medium and longer term, and keeping options open is a part of that.
Building tightly coupled systems with lots of leakage is stepping over dollars to pick up pennies unless selling and exiting is your plan for the organization.
The author doesn't mention what they had to write, but typically it is cloud provider implementation details leaking into your code.
Just organizing ansible files in a different way can often help with this.
If I was a CTO who thought this option was completely impossible for my org, I would start on a strategic initiative to address it ASAP.
Once again, you don't need to be able to jump tomorrow, but the belief that a vendor has you locked in would be a serious issue to me.
Two reasons for this stick out:
- Are the multi-million dollar SV seed rounds distorting what real business costs are? Counting dev salaries etc. (if there is at least one employee) it doesn't seem worth the effort to save $20k - i.e., 1/5 of a dev salary? But for a bootstrapped business $20k could definitely be existential.
- The important number would be the savings as percent of net revenue. Is the business suddenly 50% more profitable? Then it's definitely worth it. But in terms of thinking about positively growing ARR doing cost/benefit on dropping AWS vs. building a new (profitable) feature I could see why it might not make sense.
Edit to add: it's easy to offhand say "oh yeah easy, just get to $2M ARR instead of saving $20k- not a big deal" but of course in the real world it's not so simple and $20k is $20k. The prevalent SV mindset of just spending without thinking too much about profitability is totally delusional except for like 1 out of 10000 startups.
If I generalize, I see two kinds of groups for whom this reduction of cost does not matter. The first group is VC-funded, and the second group is in charge of a million-plus AWS bill. We do not have anything in common with these companies, but we have something in common with 80% of readers on this forum and 80% of AWS clients.
We're also bootstrapped and use Hetzner, not AWS (except for the occasional test), for very much the same reasons as you.
And we are also fully infrastructure as code using Ansible.
We used to be a pure software vendor, but are bringing out a devtool where the free tier runs on Hetzner. But with traction, as we build out higher tier services, it's an open question on what infrastructure to host it on.
There are a kazillion things to consider, not the least of which is where the user wants us to be.
• We heavily invested upfront in infrastructure-as-code (Terraform + Ansible) so that infra is deterministic, repeatable, and self-healing where possible (e.g. auto-provisioning, automated backup/restore, rolling updates).
• Monitoring + alerting (Prometheus + Alertmanager) means we don’t need to watch screens — we get woken up only if there’s truly a critical issue.
• We don’t try to match AWS’s service level (e.g. RTO of minutes for every scenario) — we sized our setup to our risk profile and customers’ SLAs.
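To make the alerting bullet concrete: the "only wake up for real problems" part is mostly a handful of Prometheus rules routed through Alertmanager. A minimal sketch (thresholds and label names are illustrative, not our exact rules):

  groups:
    - name: critical
      rules:
        - alert: InstanceDown
          expr: up == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "{{ $labels.instance }} unreachable for 5 minutes"
        - alert: DiskAlmostFull
          expr: node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"} / node_filesystem_size_bytes < 0.10
          for: 15m
          labels:
            severity: critical
          annotations:
            summary: "{{ $labels.instance }} has less than 10% disk left"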
> True cost comparison:
• The migration was done as part of my CTO role, so no external consulting costs. The time investment paid back within months because the ongoing cost to operate the infra is low (we’re not constantly firefighting).
• I agree that if you had to hire more people just to manage this, it could negate the savings. That’s why for some teams, AWS is still a better fit.
> Business vs. cost drivers: Honestly, our primary driver was sovereignty and compliance — cost savings just made the business case easier to sell internally. Like you, our European customers were increasingly skeptical of US cloud providers, so this aligned with both compliance and go-to-market.
> Terraform / YAGNI: Fair point! Terraform probably is more than we need for the current scale. I went with it partly because it fits our team’s skillset and lets us keep options open as we grow (multi-provider, DR regions, etc).
And, finally, because of this, I am posting about it. I am sharing as much as I can and just spreading the word about it. I'm simply sharing my experience and knowledge. If you have any questions or want to discuss further, feel free to reach out at jk@datapult.dk!
I wonder if it’s both stockholm syndrome and learned helplessness of developers that cannot imagine having to spend a little more effort and save, like OP, 90% off their monthly bill.
Yeah sure for some use cases AWS is the market leader, but let’s not kid ourselves, 9/10 companies on AWS don’t require more than a few servers and a database.
A database administrator for a drug cartel became an informant for the police.
His cartel boss called him in on a weekend due to server errors. He said in the podcast, "I knew I'd been found out because a database running on Linux never crashes."
Makes you wonder what everyone is telling themselves about the need for RDS..
Hetzner has had issues where they just suddenly bring servers down with no notice, sometimes every server attached to an account, because they get a bogus complaint; and in some cases it appears the servers are still up but all your health checks fail, and you are scurrying around trying to find the cause with no visibility or lifeline. All this costs money, a lot of money, and it's an unmanageable risk.
For all the risks and talk of compliance, what about the counterparty risk where a competitor (or whoever) sends a complaint from a nonexistent email which gets your services taken down? Sure, after support gets involved and does their due diligence they see it's falsified and bring things back up, but that may take quite a while.
It takes their support at least 24 hours just to get back to you.
DIY hosting is riddled with so many unmanageable costs I don't see how OP can actually consider this a net plus. You basically are playing with fire in a gasoline refinery, once it starts burning who knows when the fire will go out so people can get back to work.
We didn’t go into this blind though — we spent a lot of time testing scenarios (including Hetzner/OVH support delays) and designing mitigation strategies.
Some of what we do:
• Our infra is spread across multiple providers (Hetzner, OVH) + Cloudflare for traffic management. If Hetzner blackholes us, we can redirect within minutes.
• DB backups are encrypted and replicated nightly to various regions/providers (incl. one outside the primary vendors), with tested restore playbooks.
The key point: no platform is free of counterparty risk — whether that’s AWS pulling a region for legal reasons, or Hetzner taking a server offline. Our approach tries to make the blast radius smaller and the recovery faster, while also achieving compliance and cutting costs substantially (~90% as noted).
DIY is definitely not for everyone — it is more work, but for our particular constraints (cost, sovereignty, compliance) we found it a net win. Happy to share more details if helpful!
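For anyone curious what "tested restore playbooks" means in practice, the recurring check is roughly this shape (names are hypothetical; it assumes the latest dump has already been pulled onto the staging host):

  - hosts: staging_db
    become: true
    tasks:
      - name: Drop and recreate the staging database
        community.postgresql.postgresql_db:
          name: app_staging
          state: "{{ item }}"
        loop: [absent, present]
        become_user: postgres

      - name: Restore last night's dump into staging
        ansible.builtin.command: pg_restore -d app_staging /var/backups/latest/app.dump
        become_user: postgres

      - name: Sanity-check that the restored data is actually there
        community.postgresql.postgresql_query:
          db: app_staging
          query: SELECT count(*) AS n FROM users;
        become_user: postgres
        register: restore_check
        failed_when: restore_check.query_result[0].n | int == 0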
Oh, and imagine being kicked out of AWS when you've used Aurora... My certified multi-cloud setup with standard components should not make you cringe.
I probably won't be responding after this or in the future on HN because I took a significant blast off my karma for keeping it real and providing valuable feedback. You have a lot of people brigading accounts that punish those that provide constructive criticism.
Generally speaking AWS is incentivized to keep your account up so long as there is no legitimate reason for them taking it down. They generally vet claims with a level of appropriate due diligence before imposing action because that means they can keep billing for that time. Spurious unlawful requests cost them money and they want that money and are at a scale where they can do this.
I'm sure you've spent a lot of time and effort on your rollout. You sound competent, but what makes me cringe is the approach you are taking that this is just a technical problem when it isn't.
If you've done your research you would have run across more than a few incidents where people running production systems had Hetzner either shut them down outright or, worse, do so in response to invalid legal claims which Hetzner failed to properly vet. There have also been some strange non-deterministic issues that may be related to hardware failing, but maybe not.
Their support is often one response every 24 hours; what happens when the first couple of responses are boilerplate because the tech didn't read or understand what was written? 24 hours, plus a chance of skipping the next 24 hours at each step, and no phone support, which is entirely unmanageable. While I realize they do have a customer support line, for most it is an international call and the hours are banker's hours. If you're in Europe you'll have a much easier time lining up those calls, but anywhere else you are dealing with international calls, with the first chance of the day being midnight.
Having a separate platform for both servers is sound practice, but what happens when the DAG running your logging/notification system is on the platform that fails, but isn't failed over? The issues are particularly difficult when half your stack fails on one provider, stale data is replicated over to your good side, and you get nonsensical or invisible failures; and it's not enough to force an automatic failover with traffic management, which is often not granular enough.
It's been a while since I've had to work with Cloudflare traffic management, so this may have gotten better, but I'm reasonably skeptical. I've personally seen incidents where the RTO for support for direct outages was exceptional, but then the RTO for anything above a simple HTTP 200 was nonexistent, with finger pointing, which was pointless because the raw network captures were showing the failure in L2/L3 traffic on the provider side, which was being ignored by the provider. They still argued, and the downtime/outage was extended as a result. Vendor management issues are the worst when contracts don't properly scope and enforce timely action.
Quite a lot of the issues I've seen with various hosting providers OVH and Hetzner included, are related to failing hardware, or transparent stopgaps they've put in place which break the upper service layers.
For example, at one point we were getting what appeared to be stale cache issues coming in traffic between one of a two backend node set on different providers. There was no cache between them, and it was breaking sequential flows in the API while still fulfilling other flows which were atomic. HTTP 200 was fine, AAA was not, and a few others. It appeared there was a squid transparent proxy placed in-line which promptly disappeared upon us reaching out to the platform, without them confirming it happened; concerning to say the least when your intended use of the app you are deploying is knowledge management software with proprietary and confidential information related to that business. Needless to say this project didn't move forward on any cloud platform after that (and it was populated with test data so nothing lost). It is why many of our cloud migrations were suspended, and changed to cloud repatriation projects. Counter-party risk is untenable.
Younger professionals, I've found, view these and related issues solely as technical problems, and they weigh those technical problems higher than the problems they can't weigh, because of lack of experience and something called the streetlight effect, which is an intelligence trap, often because they aren't taught a Bayesian approach. There's a SANS CTI presentation on this (https://www.youtube.com/watch?v=kNv2PlqmsAc).
The TL;DR is a technical professional can see and interrogate just about every device, and that can lead to poor assumptions and an illusion of control which tend to ignore problems and dismiss them when there is no real clear visibility about how those edge problems can occur (when the low level facilities don't behave as they should). The class of problems in the non-deterministic failure domain where only guess and check works.
The more seasoned tend to focus more on the flexibility needed to mitigate problems that occur from business process failures, such as when a cooperative environment becomes adversarial, which necessarily occurs when trust breaks down with loss, deception, or a breaking of expectations on one parties part. This phase change of environment, and the criteria is rarely reflected or touched on in the BC/DR plans; at least the ones that I've seen. The ones I've been responsible for drafting often include a gap analysis taking into account the dependencies, stakeholder thoughts, and criteria between the two proposed environments, along with contingencies.
This should include legal, obviously, to hold people to account when they fail in their obligations, but even that is often not enough today. Legal often costs more than simply taking the loss and walking away, absent a few specific circumstances.
This youthful tendency is what makes me cringe. The worst disasters I've seen were readily predictable to someone with knowledge of the underlying business mechanics, and how those business failures would lead to inevitable technical problems with few if any technical resolutions.
If you were co-locating on your own equipment with physical data center access I'd have cut you a lot more slack, but it didn't seem like you are from your other responses.
There are ways to mitigate counter-party risk while receiving the hosting you need. Compromises in apples to oranges services given the opaque landscape rarely paint an objective view, which is why a healthy amount of skepticism and disagreement is needed to ensure you didn't miss something important.
There's an important difference between constructive criticism intended to reduce adverse cost and consequence, and criticisms that simply aren't based in reality.
The majority of people on HN these days don't seem capable of making that important distinction in aggregate. My relatively tame reply was downvoted by more than 10 people.
These people by their actions want you to fail by depriving you of feedback you can act on.
On the topic of Hetzner and account risks, I completely agree: this is not just a technical issue, and that's why we built a multi-cloud setup spanning Hetzner and OVH in Europe. The architecture was designed from the start to absorb a full platform-level outage or even a unilateral account closure. Recovery and failover have been tested specifically with these scenarios in mind — it's not a "we'll get to it later" plan, it's baked into the ops process now.
I’ve also engaged Hetzner directly about the reported shutdown incidents — here’s one of the public discussions where I raised this: https://www.reddit.com/r/hetzner/comments/1lgs2ds/comment/mz...
What I got in a private follow-up from Hetzner support helped clarify a lot about those cases. Without disclosing anything sensitive, I’ll just say the response gave me more confidence that they are aware of these issues and are actively working to handle abuse complaints more responsibly. Of course, it doesn't mean the risk is zero — no provider offers that — but it did reduce my level of concern.
Regarding Cloudflare, I actually agree with your point: vendor contract structure and incentives matter. But that’s also why I find the AWS argument interesting. While it’s true that AWS is incentivized to keep accounts alive to keep billing, they also operate at a scale where mistakes (and opaque actions) still happen — especially with automated abuse handling. Cloudflare, for its part, has consistently been one of the most resilient providers in terms of DNS, global routing, and mitigation — at least in my experience and according to recent data. Neither platform is perfect, and both require backup plans when they become uncooperative or misaligned with your needs.
The broader point you make — about counterparty risk, legal ambiguity, and the illusions of control in tech stacks — is one I think deserves more attention in technical circles. You're absolutely right that these risks aren't visible in logs or Grafana dashboards, and can't always be solved by code. It's exactly why we're investing in process-level failovers, not just infrastructure ones.
Again, thank you for sharing your insights here. I don’t think we’re on opposite sides — I think we’re simply looking at the same risks through slightly different lenses of experience and mitigation.
Given the existence of these tools, which are fantastic, I'm often stunned at how sluggish, expensive and lacklustre the UX of the AWS monitoring stack is.
Monitoring quickly became the most expensive, and most unpleasant part of our AWS experience.
It's paid because operating that feature at AWS' scale is expensive as hell. Maybe not for your project, but for 90% of their customers it is.
It is a great big cloud play to make enterprises reliant on competency in their weird service abstractions, which slowly drains away the quite simple ops story an enterprise usually needs.
Might throw together a post on it eventually:
Also, Loki! How do you handle memory hunger on loki reader for those pesky long range queries, and are there alternatives?
Failures/upgrades: We provision with Terraform, so spinning up replacements or adding capacity is fast and deterministic.
We monitor hardware metrics via Prometheus and node exporter to get early warnings. So far (9 months in) no hardware failure, but it’s a risk we offset through this automation + design.
Apps are mostly data-less and we have (frequently tested) disaster recovery for the database.
Loki: We’re handling the memory hunger by
• Distinguishing retention limits and index retention
• Tuning query concurrency and max memory usage via Loki's config + systemd resource limits.
• Using Promtail-style labels + structured logging so queries can filter early rather than regex the whole log content.
• Where we need true deep history search, we offload to object store access tools or simple grep of backups — we treat Loki as operational logs + nearline, not as an archive search engine.
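Concretely, those knobs live roughly here (values are illustrative and the exact key names shift a bit between Loki versions):

  limits_config:
    retention_period: 744h            # keep ~31 days queryable
    max_query_parallelism: 8
    split_queries_by_interval: 12h    # smaller slices per query, less memory per worker

  querier:
    max_concurrent: 4                 # cap concurrent heavy queries

  compactor:
    retention_enabled: true

  # plus a MemoryMax= cap on the loki systemd unit as a hard backstop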
We used AWS EKS in the old days and we never liked the extreme complexity of it.
With two Spring Boot apps, a database and Redis running across Ubuntu servers, we found simpler tools to distribute and scale workloads.
Since compute is dirt cheap, we over-provision and sleep well.
We have live alerts and quarterly reviews (just looking at a dashboard!) to assess if we balance things well.
K8s on EKS was not pleasant, I wanna make sure I never learn how much worse it can get across European VPS providers.
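For a sense of scale, the "simpler tools" layer on an app host could be as small as a compose file. A stripped-down, illustrative sketch (image name, env vars and ports are made up; Postgres lives on its own VMs):

  services:
    app:
      image: registry.example.com/scheduling-app:1.42.0
      ports:
        - "8080:8080"
      environment:
        DB_URL: jdbc:postgresql://db1.internal:5432/app
        REDIS_HOST: redis
      restart: unless-stopped
    redis:
      image: redis:7
      restart: unless-stopped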
There's an ongoing thread (one of many) exploring the different perspectives on that debate: https://news.ycombinator.com/item?id=44317825
On the other side, their VPC CNI plugin and their ingress controller are pretty much set and forget.
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues...
Just remember: their interest is that you buy their cloud service, not in giving an out-of-the-box great experience on their open source stuff.
One of the advantages of more expensive providers seems to be that they have good reputation due to a de facto PoW mechanism.
The only potential indirect risk is if your Hetzner VPS IP range gets blacklisted (because some Hetzner clients abuse it for Sybil attacks or spam).
Or if Hetzner infrastructure was heavily abused, their upstream or internal networking could (in theory) experience congestion or IP reputation problems — but this is very unlikely to affect your individual VPS performance.
This depends on what you are doing on Hetzner and how you restrict access but for an ISO-27001 certified enterprise app, I believe this is extremely unlikely.
The Medium post is mostly fluff and a lead generator.
I’m happy to share specific configs, diagrams, or lessons learned here on HN if people want — and actually I’m finding this thread a much better forum for that kind of deep dive.
I'll dive into other aspects elsewhere, but you can't doubt my willingness to share given what I am posting here.
Any particular area you’d like me to expand on? (e.g. how we structured Terraform modules, Ansible hardening, Prometheus alerting, Loki tuning?)
A.5.25 Security in development and support processes:
Safe rolling deploy, rollback mechanisms, NGINX health checks, code versioning, Prometheus alerting for deployment issues
A.6.1.2 Segregation of duties:
Separate roles for database, monitoring, web apps; distinct system users
A.8.1.1 Inventory of assets:
Inventory management through Ansible inventory.ini and groups
A.8.2.3 Handling of assets:
Backup management with OVH S3 storage; retention policy for backups
A.8.16 Monitoring activities (audit logging, monitoring):
auditd installed with specific rule sets; Prometheus + Grafana Agent + Loki for system/application/audit log monitoring
A.9.2.1 User registration and de-registration:
ansible_user, restricted SSH access (no root login, pubkey auth), AllowUsers, DenyUsers enforced
A.9.2.3 Management of privileged access rights:
Controlled sudo, audit rules track use of sudo/su; no direct root access
A.9.4.2 Secure log-on procedures:
SSH hardening (no password login, no root, key-based access); see the sketch after this list
A.9.4.3 Password management system:
Uses Ansible Vault and variables;
A.10.1.1 Cryptographic controls policy:
SSL/TLS certificate generation with Cloudflare DNS-01 challenge, enforced TLS on Loki, Prometheus
A.12.1.1 Security requirements analysis and specification:
Tasks assert required variables and configurations before proceeding
A.12.4.1 Event logging:
auditd, Prometheus metrics, Grafana Agent shipping logs to Loki
A.12.4.2 Protection of log information:
Logs shipped securely via TLS to Loki, audit logs with controlled permissions
A.12.4.3 Administrator and operator logs:
auditd rules monitor privileged command usage, config changes, login records
A.12.4.4 Clock synchronization:
chrony installed and enforced on all hosts
A.12.6.1 Technical vulnerability management:
Lynis, Wazuh, vulnerability scans for Prometheus metrics
A.13.1.1 Network controls:
UFW with strict defaults, Cloudflare whitelisting, inter-server TCP port controls
A.13.1.2 Security of network services:
SSH hardening, NGINX SSL, Prometheus/Alertmanager access control
A.13.2.1 Information transfer policies and procedures:
Secure database backups to OVH S3 (HTTPS/S3 API)
A.14.2.1 Secure development policy:
Playbooks enforce strict hardening as part of deploy processes
A.15.1.1 Information security policy for supplier relationships:
OVH S3, Cloudflare services usage with access key/secret controls; external endpoint defined
A.16.1.4 Assessment of and decision on information security events:
Prometheus alert rules (e.g., high CPU, low disk, instance down, SSL expiry, failed backups)
A.16.1.5 Response to information security incidents:
Alertmanager routes critical/security alerts to email/webhook; plans for security incident log webhook
A.17.1.2 Implementing information security continuity:
Automated DB backups, Prometheus backup job monitoring, retention enforcement
A.18.1.3 Protection of records:
Loki retention policy, S3 bucket storage with rotation; audit logs secured on disk
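To make a couple of the controls above concrete (A.9.4.2 and A.12.4.4), the Ansible tasks involved are small. A trimmed sketch rather than the literal playbook (the restart handler is omitted, and service/package names vary slightly by distro):

  - name: Harden sshd (A.9.4.2)
    ansible.builtin.lineinfile:
      path: /etc/ssh/sshd_config
      regexp: '^#?{{ item.key }}'
      line: '{{ item.key }} {{ item.value }}'
      validate: /usr/sbin/sshd -t -f %s
    loop:
      - { key: PermitRootLogin, value: 'no' }
      - { key: PasswordAuthentication, value: 'no' }
      - { key: PubkeyAuthentication, value: 'yes' }
    notify: restart sshd

  - name: Enforce clock synchronization (A.12.4.4)
    ansible.builtin.package:
      name: chrony
      state: present

  - name: Ensure chrony is running and enabled
    ansible.builtin.service:
      name: chrony
      state: started
      enabled: true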
However in the US it's not very relevant or even interesting to companies, and some European companies fail to understand that.
SOC 2 is the default and the preferred standard in the US - it's more domestic and less rigid than ISO 27001.
Checking for evidence that you are doing those things is what I would call rigid. SOC 2 as an attestation doesn't require so much documentation.
Once I was working at a quite small company (around 100 employees) that hosted everything on AWS. Due to high bills (it's a small company based in Asia) and other problems, I migrated everything to DigitalOcean (we still used AWS for things like SES), and the monthly bill for hosting became something like 10 times lower. With no other consequences (in other words, it didn't become less reliable).
I still wonder who calculated that AWS is cheaper than everything else. It's definitely one of the most expensive providers.
I lacked both the expertise and the time to find out where the wasted space went. After I set up MariaDB on the smallest DigitalOcean droplet, the mysterious storage growth didn't recur, and the cheapest droplet had enough capacity to serve our needs for years.
Also, there were 7-10 forgotten "test" server instances and other artifacts (buckets, domains, etc) on Amazon (I believe it's also quite common, especially in bigger companies).
Like when the 5K iMac originally came out, there were a lot of people claiming it was a good value. Because if you bought a 5K display and then built a PC, that would end up being more expensive. So, like for like, Apple was cheaper.
But... that assumed you even needed a 5K display, which were horribly overpriced and rare at the time. As soon as you say "4K is good enough", the cost advantage disappears, and it's not even close.
They might save 90% of their $24K on hardware, but just spend probably double the amount on salaries.
This is why AWS is in the end cheaper even if it costs more for the same software (let's be real, it's not at all the same actually).
> • Ansible roles for PostgreSQL (with automated s3cmd backups + Prometheus metrics)
> • Hardening tasks (auditd rules, ufw, SSH lockdown, chrony for clock sync)
> • Rolling web app deploys with rollback + Cloudflare draining
> • Full monitoring with Prometheus, Alertmanager, Grafana Agent, Loki, and exporters
> • TLS automation via Certbot in Docker + Ansible
You'll spend a heck of a lot of time on setting it up originally, and you will spend a lot of time keeping it up-to-date, maintaining it, and fixing the inevitable issues that will occur.
If their bill was 200K a year, why not. But at 24K a year, 25% of an employee's salary, it is negligible and most likely a bad choice.
Also, it's not like you need everything you mention and need it immediately.
NTP clock syncing is a part of any Linux distro for the last 20 years if not more.
I don't remember that Amazon automatically locks down SSH (didn't touch AWS for 7-8 years, don't remember such a feature out of the box 8 years ago).
Rolling web app deploys with rollback can be implemented in multiple ways, depends on your app, can be quite easy in some instances. Also, it's not something that Amazon can do for you for free, you need to spend some effort on the development side anyways, doesn't matter if you deploy on Amazon or somewhere else. There's no magic bullet that makes automatic rollback free and flawless without development effort.
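For plain VMs, one of those "multiple ways" is an Ansible play that takes one host at a time and rolls back if the health check fails. A sketch with made-up paths and a hypothetical /healthz endpoint (bookkeeping for the previous release is elided):

  - hosts: webservers
    serial: 1                       # take one host out of the pool at a time
    tasks:
      - name: Ship the new release next to the current one
        ansible.builtin.copy:
          src: "dist/app-{{ new_version }}.jar"
          dest: /opt/app/releases/app-new.jar

      - block:
          - name: Point the current symlink at the new release
            ansible.builtin.file:
              src: /opt/app/releases/app-new.jar
              dest: /opt/app/current.jar
              state: link
          - name: Restart the app
            ansible.builtin.service:
              name: app
              state: restarted
          - name: Wait for the health check
            ansible.builtin.uri:
              url: http://localhost:8080/healthz
              status_code: 200
            register: health
            retries: 10
            delay: 3
            until: health.status == 200
        rescue:
          - name: Roll back to the previous release
            ansible.builtin.file:
              src: /opt/app/releases/app-previous.jar
              dest: /opt/app/current.jar
              state: link
          - name: Restart on the old release and fail loudly
            ansible.builtin.service:
              name: app
              state: restarted
          - ansible.builtin.fail:
              msg: "Health check failed on {{ inventory_hostname }}, rolled back"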
A thing we learned in this process is that there are many levels of abstraction at which you can think about rollback, locking down SSH, and so on.
If your abstraction level is AWS and the big hyperscalers, it would be to use Kubernetes, but peeling layers of complexity off that, you could also do it with Docker Compose or even Linux programs that are really battle tested for decades.
Most ISO-certified companies are not at hyperscale, so here is a fun one: instead of Grafana Agent from 2020, you could most likely do better with rsyslog from 2004.
And if you want your EKS cluster to give you insights, you have to configure CloudWatch yourself, so what hands-off advantage is there in comparing that setup to Ubuntu + Grafana Agent?
For me, switching from AWS to European providers wasn’t just about saving on cloud bills (though that was a nice bonus). It was about reducing risk and enabling revenue. Relying on U.S. hyperscalers in Europe is becoming too risky — what happens if Safe Harbor doesn’t get renewed? Or if Schrems III (or whatever comes next) finally forces regulators to act?
Being able to stay compliant and protect revenue is worth far more than quibbling over which cloud costs a little less.
The defining conditions are my current setup and business requirements. It works well, and we've resisted pretending that we know where we will be in 5 years.
I am reminded of the 2023 story of the surprisingly simple infra of Stack Overflow[1] and the 2025 story that Stack Overflow is almost dead[2].
Given that the setup works now, one can't add that it is only working "for now". I see no client demand in the foreseeable future leading me to think that this has been fundamentally architected incorrectly.
[1] https://x.com/sahnlam/status/1629713954225405952
[2] https://blog.pragmaticengineer.com/stack-overflow-is-almost-...
I'm talking about the issues that will happen to your current setup and requirement. Disaster recovery, monitoring, etc.
The ISO 27001 has me audited for just that (disaster recovery and monitoring) so that settles it, no?
Also worth noting that these are the two things you don't really get from the hyperscalers. If you want to count on more than their uptime guarantees, you have to roll some DR yourself and while you might think that this is easy, it is not easier than doing it with Terraform and Ansible on other clouds.
I have had my DR and monitoring audited in its AWS and EU version. One was no easier or harder than the other.
But the EU setup gave me a clear answer to clients on the CLOUD Act, Schrems II, GDPR, and Safe Harbor, which is a competitive advantage.
No matter the load, this certification requires a certain amount of complexity.
Not all employees log in daily. For a scheduling app, most people check a few times a week, but not every day.
Daily active users (DAU) = around 10,000 to 20,000
Peak concurrency (users on at the exact same time) = generally between 1,500 to 2,000 at busy times (like when new schedules drop or at shift start/end times)
Average concurrent users at any random time = maybe 50 to 150
Why cloud costs can add up even for us:
Extensive use of real-time features and complex labour rules mean the app needs to handle a lot of data processing and ultimately sync into salary systems.
An example:
Being assigned to a shift has different implications for every user. It may trigger a nuisance bonus, and such a bonus might further only be triggered in certain cases, for example depending on when the shift was assigned to you relative to its start time.
Lastly, there is the optimization of a schedule, which is computationally expensive.
It would be interesting to read more about your policy on logging and monitoring and how you've implemented it.
Our app is a lot more demanding (I put 0.5 cores/user, 300 IOPS/user and 20 Mb/s/user as requirements) and I forgot that there are also lighter use cases. We blew through the thousands in free credits on AWS in like 2 months and went immediately to Hetzner.
https://news.ycombinator.com/item?id=44335920#44337659
If you have any more questions, just reach out at jk@datapult.dk
Sounds like an interesting use case.
As for OVH, they don't do the above, but they have week-long unplanned downtimes, so using them is okay only as an optional resource.
Even so, there are lots of providers that are cheaper than Amazon and won't screw you over.
Everyone talks about it but no one wants to be the first mover.
There's also a lot of FUD regarding hiring more staff. My observed experience is that hyperscalers need an equivalent number of people on rotation - it's just different skills (learning the intricacies/quirks of the hyperscaler's product offerings vs CS/operational fundamentals). So everyone is scared to overload their teams with work and potentially need to hire people - you can couple this with the fact that all migrations are up-front expensive and change is bad by default.
There will come a day when there simply isn't enough money to spend 10x the cost on these systems. It will be a sad day for everyone because salaries will be depressed too, and we will pine for the days of shiny tools, when we could make lots of work disappear by saying that our time was too expensive to deal with such peasant issues.
I also sometimes think OVH and Hetzner are not a fair comparison, as much as I want competition for the hyperscalers. Hetzner uses consumer-grade components with a few server-grade options.
What I find especially bizarre is that OpenStack seems to tick many of those boxes, including https://docs.openstack.org/keystone/2025.1/admin/manage-serv... and https://docs.openstack.org/keystone/2025.1/admin/federation/... so it's not like those providers would have to roll such a control plane from scratch as they seem to have done now
As a concrete example for your link, they cite Crossplane (and good for them) but then the Crossplane provider gets what I can only presume is some random person's console creds https://upcloud.com/docs/guides/getting-started-crossplane/#... and their terraform provider auths the same way
I do see the https://developers.upcloud.com/1.3/3-accounts/#add-subaccoun... and some of its built-in filtering but it doesn't have a good explanation for how it interacts with https://developers.upcloud.com/1.3/18-permissions/#normal-re... . But don't worry, you can't manage any of that with their terraform provider, so I guess upcli it is
I like to say that there are no European cloud providers. There are only European hosting providers.
As you say IAM is table stakes for being a real cloud.
This could be a stack that could be parametrised with sound defaults just requiring some terraform provider credentials as well as a path to an executable web app and a choice of database engine.
ISO readiness built-in and abstracted at the OS level rather than programming language level.
If anyone wants to "assetize" what I built, reach out at jk@datapult.dk. I bring a battle-tested setup that has been ISO certified by independent auditors.
You bring clients directly or indirectly with marketing/growth hacking mindset.
That is not to say that this aspect alone justifies huge fees, but it does have significant value.
AWS RDS does not upgrade major or minor versions of Postgres or, as you mentioned, MySQL. It might apply patch updates. But these patch updates are easy to do yourself, and they do not take long to be reminded of in your ISMS and then subsequently carried out.
The purpose of this post is not to justify cloud hyperscalers versus European servers. It is actually a post on how to manage a highly regulated, compliant, and certified server setup yourself outside AWS because so many people just have their ISO certification on AWS infrastructure and once they got that they are never able to leave AWS again.
If you have no client demand and no real need to work on updating your infrastructure yourself, then you can go ahead and not go for an ISO 27001 certification and let AWS RDS update as it pleases. But if you operate a complex beast in a regulated industry such as employment law, finance, and such, then you get some more fun challenges and higher need for control.
https://blog.cloudflare.com/amazon-2bn-ipv4-tax-how-avoid-pa...
If I manage to get https://uncloud.run/ or something similar up & running, the platform will no longer matter, whether it's OVH, Hetzner, Azure, AWS, GCP, ... It should all be possible & easy to switch... #FamousLastWords
Coming from AWS this is simple but I haven't seen how to do it well on hosting providers.
Obviously one can't write the disk encryption key to a boot partition or that undermines the point of it...
In this case, the disks being in a ISO27001 data centre with processes in place to ensure erasure during de-provisioning (which Hetzner is, and has), may well also meet this criteria.
Most of our customers have a hard requirement on ISO 9001. Many on ISO 27001, too. The rest strongly prefers a partner having a plan to get ISO 27001
And also lacking a bit in details:
- both technical (e.g. how are you dealing with upgrades or multi-data center fallback for your postgresql), and
- especially business, e.g. what's the total cost analysis including the supplemental labor cost to set this up but mostly to maintain it.
Maybe if you shared your scripts and your full cost analysis, that would be quite interesting.
I'm trying to share as many technical details as I can across this thread. As for your two examples:
System upgrades:
Keep in mind that, as per the ISO specification, system upgrades should be applied, but in a controlled manner. This lends itself perfectly to the following process, which is manually triggered.
Since we take steps to make applications stateless, and Ansible scripts are immutable:
We spin up a new machine with the latest packages, and once it is ready it joins the Cloudflare load balancer. The old machines are drained and deprovisioned.
We have a playbook that iterates through our machines and does this one machine at a time before proceeding. Since we have redundancy on components, this creates no downtime. The redundancy in the web application is easy to achieve using the load balancer in Cloudflare. For the Postgres database, it does require that we switch the read-only replica to become the main database.
DB failover:
The database is only written and read from by our web applications. We have a second VM on a different cloud that has a streaming replication of the Postgres database. It is a hot standby that can be promoted. You can use something like PG Bouncer or HAProxy to route traffic from your apps. But our web framework allows for changing the database at runtime.
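For the curious, the promotion itself is the easy part; deciding to do it is the hard part. A hedged sketch (Debian-style paths, hypothetical inventory group, and the Postgres version is an assumption):

  - hosts: db_standby
    become: true
    become_user: postgres
    tasks:
      - name: Promote the hot standby to primary
        ansible.builtin.command: /usr/lib/postgresql/16/bin/pg_ctl promote -D /var/lib/postgresql/16/main

      - name: Verify the node has left recovery mode
        community.postgresql.postgresql_query:
          query: SELECT pg_is_in_recovery() AS in_recovery;
        register: recovery
        failed_when: recovery.query_result[0].in_recovery

After that, traffic has to be repointed, via PgBouncer/HAProxy or, in our case, the web framework's runtime database switch.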
> Business
Before migration (AWS): We had about 0.1 FTE on infra — most of the time went into deployment pipelines and occasional fine-tuning (the usual AWS dance). After migration (Hetzner + OVHCloud + DIY stack): After stabilizing it is still 0.1 FTE (but I was 0.5 FTE for 3-4 months), but now it rests with one person. We didn’t hire a dedicated ops person. On scaling — if we grew 5-10×: * For stateless services, we’re confident we’d stay DIY — Hetzner + OVHCloud + automation scales beautifully. * For stateful services, especially the Postgres database, I think we'd investigate servicing clients out of their own DBs in a multi-tenant setup, and if too cumbersome (we would need tenant-specific disaster recovery playbooks), we'd go back to a managed solution quickly.
I can't speak to the cloud FTE toll vs. a series of VPS servers in the big boys' league (millions in monthly consumption) or in the tiny league, but in our league it turns out that the FTE requirement is the same.
Anyone want to see my scripts, hit me up at jk@datapult.dk. I'm not sure it'd be great security posture to hand it out on a public forum.
Have you considered doing your own HA Load balance? If yes what tech options did you consider
I took for granted that Hetzner and OVHcloud would be prone to failures due to their bad rep, not my own experience, so I wanted to be able to direct traffic to one if the other was down.
Doing load balancing ourselves in either of the two clouds gave rise to some chicken and egg situations now that we were assuming that one of them could be down (again not my lived experience).
Doing this externally was deliberate and picking something with a better rep than Hetzner and OVHcloud was obvious in that case.
How did you setup/secure the connection between the clouds?
Our logging server will switch the primary DB in case the original primary DB server is down. Since we are counting on downtime, the monitoring server is by default not hosted in the same place as the primary DB, but in the same place as the secondary DB.
We assume that each cloud will go down, but not at the same time.
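One way to wire this up is an Alertmanager route on the monitoring host pointing at a small webhook that runs the promotion playbook. A sketch (receiver names, alert name and URL are made up):

  route:
    receiver: default
    routes:
      - matchers:
          - alertname = PostgresPrimaryDown
        receiver: db-failover
        repeat_interval: 4h

  receivers:
    - name: default
      email_configs:
        - to: ops@example.com
    - name: db-failover
      webhook_configs:
        - url: https://monitor.internal/hooks/promote-standby   # runs the promotion play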
With Ansible, I can version everything — from server hardening to DB backups — and ensure idempotent, transparent provisioning. I don’t have to reverse-engineer how a PaaS layer configures things under the hood, or worry about opaque defaults that might not meet compliance requirements.
There's nothing wrong with these tools, but once you're in the mood for the ISO certification, and once you start doing these things yourself, they actually seem like a step backwards or add very little value.
I also prefer running my own DB backups rather than relying on magic snapshots — it's easier to integrate with encrypted offsite storage and disaster recovery policies that align with ISO requirements. This lets me lock down the environment exactly as needed, with no surprise moving parts.
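In practice that is a short cron entry managed by Ansible, in roughly this shape (bucket, database name and GPG recipient are made up; GPG is just one option for the encryption step):

  - name: Nightly encrypted dump shipped off-site
    ansible.builtin.cron:
      name: "encrypted offsite backup"
      hour: "3"
      minute: "30"
      user: postgres
      job: >-
        pg_dump -Fc app > /var/backups/app.dump
        && gpg --yes --encrypt --recipient backup@example.com /var/backups/app.dump
        && s3cmd put /var/backups/app.dump.gpg s3://offsite-backups-ovh/app/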
Tools like Kamal/Dokku/CapRover shine for fast, developer-friendly deploys, but for regulated workloads, I’ll take boring, explicit, and auditable any day.
Have you looked into others as well, like IONOS and Scaleway?
Scaleway came up but is more expensive. IONOS did not come up in our research.
Part of what we tried to do was to make ourselves independent from traditional cloud services and be really good at doing stuff on a VPS. Once you start doing that, you can actually allow yourself to look more at uptimes and at costs. Also, since we wanted everything to be fully automated, Terraform support was important for us, and OVHcloud and Hetzner had that.
I'm sure there's many great cloud providers out in Europe, but it's hard to vet them to understand if they can meet demand and if they are financially stable. We would want not to keep switching cloud providers. So picking two of the major ones seemed like a safe choice.
I don't remember a single such case. I remember reading a lot of speculations like "it's highly likely that it was done by Russians" every single time without a trace of evidence.
It's undeniable that core European infrastructure is targeted currently
Personally I think the amount of special pleading required to imagine that it is _not_ Russia is a bit much (particularly around the deep sea cable cuts; at that point you’re really claiming that Russia is deniably pretending that it is them, but really it’s someone else), but you do you. It doesn’t change the overarching point; both Hetzner and OVH would be obvious targets for, ah, whoever it is.
We talked to a few premium hosting vendors in Denmark and to build our own redundancy beyond what they guarantee, it actually became more expensive than AWS.
If the whole company can run on $200/month in VPSes, they probably went to AWS too early.