Managed NAT gateways are also 10000x more expensive than my router.
This is a boring argument that has been done to death.
And yes, we've been heavy users of both AWS and Google Cloud for years, mainly because of the credits they initially provided, but we've also used VMs, dedicated servers, and other services from Hetzner and OVH extensively.
In my experience, in terms of availability and security there’s not much difference in practice. There are tons of good tools nowadays to treat a physical server or a cluster of them as a cloud or a PaaS, it’s not really more work or responsibility, often it is actually simpler depending on the setup you choose. Most workloads do not require flexible compute capability and it’s also easy and fast to get it from these cheaper providers when you need to.
I feel like the industry has collectively accepted that Cloud prices are a cost of doing business and unquestionable, “nobody ever got fired for choosing IBM”. Thinking about costs from first principles is an important part of being an engineer.
Or you need to restore your Postgres database and you find out that the backups didn't work.
And finally you have a brilliant idea of hiring a second $150k/year dev ops admin so that at least one is always working and they can check each other's work. Suddenly, you're spending $300k on two dev ops admins alone and the cost savings of using cheaper dedicated servers are completely gone.
Or you need to debug why your Lambda function is throttling and you find out that the CloudWatch logs were never properly configured and you’ve been flying blind for three months.
And finally you have a brilliant idea of hiring a second $150k/year AWS solutions architect so that at least one person can actually understand the bill and they can check each other’s Terraform configs. Suddenly, you’re spending $300k on two cloud wizards alone and the cost savings of "not managing your own infrastructure" are completely gone.
The snide rebuttal basically writes itself.
Except - wait, you do have to think about it, because of course you do. So the promise of AWS is gone.
Or when you're locked out of your account, being ignored by your cloud provider, and the only way to get support is to post on Hacker News and make as much noise as possible until it gets spotted.
Or your cloud provider wipes your account and you are a $135B pension fund [1]
Or your cloud portfolio is so big you need a "platform" team of multiple devops/developer staff to build wrappers around/package up your cloud provider for you and your platform team is now the bottleneck.
Cloud is useful, but it's not as pain-free as everyone says when compared with managing your own; it still costs money and work. Having worked on several cloud transformations, they've all cost more and taken more effort than expected. A large proportion have also been canned/postponed/re-evaluated due to cost/size/time/complexity.
Unless you are a big spender with a dedicated technical account manager, your support is likely to be as bad as a no-name budget VPS provider's.
Both cloud and traditional hosting have their merits and place.
[1] https://arstechnica.com/gadgets/2024/05/google-cloud-acciden...
> Or when you're locked out of your account, being ignored by your cloud provider, and the only way to get support is to post on Hacker News and make as much noise as possible until it gets spotted.
https://news.ycombinator.com/item?id=42365295
https://www.reddit.com/r/hetzner/comments/1ha5qgk/hetzner_ca...
It's actually kinda frustrating - as an industry we're accepting worse outcomes due to misperceptions. That's how the free market goes sometimes.
As opposed to being with the small provider around the corner, who is currently having a beer and will look at it tomorrow morning.
Now - I am at the point where I am seriously considering moving my email from Google to a small player in Europe (still not sure who), so this may ultimately be my fate :)
Customers call and complain about downtime, I can just vaguely point at everything being on fire from Facebook to Instagram to online banking sites.
They get it.
When the self-hosted server fries itself, I'm on the hook for fixing it ASAP.
If my own dedicated server goes down, I'm going to need to call my admin at 3am 10 times just to wake him up.
In my experience you always need a "DevOps team" to operate all that cloud stuff; so to paraphrase - suddenly you're spending $400k on three DevOps engineers to operate $500k of cloud.
I think The Promise behind the cloud was that you just pay for the service and don't worry about it, but in practice you still need a team to maintain it.
One included a whole OVH building burning down with our server in it, and recovery was faster than the recent AWS and Cloudflare outages. We felt less impotent and we could do more to mitigate the situation.
If you want to, these providers also offer VMs, object storage and other virtualized services for way cheaper with similar guarantees, they are not stuck in the last century.
And I don't know how people are using cloud, but most config issues happen above the VM/Docker/Kubernetes level, which is the same whether you are on cloud or not. Even fully managed database deployments or serverless backends are not really that much simpler or less error-prone than deploying the containers yourself. Actually, the complexity of the cloud is often a worse minefield of footguns, with its myriad artificial quirks and limitations. Often, dealing with the true complexities of the underlying open-source technologies they are reselling ends up being easier and more predictable.
This fearmongering is really weakening us as an industry. Just try it, it is not as complex or dangerous as they claim.
Higher-level services like PaaS (Heroku and above) genuinely do abstract away a number of details. But EC2 is just renting pseudo-bare computers - they save no complexity, and they add more by being diskless and requiring networked storage (EBS). The main thing they give you is the ability to spin up arbitrarily many more identical instances at a moment's notice (usually, at least theoretically, though the amount of time that you actually hit unavailability or shadow quotas is surprisingly high).
But I'd like to sleep at night and the cost of AWS is not a significant issue to the business.
And yes, of course such costs are nothing if you are thinking of $300K just on a couple of sysadmins. But this is just a bizarre bubble in a handful of small areas in the US, and I am not sure how it can stay like that for much longer in this era of remote work.
We built a whole business with $100K in seed and a few government grants. I have worked with quite a few world-class senior engineers happily making 40K-70K.
Once my business requires reliability and I need to hire a dedicated person to manage it, I'd absolutely move to the cloud. I personally like Digital Ocean/Render.
The initial impression that we don't need to hire many people because AWS takes care of everything fades away pretty quickly.
You still need to hire the same people, they just do the same things in a different way.
What is your point?
At every team with dedicated hardware in a data center, it was generally 1-2 people who would have to fix stuff quickly, no matter the size of the company (small ones, of course - so 10-50 devs). And that's with replacement hardware available.
I'm not even one of the "cloud is so great" people - but if you're generally doing software, it's actually a lot less friction.
And while the ratio of the cost difference may sound bad, it's generally not. Unless we're talking huge scale, you can buy a lot of AWS crap for the yearly salary of a single person.
AWS isn't going to help you setup your security, you have to do it yourself. Previously a sysadmin would do this, now it's the devs. They aren't going to monitor your database performance. Previously a sysadmin would do this, now it's the devs. They aren't going to setup your networking. Previously a sysadmin would do this, ...
Managing hardware and updating hosts is maybe 10% of the work of a sysadmin. You can't buy much with 1/10th of a sysadmin's salary, and even for the things you can, the quality and response time are generally going to be shit compared to someone who cares about your company (been there).
It doesn't change anything, especially as I did not blatantly argue cloud=good,hardware=bad. That is a completely different question.
My point is that given some circumstances, you need a lot less specialized deep knowledge if all your software just works[tm] from a certain level of the stack upwards. Everyone knows the top 1/3 of the stack, and you pay for the bottom 2/3.
I didn't mean to say "let's replace a sysadmin with some AWS stuff", my point was "100k per year on AWS makes a lot of small companies run".
Also, my experience was with having hardware in several DCs around the world, and we did not have people there (small company, but present in at least 4 countries) - so we had to pay for remote hands, and the experience was mostly bad. Maybe my bosses chose bad DCs, or maybe I'd trust sysadmins at "product companies" more than those working as remote hands at a hoster...
Is that because they were using AWS and so hired people who knew AWS?
I would personally have far more confidence in my ability to troubleshoot or redeploy a dedicated server than the AWS services to replace it.
> At every team with dedicated hardware in a data center, it was generally 1-2 people who would have to fix stuff quickly, no matter the size of the company (small ones, of course - so 10-50 devs). And that's with replacement hardware available.
There are lots of options for renting dedicated hardware that the service provider will maintain. It's still far cheaper than AWS. Even if you have redundancy for everything, it's still a lot cheaper.
The truth is that there are still a lot of things you have to handle, including cloud bugs and problems. And there are other problems you don't have to think about anymore, especially with fully managed, high-level PaaS-like services.
I ran a cloud backend service for a startup with users, using managed services, and we still had an on-call team. The cloud is not magic.
I'll have you know I am a cantaloupe, you insensitive clod!
Using a cloud platform means that while your needs are small, you're overpaying. Where it pays off is when you have a new requirement that needs to be met quickly.
I've done my share of managing database instances in the past. I can spin up a new RDS Postgres instance in much less time than I can configure one from scratch, though. Do we need a read replica? Multi-site failover? Do we need to connect it to Okta, or Formal, so we can stand up a process to provision access to specific databases, tables, or even columns? All of those things I can do significantly faster on AWS than I can by hand.
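For a sense of what that looks like, a minimal boto3 sketch of the provisioning calls involved (identifiers and credentials here are made up; a real setup would also deal with subnet groups, security groups, and secrets handling):

    import boto3

    rds = boto3.client("rds")

    # Hypothetical identifiers/credentials, for illustration only.
    rds.create_db_instance(
        DBInstanceIdentifier="app-db",
        Engine="postgres",
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=100,
        MasterUsername="app",
        MasterUserPassword="change-me",  # use Secrets Manager in practice
        MultiAZ=True,                    # multi-AZ failover is a single flag
    )

    # A read replica is one more call.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica",
        SourceDBInstanceIdentifier="app-db",
    )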
What if a NoSQL database is the right solution for us? I have much less experience adminning those, so will either have to allocate a fair amount of my time to skill up or hire someone who already has those skills.
Need a scheduled task? Sure, I could set up a Jenkins server somewhere and we could use that... or we could just add an ECS scheduled task to our existing cluster.
Need an API endpoint to handle inbound Zoom events and forward them to an internal queue? Sure, I can set up a new VPC for that... that'll be a couple of days... or we whip up a Lambda, hook it up to API Gateway, and be up and running in a couple of hours.
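The Lambda itself is only a few lines - roughly something like this sketch, assuming an API Gateway proxy integration and an SQS queue whose URL is passed in via an environment variable (all names here are made up):

    import json
    import os

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = os.environ["QUEUE_URL"]  # hypothetical env var set on the function

    def handler(event, context):
        # API Gateway (proxy integration) hands us the raw request body as a string.
        body = event.get("body") or "{}"
        # Forward the Zoom event to the internal queue as-is.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=body)
        return {"statusCode": 200, "body": json.dumps({"ok": True})}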
AWS helps me do more in less time - and my time is a cost to the business. It's also extremely flexible, and will let us add things far more quickly than we otherwise could.
IMO, the correct comparison isn't "what would it cost to run this platform on Hetzner?" - it's "What would it cost to run it, plus what would it cost to acquire the talent to build it, plus retain that talent to maintain it?"
AWS isn't competing with other infrastructure providers. They're competing with other providers and the salaries of the engineers you need to make them work.
If you run your own hardware, getting stuff shipped to a datacenter and installed is 2 to 4 weeks (and potentially much longer based on how efficient your pipeline is)
And even if you are building with microservices, most standard servers can handle dozens of them on a single machine at once. They are all mostly doing network calls with minimal compute. Even better, actually, if they are on the same host and the network doesn't get involved.
If you want to, there are simple tools to hook a handful of them as a cluster and/or instantly spawn extra slightly costlier VMs in case of failure or a spike in usage, if a short outage is really a world-ending event, which it isn’t for almost every software system or business. These capabilities have not been exclusive to the major cloud providers for years.
Of course we are generalizing a lot by this point, I’d be happy to discuss specific cases.
I suspect that if you broke projects on AWS down by the numbers, the vast majority don't need it.
There are other benefits to using AWS (and drawbacks), but "easy scaling" isn't just premature optimisation: if you build something to do something it's never going to do, that's not optimisation, it's simply waste.
Not too different from how many other lines of business get their clients in the door.
Agreed, there's definitely a heavy element of that to it.
But, at the risk of again being labelled as an AWS Shill - there's also other benefits.
If your organisation needs to deploy some kind of security/compliance tools to help with getting (say) SOC 2 certification - then there's a bunch of tools out there to help with that. All you have to do then is plug them into your AWS organisation. They can run a whole bunch of automated policy checks to show you're complying with whatever the audit requires.
If you're self-hosting, or using Hetzner - well, you're going to spend a whole lot more time providing evidence to auditors.
Same goes with integrating with vendors.
Maybe you want someone to load/save data for you - no problems, create an AWS S3 bucket and hand them an AWS IAM Role and they can do that. No handing over of creds.
There's a bunch of semi-managed services where a vendor will spin up EC2 instances running their special software, but since it's running in your account - you get more control/visibility into it. Again, hand over an AWS IAM Role and off you go.
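The role handover itself is small - a boto3 sketch along these lines, where the vendor account ID, role name, and bucket name are all placeholders:

    import json

    import boto3

    iam = boto3.client("iam")

    VENDOR_ACCOUNT_ID = "123456789012"       # placeholder vendor account
    BUCKET = "example-data-exchange-bucket"  # placeholder bucket name

    # Trust policy: the vendor's account may assume this role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }],
    }

    # Permissions: scoped to a single bucket.
    s3_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        }],
    }

    # No long-lived credentials change hands; the vendor just assumes the role.
    iam.create_role(RoleName="vendor-data-exchange",
                    AssumeRolePolicyDocument=json.dumps(trust_policy))
    iam.put_role_policy(RoleName="vendor-data-exchange",
                        PolicyName="s3-data-exchange",
                        PolicyDocument=json.dumps(s3_policy))

In practice you'd usually also add an ExternalId condition to the trust policy, but the shape is the same.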
It's the Slack of IAAS - it might not be the fastest, it's definitely not the cheapest, and you can roll your own for sure. But then you miss out on all these integrations that make life easier.
That's why AWS can get away with charging the prices they do: even though it is expensive, for most companies it is not expensive enough to make it worth their while to look for cheaper alternatives.
From our experience, you can actually end up in a situation that requires less engineering effort and be more stable, while saving on costs, if you dare to go to a bit lower abstraction layers. Sometimes being closer to the metal is simpler, not more complex. And in practice complexity is much more often the cause of outages rather than hardware reliability.
If you have a company that warrants building a data center, then AWS does not add much.
Otherwise you face the 'if you want to make an apple pie from scratch, you need to first invent the universe' problem. Simply put, you can get started right on day one, in a pay-as-you-go model. You can write code, deploy, and ship from the very first day, instead of having to go deep down the infrastructure rabbit hole.
Plus, shutting things down is easy as well. Things don't work out? Good news! You can shut down the infrastructure that very day, instead of having to worry about the capital expenditure spent to build infrastructure and whether it will ever be used later.
Simply put, AWS is infrastructure you can hire and fire at will.
And if you are willing to pay, you can significantly over-provision dedicated servers, solving much of the scaling problem as well.
If the author's point was to make a low effort "ha ha AWS sucks" video, well sure: success, I guess.
Nobody outside of AWS sales is going to say AWS is cheaper.
But comparing the lowest-end instances, and apparently using ECS without seeming to understand how they're configuring or using it, just makes their points about it being slower kind of useless. Yes, you got some instances that were 5-10x slower than Hetzner. On its own that's not particularly useful.
I thought, going in, that this was going to be along the lines of others I have seen previously: you can generally get a reasonably beefy machine with a bunch of memory and local SSDs that will come in at half or less the cost of a similarly specced EC2 instance. That would've been a reasonable path to go down. Add on that you don't have issues with noisy neighbors when running a dedicated box, and yeah - something people can learn from.
But this... Yeah. Nah. Sorry
Maybe try again but get some help speccing out the comparison configuration from folks who do have experience in this.
Unfortunately it will cost more to do a proper comparison with mid-range hardware.
Shared instances is something even European "cloud" providers can do so why is EC2 so much more expensive and slower?
Of the services you list, S3 is OK. I would rather admin an RDBMS than use RDS at small scale
> Large enough customers also don't pay list price for AWS.
At that scale the cost savings on not hiring sysadmins becomes much smaller, so what is the case for using AWS? The absolute cost savings will be huge.
Even "only" ECS users often benefit from load balancing there. Other clouds sometimes have their own (Hetzner), but generally it's kind of a hard problem to do well if you don't have a cloud service like Elastic IPs that you can use to handle failover.
Generally, everywhere I've worked has been pretty excited to have a lot more than just ECS managed for them. There's still a strong perception that other people managing services is a wonderful freedom. I'd love it if some day the stance could shift some, if the everyday operator felt a little more secure doing some of their own platform engineering, if folks had faith in that. Having a solid, secure day-2 stance starts with simple pieces but isn't simple; it's quite hard, with inherent complexity. I'm excited by the many folks out there saddling up for open-source platform engineering work (operators/controllers).
To use an analogy it's like someone who's never driven a car, and really only read some basic articles about vehicles deciding to test the performance of two random vehicles.
Maybe one of them does suck, and is overpriced - but you're not getting the full picture if you never figured out that you've been driving it in first gear the whole time.
Moved it to AWS on a small instance running Server 2012 / IIS / SqlExpress and it ran like a champ for 10 USD a month. Did that for years. Only main thing I had to do was install Fail2Ban, because being on cloud IP space seemed to invite more attackers.
10 dollars a month is probably less than I paid in electricity to run my home server.
It's good - makes its point well.
I myself used EC2 instances with locally attached NVMe drives with (mdadm) RAID-0 on BTRFS that was quite fast. It was for a CI/CD pipeline so only the config and the most recent build data needed to be kept. Either BTRFS or the CI/CD database (PostgreSQL I think) would eventually get corrupted and I'd run a rebuild script a few times a year.
For what it's worth - my day job does involve running a bunch of infrastructure on AWS. I know it's not good value, but that's the direction the organisation went long before I joined them.
Previous companies I worked for had their infrastructure hosted with the likes of Rackspace, Softlayer, and others. Every now and then someone from management would come back from an AWS conference saying how they'd been offered $megabucks in AWS Credit if only we'd sign an agreement to move over. We'd re-run the numbers on what our infrastructure would cost on AWS and send it back - and that would stop the questions dead every time.
So, I'm not exactly tied to doing it one way or another.
I do still think though that if you're going to do a comparison on price and performance between two things, you should at least be somewhat experienced with them first, OR involve someone who is.
The author spun up an ECS cluster and then is talking about being unsure of how it works. It's still not clear whether they spun up Fargate nodes or EC2 instances. There's talk of performance variations between runs. All of these things raise questions about their testing methodology.
So, yeah, AWS is over-priced and under-performing by comparison with just spinning up a machine on Hetzner.
But at least get some basics right. I don't think that's too much to ask.
FWIW, I firmly believe non-"cloud native" platforms should be hosted using PXE-booted bare metal within the physical network constructs that cloud providers' software-defined-network abstractions are designed to emulate.
For value here in this thread I'm definitely meaning monetary value.
There are GitOps solutions that give you all the benefits that are promised by it, without any of the downsides or compromises. You just have to bite the bullet and learn Kubernetes. It may be a bit more of a learning curve, but in my experience not by much. And you have much more flexibility in the precise tech stack that you choose, so you can reduce the curve by using stuff you already know well.
I'm going to say what I always say here - for so many SME's the hyperscaler cloud provider has been the safe default choice. But as time goes on a few things can begin to happen. Firstly, the bills grow in both size and variability, so CFOs start to look increasingly askance at the situation. Secondly, so many technical issues start to arise that would simply vanish on fixed-size bare-metal (and the new issues that arise are well addressed by existing tooling). So the DevOps team can find themselves firefighting while the backlog keeps growing.
The problem really is one of skills and staffing. The people who have both the skills and the desire to actually implement and maintain the above tend to be the greying-beards who were installing RedHat 6 in their bedrooms as teenagers (myself included). And there are increasingly few of us who are not either in management and/or employed by the cloud providers.
So if companies can find the staff and the risk appetite, they can go right ahead and realise something like a 90% saving on their current spend. But that is unusual for an SME.
So we started Lithus[0] to do this for SMEs. We _only_ offer a 50% saving, not 90%. But we take on all the risk and staffing issues. We don't charge for the migration, and the billing cycle only starts once migration is complete. And we provide a fixed number of engineering days per month, included. So you get a complete Kubernetes cluster with open-source tooling, and a bunch of RedHat-6-installing greying-beards to use however you need. /pitch
I don't really totally miss the days where I had to configure multipath storage with barely documented systems ("No, we don't support Suse, Debian, whatever...", "No, you don't pay for the highest support level, you can't access the knowledge base..."), or integrate disparate systems that theoretically were using an open standard but was botched and modified by every vendor (For example DICOM. Nowadays the situation is way better.) or other nightmare situations. Although I miss accessing the lower layers.
But I've been working for years with my employers' and clients' cloud providers, and I've seen how the bills climb through the roof, how easy it is to make a million-dollar mistake, how difficult (and expensive) it is to leave in some cases, and how the money and power are concentrated in a handful of companies, and I've decided that I should work on that situation. Although I'll probably earn less money, as the 'external contractor' situation is not as good in Spain as in some other countries, unless you're very specialized.
But thankfully, the situation is in some cases better than in the 00s: documentation is easier to get, hardware is cheaper to come by and experiment with, or even use for business, and WAN connections are way cheaper...
I find Supabase immensely helpful to minimize overhead in the beginning, but would love to better understand where it starts breaking and how hard an eventual migration would be.
The problems we've seen or heard about with Supabase are:
* Cost (in either magnitude or variability). Either from usage, or from having to go onto their Enterprise-tier pricing for one reason or another
* The usual intractable cloud oddities - dropped connections, performance speed bumps
* Increased network latency (just the way it goes when data has to cross a network fabric; it's fast, but not as fast as your own private network)
* Scaling events tend not to be as smooth as one would hope
None of these are unique to Supabase though, they can simply all arise naturally from building infrastructure on a cloud platform.
Regarding self-hosted Supabase - we're certainly open to deploying this for our clients, we've been experimenting with it internally. Happy to chat with you or anyone who's interested. Email is adam@ company domain.
I believe their bare metal servers should have even better price/perf ratio, but I don't have data to back that up.
Son: Why does the croissant cost €2.80 here while it's only €0.45 in Lidl? Who would buy that?
Me: You're not paying for the croissant, you're paying for the staff to give it to you, for the warm café, for the tables to be cleaned and for the seat to sit on.
I also like the "why does a bottle of water cost $5 after security at airports" example.
You have no choice. You’re locked in and can’t get out.
Maybe that’s the better analogy?
So for enough people the price is not an issue. Someone else is paying.
On the other side, people are pretty bad at this sort of cost analysis. I fall into this trap myself: I prefer to spend more of my own time on something when I should just recommend buying it.
We don't pay million $ bills on AWS to "hang out" in a cozy place. I mean, you can, but that's insanity.
AWS is just an extremely expensive Lidl.
EDIT: autocorrect typo, coffee to café
What do you get for this? A redundant database without support (because while AWS support really tries so hard to help that I feel bad saying this, they don't get time to debug stuff, and redundant databases are complicated whether or not you use the cloud). You also get S3 distributed storage, and serverless (which is kind of CGI, except using Docker and AWS markups to make one of the most efficient stateless ways to run code on the web really expensive). Btw: for all of these, better open-source versions are available as a Helm chart, with effectively the same amount of support.
You can use vercel to get out from under this, but that only works for small companies' "I need a small website" needs. It cannot do the integration that any even medium sized company requires.
Oh, and you get Amazon TLA, which is another brilliant Amazon invention: in the time it takes you to write a devops script, Amazon TLA comes up with another three-letter AWS service that you now have to use because one of the devs wants it on his resume; it's 2x as expensive as anything else, doesn't solve any problem, and you now have to learn it. It's all about using AI for maximizing uselessness.
And you'll do all this on Amazon's patented 1994-styled webpages, because even Claude Code doesn't understand the AWS CLI. And the GCP and Azure ones are somehow worse (their websites look a lot nicer, though, I'll readily admit that - but they're not significantly more functional).
Conclusion: while the cloud has changed the job of sysadmin somewhat, there is no real difference, other than a massive price increase. Cloud is now so expensive that, for a single month's cloud services, you can buy the hardware and put it on your desk. As the YouTube video points out, even an 8GB M1 Mac mini, even a Chinese mini-PC with AMD, runs Docker far better than the (now reduced to 2GB memory) standard cloud images.
This in turn means that you always have several options, and more importantly you can imagine a new way to enjoy the experience you hope to get from that café interaction - maybe a scene from your past or from a movie - which, on average, you're no longer as likely to actually experience.
That said, I’ve got a favorite café where I used to spend time frequently. But their service deteriorated. And the magic is gone. So I moved on with my expectations.
Back to the analogy with the hyperscalers. I had bad experience with Azure and GCP, I’ve experienced the trade-offs of DigitalOcean and Linode and Hetzner, and of running on-premises clusters. It turned out, I’m the most comfortable with the trade-offs that AWS imposes.
People can have different opinions on this, of course, but personally, if I have a choice, I'd rather not be juggling both product development and the infrastructure headaches that come with running everything myself. That trade-off isn’t worth it for me.
"But are your database backups okay?" Yeah, I coded the backup.sh script and confirmed that it works. The daily job will kick up a warning if it ever fails to run.
"But don't you need to learn Linux stuff to configure it?" Yeah, but I already know that stuff, and even if I didn't, it's probably easier to learn than AWS's interfaces.
"But what if it breaks and you have to debug it?" Good luck debugging an AWS lambda job that won't run or something; your own hardware is way more transparent than someone else's cloud.
"But don't you need reproducible configurations checked into git?" I have a setup.sh script that starts with a vanilla Ubuntu LTS box, and transforms it into a fully-working setup with everything deployed. That's the reproducible config. When it's time to upgrade to the next LTS release (every 4 years or so), I just provision a new machine and run that script again. It'll probably fail on first try because some ubuntu package name changed slightly, but that's a 5-minute fix.
"But what about scaling?" One of my crazy-fast dedicated machines is equal to ~10 of your slow-ass VPSes. If my product is so successful that this isn't enough, that's a good problem to have. Maybe a second dedicated machine, plus a load balancer, would be enough? If my product gets so popular that I'm thinking about hundreds of dedicated machines, then hopefully I have a team to help me with that.
Since the industry has matured now, there must be a lot of opportunity to optimize code and run it on bare metal to make systems dramatically faster and dramatically cheaper.
If you think about it, the algorithms that we run to deliver products are actually not that complicated and most of the code is about accommodating developers with layers upon layers of abstraction.
For example, if the service is using a massive dataset hosted on AWS such as Sentinel 2 satellite imagery, then the bandwidth and egress costs will be the driving factors.
Each project certainly has its own requirements. If you have the manpower and a backup plan with blue/green for every infrastructure component, then absolutely harness that cost margin of yours. If it's at break-even once you factor in specialist continuity - training folks so nothing's down if your hardware breaks - then AWS wins.
If your project can tolerate downtime and your SREs can sleep at night, then you might profit less from the several-nines HA SLOs that AWS guarantees.
It’s very hard and costly to replicate what AWS gives you if you have requirements close to enterprise levels. Also, the usual argument goes - when you’re a startup you’ll be happy to trade CAPEX for OPEX.
For an average hobby project maybe not the best option.
As for latency, you can get just as good. Major exchanges run their matching engines in AWS DCs, you can co-locate.
The video argues that AWS is dramatically overpriced and underpowered compared to cheap VPS or dedicated servers. Using Sysbench benchmarks, the creator shows that a low-cost VPS outperforms AWS EC2 and ECS by large margins (EC2 has ~20% of the VPS’s CPU performance while costing 3× more; ECS costs 6× more with only modest improvements). ECS setup is also complicated and inconsistent. Dedicated servers offer about 10× the performance of similarly priced AWS options. The conclusion: most apps don’t need cloud-scale architecture, and cloud dominance comes from marketing—not superior value or performance.
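Taking the video's numbers at face value, the implied price/performance gap is easy to put into a ratio (back-of-the-envelope only, not an independent benchmark):

    # Back-of-the-envelope from the numbers quoted above, not independently verified.
    vps_perf, vps_cost = 1.0, 1.0  # baseline: the cheap VPS
    ec2_perf, ec2_cost = 0.2, 3.0  # ~20% of the VPS's CPU at ~3x the price

    vps_value = vps_perf / vps_cost
    ec2_value = ec2_perf / ec2_cost
    print(f"The VPS delivers ~{vps_value / ec2_value:.0f}x more CPU per dollar than EC2")
    # -> ~15x, which is the gap the video is really pointing at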
There have also been a couple of threads in text form about the same topic. Some like text, some like video.
Not to mention what happens when you pay per megabyte and someone DDoSes you. Cloud brought back almost all the hosting antipatterns, and means denial-of-service attacks really should be renamed denial-of-wallet attacks. And leaving a single S3 bucket, a single serverless function, a single ... available (not even open) makes you vulnerable if someone knows or figures out the URL.
In my experience, if you reserve a bare metal instance for 3 years (which is the biggest discount), it costs 2 times the price of buying it outright.
I'm surprised to hear about the numbers from the video being way different, but then, it's a video, so I didn't watch it and can't tell if he did use the correct pricing.
For traditional, always-on servers, you should reserve them for 3 years. You still have the ability to scale up, just not down. You can always go hybrid if you don't know what your baseline usage is.
At two different companies, I've seen a big batch of committed instances finally go off contract, and we replaced them with more modern instances that improved performance significantly while not costing us anything more, or even let us shrink the pool, saving us money.
It's a pain, but auto-scaling groups with a couple of different spot instance types in them seem to be quasi-necessary for getting OK AWS compute value.
When you add up all these costs plus the electricity bill, I wager that many cloud providers are on the cheaper side due to the economy of scale. I'd be interested in such a more detailed comparison for various locations / setups vs cloud providers.
What almost never goes into this discussion, however, is the expertise and infrastructure you lose when you put your servers into the cloud. Your own servers and their infrastructure are a moat that can be sold as various products if needed. In contrast, relying on a cloud provider is mostly an additional dependency.
That's nothing compared to an average AWS bill.
You also absolutely need this with EC2 instances, which is what the comparison was about. So no, it's not unfair.
If you're using an AWS service built on top of EC2, Fargate, or anything else, you WILL see the same costs (on top of the extremely expensive Ops engineer you hire to do it, of course).
> need to pay for the premises and their physical security, too [...] plus the electricity bill
...and all of this is included in the Hetzner service.
Once again comments conflating "dedicated server" with "co-location".
I am a Hetzner customer for my forthcoming small company in order to keep running costs low, but it's not as if companies using AWS were irrational. You get what you pay for.
This has absolutely nothing to do with "georedundancy" or "physical security" or "electricity".
It's a lot like the old Mac comparisons of days gone by. Well, you see, the 5K iMac is actually a good value, because a 5K monitor costs 1500 dollars! Okay... but a 4K monitor doesn't, and it's almost the same thing.
Amazon markets itself as competitive by doing the whole 'you have to compare apples to apples' thing. But do you want the apples? Will you eat them? Any product can make itself seem like a good value when it throws in a bunch of stuff you'll never use.
This is one of the most common sales tactics out there. Go to a car dealership, and they'll talk your ear off about the amazing!!1 dealership addons. Whooaaa dude it's such a good value, look you get an oil change coupon and this stripe painted on your door!! Those other dealerships don't give you that, you gotta factor that in man!
The entire point of AWS is so you don't have to get a dedicated server.
It's infra as a service.
The point of having a private chef is so you don’t have to cook food by yourself.
It’s still extremely useful to know if the private chef is cheaper or more expensive than cooking by yourself and by how much, so you can make a decision more aware of the trade offs involved.
Translating:
A lot of people work with AWS, are making bank, and are terrified of their skill set being made obsolete.
They also have no idea what it means to use a dedicated server.
That’s why we get the same tired arguments and assumptions (such as the belief that bare-metal means “server room here in the office”) in every discussion.
I remember the day I discovered some companies, and not just tech ones (Walmart, UPS, Toyota,…) actually own, operate, and use their own datacenters.
And there are companies out there specializing in planning and building datacenters for them.
I mean, it’s kind of obvious. But it made me realize at how small a scale I both thought and operated.
I worked for a company that was attempting to sell software to Walmart.
About people who work with it, I'm just alluding to the famous quote "It is difficult to get a man to understand something when his salary depends upon his not understanding it".
- "I'm not calling anyone dumb."
- "I'm just alluding to the famous quote"
The bailey:
- "A lot of [Them] are making bank [and are] terrified of their skill set being made obsolete."
- "They have no idea what it means to use a dedicated server."
- "[They believe] bare-metal means “server room here in the office”"
FWIW, it definitely plays great: we all love to believe everyone who disagrees with us is ignorant, usually I'd upvote something like this and downvote a reply like mine, but this was so bad it hit an internal tripwire of "This is literally just a bunch of comments about They, where They is all the other comments."
You can easily play it off with "I didn't call other commenters DUMB, I just said they don't know a server is just a computer and they don't have to be in your office or in Amazon's data center to serve things!"
To riff on the famous quote you "just meant to allude to": "It is difficult to get an [interlocutor] to understand something [about their argumentation] when [they're playing to the crowd and being applauded]" I hope reading that gives you a sense of how strong of a contribution it is to discussion, as well as how well-founded it is, as well as what it implies.
Is it the fact that you don't want to spend the time cooking? Or is it cooking plus shopping plus cleaning up after?
Or is it counting the time to take cooking lessons? And including the cost of taking the bus to those cooking lessons?
Does the private chef even use your house, or their own kitchen? Or can you get a smaller house without a kitchen altogether? Especially at the rate of kitchen improvement, where kitchens don't last 20 years anymore, you're going to need a new kitchen every 5 years. (Granted, the analogy is starting to fail here, but you get my point.)
Big companies have been terrible at managing costs and attributing value. At least with cloud, the costs are somewhat clear. Also, finding skilled staff is a considerable expense for businesses with more than a few pieces of code, and it takes time; you can't just get them on a whim and get rid of them.
The key point is that being aware of the cost trade off is useful.
Getting a chef would be hiring your own devops team
But the point of AWS is that you can buy these services with very fine granularity
With cloud, you hire a private chef and ALSO have to cook the food by yourself.
You don't hire a team to maintain the server infrastructure, but you hire a team to maintain cloud infrastructure.
Yet every company I've worked for still used at least a bunch of AWS VPSes exactly as they would have used dedicated servers, just at ten times the cost.