I'd try to show how much a single process can scale by writing a simple benchmark and comparing the performance, cost, and latency against the alternative.
The results should speak for themselves at that point.
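Something like this (a rough Go sketch; the two endpoint paths are hypothetical stand-ins for the two implementations) usually settles it:

```go
// Rough latency comparison: the server waiting on the DB directly vs.
// the server waiting on a lambda that waits on the DB.
// The endpoint paths are hypothetical; point them at the real handlers.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func timeIt(label, url string, n int) {
	start := time.Now()
	for i := 0; i < n; i++ {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println(label, "error:", err)
			return
		}
		resp.Body.Close()
	}
	fmt.Printf("%s: %v avg over %d requests\n", label, time.Since(start)/time.Duration(n), n)
}

func main() {
	const n = 100
	timeIt("direct-db ", "http://localhost:8080/task-direct", n) // server waits on the DB itself
	timeIt("via-lambda", "http://localhost:8080/task-lambda", n) // server -> lambda -> DB round trip
}
```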
(But also, spinning up a lambda to wait for a db request? Wat? Must be missing some more context/details)
I think there's a more pervasive problem of engineers, especially younger ones, not realizing that spinning up new serverless infra is harder than firing off a coroutine.
>(But also, spinning up a lambda to wait for a db request? Wat? Must be missing some more context/details)
There's this CRUD server that serves as an admin view for a bunch of IoT devices. Sometimes we want to run a task that doesn't block this server, so they use lambdas as external task runners for those.
Except those tasks are mostly database operations. It is a CRUD app, after all. But I'm trying to tell them that you don't need to spin up a new CPU when the database is doing all the computation. Have the server wait for the database. Don't have the server wait for a lambda that waits for the database. If you do that, you're just spinning up a CPU and telling it to wait. It's more complicated for nothing.
At my company they think doing this is normal, and I'm baffled by how normal people feel this type of thing is. I'm wondering if it's pervasive across the whole industry. I don't think they realize that query calls from lambdas just make the lambdas wait on the database. The abstraction is so heavy they likely aren't even thinking about how they're provisioning an entirely new CPU to wait for a database.
Why wait for an AWS Lambda to wait for a database when you can just wait for the database yourself?
You spin up an entire CPU just to ask it to wait for something when you can just fire off a coroutine. Spinning up a CPU is for adding new computational resources; it's pointless to ask it to wait for shit. It's like asking someone to wait in line for you while you stand next to him, waiting for him.
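Concretely, the whole "external task runner" collapses to something like this sketch (the driver, DSN, and query are stand-ins, not their actual code):

```go
// The "external task runner", without a lambda: one goroutine,
// parked for free while the database does the work.
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/lib/pq" // stand-in: whatever driver is already in use
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/iot?sslmode=disable") // stand-in DSN
	if err != nil {
		log.Fatal(err)
	}

	done := make(chan struct{})
	go func() { // costs a few KB of stack and ~zero CPU while blocked
		defer close(done)
		_, err := db.ExecContext(context.Background(),
			"UPDATE devices SET checked_at = now()") // stand-in query
		if err != nil {
			log.Printf("background task failed: %v", err)
		}
	}()

	<-done // a real request handler would just return instead of blocking here
}
```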
It's a little bit like the difference between writing a monolithic piece of software and writing a bunch of command-line utilities and scripting them together using the shell.
It sounds like they do not respect your take on it, but you aren't respecting theirs, either.
If your server is your typical CRUD server that hits a database, the database will be overwhelmed long before the server is loaded to the point where it can't even wait.
It's not about respect; I don't mind the disrespect. It's more that they're wrong, they don't realize it, and this kind of over-engineering is pervasive in the industry.
I don't think you understand how cheap it is on the CPU for a server to wait for an external IO call.
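If you don't believe it, here's a quick sketch you can run yourself:

```go
// Park 10,000 goroutines on a blocking wait (a stand-in for waiting on
// a database) and measure roughly what that costs in memory. CPU usage
// while they are parked is effectively zero.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)

	var wg sync.WaitGroup
	for i := 0; i < 10000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			time.Sleep(10 * time.Second) // stand-in for a blocking IO call
		}()
	}

	time.Sleep(time.Second) // let them all park
	runtime.ReadMemStats(&after)
	fmt.Printf("10k parked goroutines cost ~%d MB extra from the OS (%d goroutines live)\n",
		(after.Sys-before.Sys)/1024/1024, runtime.NumGoroutine())

	wg.Wait() // returns once the fake IO "completes"
}
```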
But do they mind? Because so far you are saying anyone who doesn't agree with you doesn't understand. Sounds like while you might not mind being disrespected, you fail to realize that other people do mind.
You also clearly have an over-sized server if you think 10K concurrent connections can be covered on it. If that is true, you could save a heap of money by downgrading that server and going to lambdas. The more you defend your position, the less tenable it becomes.
Is that disrespect or is that just sensitivity? From my POV this is fact. Like 1 + 1 = 2. I'm stating that. Or should I be inclusive and say that 1 + 1 maybe equals 2 but I respect your alternative opinion?
If I believe something is a fact, I should be able to talk about it as a fact and say that the other person is wrong. You have that right as well. Same with my young coworker.
They probably do mind. But I feel that's just life. He even called my way of doing it "assembly language", and I let it go. If someone is wrong, we shouldn't be afraid to slightly offend them and tell them they are wrong. That goes both ways. I don't want to live in a society where we have to dance around everyone's feelings even when they're utterly wrong and borderline delusional.
>You also clearly have an over-sized server if you think 10K concurrent connections can be covered on it. If that is true, you could save a heap of money by downgrading that server and going to lambdas. The more you defend your position, the less tenable it becomes.
This is not true. Your laptop on a ten-year-old Intel chip can handle 10k concurrent connections. Some guy pushed it up to a million here: https://unetworkingab.medium.com/millions-of-active-websocke...
The lambda architecture doesn't work for our case because our server maintains long-running websocket connections. We just need async, non-blocking tasks. So I'm saying: launch a concurrent coroutine (aka a goroutine) and call it a day.
I think you're not aware of how much concurrency modern servers can handle. Back in the days of LAMP this would have been a problem, but not anymore.
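Here's a minimal sketch of what I mean, assuming gorilla/websocket as the library (the route and the task body are made up):

```go
// One process, long-lived websocket connections, plus non-blocking
// background tasks, all with goroutines. Uses github.com/gorilla/websocket.
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func wsHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	defer conn.Close()
	for {
		// Each connection is one goroutine, parked here between messages.
		msgType, msg, err := conn.ReadMessage()
		if err != nil {
			return
		}
		go runTask(msg) // the "task runner": a goroutine, not a lambda
		conn.WriteMessage(msgType, []byte("ack"))
	}
}

func runTask(msg []byte) {
	// stand-in for the database operation the lambda was doing
	log.Printf("task: %s", msg)
}

func main() {
	http.HandleFunc("/ws", wsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```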
In short: The server is the origin of the task. By definition the task doesn't exist without the origin/trigger.
Why spend extra on a lambda? I think you're lost, especially if you built an architecture like this, where servers wait for lambdas that just wait for database calls. (FYI: the waiting time is slightly longer this way, the cost is slightly higher, and the architecture becomes unnecessarily complex.)
Literally the first reply is someone who thinks a lambda is the right tool. I wasn't sure how bad it was until this young man replied.