In a Rails app served with Puma, I've always had a hard time understanding what the canonical way is to run a loop doing some periodic work.
I know Puma has plugin support, but I don't see much documentation there.
Forking processes or spawning threads is something we're used to having Rails / Puma take care of for us.
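For what it's worth, a periodic loop can live in a Puma plugin. A hedged sketch based on Puma's plugin API (`Puma::Plugin.create` and its `in_background` helper); the plugin name, file path, and interval are all made up for illustration:

```ruby
# lib/puma/plugin/periodic_work.rb -- name and interval are illustrative
require "puma/plugin"

Puma::Plugin.create do
  def start(launcher)
    in_background do
      loop do
        sleep 60
        # ... the periodic work goes here ...
      end
    end
  end
end

# Then, in config/puma.rb:
#   plugin :periodic_work
```

The thread lives inside the Puma process, so it dies and restarts with the server, which is most of what you want from a lightweight periodic loop.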
Pressed for time and without the chance to do a deep dive, we ended up settling on sidekiq-cron, and it's been serving us nicely.
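For reference, a sidekiq-cron schedule boils down to an initializer like this; `Sidekiq::Cron::Job.create` is the gem's documented entry point, while the job name, cron expression, and worker class here are placeholders:

```ruby
# config/initializers/sidekiq_cron.rb -- names are placeholders
Sidekiq::Cron::Job.create(
  name:  "cleanup - every 5 minutes",
  cron:  "*/5 * * * *",
  class: "CleanupWorker"   # an ordinary Sidekiq worker class
)
```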
It's surprisingly simple. You can check out the relevant source here: https://github.com/rails/solid_queue/blob/main/lib%2Fsolid_q...
Happy to answer any questions.
I suppose my primary question is: What does this do better than Sidekiq+Redis; or, why should I convert my Sidekiq jobs to use Solid Queue? I'm curious also if there are comparisons of performance anywhere.
All in all, though, it looks technically quite promising!
Some of us are happy Rails users, so more Rails content is always welcome.
We solved debugging by sending all log lines to a centralized server. We were running on Google Cloud.
We were not multiregion though.
My current Rails project uses sidekiq a lot to send mail, generate PDFs, and handle any activity that doesn't necessarily have to complete before we return the response. We keep the interactive web app up to date via websockets, and with callbacks for clients using our public API. I don't think we would have done it differently in any other language.
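The pattern is just "respond now, do the work later". A gem-free stand-in of that pattern (a real app would use a Sidekiq worker, where `perform_async` pushes to Redis; the queue, job class, and return value here are all illustrative):

```ruby
# Gem-free stand-in for the "respond now, work later" pattern.
# In a real app this would be a Sidekiq worker; all names are illustrative.
JOBS = Queue.new

class PdfJob
  def self.perform_async(order_id)
    JOBS << order_id        # enqueue instead of doing the work inline
  end

  def perform(order_id)
    "pdf-for-#{order_id}"   # imagine slow PDF generation here
  end
end

# A worker thread drains the queue outside the request/response cycle.
WORKER = Thread.new { PdfJob.new.perform(JOBS.pop) }

PdfJob.perform_async(42)    # the controller can return immediately
WORKER.value                # => "pdf-for-42"
```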
By the way, we built our own slimmer version of sidekiq for Elixir, because the language plus the OTP libraries cover a lot of the functionality, but we still need to persist jobs, retry them even after a complete reboot, apply exponential backoff, etc.
If some endpoint is too slow to return a response to the frontend within a reasonable time, enqueueing it via a worker makes sense to me.
That doesn't cover all performance issues but it handles a lot of them. You should also do things like optimize SQL queries, cache in redis or the db, perhaps run multiple threads within an endpoint, etc. but I don't see anything wrong with specifically having dozens of workers/queues. We have that in my work's Rails app.
Happy to hear how I can do things better if I'm missing something.
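On the caching point, the cache-aside pattern behind `Rails.cache.fetch` is small enough to sketch without Rails; here the in-memory Hash stands in for Redis or a DB-backed store, and all names are illustrative:

```ruby
# Cache-aside sketch: compute on a miss, reuse on a hit.
CACHE = {}

def cache_fetch(key)
  return CACHE[key] if CACHE.key?(key)  # hit: skip the slow work
  CACHE[key] = yield                    # miss: compute and store
end

$calls = 0
cache_fetch("report") { $calls += 1; [1, 2, 3] }  # computes and stores
cache_fetch("report") { $calls += 1; [1, 2, 3] }  # served from the cache
$calls  # => 1
```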
1) Designers don't understand that things are going to happen asynchronously, and the UI ends up making assumptions that everything happens in real time. Even if it works with the current design, it's one small change away from being impossible to implement.
This is a general difficulty with working in eventually consistent systems, but if you're putting something in a queue because you're too lazy to optimize (rather than the natural complexity of the workload demanding it) you're going to be hurting yourself unnecessarily.
2) Errors get swallowed really easily. Instead of being properly reported to the team and surfaced to the user in a timely manner, some configurations default to just retrying the job later, which means that if you're not monitoring closely you'll end up with tens of thousands of jobs retrying over and over at various intervals.
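On the retry pile-up: Sidekiq's documented default retry delay is roughly `(retry_count ** 4) + 15` seconds plus jitter, which is why unmonitored failures can linger for days. A deterministic sketch of that schedule (jitter omitted):

```ruby
# Exponential backoff as used by Sidekiq-style retry schedules
# (jitter omitted so the output is deterministic).
def retry_delay(retry_count)
  (retry_count ** 4) + 15   # seconds until the next attempt
end

DELAYS = (0..4).map { |n| retry_delay(n) }
DELAYS  # => [15, 16, 31, 96, 271]
```

This is also why hooks like `sidekiq_retries_exhausted` matter: that is the point where a swallowed error can finally be reported to the team.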
These are data and compute heavy workloads that take anywhere from minutes to hours for a request to be completed, but the UI takes this into account.
Users submit a request and then continue onto whatever is the next thing they intend to do and then they can subscribe to various async notification channels.
It’s not the right choice for everything, but it’s the right choice for some things.
1. Yes, this is true, but Rails now comes with nice built-in support for async UIs, pushing updates to the browser via Hotwire and Turbo.
You’d need something like that anyway anytime you’re calling an external service you don’t control.
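As a concrete sketch of that Rails support, assuming turbo-rails is installed (the model, callback choice, and stream name are made up for illustration):

```ruby
# app/models/report.rb -- assumes turbo-rails; names are illustrative
class Report < ApplicationRecord
  # When the background job updates the record, push the refreshed
  # partial to any browser subscribed to this stream.
  after_update_commit { broadcast_replace_to "reports" }
end

# In the view, subscribe with:
#   <%= turbo_stream_from "reports" %>
```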
2. Again, this is a good point, but even running every request synchronously you still need good error logging, because you don’t want to share the details of an error with your frontend.
With background jobs you definitely need to be on top of monitoring and retry logic. I also think you need to be very careful about idempotency.
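A minimal sketch of the idempotency half, with an in-memory Set standing in for a unique DB constraint or idempotency-key table (all names illustrative):

```ruby
# A retried (or double-enqueued) job must not repeat a side effect
# such as charging a card, so it guards itself with a key.
require "set"

PROCESSED = Set.new
$charges = 0

def charge_once(idempotency_key)
  return :skipped unless PROCESSED.add?(idempotency_key)  # nil if seen before
  $charges += 1    # the side effect runs at most once per key
  :charged
end

charge_once("order-42")  # => :charged
charge_once("order-42")  # retry: => :skipped
$charges                 # => 1
```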
I see that as the engineering trade offs for that pattern. There’s very little in the way of silver bullets in engineering; different solutions just come with different trade offs.
Anything running in sidekiq is written in Ruby too.
But if you are just smoothing out some work, it's pretty normal; just make sure you are modeling things instead of putting them in the magic queue.
IMO - there are a lot of things that queues are an excellent answer to. Potentially including performance.
But - queues (generally and among other things) solve the problem of “this will take some time AND the user doesn’t need an immediate response.”
If that’s not your problem, then queues might not be the solution. If it’s something that’s taking too long and the user DOES need a response, then (as you say) optimizing is what you should try, not queues. Or some product redesign so the user doesn’t need an immediate response. Or finding a way to split the part producing an immediate response from the part that takes a while.
For example: validating that uploaded bulk data is in the right “shape”, and then enqueuing the full validation and insertion.
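That split can be sketched in a few lines: the header check runs synchronously in the request, and the enqueue stands in for a `perform_async` call (required headers, names, and return values are illustrative):

```ruby
# Fast shape check now, slow validation/insertion later.
REQUIRED_HEADERS = %w[email name].freeze
$enqueued = []

def accept_upload(csv_text)
  headers = csv_text.lines.first.to_s.strip.split(",")
  missing = REQUIRED_HEADERS - headers
  return [:rejected, missing] unless missing.empty?
  $enqueued << csv_text   # hand the heavy work to the queue
  [:accepted, []]
end

accept_upload("email,name\na@b.com,Ann\n")  # => [:accepted, []]
accept_upload("name\nAnn\n")                # => [:rejected, ["email"]]
```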
Also, really avoid jobs that enqueue jobs. Sometimes they’re necessary (spacing out some operation on chunks of a group, or a job that ONLY spawns other jobs), but mostly they’re a route to spaghetti.
The parent indicated the cross region dynamic required extra routing logic and introduced debugging problems.
MRI (CRuby) has a GVL, which is why you might use a web server that forks worker processes, like Puma (in cluster mode) or Pitchfork.
JRuby and TruffleRuby though have true multi-threading and no GVL.
I’ve used the Concurrent Ruby library with JRuby and Tomcat quite a bit and find it works very well for what I need.
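A plain-stdlib sketch of the fan-out pattern those tools wrap; on JRuby or TruffleRuby these threads run truly in parallel, and concurrent-ruby layers thread pools, futures, and promises on top of primitives like this:

```ruby
# Fan work out across threads and join on the results.
inputs = [1, 2, 3, 4]

threads = inputs.map do |n|
  Thread.new { n * n }   # each unit of work gets its own thread
end

RESULTS = threads.map(&:value)  # block until every thread finishes
RESULTS  # => [1, 4, 9, 16]
```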
Regardless, in my experience, when you run into scenarios that need queueing, multi-threading, etc., you need to know what you’re doing.
If we were to take Byzantine in a more accurate, historical sense, something truly “Byzantine” should be evolving, enduring, top-tier, and built to last for 1k years.
Also, you will see that the answer to most actual performance problems tends to be queues, even in other languages. At least in mature places, mostly because it is possible to inspect what a queue is doing. Though it will of course be a problem if it is part of a big spaghetti architecture.
So we're only left with personal opinion.
You're not going to misuse an API that takes a Person or a Cart, but mixing up two hashes because you used two different strings as keys can happen easily.
(I do think dynamic typing is mostly fine, but I do wish Ruby had optional static typing with some nice syntax instead of RBS.)
This is more or less trivial to catch and fix; I'm just not sure a type system is worth its weight for that kind of case.
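The failure mode in question, next to a small value object that makes the same typo impossible to make silently (names are illustrative):

```ruby
# Two hashes holding the same data under different key conventions
# quietly produce nil; a Struct raises on a typo'd accessor instead.
person_a = { "first_name" => "Ada" }
person_b = { "firstName"  => "Ada" }   # same data, different convention

person_a["first_name"]          # => "Ada"
MIXUP = person_b["first_name"]  # => nil -- no error, just missing data

Person = Struct.new(:first_name, keyword_init: true)
ADA = Person.new(first_name: "Ada")
ADA.first_name   # => "Ada"
# ADA.firstName  # would raise NoMethodError, surfacing the mistake
```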
I’m really excited about Sorbet getting behind the new RBS-inline comment syntax and the prospect of both runtime and static analysis as optional tools when needed.
Plus, Ruby has lots of easy ways to check types at runtime, if you want to.
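For reference, this is roughly what the RBS-inline comment syntax looks like (per the rbs-inline tooling; the `#:` line is the static annotation, and since it's a comment the file runs as ordinary Ruby). The runtime guard below is a separate, plain-Ruby check of the kind the comment above means:

```ruby
# rbs_inline: enabled

# Static annotation: checked by tooling, invisible at runtime.
#: (Integer, Integer) -> Integer
def add(a, b)
  # Runtime check: one of the "easy ways" Ruby already offers.
  raise TypeError, "expected Integers" unless a.is_a?(Integer) && b.is_a?(Integer)
  a + b
end

add(2, 3)  # => 5
```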