But today, every streaming system (or workaround) with per-message-key acknowledgements incurs O(n^2) cost in either computation, bandwidth, or storage for n messages. This applies to Pulsar, for example, which is often chosen for this feature.
Now, now, this degenerate time/space complexity might not show up every day, but when it does, you’re toast, and you have to wait it out.
My colleagues and I have studied this problem in depth for years, and our conclusion is that a fundamental architectural change is needed to support scalable per-message-key acknowledgements. Furthermore, the architecture will fundamentally require a sorted index, meaning that any such queuing / streaming system will process n messages in O(n log n).
We’ve wanted to blog about this for a while, but never found the time. I hope this comment helps out if you’re thinking of relying on per message key acknowledgments; you should expect sporadic outages / delays.
It processes unrelated keys in parallel within a partition. It has to track which offsets have been processed between the last committed offset of the partition and the tip (i.e. only what's currently processed out of order). When it commits, it saves this state, highly compressed, in the commit metadata.
Most of the time, it was only processing a small number of records out of order, so this bookkeeping was insignificant, but if one key got stuck, it would grow to at least 100,000 offsets ahead, at which point enough alarms would go off that we would do something. That's definitely a huge improvement over head-of-line blocking.
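For anyone curious what that bookkeeping looks like, here's a minimal sketch (my own toy version, not the parallel consumer's actual code): remember which offsets above the committed watermark have completed, and advance the watermark once the gap closes.

    # Sketch of out-of-order offset tracking (hypothetical class, not the
    # real implementation): track completed offsets above the committed
    # watermark and advance the watermark when the gap closes.
    class OffsetTracker:
        def __init__(self, committed=-1):
            self.committed = committed          # highest offset safe to commit
            self.done = set()                   # completed offsets above it

        def mark_done(self, offset):
            self.done.add(offset)
            # advance over any contiguous run of completed offsets
            while self.committed + 1 in self.done:
                self.committed += 1
                self.done.remove(self.committed)

        def lag(self):
            # how many offsets are tracked out of order; this is what blows
            # up when one key gets stuck ~100,000 offsets behind the tip
            return len(self.done)

    tracker = OffsetTracker()
    for off in [0, 2, 3, 5, 1]:                 # offset 4 is "stuck"
        tracker.mark_done(off)
    print(tracker.committed, tracker.lag())     # 3 1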
Yup, this is one more example, just like Pulsar. There are definitely great optimizations to be made on the average case. In the case of parallel consumer, if you'd like to keep ordering guarantees, you retain O(n^2) processing time in the worst case.
The issues arise when you try to traverse arbitrary dependency topologies in your messages. So you're left with two options:
1. Make damn sure that causal dependencies don't exhibit O(n^2) behavior, which requires formal models to be 100% sure.
2. Give up ordering or make some other nasty tradeoff.
At a high level the problem boils down to traversing a DAG in topological order. From computer science theory, we know that this requires a sorted index. And if you're implementing an index on top of Kafka, you might as well embed your data into and consume directly from the index. Of course, this is easier said than done, and that's why no one has cracked this problem yet. We were going to try, but alas we pivoted :)
Edit: Topological sort does not require a sorted index (or similar) if you don't care about concurrency. But then you've lost the advantages of your queue.
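To make the claim concrete, here's a minimal sketch (my own illustration, message names invented) of consuming a causality DAG in topological order: the pool of "ready" messages that concurrent workers pull from is a sorted/priority structure, which is where the O(n log n) comes from.

    import heapq
    from collections import defaultdict

    def consume_in_causal_order(deps):
        """deps maps message -> list of messages it causally depends on."""
        dependents = defaultdict(list)
        indegree = {m: 0 for m in deps}
        for m, parents in deps.items():
            for p in parents:
                dependents[p].append(m)
                indegree[m] += 1
        ready = [m for m, d in indegree.items() if d == 0]
        heapq.heapify(ready)                      # the "sorted index" of ready work
        order = []
        while ready:
            m = heapq.heappop(ready)              # a worker picks up a ready message
            order.append(m)
            for child in dependents[m]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    heapq.heappush(ready, child)  # consumable only now
        return order

    print(consume_in_causal_order(
        {"A1": [], "A2": ["A1"], "B1": ["A1"], "B2": ["B1"], "C1": ["B1"]}))
    # ['A1', 'A2', 'B1', 'B2', 'C1']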
Is there another way to state this? It’s very difficult for me to grok.
> DAG
Directed acyclic graph right?
A graphical representation might be worth a thousand words, keeping in mind it's just one example. Imagine you're traversing the following.
A1 -> A2 -> A3...
|
v
B1 -> B2 -> B3...
|
v
C1 -> C2 -> C3...
|
v
D1 -> D2 -> D3...
|
v
E1 -> E2 -> E3...
|
v
F1 -> F2 -> F3...
|
v
...
Efficient concurrent consumption of these messages (while respecting causal dependency) would take O(w + h), where w = the _width_ (left to right) of the longest sequence, and h = the _height_ (top to bottom) of the first column.
But Pulsar, Kafka + parallel consumer, et al. would take O(n^2) either in processing time or in space complexity. This is because, at a fundamental level, the underlying data storage looks like this:
A1 -> A2 -> A3...
B1 -> B2 -> B3...
C1 -> C2 -> C3...
D1 -> D2 -> D3...
E1 -> E2 -> E3...
F1 -> F2 -> F3...
Notice that the underlying data storage loses information about nodes with multiple children (e.g., A1 previously parented both A2 and B1)
If we want to respect order, the consumer is responsible for declining to process messages that don't respect causal order, e.g., attempting to process F1 before E1. Thus we could get into a situation where we try to process F1, then E1, then D1, then C1, then B1, then A1. Now that A1 is processed, Kafka tries again, but it tries F1, then E1, then D1, then C1, then B1... and so on and so forth. This is O(n^2) behavior (a toy simulation follows the options below).
Without changing the underlying data storage architecture, you will either:
1. Incur O(n^2) space or time complexity
2. Reimplement the queuing mechanism at the consumer level, but then you might as well not use Kafka (or others) at all. In practice this doesn't happen (my evidence being that no one has pulled it off).
3. Face other nasty issues (e.g., in Kafka parallel consumer you can run out of memory or your processing time can become O(n^2)).
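Here's the toy simulation mentioned above (my own illustration, not any broker's real behavior): the storage only knows per-key order, so the consumer keeps re-offering the head of every key's stream and declines anything whose causal parent hasn't been processed yet.

    def consume_flattened(streams, parent_of):
        """streams: {key: [msg, ...]} in per-key order; parent_of: msg -> causal parent."""
        processed, attempts = set(), 0
        heads = {k: 0 for k in streams}
        while any(heads[k] < len(streams[k]) for k in streams):
            progressed = False
            for k in streams:                          # one sweep over all key streams
                if heads[k] >= len(streams[k]):
                    continue
                msg = streams[k][heads[k]]
                attempts += 1
                parent = parent_of.get(msg)
                if parent is None or parent in processed:
                    processed.add(msg)
                    heads[k] += 1
                    progressed = True
                # else: decline, and retry this head on the next sweep
            if not progressed:
                raise RuntimeError("stuck: unsatisfiable dependency")
        return attempts

    # F1 depends on E1, E1 on D1, ... so each sweep unblocks only one key:
    streams = {k: [k + "1"] for k in "FEDCBA"}         # consumer happens to sweep F first
    parent_of = {"B1": "A1", "C1": "B1", "D1": "C1", "E1": "D1", "F1": "E1"}
    print(consume_flattened(streams, parent_of))       # 21 attempts for 6 messages, ~n^2/2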
If you don't mind another followup (and your patience with my ignorance hasn't run out :P), wouldn't the efficient concurrent consumption imply knowing the dependency graph before the events are processed? IE, is it possible in any instance to get to O(w+h) in a stream?
Yes, order needs to be known.
So no, it’s not possible to do O(w+h) with streams partitioned by key. Unless, of course you use a supplementary index, but then you might as well not use the streams storage at all and store the records in the same storage as the index.
It’s worth noting that Pulsar does something like this (supplementary way to keep track of acknowledged messages), but their implementation has O(n^2) edge cases.
Let's imagine ourselves as a couple of engineers at Acme Foreign Exchange House. We'd like to track Acme's net cash position across multiple currencies, and execute trades accordingly (e.g., hedging). And we'd like to retrospectively analyze our hedges, to assess their effectiveness.
Let's say I have this set of transactions (for accounts A, B, C, D, E, F, etc.)
A1 -> A2 -> A3 -> A4
B1 -> B2 -> B3 -> B4
C1 -> C2
D1 -> D2 -> D3 -> D4
E1 -> E2
F1
Let's say that:
- E1 was a deposit made into account E for $2M USD.
- E2 was an outgoing transfer of $2M USD sent to account F (incoming £1.7M GBP at F1).
If we consume our transactions and partition our consumption by account id, we could get into a state where E1 and F1 are reflected in our net position, but E2 isn't. That is, our calculation has both $2M USD and £1.7M GBP, when in reality we only ever held either $2M USD or £1.7M GBP.
So what could we do?
1. Make sure that we respect causality order. I.e., there's no F1 reflected in our net position if we haven't processed E2.
2. Make sure that pairs of transactions (e.g., E2 and F1) update our net position atomically.
This is otherwise known as a "consistent cut" (see slide 25 here https://www.cs.cornell.edu/courses/cs6410/2011fa/lectures/19...).
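A minimal sketch of option 2 (my own illustration; the event shape and the transfer_id field are invented): buffer one leg of a cross-account transfer until the matching leg arrives, so the net position never reflects F1 without E2.

    # Apply the two legs of a cross-account transfer atomically.
    positions = {"USD": 0, "GBP": 0}
    pending_legs = {}                           # transfer_id -> first leg seen

    def apply(event):
        positions[event["ccy"]] += event["amount"]

    def on_event(event):
        tid = event.get("transfer_id")
        if tid is None:                         # ordinary deposit/withdrawal
            apply(event)
            return
        other = pending_legs.pop(tid, None)
        if other is None:
            pending_legs[tid] = event           # buffer until the matching leg arrives
        else:
            apply(other)                        # both legs known: apply atomically
            apply(event)

    on_event({"id": "E1", "ccy": "USD", "amount": 2_000_000})
    on_event({"id": "F1", "ccy": "GBP", "amount": 1_700_000, "transfer_id": "T1"})
    print(positions)                            # F1 held back: {'USD': 2000000, 'GBP': 0}
    on_event({"id": "E2", "ccy": "USD", "amount": -2_000_000, "transfer_id": "T1"})
    print(positions)                            # {'USD': 0, 'GBP': 1700000}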
Opinion: the world is causally ordered in arbitrary ways as above. But the tools, frameworks, and infrastructure more readily available to us struggle at modeling arbitrary partially ordered causality graphs. So we shrug our shoulders, and we learn to live with the edge cases. But it doesn't have to be so.
I'd rather debug a worker problem than an infra scaling problem every day of the week and twice on Sundays.
The parallel consumer nearly entirely solved this problem. Only the most egregious cases where keys were ~3000 times slower than other keys would cause an issue, and then you could solve it by disabling that key for a while.
I tend to prefer other queueing mechanisms in those cases, although I still work hard to make 99ths and medians align as it can still cause issues (especially for monitoring)
Would using a sorted index have an impact on the measured servicing time of each message? (Not worst-case, something more like average-case.) It's made extremely clear in the Kafka docs that Kafka relies heavily on the operating system's filesystem cache for performance, and that seeking through events on disk is very slow compared to just processing events in order.
Kafka's speed advantage comes from two things:
1. Sequential IO when reading from disk.
2. Use of the disk cache (instead of reading from disk) when re-reading recently read events.
#2 helps when you have many consumer groups reading from the tail. And this advantage would extend to index-based streaming.
But #1 would not fully extend to index-based streaming.
When does this matter? When adding a new consumer group you would lose the speed advantage of sequential IO, because it consumes from the beginning (which isn’t in disk cache).
BUT this has become less important now that SSDs are so prevalent and affordable. Additionally, in practice, the bottleneck isn’t in disk IO. Consumers tend to perform IO in other systems that incur O(log n) per insert. Or network cards can get saturated way before disk IO is the limiting factor.
I speculate that we got Kafka et al. because we didn't have such an abundance of SSDs in the early 2010s.
So, returning to your question, you wouldn’t notice the difference in the average case, as long as there are SSDs under the hood.
I'm guessing this is mostly around how backed up the stream is. n isn't the total number of messages but rather the current number of unacked messages.
Would a radix structure work better here? If you throw something like a UUID7 on the messages and store them in a radix structure you should be able to get O(n) performance here correct? Or am I not understanding the problem well.
And yes, O(n log n) is not bad at all. Sorted database indexes (whether SQL, NoSQL, AcmeVendorSQL, etc.) already take O(n log n) to insert n elements into data storage or to read n elements from it.
Not to mention you then also have a KV store. Most problems can be solved with Redis + Postgres.
https://docs.nats.io/nats-concepts/jetstream/key-value-store
I had to really dig (outside of that website) to understand even what NATS is and/or does
It goes too hard on the keyword babbling and too little on the "what does this actually do"
> Services can live anywhere and are easily discoverable - decentralized, zerotrust security
Ok cool, this tells me absolutely nothing. What service? Who to whom? Discovering what?
NATS is mainly two things:
1. Core NATS, an ephemeral message broker. It's a lightweight pub/sub system where routing of messages is based on wildcard paths. All in memory, lightning fast, extremely lightweight. You can use it for RPC, queues, broadcasting, file transfer, anything.
2. JetStream, which is a Kafka/Pulsar-like log built on top of Core NATS. Streams are indexed, meaning there's no strong need for partitioning or consumer groups, since readers can efficiently filter the stream by interest (you can still partition for write performance). Supports both durable and ephemeral consumers, in-memory streams, ack/nack, priority groups, deduplication, exactly-once delivery, hierarchical clusters, offline clusters ("leaf clusters"), mirroring, etc.
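A rough sketch of both layers with the nats-py client (hedged: subject and stream names are made up here, and you should check the current client docs for exact signatures):

    import asyncio
    import nats

    async def main():
        nc = await nats.connect("nats://localhost:4222")

        # 1. Core NATS: ephemeral, wildcard-routed pub/sub.
        async def handler(msg):
            print("core:", msg.subject, msg.data)
        await nc.subscribe("orders.*", cb=handler)
        await nc.publish("orders.created", b"order 42")

        # 2. JetStream: persistent, indexed stream on top of the same subjects.
        js = nc.jetstream()
        await js.add_stream(name="ORDERS", subjects=["orders.>"])
        await js.publish("orders.created", b"order 43")
        sub = await js.subscribe("orders.>", durable="billing")
        msg = await sub.next_msg(timeout=2)
        await msg.ack()
        print("jetstream:", msg.subject, msg.data)

        await nc.drain()

    asyncio.run(main())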
I often find it difficult to explain the magic of NATS. It's a communication model that doesn't really exist anywhere else, as far as I've seen. The closest might be ZeroMQ.
JetStream could be explained as "Kafka for people who don't want to administer Kafka". It's very low-maintenance, very easy to use, feels super lightweight, and still offers much of the performance and reliability of Kafka, as well as a much richer feature set that maps better to what people may want from a streaming log.
Still, if a hypothetical new Kafka were to incorporate some of NATS's features, that would be a good thing.
We use it for streaming tick data, system events, order events, etc., into kdb. We write to Kafka and forget. The messages are persisted, and we don't have to worry if kdb has an issue. Out-of-band consumers read from the topics and persist to kdb.
In several years of doing this we haven't really had any major issues. It does the job we want. Of course, we use the aws managed service, so that simplifies quite a few things.
I read all the hate comments and wonder what we're missing.
Kafka is perhaps the most aptly named software I've ever used.
That said, it's rock solid and I continue to recommend it for cases where it makes sense.
Like the article outlines, partitions are not that useful for most people. Instead of removing them, how about having them behind a feature flag, i.e. not on by default? That would ease 99% of users' problems.
The next point in the article that resonates with me is the lack of proper schema support. That's just bad UX again, not inherent complexity of the problem space.
On the testing side, why do I need to spin up a Kafka testcontainer? Why is there no in-memory Kafka server that I can use for simple testing purposes?
Take a look at Debezium's KafkaCluster, which is exactly that: https://github.com/debezium/debezium/blob/main/debezium-core....
It's used within Debezium's test suite. Check out the test for this class itself to see how it's being used: https://github.com/debezium/debezium/blob/main/debezium-core...
If you take action a, then action b, your system will throw 500s fairly regularly between those two steps, leaving your user in an inconsistent state. (a = pay money, b = receive item). Re-ordering the steps will just make it break differently.
If you stick both actions into a single event ({userid} paid {money} for {item}) then "two things" has just become "one thing" in your system. The user either paid money for item, or didn't. Your warehouse team can read this list of events to figure out which items to ship, and your payments team can read this list of events to figure out users' balances and owed taxes.
(You could do the one-thing-instead-of-two-things using a DB instead of Kafka, but then you have to invent some kind of pub-sub so that callers know when to check for new events.)
Also it's silly waiting around to see exceptions build up in your dev logs, or for angry customers to reach out via support tickets. When your implementation depends on publishing literal events of what happened, you can spin up side-cars which verify properties of your system in (soft) real-time. One side-car could just read all the ({userid} paid {money} for {item}) events and ({item} has been shipped) events. It's a few lines of code to match those together and all of a sudden you have a monitor of "Whose items haven't been shipped?". Then you can debug-in-bulk (before the customers get angry and reach out) rather than scour the developer logs for individual userIds to try to piece together what happened.
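A few lines in the spirit of that side-car (event shapes invented for the sketch): join the payment and shipment streams and report which paid-for items haven't shipped.

    paid = [
        {"user": "u1", "item": "book", "money": 20},
        {"user": "u2", "item": "lamp", "money": 35},
    ]
    shipped = [
        {"item": "book", "user": "u1"},
    ]

    def unshipped(paid_events, shipped_events):
        # match paid events against shipped events by (user, item)
        shipped_keys = {(e["user"], e["item"]) for e in shipped_events}
        return [e for e in paid_events if (e["user"], e["item"]) not in shipped_keys]

    print(unshipped(paid, shipped))   # [{'user': 'u2', 'item': 'lamp', 'money': 35}]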
Also, read this thread https://news.ycombinator.com/item?id=43776967 from a day ago, and compare this approach to what's going on in there, with audit trails, soft-deletes and updated_at fields.
Confluent has rather good marketing and when you need messaging but can also gain a persistent, super scalable data store and more, why not use that instead? The obvious answer is: Because there is no one-size-fits-all-solution with no drawbacks.
It took 4 years to properly integrate Kafka into our pipelines. Everything, like everything is complicated with it: cluster management, numerous semi-tested configurations, etc.
My final conclusion with it is that the project just doesn't really know what it wants to be. Instead it tries to provide everything for everybody, and ends up being an unbelievably complicated mess.
You know, there are systems that know what they want to be (Amazon S3, Postgres, etc), and then there are systems that try to eat the world (Kafka, k8s, systemd).
It's a distributed log? What else is it trying to do?
I am not sure about this taxonomy. K8s, systemd, and (I would add) the Linux kernel are all taking on the ambitious task of central, automatic orchestration of general purpose computing systems. It's an extremely complex problem and I think all those technologies have done a reasonably good job of choosing the right abstractions to break down that (ever-changing) mess.
People tend to criticize projects with huge scope because they are obviously complex, and complexity is the enemy, but most of the complexity is necessary in these cases.
If Kafka's goal is to be a general purpose "operating system" for generic data systems, then that explains its complexity. But it's less obvious to me that this premise is a good one.
its real goal is to make Linux administration as useless as Windows so RH can sell certifications.
tell me the output of systemctl is not as awful as opening the Windows service panel.
But both are kind of hard to understand end-to-end, especially for an occasional user.
systemd's whole premise is "people will not read the distro or bash scripting manual"...
then nobody reads systemd's manual either (and you have even less reason to, since it's badly written, ever changing in conflicting ways, and a single-use tool)
so you went from complaining that your coworkers can't write bash to complaining that they don't know they have to write EXEC= followed by EXEC=/bin/x
because random values, without any hint, are lists of commands instead of string values.
Granted it doesn't happen often, if you plan correctly, but the possibility of going wrong in the partitioning and replication makes updates and upgrades nightmare fuel.
Then we settled into an era where server rooms grew, workloads demanded horizontal scaling, and for the high-profile users running an odd number of processes was a rounding error, so we just stopped doing it.
But we also see this issue re-emerge with dev sandboxes. Running three copies of Kafka, Redis, Consul, Mongo, or god forbid all four, is just a lot for one laptop, and 50% more EC2 instances if you spin it up in the Cloud, one cluster per dev.
I don’t know much Kafka, so I’ll stick with Consul as a mental exercise. If you take something like consul, the voting logic should be pretty well contained. It’s the logic for catching up a restarted node and serving the data that’s the complex part.
There was a time in my early to mid career when I had to defend my designs a lot because people thought my solutions were shallower than they were and didn’t understand that the “quirks” were covering unhappy paths. They were often load bearing, 80/20 artifacts.
I get it, there are lots of knobs and dials I can adjust to tune the cluster. A one-line description for each item is often insufficient to figure out what the item is doing. You can get a sense for the problem eventually if you spin up a local environment and one-by-one go through each item to see what it does, but that's super time consuming.
Really? I got scared by Kafka by just reading through the documentation.
Just don't use Kafka.
Write to the downstream datastore directly. Then you know your data is committed and you have a database to query.
This reminds me of the OOP vs DOD debate again. OOP adherents say they don't know all the types of data their code operates on; DOD adherents say they actually do, since their program contains a finite number of classes, and a finite subset of those classes can be the ones called in this particular virtual function call.
What you mean is that your system is structured in such a way that it's as if you don't know who's listening. Which is okay, but you should be explicit that it's a design choice, not a law of physics, so when that design choice no longer serves you well, you have the right to change it.
(Sometimes you really don't know, because your code is a library or has a plugin system. In such cases, this doesn't apply.)
> Arguably, I'd not use Kafka to store actual data, just to notify in-flight.
I believe people did this initially and then discovered the non-Kafka copy of the data is redundant, so got rid of it, or relegated it to the status of a cache. This type of design is called Event Sourcing.
So yes, in large companies, your development team is just a small cog, you don't set policy for what happens to the data you gather. And in some sectors, like finances, you are an especially small cog with little power, which might sound strange if you only ever worked for a software startup.
Disclosure: I work part time for Oracle Labs and know about these features because I'm using them in a project at the moment.
The messaging tech being separate from the database tech means the architects can swap out the database if needed in the future without needing to rewrite the producers and consumers.
In the plural, accomplishing that task in a performant way at enterprise scale seems to involve turning every function call into an asynchronous, queued service of some sort.
Which then begets additional deployment and monitoring services.
A queued problem requires a cute solution, bringing acute pain.
There may be many things that go wrong and how you handle this depends on your data guarantees and consistency requirements.
If you're not queuing what are you doing when a write fails, throwing away the data?
The standard regex joke really works for all values of X. Some people, when confronted with a problem, think "I know - I'll use a queue!" Now they have two problems.
Adding a queue to the system does not make it faster or more reliable. It makes it more asynchronous (because of the queue), slower (because the computer has to do more stuff) and less reliable (because there are more "moving parts"). It's possible that a queue is a required component of a design which is more reliable and faster, but this can only be known about the specific design, not the general case.
I'd start by asking why your data store is randomly failing to write data. I've never encountered Postgres randomly failing to write data. There are certainly conditions that can cause Postgres to fail to write data, but they aren't random and most of them are bad enough to require an on-call engineer to come and fix the system anyway - if the problem could be resolved automatically, it would have been.
If you want to be resilient to events like the database disk being full (maybe it's a separate analytics database that's less important from the main transactional database) then adding a queue (on a separate disk) can make sense, so you can continue having accurate analytics after upgrading the analytics database storage. In this case you're using the queue to create a fault isolation boundary. It just kicks the can down the road though, since if the analytics queue storage fills up, you still have to either drop the analytics or fail the client's request. You have the same problem but now for the queue. Again, it could be a reasonable design, but not by default and you'd have to evaluate the whole design to see whether it's reasonable.
The reason message queue systems exist is scale. Good luck sending a notification at 9am to your 3 million users and keeping your database alive in the sudden influx of activity. You need to queue that load.
It's just that the table being written to is essentially append-only (excepting the cleanup processes it supports).
The backing db in this wishlist would be something in the vein of Aurora to achieve the storage compute split.
I feel like Kafka is a victim of its own success: it's excellent for what it was designed for, but since the design is simple and elegant, people have been using it for all sorts of things for which it was not designed. And well, of course it's not perfect for those use cases.
Engineering solutions which only exist because AWS pricing is whack are...well, certainly a choice.
I can also think of lots of cases where whatever you're running is fine to just run in a single AZ since it's not critical.
Even if this were to change, using object storage results in a lot of operational simplicity as well compared to managing a bunch of disks. You can easily and quickly scale to zero or scale up to handle bursts in traffic.
An architecture like this also makes it possible to achieve a truly active-active multi-region Kafka cluster that has real SLAs.
See: https://buf.build/blog/bufstream-multi-region
(disclosure: I work at Buf)
Kafka is misused for some weird stuff. I've seen it used as a user database, which makes absolutely no sense. I've also seen it used as a "key/value" store, which I can't imagine being efficient as you'd have to scan the entire log.
Part of it seems to stem from "We need somewhere to store X. We already have Kafka, and requesting a database or key/value store is just a bit too much work, so let's stuff it into Kafka".
I had a client ask for a Kafka cluster, when queried about what they'd need it for we got "We don't know yet". Well that's going to make it a bit hard to dimension and tune it correctly. Everyone else used Kafka, so they wanted to use it too.
It will become slower. It will become costlier (to maintain). And we will end up with local replicas for performance.
If only people looked outside the AWS bubble and realised they are SEVERELY overcharged for storage, this would be a moot point.
I would welcome getting partitions dropped in favour of multi-tenancy ... but for my use cases this is often equivalent.
Storage is not the problem though.
Kafka is simple and elegant?
What if we wrote it in Rust and leveraged WASM?
We have been at it for the past 6 years. https://github.com/infinyon/fluvio
For the past 2 years we have also been building a Flink-style engine using Rust and WASM. https://github.com/infinyon/stateful-dataflow-examples/
Any chance you’re going to be reviving support for the Kafka wire protocol?
How would you say your project compares to Arroyo?
Arroyo is SQL first stream processing. Fluvio is streaming transport which can send data to Arroyo and there is an integration.
Stateful DataFlow and Arroyo are similar in the stream processing pattern and the use of Apache Arrow.
The interfaces are different. Fluvio and Stateful DataFlow support the same SQL dialect as the columnar SQL supported by Polars. The Fluvio and Stateful DataFlow paradigm is more intricate and more expressive, and the platform is broader and deeper.
As such, you can no longer use existing software that is built on Kafka as-is. It may not be a grave concern for LinkedIn, but it could be for others that currently benefit from using the existing Kafka ecosystem.
As such, even with Xinfra deployed, you have to rewrite all the software that connects to Kafka, regardless of programming language.
> "Key-level streams (... of events)"
When you are leaning on the storage backend for physical partitioning (as per the cloud example, where they would literally partition based on keys), doesn't this effectively just boil down to renaming partitions to keys, and keys to events?
That depends on how you are using partitions. A partition per topic is effectively going to give you exactly that. What you call a key is then just a topic, and hierarchy (including multitenancy and other forms of namespacing) can be implemented via topic naming convention. This isn't even a novel way to use kafka - its quite a common approach in practice.
Obviously this then comes at the cost of throughput - which is exactly why systems that use these approaches are often much slower than partitioned topics in Kafka. Even in your object store example there needs to be synchronization across the storage partitions, and that overhead will give you substantially reduced throughput - as you are effectively using a distributed lock for each write to the complete quorum.
I skipped learning Kafka, and jumped right into Pulsar. It works great for our use case. No complaints. But I wonder why so few use it?
StreamNative seems like an excellent team, and I hope they succeed. But as another comment has written, something (Pulsar) being better (than Kafka) has to either be adopted from the start, or be a big enough improvement to justify a change - and as difficult and feature-poor as Kafka is, it still gets the job done.
I can rant longer about this topic but Pulsar _should_ be more popular, but unfortunately Confluent has dominated here and rent-seeking this field into the ground.
Just because something is 10-30% better in certain cases almost never warrants its adoption, if on the other side you get much less human expertise, documentation/resources and battle tested testimonies.
This, imo, is the story of most Kafka competitors
If I need multiple independent consumers, I just instead publish to SNS FIFO and let my consumers create their own SQS fifo queues that are subscribed to the topic. The ordering is maintained across SNS and SQS. I also get native DLQ support for poison pills and an SQS consumer is dead simple to operate vs a Kafka consumer.
It does not solve all of the mentioned problems like being able to see what the keys are in the queue or lookup by a given key but as a messaging solution that offers ordering for a key, this is hard to beat.
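For reference, a hedged boto3 sketch of that pattern (ARNs/URLs are placeholders; the SQS FIFO queue is assumed to be subscribed to the SNS FIFO topic, with a redrive policy providing the DLQ):

    import boto3

    sns = boto3.client("sns")
    sqs = boto3.client("sqs")

    # Producer: MessageGroupId plays the role of the Kafka key, giving
    # per-key ordering across SNS FIFO and the subscribed SQS FIFO queues.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:orders.fifo",
        Message='{"order_id": 42, "status": "created"}',
        MessageGroupId="customer-7",
        MessageDeduplicationId="order-42-created",
    )

    # Consumer: plain SQS polling; poison pills fall through to the DLQ
    # configured on the queue's redrive policy.
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/billing.fifo"
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        print(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])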
Given an arbitrary causality graph between n messages, it would be ideal if you could consume your messages in topological order. And that you could do so in O(n log n).
No queuing system in the world does arbitrary causality graphs without O(n^2) costs. I dream of the day where this changes.
And because of this, we’ve adapted our message causality topologies to cope with the consuming mechanisms of Kafka et al
To make this less abstract, imagine you have two bank accounts, each with a stream. MoneyOut in Bob’s account should come BEFORE MoneyIn when he transfers to Alice’s account, despite each bank account having different partition keys.
Example Option 1
You give up on the guarantees across partition keys (bank accounts), and you accept that balances will not reflect a causally consistent state of the past.
E.g., Bob deposits 100, Bob sends 50 to Alice.
Balances over time:
Bob 0, Alice 50    # the source system was never in this state
Bob 100, Alice 50  # the source system was never in this state
Bob 50, Alice 50   # eventually consistent final state
Example Option 2
You give up on parallelism, and consume in total order (i.e., one single partition / unit of parallelism - e.g., in Kafka set a partitioner that always hashes to the same value).
Example Option 3
In the consumer you "wait" whenever you get a message that violates causal order.
E.g., Bob deposits 100; Bob sends 50 to Alice (Bob-MoneyOut 50 -> Alice-MoneyIn 50).
If we attempt to consume Alice-MoneyIn before Bob-MoneyOut, we exponentially back off from the partition containing Alice-MoneyIn.
(Option 3 is terrible because of O(n^2) processing times in the worst case and the possibility for deadlocks (two partitions are waiting for one another))
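If it helps, a toy version of option 3 (my own illustration): the consumer backs off a partition whenever the head message's causal parent hasn't been processed yet, which is exactly what degenerates into the scanning behavior above, and which can deadlock if two partitions wait on each other.

    import time

    def consume_with_backoff(partitions, parent_of, max_rounds=100):
        processed = set()
        heads = {p: 0 for p in partitions}
        backoff = {p: 0.001 for p in partitions}
        for _ in range(max_rounds):
            if all(heads[p] >= len(msgs) for p, msgs in partitions.items()):
                return processed
            for p, msgs in partitions.items():
                if heads[p] >= len(msgs):
                    continue
                msg = msgs[heads[p]]
                parent = parent_of.get(msg)
                if parent is None or parent in processed:
                    processed.add(msg)
                    heads[p] += 1
                    backoff[p] = 0.001
                else:
                    time.sleep(backoff[p])              # "wait" on this partition
                    backoff[p] = min(backoff[p] * 2, 0.1)
        raise RuntimeError("gave up: partitions may be waiting on each other (deadlock)")

    partitions = {"alice": ["Alice-MoneyIn-50"], "bob": ["Bob-Deposit-100", "Bob-MoneyOut-50"]}
    parent_of = {"Alice-MoneyIn-50": "Bob-MoneyOut-50"}
    print(consume_with_backoff(partitions, parent_of))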
We thought about building it ourselves, because we know the data structures, high level algorithms, and disk optimizations required. BUT we pivoted our company, so we've postponed this for the foreseeable future. After all, theory is relatively easy, but a true production grade implementation takes years.
Or is the problem that, again, this is O(n^2)? (Because the consumers now need to buffer [potentially] n key streams, and then search through them every time - so "n" times?)
This is exactly what I take from these kinds of articles: engineering just for the sake of engineering. I am not saying we should not investigate how to improve our engineered artifacts, or that we should not improve them. But I see a generalized lack of reflection on why we should do it, and I think it is related to a detachment from the domains we create software for. The article suggests uses of the technology that come from such different ways of using it that it loses coherence as a technical item.
It's a compiled binary - no JVM to manage. Java apps have always been a headache for me. Plus, no ZooKeeper - one less thing to break.
The biggest benefit I've seen is the performance. Redpanda just outperforms Apache Kafka on similar hardware. It's also Kafka-compliant in every way I've noticed, so all my favorite tools that interact with Kafka work the same with Redpanda.
Redpanda, like Kafka, writes to disk, so you'll always be limited by your hardware no matter what you use (but NVMe's are fast and affordable).
YMMV, but it's been a good experience for me.
IANAL, but it looks pretty open source to me.
Kafka's biggest strength is the wide and useful ecosystem built on top of it.
It is also a weakness, as we have to keep some (but not all) of the design decisions we wouldn't have made had we started from scratch today. Or we could drop backwards compatibility, at the cost of having to recreate the ecosystem we already have.
I’ve been working on a datastore that’s perfect for this [1], but I’m getting very little traction. Does anyone have any ideas why that is? Is my marketing just bad, or is this feature just not very useful after all?
Mature projects have too much bureaucracy, and even spending time talking to you = opportunity cost. So making a case for why you're going to solve a problem for them is tough.
New projects (whether at big companies or small companies) have 20 other things to worry about, so the problem isn't big enough.
I wrote about this in our blog if you're curious: https://ambar.cloud/blog/a-new-path-for-ambar
That's cool, but I would prefer not to reinvent the wheel. If you have a simple library, that would already be useful.
Some simple code or request examples would be convenient as well. I really don't know how easy or difficult your interface design is. It would be cool to see the API docs.
I'm also skeptical of the graph on your front page that claims S3 cost as much as DynamoDB.
that alone makes it look like total nonsense.
as someone else said, extraordinary claims require extraordinary evidence.
Keep the data in S3 for 0.023 USD per GB-month. If you have a billion keys that can be useful.
> I'm also skeptical of the graph on your front page that claims S3 cost as much as DynamoDB.
Good point. Could have put a bit more work into that.
On second thought, and after looking at my cost estimates, the reason DynamoDB ends up costing about the same as S3 for this kind of use case is storage costs. DynamoDB is a lot cheaper than S3 to write to, but 5-10x more expensive to keep data stored in. So after about 16-32 months you reach break even.
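For what it's worth, a back-of-envelope check of that break-even claim (the prices below are assumptions roughly matching published us-east-1 list prices; writing 1 GB as a million 1 KB items):

    s3_put_per_million = 5.00       # ~$0.005 per 1,000 PUTs
    ddb_write_per_million = 1.25    # ~$1.25 per million on-demand 1 KB writes
    s3_storage_gb_month = 0.023
    ddb_storage_gb_month = 0.25

    def cumulative_cost(write_cost, storage_per_month, months):
        return write_cost + storage_per_month * months

    for m in range(1, 37):
        if cumulative_cost(s3_put_per_million, s3_storage_gb_month, m) \
           <= cumulative_cost(ddb_write_per_million, ddb_storage_gb_month, m):
            print("break-even at month", m)   # ~17 with these assumptions
            break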
I distinctly recall running it at home as a global unioning filesystem where content had to fully fit on the specific device it was targeted at (rather than being striped or whatever).
But don’t public cloud providers already all have cloud-native event sourcing? If that’s what you need, just use that instead of Kafka.
Do any of you have experience with those and can you give pros/cons?
Redpanda most importantly is faster than Apache Kafka. We were able to get a lot more throughput. It's also stable, especially compared to dealing with anything that requires a JVM.
The problem with guaranteed order is that you have to have some agreed upon counter/clock for ordering, otherwise a slow write from one producer to S3 could result in consumers already passing that offset before it was written, thus the write is effectively lost unless the consumers wind-back.
Having partitions means we can assign a dedicated writer for that partition that guarantees that writes are in order. With s3-direct writing, you lose that benefit, even with a timestamp/offset oracle. You'd need some metadata system that can do serializable isolation to guarantee that segments are written (visible to consumers) in the order of their timestamps. Doing that transactional system directly against S3 would be super slow (and you still need either bounded-error clocks, or a timestamp oracle service).
>I cannot make you understand. I cannot make anyone understand what is happening inside me. I cannot even explain it to myself. -Franz Kafka, The Metamorphosis
The user gets global ordering when
1. you-the-MQ assign both messages and partitions stable + unique + ordered + user-exposed identifiers;
2. the user constructs a "globally-collatable ID" from the (perPartitionMsgSequenceNumber, partitionID) tuple;
3. the user does a client-side streaming merge-sort of messages received by the partitions, sorting by this collation ID. (Where even in an ACK-on-receive design, messages don't actually get ACKed until they exit the client-side per-partition sort buffer and enter the linearized stream.)
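Step 3 in code, as I understand it (a minimal client-side sketch, not any particular SDK): a streaming merge-sort of per-partition iterators by the (sequence, partition) collation ID, yielding one globally ordered stream.

    import heapq

    def globally_ordered(partition_streams):
        """partition_streams: {partition_id: iterable of (seq, payload)}."""
        heap, iters = [], {}
        for pid, it in partition_streams.items():
            iters[pid] = iter(it)
            first = next(iters[pid], None)
            if first is not None:
                seq, payload = first
                heap.append((seq, pid, payload))
        heapq.heapify(heap)
        while heap:
            seq, pid, payload = heapq.heappop(heap)   # only now does it exit the sort buffer
            yield (seq, pid, payload)
            nxt = next(iters[pid], None)
            if nxt is not None:
                heapq.heappush(heap, (nxt[0], pid, nxt[1]))

    streams = {
        0: [(1, "a"), (4, "d"), (5, "e")],
        1: [(2, "b"), (3, "c"), (6, "f")],
    }
    print([p for _, _, p in globally_ordered(streams)])   # ['a', 'b', 'c', 'd', 'e', 'f']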
The definition of "exposed to users" is a bit interesting here, as you might think you could do this merge-sort on the backend, just exposing a pre-linearized stream to the client.
But one of the key points/benefits of Kafka-like systems, under high throughput load (which is their domain of comparative advantage, and so should be assumed to be the deployed use-case), is that you can parallelize consumption cheaply, by just assigning your consumer-workers partitions of the topic to consume.
And this still works under global ordering, under some provisos:
• your workload can be structured as a map/reduce, and you don't need global ordering for the map step, only the reduce step;
• it's not impractical for you to materialize+embed the original intended input collation-ordering into the transform workers' output (because otherwise it will be lost in all but very specific situations.)
Plenty of systems fit these constraints, and happily rely on doing this kind of post-linearized map/reduce parallelized Kafka partition consumption.
And if you "hide" this on an API level, this parallelization becomes impossible.
Note, however, that "on an API level" bit. This is only a problem insofar as your system design is protocol-centered, with the expectation of "cheap, easy" third-party client implementations.
If your MQ is not just a backend, but also a fat client SDK library — then you can put the partition-collation into the fat client, and it will still end up being "transparent" to the user. (Save for the user possibly wondering why the client library opens O(K) TCP connections to the broker to consume certain topics under certain configurations.)
See also: why Google's Colossus has a fat client SDK library.
Feels like there is another squeeze in that idea if someone “just” took all their docs and replicated the feature set. But maybe that’s what S2 is already aiming at.
Wonder how long warpstream docs, marketing materials and useful blogs will stay up.
warpstream has latency issues, which downstream turn into cost issues
That said, they were at least "good enough" to make "buy" more appealing than "build"
I've seen these comments for over 15 years, yet for some "unknown", "silly" reason Java keeps being used for really, really useful software like Kafka.
I could provide examples myself, but I'm not convinced it's about Java vs C++ or Go: Hadoop, Cassandra, ZooKeeper.
Java hasn't focused on developing the standard libraries to keep up with the modern world. There are no standard HTTP server or API libraries. A lot of the "standard" libraries in software projects are from third-party sources like Apache, which obviously can't be trusted if they make decisions like the above.
Lombok is still heavily used, and it's almost a sin to write getters and setters manually instead of using it, but it works by hacking the AST, and the language maintainers never thought to include that as part of the language features.
Furthermore, the biggest use of java in android app development has shifted to use Kotlin instead, because Kotlin aims to address a lot of the pain points of Java. Yet language maintainers of Java don't seem to care.
So overall, if you look at the Java ecosystem, it's clear that the people working on it have no connection to use in the real world, and are taking the language in some direction that fits their convoluted ideas of what a language should be.
From a more functional perspective, in the modern age of the cloud, ARM chips, and cheap memory, developer cost outweighs infrastructure cost by orders of magnitude. If you write your service in Python for the MVP, you end up saving a shitload of time, through things like the massive number of third-party libraries, not having to wait for a compile step, being able to develop during a debugging session, much more flexibility in how you write code, and so on. Then as you move from MVP to a product, it's much easier to refactor the code with MyPy type checking, convert collections of functions into classes if need be, and so on. Finally, with PyPy running your code much faster than CPython, you will still on average be slower than Java, but not by much, and the added infra cost will be much less than a single developer salary per month.
Future of development is going to be Python -> optimized machine code. Its already possible to do this with LLMs with manual prompting.
An embedded interpreter and JIT in Rust, basically, but jostled around a bit to make it more cohesive and the data interop more fluid - PyO3 but backwards.
But I'm doubtful that it's going to make things simpler if one can't even decide on a language.
Step 4, Rewrite it in Zig.
Step 5, due to security issues rewrite it in Java 63.
I'm currently building a full workload scheduler/orchestrator. I'm sick of Kubernetes. The world needs better -> https://x.com/GeoffreyHuntley/status/1915677858867105862
Go on then. Post the repo when you have :)
Fast forward to 2025: there are many performant, efficient, and less complex alternatives to Kafka that save you money, instead of burning millions in operational costs "to scale".
Unless you are at a hundred-million-dollar-revenue company, choosing Kafka in 2025 doesn't make sense anymore.
Kafka shouldn't be used for low dataflow systems, true, but you can scale a long way with a simple 3 node cluster.