Just to mention a few headaches I've been dealing with over the years: multicast sockets that join the wrong network adapter interface (due to adapter priorities), losing multicast membership after resume from sleep/hibernate, switches/routers simply dropping multicast membership after a while (especially when running in VMs and on "enterprise" systems like SUSE Linux and Windows Server), all kinds of socket reuse problems, etc.
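The wrong-adapter join typically comes down to passing INADDR_ANY as the interface in the membership request, which lets the OS pick an adapter by its own priority rules. A minimal sketch of pinning the join to a specific interface (the group address, port, and interface address here are made-up placeholders):

```python
import socket
import struct

# Hypothetical addresses; the point is the second field of ip_mreq.
GROUP = "239.1.2.3"
PORT = 5000
IFACE = "127.0.0.1"  # address of the adapter we actually want to join on

def membership_request(group: str, iface: str) -> bytes:
    # struct ip_mreq: multicast group address + local interface address.
    # Packing a real interface address here (instead of INADDR_ANY) pins
    # the join to that adapter, so the OS stops picking one by priority.
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(GROUP, IFACE))
except OSError as exc:
    # Some VMs/containers reject multicast joins outright, which is
    # exactly the class of environment problem described above.
    print("join failed:", exc)
finally:
    sock.close()
```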
I don't even dare to think about how many hours I have wasted on the issues listed above. I would never rely on multicast again when developing a new system.
But that said, the application suite, a mission control system for satellites, works great most of the time (typically on small, controlled subnets, using physical installations instead of VMs) and has served us well.
But this was because the IT people put effort into making it work well. They knew we needed multicast, so they made sure multicast worked. I have no idea what that involved, but presumably it means buying switches that can handle multicast reliably, and then configuring them properly, and then doing whatever host-level hardware selection and configuration is required.
In a previous job, we tried to use multicast without having done any groundwork. Just opened sockets and started sending. It did not go so well - fine at first, but then packets started to go missing, and we spent days debugging and finding obscure errors in our firewall config. In the end we did get it working, but I wouldn't do it again. Multicast is a commitment, and we weren't ready to make it.
However, I had only semi-strict control of my network, and I used plenty of random routers for testing.
Aeron latency histograms vs TCP are quite nice in the same DC on enterprise-grade networking hardware. But it really only makes sense to use if a single-digit or low-double-digit microsecond latency improvement at P50 is worth the effort, or if the long tail with TCP is a dealbreaker, as Aeron has a much nicer P99+ regardless of how well optimized a TCP setup is. Also, being able to leverage multicast is nice, but it's not only clouds that have it disabled, and Aeron works fine with unicast to N.
However, there are gotchas with threading and configuration overall. A cross-DC setup may surprise in a bad way if buffers are not configured to account for the bandwidth-delay product. Any packet loss on a high-latency network leads to a nasty NACK storm that is slow to recover under load. It's better to set the highest QoS and ensure the network never drops packets, e.g. calculate the real peak instantaneous load vs hardware capacity. Relative latency savings cross-DC become less interesting the longer the distance, so there's nothing wrong with TCP there. Another note: ZMQ, for example, is slow not because of TCP but because of its internals, almost 2x slower for small packets than raw, well-tuned TCP sockets, which are not that bad vs Aeron. Also, Aeron is not for sending big blobs around; it's best used with small payloads.
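The bandwidth-delay product point can be made concrete: buffers must hold at least a full BDP of in-flight data or throughput collapses on high-latency links. A rough sketch (the 10 Gbps / 40 ms figures are purely illustrative):

```python
import socket

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> int:
    # Bandwidth-delay product: bytes that must be in flight (and hence
    # buffered) to keep a link of this bandwidth and RTT full.
    return int(bandwidth_bps / 8 * rtt_seconds)

# Illustrative cross-DC link: 10 Gbps with a 40 ms round trip.
bdp = bdp_bytes(10e9, 0.040)  # 50 MB must fit in the buffers

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Request BDP-sized receive buffers. The kernel silently clamps this to
# net.core.rmem_max, so that sysctl usually has to be raised as well.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()
```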
Aeron is designed with mechanical sympathy in mind by the guys who coined this term and have been evangelizing it for years, and it's visible. Lots to learn from the design & implementation (tons of resources on the web) even without using it in prod.
Another time I had a backup job using uftp (a multicast file xfer tool) and it was a similar story. Systems literally sitting one rack over couldn't talk.
We involved all of our CC*-certified guys, wasted a week, and eventually just used the explicit command line switches to configure the cluster.
The hardware is not up to the task, physical or virtual, as far as I can tell.
https://archive.fosdem.org/2023/schedule/event/om_virt/attac...
"...Cut through mode reduces switch latency at the risk of decreased reliability. Packet transmissions can begin immediately after the destination address is processed. Corrupted frames may be forwarded because packet transmissions begin before CRC bytes are received..."
https://www.arista.com/en/um-eos/eos-data-transfer?searchwor...
How did that happen? It seems multicast is already built in, so just use that for massive broadcast. Is TCP used just so we can get an ACK that it was received? Multicast and UDP shouldn't be a problem if we just want a massive audience to listen in, but if we also want to track those people, that's another story.
From a user perspective, use UDP/multicast all the way. Let the client request anything that is dropped or missing; otherwise just multicast everything.
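The "client requests what it is missing" model boils down to gap detection on sequence numbers. A toy sketch of the receiver side (not any particular protocol's wire format):

```python
class GapDetector:
    """Track sequence numbers on an unreliable feed and report gaps so
    the client can send a retransmit request (a NAK) back to the source."""

    def __init__(self) -> None:
        self.next_expected = 0
        self.missing = set()

    def on_packet(self, seq: int) -> list:
        """Process one arriving sequence number; return seqs to NAK."""
        if seq in self.missing:          # a retransmission arrived
            self.missing.discard(seq)
            return []
        if seq < self.next_expected:     # stale duplicate
            return []
        gaps = list(range(self.next_expected, seq))
        self.missing.update(gaps)        # remember what we still lack
        self.next_expected = seq + 1
        return gaps

feed = GapDetector()
feed.on_packet(0)                 # in order, nothing to request
nak = feed.on_packet(3)           # 1 and 2 never arrived: request them
print(nak)                        # [1, 2]
```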
A long fat pipe sees dramatic throughput drops with TCP and relatively small packet loss. Possibly we were holding it wrong; I would love to know if there is a definitive guide to doing it right. We had good success with UDT.
I would think of UDP with redundant encoding / FEC, to avoid retransmits.
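A minimal illustration of the FEC idea: one XOR parity packet per group lets the receiver rebuild any single lost packet without a retransmit (real deployments use stronger codes, e.g. Reed-Solomon, to tolerate more losses per group):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def xor_parity(packets) -> bytes:
    # One parity packet per group: the XOR of all data packets.
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return parity

def recover_one(received, parity: bytes) -> bytes:
    # XOR of everything that did arrive, plus the parity, rebuilds the
    # single missing packet (two or more losses per group are fatal).
    result = parity
    for p in received:
        if p is not None:
            result = xor_bytes(result, p)
    return result

group = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(group)
# The middle packet is lost in transit, but no retransmit is needed:
rebuilt = recover_one([b"AAAA", None, b"CCCC"], parity)
print(rebuilt)  # b'BBBB'
```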
[0] https://en.m.wikipedia.org/wiki/TCP_congestion_control#TCP_B...
Looking at their transport protocol benchmarks on AWS [1][2], they average ~3 million 288-byte messages per second on c5.9xlarge (36 vCPU) instances. When increasing to their MTU limit of 1,344 bytes per message that drops to 700 thousand messages per second [2] or ~1 GB/s (~7.5 Gbps) over 36 cores. That is just ~200 Mbps per core assuming it is significantly parallel.
Looking at their transport protocol benchmarks on GCP [3], they average ~4.7 million 288-byte messages per second on C3 (unspecified type) instances. Assuming it scales proportionally to the AWS test, as they do not provide a max message size throughput number for GCP, that would be ~1 million messages per second or ~1.5 GB/s (~12 Gbps).
TCP stacks can routinely average 10 Gbps per individual core even without aggressive tuning, but Aeron appears to struggle to achieve parity with 36x as many cores. That is not to say that there might not be other advantages to Aeron such as latency, multicast support, or whatever their higher levels are doing, but 36x worse performance than basic off-the-shelf protocols does not sound like "high performance".
[1] https://hub.aeron.io/hubfs/Aeron-Assets/Aeron_AWS_Performanc... Page 13
[2] https://aws.amazon.com/blogs/industries/aeron-performance-en... Search "Test Results"
[3] https://aeron.io/other/aeron-google-cloud-performance-testin...
The particular transport benchmark mentioned here is an echo test where a message is sent between two machines and echoed back to the sender. This is a single threaded test using a single stream (flow) between publisher and subscriber. On each box there is one application thread that sends and receives data and a standalone media driver component running in a DEDICATED mode (i.e. with 3 separate threads: conductor/sender/receiver).
AWS limits single-flow traffic [2]. This test used the cluster placement group (CPG) placement policy, which has a nominal limit of 10 Gbps. However, this is true only for TCP. For UDP the actual limit is 8 Gbps when a CPG is used (this is not documented anywhere).
Aeron adds a 32 byte header to each message so 288 bytes payload becomes 320 bytes on the network. At a 3M msgs/sec rate Aeron was sending data at 7.68 Gbps (which is 96% of 8 Gbps limit) on a single CPU core. At that rate it was still achieving p99 < 1ms latency target.
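A quick sanity check of that arithmetic, using the figures quoted in this thread:

```python
# Figures quoted above: 288-byte payload, 32-byte Aeron header, 3M msgs/sec.
payload_bytes = 288
header_bytes = 32
msgs_per_sec = 3_000_000

wire_gbps = (payload_bytes + header_bytes) * msgs_per_sec * 8 / 1e9
share_of_cap = wire_gbps / 8 * 100  # vs the 8 Gbps single-flow UDP limit
print(round(wire_gbps, 2), "Gbps on the wire,",
      round(share_of_cap), "% of the cap")
```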
We chose the `c5n.9xlarge` instance for this test because it reserves an entire CPU socket for a single VM. This was done to avoid interference from other VMs, i.e. the noisy neighbour problem.
GCP test was done on `c3-highcpu-88` instance type. Again choosing an instance with so many cores was done to avoid sharing CPU socket with other VMs.
Aeron can easily saturate a 10 GbE NIC even without kernel bypass (given proper configuration). However, this is not a very useful test. The much harder problem is sending small/medium-sized messages at high rates and handling bursts of data.
Aeron transport was designed to achieve both low and predictable latency and high throughput at the same time. The two are not at odds with each other.
[1] https://github.com/aeron-io/benchmarks
[2] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-inst...
The benchmark is clearly artificially bottlenecking on I/O limits (not disclosed by the cloud vendor) and being given excess compute for stability/"target deployment" reasons, and is thus not indicative of the actual protocol compute bottleneck.
It might be beneficial to include these details in the documentation so that your benchmarks do not appear to show much worse performance to a casual reader who does not know the internal structure of the benchmarked system. That or present a benchmark that is not artificially bottlenecked (or show compute load of the bottlenecked implementation) to demonstrate the actual performance limits of the protocol.
On the face of it, the ability to use the majority of the bandwidth of the instance with small messages is impressive.
[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-inst...
It's not. The Aeron media driver has 1 RX thread and 1 TX thread in its most heavily threaded configuration (plus 1 admin thread).
So you set up an Aeron Server on your machine. That handles all external network communication at the message layer (NAKs, etc.). Every "Aeron Client" process communicates with that Server to stand up shared memory pipes. The messaging client deals solely with those pipes (cache efficient, etc.). Clients subscribe to channels at host:port/topic, but it is not direct network delivery to them (the client is transport agnostic, apart from the subscription). The network service directs data into each client's shared memory queue.
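A toy illustration of the shared-memory-pipe idea (a conceptual sketch only, not Aeron's actual log-buffer layout): the driver side appends into a ring, the client side polls it, and no syscall happens on the hot path. A plain in-process list stands in here for the memory-mapped file the two processes would really share.

```python
class Ring:
    """Single-producer single-consumer ring buffer. In Aeron the
    equivalent structure lives in a memory-mapped file shared between
    the media driver and the client process."""

    def __init__(self, capacity: int) -> None:
        self.slots = [None] * capacity
        self.capacity = capacity
        self.head = 0   # consumer position
        self.tail = 0   # producer position

    def offer(self, msg) -> bool:
        if self.tail - self.head == self.capacity:
            return False                      # full: back-pressure
        self.slots[self.tail % self.capacity] = msg
        self.tail += 1
        return True

    def poll(self):
        if self.head == self.tail:
            return None                       # empty
        msg = self.slots[self.head % self.capacity]
        self.head += 1
        return msg

ring = Ring(4)
ring.offer(b"tick: 100.25")    # driver side: data just came off the wire
print(ring.poll())             # client side: consumed without a syscall
```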
Once you have that base networked-IPC setup, you can maximize performance by using reliable UDP and fanning out common data (e.g. market data) using multicast.
Then Aeron further adds Archiving (to persist) and Clustering (to scale late join / state replication) components. This stuff works well with kernel bypass :)
[1] It could work with RDMA as well, but the ticket regarding that was closed 8 years ago. Maybe if there was AI/GPU workload interest.
While that has some sick throughput benchmarks, it is a pretty different architecture, and Aeron will have way (orders of magnitude) lower and more stable latencies. But it won't work on as many networks as Iggy does.
Iggy client SDKs are easy to make because they just need to speak sockets. Aeron client SDKs are "easy" to make because they just need to figure out the framing over shared memory. (And the orchestrator handles getting it there over the network.)
As people discover the problems with their approach, they rewrite it.
In my experience, it's generally better not to have transparent reliability for lost data either. The correct handling should be application-specific.