161 points by openWrangler 7 days ago | 11 comments
  • Conasg 6 days ago
    I took a cursory look and I like what I see – the service maps are really good, I love the level of detail. I will say, one thing I'm looking for with this kind of software, to maximise value, is structured logging support, and from what I could see, each log line just has the raw payload currently. Is that something you have on your roadmap?
  • maknee 6 days ago
    Great work! It's nice seeing another observability tool. Demo is neat and easy to navigate.

    Couple of questions:

    What's the overhead of tracing + logging observed by users? I see many tools being built on top of the OpenTelemetry eBPF tracer, which is nice to see.

    The OpenTelemetry eBPF tracer uses sampling to capture traces. Do other types of logging in the tool use sampling as well (HTTP traces)?

    When finding SLO violations, can this tool find the bug if the latency spikes don't happen frequently (i.e., a latency spike every 5 minutes to 1 hour)? I'm curious whether the team has experienced such events, and whether those p-max latencies even matter to customers, given how rarely they occur.

    I see that the flamegraph is a CPU flamegraph - does off-CPU time (disk/network waits, etc.) matter? Or does the CPU flamegraph provide enough for developers to solve the issue?

    • nikolay_sivko 6 days ago
      1. Regarding overhead — we ran a benchmark focused on performance impact rather than raw overhead [1]. TL;DR: we didn’t observe any noticeable impact at 10K RPS. CPU usage stayed around 200 millicores (about 20% of a single core).
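      For a rough sense of what those benchmark figures imply per request, here's a back-of-envelope sketch (the numbers come from the comment above; the arithmetic is just an illustration):

```python
# Amortized agent CPU cost per request, using the benchmark figures above:
# ~0.2 cores (200 millicores) of agent CPU at a sustained 10,000 RPS.
agent_cpu_cores = 0.2
requests_per_second = 10_000

cpu_seconds_per_request = agent_cpu_cores / requests_per_second
print(f"~{cpu_seconds_per_request * 1e6:.0f} µs of agent CPU per request")  # → ~20 µs
```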

      2. Coroot’s agent captures pseudo-traces (individual spans) and sends them to a collector via OTLP. This stream can be sampled at the collector level. In high-load environments, you can disable span capturing entirely and rely solely on eBPF-based metrics for analysis.
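      For reference, collector-level sampling of an OTLP span stream can be done with the standard OpenTelemetry Collector `probabilistic_sampler` processor. A minimal sketch (the receiver and exporter choices here are illustrative, not Coroot-specific):

```yaml
# Minimal OpenTelemetry Collector pipeline: keep ~10% of incoming spans.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  probabilistic_sampler:
    sampling_percentage: 10

exporters:
  debug:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler]
      exporters: [debug]
```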

      3. We’ve built automated root cause analysis to help users explain even the slightest anomalies, whether or not SLOs are violated. Under the hood, it traverses the service dependency graph and correlates metrics — for example, linking increased service latency to CPU delay or network latency to a database. [2]

      4. Currently, Coroot doesn’t support off-CPU profiling. The profiler we use under the hood is based on Grafana Pyroscope’s eBPF implementation, which focuses on CPU time.

      [1]: https://docs.coroot.com/installation/performance-impact
      [2]: https://demo.coroot.com/p/tbuzvelk/anomalies/default:Deploym...

  • emmanueloga_ 6 days ago
    I looked into eBPF-based observability tools for k8s some time ago and found at least four tools that look incredibly similar: Pixie, Parca, Coroot, and Odigos. There are probably others I missed too. Do you have any thoughts about this?

    From a user perspective, having several tools that overlap heavily but differ in subtle ways makes evaluation and adoption harder. It feels like if any two of these projects consolidated, they’d have a good shot at becoming the "default" eBPF observability solution.

    • nikolay_sivko 6 days ago
      From a user’s perspective, it doesn’t really matter how the data is collected. What actually matters is whether the tool helps you answer questions about your system and figure out what’s going wrong.

      At Coroot, we use eBPF for a couple of reasons:

      1. To get the data we actually need, not just whatever happens to be exposed by the app or OS.

      2. To make integration fast and automatic for users.

      And let’s be real, if all the right data were already available, we wouldn’t be writing all this complicated eBPF code in the first place :)

    • edenfed 6 days ago
      Speaking for Odigos (disclosure: I’m the creator), here are two significant differences between us and the other mentioned players:

      - Accurate distributed traces with eBPF, including context propagation. Without going into other tools, I highly recommend trying to generate distributed traces using any other eBPF solution and observing the results firsthand.

      - We are agent-only. Our data is produced in OpenTelemetry format, allowing you to integrate it seamlessly with your existing observability system.

      I hope this clarifies the differences.

      • PeterZaitsev 6 days ago
        I wonder if anyone tried to integrate Odigos with Coroot - looks like it could be really powerful!
  • mrbluecoat 6 days ago
    Can it parse Zeek logs to identify long-running TCP connections and/or identify user attempts to access a DNS blocked domain?
    • nikolay_sivko 6 days ago
      We could totally add that, but no one's asked for it so far.
  • IOT_Apprentice 6 days ago
    Can this also be used in a non-cloud environment? Or even, say, in a Proxmox-based setup locally?
    • nikolay_sivko 6 days ago
      It only requires a modern Linux kernel. Note: The agent does not support Docker-in-Docker environments, such as KinD or Minikube (D-in-D plugin).
  • akdor1154 6 days ago
    I already have OpenTelemetry traces and logs going to ClickHouse with the ClickHouse OTel exporter.

    Can I use Coroot to show my existing data, without it taking control of my DDL?

    • nikolay_sivko 6 days ago
      Initially, we relied on the ClickHouse OTEL exporter and its schema, but for performance optimization, we decided to modify our ClickHouse schema, and they are no longer compatible :(
      • akdor1154 6 days ago
        Bummer, it'd be awesome if I could point it at data I already have, even if that meant a reduced feature set.
        • PeterZaitsev 6 days ago
          How are you using this data right now? If you plan to use Coroot for visualization, why not convert it to the more efficient format Coroot uses?
  • bryancoxwell 6 days ago
    This is somewhat off topic, but are there any common uses for eBPF outside of observability/monitoring? Or is that kind of its whole thing?
  • tureg 6 days ago
    Thanks for sharing! If the connections are TLS-enabled, can Coroot still display the associated telemetry?
    • nikolay_sivko 6 days ago
      Yes, it captures traffic before encryption and after decryption using eBPF uprobes on OpenSSL and Go’s TLS library calls.
  • fjwuafasd 6 days ago
    I like what I see. What are the differences between the enterprise and community editions?
    • nikolay_sivko 6 days ago
      Enterprise Edition = Community Edition + Support + AI-based Root Cause Analysis + SSO + RBAC
  • toobulkeh 6 days ago
    We're on Sentry today, but have been waiting for a fully OSS solution like this.
    • nikolay_sivko 6 days ago
      (I'm a co-founder). At Coroot, we're strong believers in open source, especially when it comes to observability. Agents often require significant privileges, and the cost of switching solutions is high, so being open source is the only way to provide real guarantees for businesses.
  • esafak 6 days ago
    What's the data transformation story, e.g. for ML on metrics?
    • nikolay_sivko 6 days ago
      Coroot builds a model of each system, allowing it to traverse the dependency graph and identify correlations between metrics. On top of that, we're experimenting with LLMs for summarization — here are a few examples: https://oopsdb.coroot.com/failures/cpu-noisy-neighbor/
      • esafak 6 days ago
        That looks like a built-in feature. I'm asking about extensibility. How do we use custom metrics transformations (libraries), for example?
        • nikolay_sivko 6 days ago
          Currently, you can define custom SLIs (Service Level Indicators, such as service latency or error rate) for each service using PromQL queries. In the future, you'll be able to define custom metrics for each application, including explanations of their meaning, so they can be leveraged in Root Cause Analysis.
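          As an illustration, a latency SLI of this kind is typically expressed as a PromQL ratio like the one below. The metric and label names are hypothetical, and this is not Coroot's exact configuration syntax:

```promql
# Fraction of requests completing under 500 ms over the last 5 minutes.
sum(rate(http_request_duration_seconds_bucket{service="checkout", le="0.5"}[5m]))
/
sum(rate(http_request_duration_seconds_count{service="checkout"}[5m]))
```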