88 points by arwt 10 hours ago | 8 comments
  • piccirello 6 hours ago
    I work on security at PostHog. We resolved these SSRF findings back in October 2024 when this report was responsibly disclosed to us. I'm currently gathering the relevant PRs so that we can share them here. We're also working on some architectural improvements around egress, namely using smokescreen, to better protect against this class of issue.
    • piccirello 6 hours ago
      Here's the PR[0] that resolved the SSRF issue. This fix was shipped within 24 hours of receiving the initial report.

      It's worth noting that at the time of this report, this only affected PostHog's single tenant hobby deployment (i.e. our self hosted version). Our Cloud deployment used our Rust service for sending webhooks, which has had SSRF protection since May 2024[1].

      Since this report we've evolved our Cloud architecture significantly, and we have similar IP-based filtering throughout our backend services.

      [0] https://github.com/PostHog/posthog/pull/25398

      [1] https://github.com/PostHog/posthog/commit/281af615b4874da1b8...
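
      For readers unfamiliar with this class of fix: below is a minimal Python sketch of the kind of IP-based egress filtering described above. It is not PostHog's actual code (the linked PR has that); the function name and checks are illustrative, and a production filter also has to worry about redirects and DNS rebinding.

        import ipaddress
        import socket
        from urllib.parse import urlparse

        def is_safe_webhook_url(url: str) -> bool:
            """Illustrative check: refuse webhook URLs that resolve to internal addresses."""
            parsed = urlparse(url)
            if parsed.scheme not in ("http", "https") or not parsed.hostname:
                return False
            try:
                # Resolve every address the hostname maps to (A and AAAA records).
                infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
            except socket.gaierror:
                return False
            for *_rest, sockaddr in infos:
                # Strip any IPv6 zone id before parsing, e.g. "fe80::1%eth0".
                ip = ipaddress.ip_address(sockaddr[0].split("%")[0])
                # Reject private, loopback, link-local, and reserved ranges,
                # i.e. destinations like an internal ClickHouse or a metadata endpoint.
                if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
                    return False
            return True

        # Hypothetical delivery code would gate on the check:
        #   if is_safe_webhook_url(destination):
        #       requests.post(destination, json=payload, timeout=5)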

  • yellow_lead an hour ago
    Need an edit here

    > As it described on Clickhouse documentation, their API is designed to be READ ONLY on any operation for HTTP GET

    As described in the Clickhouse documentation, their API is designed to be READ ONLY on any operation for HTTP GET requests.
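
    For context, a small Python illustration of the documented ClickHouse behaviour that sentence refers to: the HTTP interface runs queries sent via GET in read-only mode, so statements that modify data need POST. The host and table below are placeholders.

      import requests

      CLICKHOUSE = "http://localhost:8123"  # placeholder host/port

      # SELECTs work over GET.
      print(requests.get(CLICKHOUSE, params={"query": "SELECT 1"}).text)

      # A write over GET is rejected with "Cannot execute query in readonly mode".
      r = requests.get(CLICKHOUSE, params={"query": "CREATE TABLE t (x UInt8) ENGINE = Memory"})
      print(r.status_code, r.text)

      # The same statement over POST succeeds, given sufficient privileges.
      requests.post(CLICKHOUSE, data="CREATE TABLE t (x UInt8) ENGINE = Memory")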

  • lkt 8 hours ago
    Out of interest, how much does ZDI pay for a bug like this?
  • anothercat 8 hours ago
    Does this require authenticated access to the posthog api to kick off? In that case I feel clickhouse and posthog both have their share of the blame here.
    • nightpool 7 hours ago
      It looks like the entire class of bugs here is "if you have access to Posthog's admin dashboard, you can configure webhook URLs that hit Posthog's internal services". That's not particularly surprising for a self-hosted system like the author's, but I expect it would be pretty bad if you were using their cloud-hosted product.
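
      Roughly, the vulnerable pattern in this class of bug looks like the hypothetical snippet below: the webhook destination comes from user configuration and is fetched server-side as-is, so nothing stops it from pointing at an internal-only service such as ClickHouse's HTTP port. The URL and payload are illustrative, not PostHog's code.

        import requests

        # Attacker-chosen "webhook" destination aimed at an internal service.
        destination = "http://clickhouse.internal:8123/?query=SELECT%201"

        # Server-side delivery with no egress filtering: a textbook SSRF.
        requests.post(destination, json={"event": "test"}, timeout=5)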
  • thenaturalist 9 hours ago
    Wow, chapeau to the author.

    What an elegant, interesting read.

    What I don't quite understand: Why is the Clickhouse bug not given more scrutiny?

    Like, that escape bug was what made the RCE possible, and certainly a core DB company like ClickHouse should be held accountable for such an oversight?

    • matmuls 8 hours ago
      SSRF was the entry point, and clickhouse is supposed to be an internal-only service that one could reach only via that SSRF, hence the lesser scrutiny. The 0day by itself wouldn't be useful unless an attacker can reach clickhouse, which they usually can't.
      • thenaturalist 8 hours ago
        But if they do, prohibiting SQL injection, a critical last mile vulnerability, seems trivial?
        • ch2026 7 hours ago
          Sure, it's a bug they can fix. But it's more the setup itself that's the issue. For example, clickhouse's HTTP interface would normally require user/pass auth and not have access to all privileges. Clickhouse has a table engine that maps to local processes too (e.g. select from a python process you pipe stdin into).

          No need for postgres if you have a fully authenticated user.
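
          For the curious, a hedged sketch of the table-engine feature mentioned above (the Executable family), where rows for a table come from a local process whose script sits under ClickHouse's user_scripts path. Everything below is a placeholder, creating the table needs DDL privileges, and the DDL goes over POST since GET is read-only.

            import requests

            CLICKHOUSE = "http://clickhouse.internal:8123"  # placeholder

            # Table whose rows are produced by a local script (writes tab-separated rows to stdout).
            ddl = ("CREATE TABLE exec_demo (value String) "
                   "ENGINE = Executable('gen_rows.py', TabSeparated)")
            requests.post(CLICKHOUSE, data=ddl)

            # Selecting from the table runs gen_rows.py on the ClickHouse host.
            print(requests.get(CLICKHOUSE, params={"query": "SELECT * FROM exec_demo"}).text)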

        • nightpool 7 hours ago
          The author already had basically full Clickhouse querying abilities, and Clickhouse lets you run arbitrary SQL on postgres. The fact that the author used a read-only command to execute it wasn't the author bypassing a security boundary (anyone with access to the Clickhouse DB also had access to the Postgres DB); it was just a gadget that made the SSRF more convenient. They could have escalated it into a different internal HTTP API instead.
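          Concretely, the ClickHouse-to-Postgres path being described relies on the postgresql() table function: an ordinary SELECT, which a read-only GET against the HTTP interface can drive, makes ClickHouse connect out to a Postgres server. A rough Python sketch, with placeholder hostnames, credentials, and table name:

            import requests

            # Read-only GET that nonetheless makes ClickHouse open a connection to Postgres.
            query = ("SELECT * FROM postgresql("
                     "'postgres.internal:5432', 'posthog', 'posthog_user', "
                     "'dbuser', 'dbpass') LIMIT 5")
            r = requests.get("http://clickhouse.internal:8123", params={"query": query})
            print(r.text)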
    • simonw 5 hours ago
      The ClickHouse bug was fixed here: https://github.com/ClickHouse/ClickHouse/pull/74144
  • taw_1265 9 hours ago
    PostHog does a lot of vibe coding; I wonder how many other issues they have.
    • Nextgrid 9 hours ago
      Not that I'm disputing it, but do you have a source? Companies say all kinds of things for hype and to attract investors, but it doesn't necessarily make it true.
      • matmuls 8 hours ago
        Looking at their commits, there are 300+ commits tagged with a "Generated with https://claude.com/claude-code" attribution.
        • dewey 8 hours ago
          Just because AI tools are involved doesn't mean it's "Vibe coding".
          • somat 3 hours ago
            If you leave "Generated with claude-code" in the commit message, it was vibe coded.
          • bopbopbop7 6 hours ago
            What does it mean?
            • simonw 5 hours ago
              The preferred definition of "vibe coding" is when you have AI generate code that you use without reviewing it first: https://simonwillison.net/2025/Mar/19/vibe-coding/

              Unfortunately a lot of people think it means any time an LLM helps write code, but I think we're winning that semantic battle - I'm seeing more examples of it used correctly than incorrectly these days.

              It's likely that the majority of code will be AI assisted in some way in the future, at which point calling all of it "vibe coding" will lose any value at all. That's why I prefer the definition that specifies unreviewed.

              • chrisweekly 4 hours ago
                I share your preference. (I also mourn the loss of the word "vibe" for other contexts.) In this case there were apparently hundreds of commit messages stating "generated by Claude Code". I feel like there's a missing set of descriptors -- something similar to Creative Commons with its now-familiar labels like "CC-BY-SA" -- that could be used to indicate the relative degree of human involvement. Full-on "AI-YOLO-Paperclips" at one extreme could be distinguished from "AI-IDE-TA" for typeahead / fancy autocomplete at the other. Simon, you're in a fantastic position to champion some kind of basic system like this. If you run w/ this idea, please give me a shout-out. :)
              • bopbopbop7 4 hours ago
                I also hope that the majority of the code in the future is AI-assisted like it is with PostHog, because my cybersecurity firm is going to make so much money.
          • hsbauauvhabzb 7 hours ago
            It sure is a pretty good indicator, and if you underestimate human laziness you’re gonna have a bad time regardless.
            • jwpapi 7 hours ago
              Also, look at how much they've released and how fast, and how they blog like they own the world (or design the website).

              I used to look up to Posthog as I thought, wow this is a really good startup. They’re achieving a lot fast actually.

              But it turns out a lot of it was sloppy. I don't trust them anymore and would opt for another platform now.