It's worth noting that at the time of this report, this only affected PostHog's single-tenant hobby deployment (i.e. our self-hosted version). Our Cloud deployment used our Rust service for sending webhooks, which has had SSRF protection since May 2024[1].
Since this report we've evolved our Cloud architecture significantly, and we now have similar IP-based filtering throughout our backend services.
[0] https://github.com/PostHog/posthog/pull/25398
[1] https://github.com/PostHog/posthog/commit/281af615b4874da1b8...
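The IP-based filtering mentioned above can be sketched roughly like this: before sending a webhook, resolve the target hostname and reject any address in a private, loopback, link-local, or reserved range. This is a hypothetical minimal sketch (the function name and structure are my own, not PostHog's actual code), but it illustrates the core SSRF guard:

```python
import ipaddress
import socket

def is_safe_webhook_target(hostname: str) -> bool:
    """Basic SSRF guard: reject hostnames resolving to internal IPs.

    Hypothetical sketch -- not PostHog's actual implementation.
    """
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        # Unresolvable hostnames are treated as unsafe.
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Block loopback (127.0.0.0/8), RFC 1918 ranges, link-local
        # (which includes the cloud metadata IP 169.254.169.254), and
        # reserved ranges.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

A real implementation also has to guard against DNS rebinding, i.e. pin the resolved IP and connect to it directly rather than re-resolving the hostname at request time.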
> As it described on Clickhouse documentation, their API is designed to be READ ONLY on any operation for HTTP GET

As described in the ClickHouse documentation, their API is designed to be read-only for any operation made via an HTTP GET request.
What an elegant, interesting read.
What I don't quite understand: why is the ClickHouse bug not given more scrutiny?
That escaping bug is what made the RCE possible, and surely a core database company like ClickHouse should be held accountable for such an oversight?
No need for Postgres if you have a fully authenticated user.
Unfortunately, a lot of people think "vibe coding" means any time an LLM helps write code, but I think we're winning that semantic battle: I'm seeing more examples of the term used correctly than incorrectly these days.
It's likely that the majority of code will be AI-assisted in some way in the future, at which point calling all of it "vibe coding" would lose any meaning at all. That's why I prefer the definition that specifies unreviewed code.
I used to look up to PostHog; I thought, wow, this is a really good startup, they're actually achieving a lot fast.
But it turns out a lot of it was sloppy. I don't trust them anymore and would opt for another platform now.