63 points by cebert 13 hours ago | 12 comments
  • theden 6 hours ago
    I'm kinda shocked (yet not surprised) at how bad railway has been with this:

    - Why were they making CDN changes in prod? With their recent $100M funding they could afford a separate env to test CDN changes. Did their engineering team even understand surrogate keys well enough to feel confident rolling out a change in prod? I don't think they're beating the AI allegations on figuring out CDN configs; a human would not be this confident testing surrogate keys in prod.

    - During and after the incident, the comms have been terrible. The initial blog post buried the lede (and didn't even have "Incident Report" in the title). They only updated it after negative feedback from their customers. I still get the impression they're trying to minimise this; it's pretty dodgy. As other comments mentioned, the post is vague.

    - They didn't immediately notify customers about the security incident (people learned from their users). They apparently emailed only affected customers, many hours later. Some people who were affected still haven't been emailed, and they seem to have gone radio silent lately.

    - Their founder on Twitter keeps using their growth as an excuse for their shoddy engineering, especially lately. Their uptime for what's supposed to be a serious production platform is abysmal (https://status.railway.com/); they've clearly prioritised pushing features over reliability. The issues I've outlined here have little to do with growth and more to do with company culture.

    Honestly, I don't think railway is cut out for real production work (let alone compliance deployments), at least nothing beyond hobby projects.

    Their forum is also getting heated: customers have lost revenue, had medical data leaked, etc., with no proper follow-up from the railway team

    https://station.railway.com/questions/data-getting-cached-or...

    • justjake 36 minutes ago
      Railway founder here, providing some color

      > Why were they making CDN changes in prod? With their 100M funding recently they could afford a separate env to test CDN changes. Did their engineering team even properly understand surrogate keys to feel confident to roll out a change in prod? I don't think they're beating the AI allegations to figure out CDN configs, a human would not be this confident to test surrogate keys in prod.

      We went deep on surrogate keys, tested them beforehand, and then when the rubber met the road in production we ran into cases we didn't see in testing. The larger issue, as mentioned in the blog post, is that we didn't have a mechanism to do a staged release.

      > During and post-incident, the comms has been terrible. Initial blog post buried the lede (and didn't even have Incident Report in the title). They only updated this after negative feedback from their customers. I still get the impression they're trying to minimise this, it's pretty dodgy. As other comments mentioned, the post is vague.

      Our initial post definitely could have been clearer, and we revised it the moment we got customer feedback to that effect.

      > They didn't immediately notify customers about the security incident (people learned from their users). The apparently have emailed affected customers only, many hours after. Some people that were affected that still haven't been emailed, and they seem to be radio silent lately.

      We notified affected customers even before we did a wide release, as is our process for anything security related: you disclose directly to those affected as broadly as possible, then follow up with a public disclosure.

      > Their founder on twitter keeps using their growth as an excuse for their shoddy engineering, especially lately. Their uptime for what's supposed to be a serious production platform is abysmal, they've clearly prioritised pushing features over reliability https://status.railway.com/ and the issues I've outlined here have little to do with growth, and more to do with company culture.

      Do you have any specifics here? We're scaling the system at 100x YoY growth right now, working 24/7 to scale the entire thing. Again, all ears if you have specific critiques; we're always open to feedback on how we can do things better!

      > Their forum is also getting heated, customers have lost revenue, had medical data leaked etc., with no proper followup from the railway team

      There are team members responding in the thread you linked; are you certain you linked the right one? Happy to have a look at anything you believe we're missing!

    • edenstrom 5 hours ago
      Yeah, this was really the nail in the coffin for us. Most services are already moved from Railway, but the rest will follow during this week.
    • daavoo 4 hours ago
      I was affected and got no communication at all; I had to find out from user reports and take immediate action with zero signal from Railway about the issue (even though, according to the timeline, they were already aware).

      I've been trying to defend Railway since we built our initial prototype there; I wanted to avoid the cost of migrating to some "serious infra" until proven needed. But they have been making that defence a really hard job (not to mention that their overall reliability has been really bad over the past few weeks).

  • varun_chopra 11 hours ago
    The status page [1] has the actual root cause (enabling "Surrogate Keys" silently bypassed their CDN-off logic). The blog post doesn't. That's backwards.

    "0.05% of domains" is a vanity metric -- what matters is how many requests were mis-served cross-user. "Cache-Control was respected where provided" is technically true but misleading when most apps don't set it because CDN was off. The status page is more honest here too: they confirmed content without cache-control was cached.

    They call it a "trust boundary violation" in the last line but the rest of the post reads like a press release. No accounting of what data was actually exposed.

    [1] https://status.railway.com/incident/X0Q39H56

  • lossoth 6 hours ago
    These incidents are a perfect example of how misleading "simple" systems can be.

    From the outside, it looks like "just a cache misconfiguration," but in reality the problem is more insidious because it's distributed across multiple layers:

    - application logic (authentication limitations)
    - CDN behavior -> infrastructure
    - default settings that users rely on (no cache headers because the CDN was disabled)

    The hardest part of debugging these cases isn't identifying what happened, but realizing where your mental model is flawed: everything appears correct locally, the logs don't report any issues, yet users see completely different data.

    I've seen similar cases where developers spent hours debugging the application layer before even considering that something upstream was silently changing the behavior.

    These are the kind of incidents where the debugging path is anything but linear.
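
    One way to test the upstream model rather than the app is to hit the same endpoint twice from outside and compare the cache-related response headers. A rough sketch (hypothetical URL; header names vary by CDN):

      # Probe whether something upstream is caching responses the origin
      # never marked cacheable, by comparing two requests' headers.
      import requests

      URL = "https://example.com/dashboard"  # placeholder endpoint

      for attempt in (1, 2):
          r = requests.get(URL, timeout=10)
          print(
              f"attempt {attempt}:",
              "Cache-Control:", r.headers.get("Cache-Control", "<none>"),
              "| Age:", r.headers.get("Age", "<none>"),
              "| X-Cache:", r.headers.get("X-Cache", "<none>"),
          )

      # A non-zero Age or a cache HIT on a response the app never marked
      # cacheable is the broken-mental-model signal: the layer you assumed
      # was off is the one answering.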

  • stingraycharles 12 hours ago
    This write up doesn’t make sense. Authenticated users are the ones without a Set-Cookie? Surely the ones with the cookie set are the authenticated ones?

    There are dozens of contradictions, like first they say:

    “this may have resulted in potentially authenticated data being served to unauthenticated users”

    and then just a few sentences later say

    “potentially unauthenticated data is served to authenticated users”

    which is the opposite. Which one is it?

    Am I missing something, or is this article poorly reviewed?

    • justjake 12 hours ago
      Fixed the typo in that second paragraph and aligned the section on the Set-Cookie stuff. Anything else that can be made more clear?
      • DrewADesign 11 hours ago
        It appears that your company experienced an incident during which a blog entry was made available in which readers became informed about certain information about a server condition that resulted in certain users receiving a barrage of indirect clauses etc. etc. etc.

        Be more direct. Be concise. This blog post sounds like a cagey customer service CYA response. It defeats the purpose of publishing a blog post showing that you’re mature, aware, accountable, and transparent.

      • codechicago277 11 hours ago
        The problem is that these visible errors make us wonder what other errors in the post are less visible. Fixing them doesn’t fix the process that led to them.
        • slopinthebag 11 hours ago
          I'm pretty sure it's AI.

          https://x.com/JustJake/status/2007730898192744751

          I wouldn't be surprised if most of Railway's infra is running on Claude at this point.

          • antics 10 hours ago
            The CEO says it's not: https://x.com/JustJake/status/2038799619640250864

            A lot of people are confident enough in their ability to spot AI-built infra that they're willing to dismiss a firsthand source on this, and I admit I have no idea why. There isn't any upside to making this claim, and anyway, I assure you that people need no help at all from AI to make these kinds of mistakes.

            • slopinthebag 9 hours ago
              Their reply doesn't make much sense; they're supposedly SOC 2 compliant. How are they compliant while letting a single engineer push out a change like that?

              I'm sure Claude didn't literally ship the feature itself with no oversight, but I also find it hard to believe that their approach to adopting AI didn't factor in at all. Even just the overhead of moving faster and merging AI-written code under less stringent review, with the resulting increase in codebase complexity, could cause something like this. Couple that with an AI hallucinating an answer for the engineer who shipped the change, and I'm not sure why people are so quick to discount this as a potential source of the issue. Surely none of us want our infra to become less secure and reliable, and part of preventing that is being honest about the challenges of integrating AI into our development processes.

              • antics 8 hours ago
                > I'm not sure why people are so quick to discount [AI] as a potential source of the issue.

                Because (per the link above) the CEO said that (1) it was their fault, and (2) it had nothing to do with AI.

                I understand that on this forum statements like this are inevitably greeted with some amount of skepticism, but right now I'm seeing no particular reason to disbelieve Jake, and "if they did use AI they'd deny it" frankly shouldn't be good enough to fly around here. Like probably everyone in this comment section, I'm open to evidence that they used AI to slop-incident themselves, but until we reach that standard let's please calm down and focus on what we actually know to be true.

                • hihicoderhi an hour ago
                  During this whole incident, Railway has made a wide range of misleading and outright false claims to cover themselves, so them saying it wasn't AI is pretty much meaningless.
                  • justjake 43 minutes ago
                    Would you mind pointing out these claims? Happy to address them personally
                • slopinthebag 7 hours ago
                  Come on man, their CEO is a massive vibe coding proponent and his company spent $300,000 on Claude this month. But yeah, I'm sure Claude had nothing to do with any of it. I bet they don't use it to write any code.

                  https://xcancel.com/JustJake/status/2030063630709096483#m

                  • stingraycharles 5 hours ago
                    Both things can be true: they’re doing a lot of vibe coding, and this was a human error that didn’t involve AI.
                    • johnisgood 5 hours ago
                      I have no skin in the game but that is a very charitable perspective.
          • stingraycharles 10 hours ago
            It's fine that they use AI; it's not fine that they don't proofread things.
  • rileymichael 10 hours ago
    pretty hard to find this on their blog, looks like incidents are tucked away at the bottom. an issue of this size deserves a higher spot.

    (also looks like two versions of the 'postmortem' are published at https://blog.railway.com/engineering)

  • sebmellen 12 hours ago
    Almost three years ago now, Railway poached one of our smartest engineers. They were smart to do so. I have a lot of respect for the Railway team and I’m impressed with their execution.

    I think this is their first major security incident. Good that they are transparent about it.

    If possible (@justjake) it would be helpful to understand if there was a QA/test process before the release was pushed. I presume there was, so the question is why this was not caught. Was this just an untested part of the codebase?

  • muragekibicho 10 hours ago
    Does Stripe use Railway? The dashboard was down today, this is the only incident report I've encountered, and the timeline matches Stripe's downtime.
  • sublinear 12 hours ago
    I'm curious if having unique URLs per user session would mitigate this.

    I think that's already best practice in most API designs anyway?

    • kay_o 10 hours ago
      Probably.

      No, it isn't. I've not seen this in an API ever, only in web apps with ?phpsessid= back in childhood.
