23 points by piotrgrudzien 13 hours ago | 3 comments
  • sgarman9 hours ago
    I don't understand the workflow of having multiple new bugs every day that need to be fixed. Is bad code being shipped? Are there 1000 devs and it's just this person's job to fix everyone's bugs? Is this an extremely old and complicated codebase they are improving? Not trying to be snarky - I just don't understand how every day there are new bugs that are just error messages.

    If there are new bugs every day that need to be fixed, is the AI really good enough to know the fix from just an error?

    • sothatsit9 hours ago
      Generally I think this happens when people don’t monitor for errors on a regular basis. People only notice if things are actively broken for customers, and tons of small non-fatal bugs slip through and build up over time.
  • Xeoncross9 hours ago
    > Total alerts/errors found: 7

    Apps written in an exception-based language (Java, JavaScript, PHP, etc.) are really annoying to monitor, as everything that isn't the happy path triggers an 'error'/'fatal' log/metric.

    Yes, you can technically work around it with (near) Go-level error verbosity (try/catches everywhere on every call) but I've never seen a team actually do that.

    Modern languages that don't throw exceptions for every error like Rust, Go, and Zig make much more sane telemetry reports in my experience.

    On this note, a login failure is not an error, it's a warning because there is no action to take. It's an expected outcome. Errors should be actionable. WARN should be for things that in aggregate (like login failures) point to an issue.
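The level distinction above could be sketched roughly like this in Python; the handler name and parameters are hypothetical stand-ins for real checks, not any actual API:

```python
import logging

logger = logging.getLogger("auth")

def handle_login(username: str, password_ok: bool, idp_reachable: bool) -> bool:
    # Hypothetical handler: the boolean parameters stand in for real checks.
    if not idp_reachable:
        # Actionable: someone has to fix connectivity or the IdP config.
        logger.error("login check failed: identity provider unreachable")
        return False
    if not password_ok:
        # Expected outcome; only interesting in aggregate (e.g. a spike).
        logger.warning("login rejected for user=%s", username)
        return False
    logger.info("login succeeded for user=%s", username)
    return True
```

The point is only the level choice: ERROR for the case a human must act on, WARNING for the expected case that matters in aggregate.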

    • Spivak9 hours ago
      > On this note, a login failure is not an error

      Login failure is like the most important error you'll track. A login failure isn't necessarily actionable but a spike of thousands of them for sure is. No single system has been more responsible for causing outages in my career than auth. And I get that it's annoying when they appear in your Rollbar but sometimes Login Failed is the only signal you get that something is wrong.

      Some 3rd party IdP saying "nope" can be innocuous when it's a few people but a huge problem when it's because they let their cert/application token expire.

      And I can already hear the "it should be a metric with an alert" and you're absolutely right. Except that it requires that devs take the positive action of updating the metric on login failures vs doing nothing and letting the exception propagate up. And you just said login failures aren't errors and "bad password" obviously isn't an error so no need to update the metric on that and cause chatty alerts. Except of course that one time a dev accidentally changed the hashing algorithm. Everyone was really bad at typing their password that day for some reason.
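One way to get the metric updated without relying on each dev to remember it is a decorator that counts any raised exception and re-raises. A rough Python sketch, with an in-memory Counter standing in for a real metrics client (all names are made up):

```python
import functools
from collections import Counter

METRICS = Counter()  # stand-in for a real metrics client (hypothetical)

def count_failures(metric: str):
    """Increment `metric` whenever the wrapped call raises, then re-raise.

    The exception still propagates as before, but the counter moves too,
    so no individual handler has to remember to update it.
    """
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                METRICS[metric] += 1
                raise
        return inner
    return wrap

@count_failures("login.failure")
def login(user: str, password: str) -> None:
    # Placeholder credential check, purely for the sketch.
    if password != "hunter2":
        raise ValueError("bad credentials")
```

This keeps "do nothing and let the exception propagate" and "the metric gets updated" from being mutually exclusive.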

      • SkiFire138 hours ago
        Rather than login failures I would monitor login successes. A sharp decrease in successes likely points to some issue, but an increase in login failures might easily be someone trying tons of random credentials on your website (still not ideal, but much harder to act on).
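A success-drop check of the kind suggested here might look roughly like this; the window shape, baseline, and 0.5 threshold are illustrative assumptions, not recommendations:

```python
def success_rate_alert(recent: list[int], baseline: float, drop_ratio: float = 0.5) -> bool:
    """Return True when the mean of recent per-window login-success counts
    falls below `drop_ratio` of the historical baseline.

    `recent` is assumed to be success counts for the last few windows;
    `baseline` is the long-run average for comparable windows.
    """
    current = sum(recent) / len(recent)
    return current < baseline * drop_ratio
```

A credential-stuffing spike leaves success counts roughly flat, so this check stays quiet for exactly the case the parent describes as hard to act on.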
        • Spivak7 hours ago
          Creating this metric/alert is practically a rite of passage for junior ops people who then get paged around 5pm.
  • danpalmer9 hours ago
    Why would one need to check Datadog every morning? Wouldn't alerts fire if there was something to do?
    • vrosas8 hours ago
      Almost no one actually knows how to set up their monitoring. Like, they know the words but not the full picture or how the pieces should actually fit together. Then they do shit like this to try and make up for that fact.
      • bdangubic8 hours ago
        the ones that know do not check anything every morning
    • import8 hours ago
      Well, the industry standard solution is correct monitoring and alerting. This doesn’t sound like “the right way”.
    • bak3y9 hours ago
      Exactly what I came to say, alerts need tuning if you're having to check your monitoring tools by hand.
      • dathinab9 hours ago
        I read the article as describing a way for AI to check, classify, and potentially partially fix the alerts you see when logging in in the morning.

        And for many alerts you need to look at other events around them to properly classify and partially solve them. Because of that, you need to give the AI more than just the alerts.

        Though I do see a risk similar to wrongly tuned alerts:

        Not everything that resolves by itself and can be ignored _in this moment_ is a non-issue. It's e.g. pretty common that a system with some rare ignorable warns/errors falls completely flat when onboarding a lot of users, introducing a new high-load feature, etc., due to exactly the things you could fully ignore beforehand.

    • seneca9 hours ago
      I'm not sure if this is what the writer was getting at, but I tend to check telemetry for my production applications regularly not because I'm looking for things that would fire alerts, but to keep a sense of what production looks like. Things like request rate, average latency, top request paths etc. It's not about knowing something is broken, it's about knowing what healthy looks like.

      Understanding what your code looks like in production gives you a lot better sense of how to update it, and how to fix it when it does inevitably break. I think having AI checking for you will make this basically impossible, and that probably makes it a pretty bad idea.
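The kind of snapshot described here, request rate, average latency, and top paths, can be computed from a window of access-log records with something as simple as this; the `(path, latency_ms)` record shape is an assumption, and real logs would need parsing first:

```python
from collections import Counter
from statistics import mean

def production_snapshot(records, window_seconds):
    """Summarize a window of access-log records.

    `records` is assumed to be an iterable of (path, latency_ms) pairs.
    """
    records = list(records)
    paths = Counter(path for path, _ in records)
    return {
        "req_per_sec": len(records) / window_seconds,
        "avg_latency_ms": mean(l for _, l in records) if records else 0.0,
        "top_paths": paths.most_common(3),
    }
```

Eyeballing numbers like these daily is what builds the "what does healthy look like" intuition the comment describes.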

      • danpalmer4 hours ago
        This is a good answer, and I agree that having a good production intuition like this is important. You're probably also right that having AI do it doesn't capture that value.

        I'm not sure I'd do this once a day. I tend to take note of things to build that intuition when I have other reasons to go and look at dashboards, and we have a weekly SLO review as a team, but perhaps there's a place for this in some way.