2 points by spaquet 2 days ago | 2 comments
  • ccosky 2 days ago
    "AI is actually pretty good at writing tests, especially for common scenarios and edge cases. The tricky part is deciding what to do when tests fail. Sometimes the code is wrong. Other times the spec itself has evolved and the test needs updating."

Yes, I agree, it is good at coming up with lots of scenarios. But after switching to AI-generated unit tests, I discovered that AI tends to write tests that mirror the implementation rather than validate that the implementation is correct.

    So I have AI write the unit test with a particular pattern in the method name:

    <methodName>_when<Conditions>_<expectedBehavior>

    Then I have a Claude skill that validates that the method under test matches the first part of the test name, that the setup matches the conditions in the middle part, and that the assertions match the expected behavior in the last part. It does find problems with the unit tests this way. I also have it research whether the production code or the test is at fault - no blindly having AI "fix" things.
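    As a sketch of what that naming convention looks like in practice (the `withdraw` function and its tests here are hypothetical, invented purely for illustration), each test name encodes the method under test, the setup conditions, and the expected behavior, so a validator can cross-check all three against the test body:

    ```python
    import unittest

    def withdraw(balance, amount):
        """Toy production function used only for illustration."""
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    class WithdrawTests(unittest.TestCase):
        def test_withdraw_whenAmountExceedsBalance_raisesValueError(self):
            # Setup matches the "when" clause: amount exceeds balance.
            with self.assertRaises(ValueError):
                withdraw(balance=50, amount=100)

        def test_withdraw_whenAmountWithinBalance_returnsRemainingBalance(self):
            # Assertion matches the "expected behavior" clause.
            self.assertEqual(withdraw(balance=50, amount=20), 30)
    ```

    A checker can then parse `withdraw_whenAmountExceedsBalance_raisesValueError` into its three segments and flag a test whose body calls a different method, sets up different conditions, or asserts something else entirely.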

    For more complex methods, though, I still verify manually by checking which lines each test actually exercises.
