2 points by BlackVectorOps 13 hours ago | 2 comments
  • bradleyjkemp 4 hours ago
    I'd like to see some before/after code samples that have the same hash.

    I can see this will be tolerant of simple renames, but it seems unlikely this hash will survive any real refactor.

  • BlackVectorOps 13 hours ago
    Hello HN,

    I built this because I've become paranoid about "safe" refactors in the wake of supply chain attacks like the xz backdoor.

    We spend a lot of time reviewing code for syntax, but we lack good tools for verifying that a large refactor (e.g., renaming variables, changing loop styles) preserves the exact business logic. Standard SHA-256 hashes break if you change a single whitespace character or variable name, which makes them useless for verifying semantic equivalence.

    I built Semantic Firewall (sfw) to solve this. It is an open-source tool that fingerprints Go code based on its behavior, not its bytes.

    How it works:

    1. SSA Conversion: It loads the Go source into Static Single Assignment form using golang.org/x/tools/go/ssa (a minimal loading sketch follows this list).

    2. Canonicalization: It renames registers (v0, v1) deterministically and normalizes control flow graphs (the renaming is sketched below). This ensures that `if a { x } else { y }` fingerprints the same even if the branches are swapped and the condition inverted.

    3. Scalar Evolution (SCEV): This was the hardest part. I implemented an SCEV engine that mathematically solves loop trip counts, so a `for range` loop and a `for i++` loop that iterate N times produce the exact same fingerprint (the trip-count math is sketched after the example below).

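    To make step 1 concrete, here is a minimal loading sketch. It is not sfw's actual entry point: the package name, function name, pattern argument, and error handling are mine, but the x/tools calls are the standard ones.

      package sketch

      import (
          "fmt"

          "golang.org/x/tools/go/packages"
          "golang.org/x/tools/go/ssa"
          "golang.org/x/tools/go/ssa/ssautil"
      )

      // loadSSA builds SSA IR for every package matched by pattern.
      func loadSSA(pattern string) (*ssa.Program, error) {
          cfg := &packages.Config{Mode: packages.LoadAllSyntax}
          pkgs, err := packages.Load(cfg, pattern)
          if err != nil {
              return nil, err
          }
          if packages.PrintErrors(pkgs) > 0 {
              return nil, fmt.Errorf("packages contain errors")
          }
          // SanityCheckFunctions makes the builder verify the IR it emits.
          prog, _ := ssautil.AllPackages(pkgs, ssa.SanityCheckFunctions)
          prog.Build()
          return prog, nil
      }
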
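    And a sketch of the kind of deterministic renaming step 2 describes, continuing in the same package. Block-order normalization and the branch canonicalization are elided, and `canonicalNames` is my name for the helper, not the tool's.

      // canonicalNames maps every SSA value in f to a position-based
      // name (v0, v1, ...), so source identifiers never reach the hash.
      func canonicalNames(f *ssa.Function) map[ssa.Value]string {
          names := make(map[ssa.Value]string)
          n := 0
          tag := func(v ssa.Value) {
              names[v] = fmt.Sprintf("v%d", n)
              n++
          }
          for _, p := range f.Params {
              tag(p)
          }
          for _, b := range f.Blocks {
              for _, instr := range b.Instrs {
                  // Only value-producing instructions get a register name.
                  if v, ok := instr.(ssa.Value); ok {
                      tag(v)
                  }
              }
          }
          return names
      }
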
    Here is a quick example of what it catches:

      // Implementation A
      func wipe(k []byte) {
          for i := range k { k[i] = 0 }
      }
    
      // Implementation B (Refactor?)
      func wipe(buf []byte) {
          for i := 0; i < len(buf); i++ { buf[i] = 0 }
      }
    
    These two produce identical hashes. If you change the logic (e.g. `i < len(buf)-1`), the hash diverges immediately.

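    On the SCEV point: both loops above normalize to an induction variable that starts at 0 and steps by 1, so they share a trip count of len(k). Here is a toy closed form for the canonical `for i := start; i < end; i += step` shape (this helper is illustrative, not sfw's engine):

      // tripCount returns the iteration count of the canonical loop
      //   for i := start; i < end; i += step   (step > 0)
      // i.e. the closed form ceil((end-start)/step) that SCEV derives.
      func tripCount(start, end, step int64) int64 {
          if step <= 0 || end <= start {
              return 0
          }
          return (end - start + step - 1) / step
      }

    Both implementations reduce to tripCount(0, len(k), 1) = len(k), while the `i < len(buf)-1` variant has trip count len(buf)-1, which is why the hash diverges.
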
    It’s written in Go and available as a CLI or GitHub Action. I’d love to hear your thoughts on the approach or edge cases I might have missed in the normalization phase.

    Repo: https://github.com/BlackVectorOps/semantic_firewall