2 points | by ashish-barmaiya | 5 hours ago | 1 comment
  • ashish-barmaiya | 5 hours ago
    Hi HN,

    I’m currently building a zero-knowledge digital inheritance platform called SecureVault. While designing the threat model, I kept running into a fundamental flaw with standard audit logs: they only prove internal consistency.

    If a sophisticated attacker (or a rogue admin) gets full access to my Postgres database, they could easily delete the last 10 events, forge 10 new ones, recompute the hashes, and present a perfectly "valid" log.
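    To make the attack concrete, here is a minimal sketch (not SecureVault's actual schema) of a plain hash chain, showing why internal consistency alone proves nothing: anyone with database access can rebuild a perfectly valid-looking chain after tampering.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link

def event_hash(prev_hash: str, payload: dict) -> str:
    # Each entry's hash covers the previous entry's hash plus the payload.
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def build_chain(events):
    chain, prev = [], GENESIS
    for e in events:
        h = event_hash(prev, e)
        chain.append({"event": e, "hash": h})
        prev = h
    return chain

def is_internally_consistent(chain):
    prev = GENESIS
    for entry in chain:
        if entry["hash"] != event_hash(prev, entry["event"]):
            return False
        prev = entry["hash"]
    return True

log = build_chain([{"action": "login"}, {"action": "export"}, {"action": "delete"}])

# A rogue admin drops the incriminating last event, substitutes a benign
# one, and recomputes every hash -- the forged log still verifies.
forged = build_chain([{"action": "login"}, {"action": "export"}, {"action": "view"}])
assert is_internally_consistent(forged)
```

    The chain only detects *accidental* corruption; against an attacker who can write to the database, verification must compare against something they cannot rewrite.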

    I needed cryptographic proof of log integrity for SecureVault, so I stepped back and built Attest to solve it. It’s an open-source, multi-tenant audit logging service that makes history rewrites mathematically detectable.

    It works by combining strict cryptographic hash chaining (each event's hash incorporates the hash of the previous event) with a background worker that periodically anchors the "Chain Head" to an external, append-only system (like Git). To rewrite history without detection, an attacker would have to compromise both the database and the external Git repository simultaneously.
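    The anchoring idea can be sketched in a few lines (simplified: real events would carry structured payloads, and the anchor would live in a Git commit rather than a variable). Once the chain head is recorded externally, recomputing the chain after tampering no longer helps.

```python
import hashlib

GENESIS = "0" * 64

def link(prev_hash: str, event: str) -> str:
    # Fold the next event into the running chain head.
    return hashlib.sha256((prev_hash + event).encode()).hexdigest()

def head_of(events):
    h = GENESIS
    for e in events:
        h = link(h, e)
    return h

events = ["user.login", "vault.export", "vault.delete"]

# The background worker periodically publishes this value to an
# external append-only store (e.g. a Git commit) -- the "anchor".
anchored_head = head_of(events)

# An attacker rewrites history and recomputes an internally valid
# chain, but its head no longer matches the anchored one.
tampered = ["user.login", "vault.export", "vault.view"]
assert head_of(tampered) != anchored_head
```

    Detection then reduces to comparing the database's current chain head against the most recent anchor; any rewrite between anchor points is caught at the next comparison.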

    The core trade-off: to guarantee strict serializability and a single linear hash chain, writes are serialized per project. In practice this maxes out around 25-30 writes/sec per project due to optimistic-locking contention. It is intentionally built for high-assurance security events where integrity matters more than raw throughput.
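    For readers asking what the optimistic-locking loop looks like: here is a hedged, in-memory sketch of the pattern (my reading of the described design, not Attest's actual code). The per-project row holds the chain head plus a version counter, and an append succeeds only if the version is unchanged since it was read, retrying otherwise.

```python
import hashlib

class ProjectChain:
    """In-memory stand-in for a per-project row holding (head, version)."""

    def __init__(self):
        self.head = "0" * 64
        self.version = 0

    def compare_and_swap(self, expected_version: int, new_head: str) -> bool:
        # Models something like:
        #   UPDATE chain SET head = ?, version = version + 1
        #   WHERE project_id = ? AND version = ?
        if self.version != expected_version:
            return False  # another writer appended first; caller must retry
        self.head = new_head
        self.version += 1
        return True

def append_event(chain: ProjectChain, event: str, max_retries: int = 5) -> str:
    for _ in range(max_retries):
        seen_version, seen_head = chain.version, chain.head
        new_head = hashlib.sha256((seen_head + event).encode()).hexdigest()
        if chain.compare_and_swap(seen_version, new_head):
            return new_head
    raise RuntimeError("too much write contention; retry later")

chain = ProjectChain()
append_event(chain, "user.login")
append_event(chain, "vault.export")
assert chain.version == 2
```

    Under contention every loser of the race must re-read the head and re-hash, which is where the per-project throughput ceiling comes from.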

    I would love to hear your brutal, honest feedback on the architecture, the threat model, or better ways to handle the optimistic locking approach without sacrificing strict ordering.

    Happy to answer any questions!