  • sgasser 9 hours ago
    Using LLM APIs but worried about sending client data? Built a proxy for that.

    An OpenAI-compatible proxy that masks personal data and secrets before requests reach your provider.

    Mask Mode (default):

      You send:      "Email sarah.chen@hospital.org about meeting Dr. Miller"
      LLM receives:  "Email <EMAIL_1> about meeting <PERSON_1>"
      You get back:  Original names restored in response
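
    Roughly the round trip, as a minimal sketch (mask_pii/restore_pii and the
    placeholder bookkeeping here are illustrative; the real detection is
    Presidio, not a regex):

      import re

      def mask_pii(text):
          """Replace detected entities with numbered placeholders, keep a map."""
          mapping = {}
          # Stand-in detection: a bare email regex instead of full Presidio.
          for i, email in enumerate(re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text), 1):
              placeholder = f"<EMAIL_{i}>"
              mapping[placeholder] = email
              text = text.replace(email, placeholder)
          return text, mapping

      def restore_pii(text, mapping):
          """Swap placeholders in the model's reply back to the originals."""
          for placeholder, original in mapping.items():
              text = text.replace(placeholder, original)
          return text

      masked, mapping = mask_pii("Email sarah.chen@hospital.org about the meeting")
      # masked == "Email <EMAIL_1> about the meeting"; the proxy sends that
      # upstream, then runs restore_pii() on the completion before you see it.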
    
    Route Mode (if you run a local LLM):

      Requests with PII  →  Local LLM
      Everything else    →  Cloud
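
    The dispatch is essentially the following (endpoint URLs and the PII check
    are placeholders; the proxy's real check is Presidio-based):

      import re

      LOCAL_LLM = "http://localhost:11434/v1"   # e.g. an Ollama endpoint
      CLOUD_LLM = "https://api.openai.com/v1"

      def contains_pii(prompt):
          # Stand-in check: email regex only; the proxy detects far more.
          return re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", prompt) is not None

      def pick_upstream(prompt):
          """Requests with PII go local, everything else goes to the cloud."""
          return LOCAL_LLM if contains_pii(prompt) else CLOUD_LLM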
    
    What it catches:

      PII: Names, emails, phones, credit cards, IBANs, IPs, locations (24 languages)
      Secrets: Private keys, API keys (OpenAI, AWS, GitHub), JWT tokens
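
    Secret detection can be done with well-known prefix patterns; the patterns
    below are the public key formats (sk- for OpenAI, AKIA for AWS access
    keys, ghp_ for GitHub PATs, eyJ for JWTs), not necessarily the proxy's
    exact rule set:

      import re

      SECRET_PATTERNS = {
          "openai_key":     re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
          "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
          "github_token":   re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
          "jwt":            re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b"),
      }

      def find_secrets(text):
          """Return the names of any secret types present in the text."""
          return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]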
    
    Uses Microsoft Presidio for PII detection. ~500MB RAM, 10-50ms per request.
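
    To see what Presidio flags on its own, the standard analyzer/anonymizer
    pair looks like this (note Presidio's default placeholders are unnumbered
    entity types; the <EMAIL_1>-style numbering is the proxy's own layer):

      # pip install presidio-analyzer presidio-anonymizer (plus a spaCy model)
      from presidio_analyzer import AnalyzerEngine
      from presidio_anonymizer import AnonymizerEngine

      text = "Email sarah.chen@hospital.org about meeting Dr. Miller"
      results = AnalyzerEngine().analyze(text=text, language="en")

      anonymized = AnonymizerEngine().anonymize(text=text, analyzer_results=results)
      print(anonymized.text)  # e.g. "Email <EMAIL_ADDRESS> about meeting Dr. <PERSON>"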

    Works with Cursor, Open WebUI, LangChain, or any OpenAI-compatible client.
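
    Since it speaks the OpenAI API, any client that lets you override the base
    URL should work unchanged; with the openai Python SDK, for example (the
    proxy address here is assumed):

      from openai import OpenAI

      client = OpenAI(
          base_url="http://localhost:8080/v1",  # wherever the proxy listens
          api_key="your-provider-key",          # presumably passed upstream
      )

      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[{"role": "user", "content": "Email sarah.chen@hospital.org"}],
      )
      print(resp.choices[0].message.content)  # placeholders already restored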

    Docs: https://pasteguard.com/docs

    Feedback on edge cases welcome.