The architecture that makes this work:

Metal-accelerated vector search -- Embeddings live directly in unified memory (MTLBuffer). Zero CPU-GPU copy overhead. Adaptive SIMD4/SIMD8 kernels + GPU-side bitonic sort = sub-millisecond search on 10K+ vectors (vs ~100ms CPU). This isn't just "faster" -- it enables interactive search UX that wasn't possible before.
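To make the unified-memory point concrete, here is a minimal sketch (not Wax's internal buffer layout; the dimension and corpus size are assumed) of placing embeddings in a shared MTLBuffer so GPU kernels read the same memory the CPU wrote, with no staging copy:

```swift
import Metal

// Minimal sketch of the unified-memory idea (illustrative, not Wax internals).
let device = MTLCreateSystemDefaultDevice()!
let dimension = 768          // assumed embedding width
let capacity = 10_000        // assumed corpus size
let byteLength = capacity * dimension * MemoryLayout<Float>.stride

// .storageModeShared maps the buffer into unified memory on Apple Silicon,
// so there is no separate CPU-side array to upload.
let embeddings = device.makeBuffer(length: byteLength,
                                   options: .storageModeShared)!

// CPU-side writes are immediately visible to compute kernels.
let floats = embeddings.contents().bindMemory(to: Float.self,
                                              capacity: capacity * dimension)
floats[0] = 0.42
```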
Atomic single-file storage (.mv2s) -- Everything in one crash-safe binary: embeddings, BM25 index, metadata, compressed payloads. Dual-header writes with generation counters = kill -9 safe. Sync via iCloud, email it, commit to git. The file format is deterministic -- identical input produces byte-identical output.
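For illustration only, a sketch of how a dual-header commit with generation counters can be made kill -9 safe; the struct fields, slot size, and layout here are assumptions, not the actual .mv2s format:

```swift
import Foundation

// Two fixed header slots at the front of the file. The writer bumps the
// generation and overwrites only the *older* slot, so a crash mid-write
// leaves the other slot valid; readers pick the intact header with the
// highest generation.
struct CommitHeader {
    var generation: UInt64
    var payloadOffset: UInt64
    var payloadLength: UInt64
    var checksum: UInt64
}

func commit(_ header: CommitHeader, to file: FileHandle, slotSize: UInt64 = 64) throws {
    // Even generations land in slot 0, odd in slot 1, so the previously
    // committed header is never touched.
    let offset = header.generation % 2 == 0 ? 0 : slotSize
    let bytes = withUnsafeBytes(of: header) { Data($0) }
    try file.seek(toOffset: offset)
    try file.write(contentsOf: bytes)
    try file.synchronize()   // flushed to disk before this header becomes "current"
}
```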
Query-adaptive hybrid fusion -- Four parallel search lanes (BM25, vector, timeline, structured memory). Lightweight classifier detects intent ("when did I..." → boost timeline, "find documentation about..." → boost BM25). Reciprocal Rank Fusion with deterministic tie-breaking = identical queries always return identical results.
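A small sketch of Reciprocal Rank Fusion with deterministic tie-breaking, to show why identical queries always produce identical rankings; the lane weights and k value are illustrative defaults, not Wax's parameters:

```swift
// Fuse per-lane rankings (doc IDs ordered best-first) into one ranked list.
func fuse(lanes: [[String]], weights: [Double], k: Double = 60) -> [String] {
    var fused: [String: Double] = [:]
    for (lane, weight) in zip(lanes, weights) {
        for (rank, docID) in lane.enumerated() {
            // Classic RRF contribution, scaled by the intent-derived lane weight.
            fused[docID, default: 0] += weight / (k + Double(rank + 1))
        }
    }
    return fused
        .sorted { a, b in
            if a.value != b.value { return a.value > b.value }  // higher fused score first
            return a.key < b.key                                // deterministic tie-break on ID
        }
        .map(\.key)
}

// e.g. a "when did I..." query might weight the timeline lane higher:
// fuse(lanes: [bm25Hits, vectorHits, timelineHits], weights: [0.8, 1.0, 1.6])
```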
Photo/Video RAG -- Index your photo library with OCR, captions, GPS binning, per-region embeddings. Query "find that receipt from the restaurant" searches text, visual similarity, and location simultaneously. Videos get segmented with keyframe embeddings + transcript mapping. Results include timecodes for jump-to-moment navigation. All offline -- iCloud-only photos get metadata-only indexing.

Swift 6.2 strict concurrency -- Every orchestrator is an actor. Thread safety proven at compile time, not runtime. Zero data races, zero @unchecked Sendable, zero escape hatches.
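As a toy illustration of the actor-based design (this is the shape of the pattern, not Wax source):

```swift
// Actor isolation means the compiler rejects unsynchronized access to
// `entries` from other threads, so race freedom is checked at build time.
actor MemoryStore {
    private var entries: [String: [Float]] = [:]

    func insert(id: String, embedding: [Float]) {
        entries[id] = embedding
    }

    var count: Int { entries.count }
}

// Callers hop through the actor with await -- no locks, no @unchecked Sendable:
// let store = MemoryStore()
// await store.insert(id: "doc-1", embedding: [0.1, 0.2])
```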
Deterministic context assembly -- Same query + same data = byte-identical context every time. Three-tier surrogate compression (full/gist/micro) adapts based on memory age. Bundled cl100k_base tokenizer = no network, no nondeterminism.
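Purely as a sketch of the age-based tiering idea, separate from the real Wax pipeline: the full/gist/micro names come from the description above, but these thresholds are made-up placeholders.

```swift
import Foundation

enum Surrogate { case full, gist, micro }

// Pick a surrogate tier from the memory's age (placeholder cutoffs).
func surrogateTier(forAge age: TimeInterval) -> Surrogate {
    let day: TimeInterval = 86_400
    if age < 7 * day { return .full }    // recent memories keep their full text
    if age < 90 * day { return .gist }   // mid-term memories compress to a gist
    return .micro                        // old memories shrink to a micro summary
}
```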
```swift
import Wax

let brain = try await MemoryOrchestrator(at: URL(fileURLWithPath: "brain.mv2s"))

// Index
try await brain.remember("User prefers dark mode, gets headaches from bright screens")

// Retrieve
let context = try await brain.recall(query: "user display preferences")
// Returns relevant memories with source attribution, ready for LLM context
```
What makes this different:
- Zero dependencies on cloud infrastructure -- No API keys, no vendor lock-in, no telemetry
- Production-grade concurrency -- Not "it works in my tests," but compile-time proven thread safety
- Multimodal from the ground up -- Text, photos, videos indexed with shared semantics
- Performance that unlocks new UX -- Sub-millisecond latency enables real-time RAG workflows
## Wax Performance (Apple Silicon, as of Feb 17, 2026)
- 0.84ms vector search at 10K docs (Metal, warm cache)
- 9.2ms first-query after cold-open for vector search
- ~125x faster than CPU (105ms) and ~178x faster than SQLite FTS5 (150ms) in the same 10K benchmark
- 17ms cold-open → first query overall
- 10K ingest in 7.756s (~1289 docs/s) with hybrid batched ingest
- 0.103s hybrid search on 10K docs
- Recall path: 0.101–0.103s (smoke/standard workloads)
Built for: Developers shipping AI-native apps who want RAG without the infrastructure overhead. Your data stays local, your users stay private, your app stays fast.

The storage format and search pipeline are stable. The API surface is early but functional. If you're building RAG into Swift apps, I'd love your feedback.
GitHub: https://github.com/christopherkarani/Wax
Star it if you're tired of spinning up vector databases for what should be a library call.