I've documented everything here: https://github.com/alichherawalla/off-grid-mobile-ai/blob/ma...
llama.cpp is compiled as a native Android library via the NDK and linked into React Native through a custom JSI bridge. GGUF models are loaded straight into memory. On Snapdragon devices we use QNN (Qualcomm Neural Network) for hardware acceleration, with an OpenCL GPU fallback on everything else and CPU-only as a last resort.
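To give a rough idea of the shape, here's a TypeScript sketch of the JS-side surface and the fallback chain. All names here (`LlamaBridge`, `availableBackends`, `loadModel`, `complete`) are hypothetical, not the repo's actual API:

```typescript
// Hypothetical JSI module surface -- illustration only, not the repo's API.
type Backend = 'qnn' | 'opencl' | 'cpu';

interface LlamaBridge {
  // The C++ side probes hardware support when the module is installed.
  availableBackends(): Backend[];
  // Loads a GGUF file and returns an opaque model handle.
  loadModel(path: string, backend: Backend): number;
  complete(handle: number, prompt: string): Promise<string>;
}

// In the real app this object is installed onto the JS global by the
// native JSI installer; declared here so the sketch type-checks.
declare const llama: LlamaBridge;

// Fallback chain: QNN on Snapdragon, OpenCL GPU elsewhere, CPU last.
function pickBackend(): Backend {
  const order: Backend[] = ['qnn', 'opencl', 'cpu'];
  const available = new Set(llama.availableBackends());
  return order.find((b) => available.has(b)) ?? 'cpu';
}

export async function ask(modelPath: string, prompt: string): Promise<string> {
  const handle = llama.loadModel(modelPath, pickBackend());
  return llama.complete(handle, prompt);
}
```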
Image gen is Stable Diffusion running on the NPU where available. Vision uses SmolVLM and Qwen3-VL. Voice is on-device Whisper.
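Very roughly, the per-modality routing looks like this (hypothetical table; the engine strings are placeholders and the real selection happens in the native layer):

```typescript
// Hypothetical routing table -- names and engine strings are placeholders.
type Modality = 'chat' | 'imageGen' | 'vision' | 'speech';

interface Route {
  engine: string;    // which native runtime handles this modality
  models: string[];  // which model families it loads
  preferNpu: boolean;
}

export const routing: Record<Modality, Route> = {
  chat:     { engine: 'llama.cpp',        models: ['gguf'],                preferNpu: false },
  imageGen: { engine: 'stable-diffusion', models: ['sd'],                  preferNpu: true  },
  vision:   { engine: 'vlm',              models: ['SmolVLM', 'Qwen3-VL'], preferNpu: false },
  speech:   { engine: 'whisper',          models: ['whisper'],             preferNpu: false },
};
```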
The model browser filters by your device's RAM so you never download something your phone can't run. The whole thing is MIT licensed - happy to answer anything about the architecture.
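For the RAM filter, a sketch of the idea (the catalog shape and the 70% headroom margin are my assumptions; on React Native the total-RAM number could come from something like react-native-device-info's getTotalMemory()):

```typescript
// Hypothetical catalog entry -- field names are illustrative.
interface ModelEntry {
  name: string;
  fileSizeBytes: number; // GGUF size on disk
  minRamBytes: number;   // estimated working set: weights + KV cache + runtime
}

// Only surface models whose working set fits in usable RAM, leaving
// headroom for the OS and the app itself. The 0.7 margin is an assumption.
export function runnableModels(catalog: ModelEntry[], totalRamBytes: number): ModelEntry[] {
  const usable = totalRamBytes * 0.7;
  return catalog.filter((m) => m.minRamBytes <= usable);
}
```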