We're working to make Qualcomm NPUs a first-class deployment target for PyTorch. Developers write a Python function that runs a PyTorch model, then use our `@compile` decorator to transpile the model into a Qualcomm-specific C++ implementation (DLC), which is compiled into a self-contained shared library.
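To make the workflow concrete, here is a minimal, self-contained sketch of what that decorator pattern looks like. The `compile` stand-in below, its `target` parameter, and the `run_model` function are all hypothetical illustrations, not the real API: in the actual tool the decorator would trace the PyTorch function and emit the DLC/C++ artifact rather than simply calling through.

```python
# Hypothetical sketch of the intended workflow. The decorator name, its
# import path, and its parameters are assumptions for illustration only.

def compile(func=None, *, target="qualcomm-npu"):
    """Stand-in for the transpiling decorator. The real tool would trace
    the wrapped PyTorch function, emit Qualcomm-specific C++ (DLC), and
    build it into a self-contained shared library."""
    def wrap(f):
        def wrapper(*args, **kwargs):
            # Stand-in behavior: just run the original Python function.
            return f(*args, **kwargs)
        wrapper.target = target  # record the requested deployment target
        return wrapper
    # Support both @compile and @compile(target=...) usage.
    if func is not None:
        return wrap(func)
    return wrap


@compile(target="qualcomm-npu")
def run_model(x):
    # In real use this body would run a PyTorch model, e.g. `return model(x)`.
    # A placeholder computation keeps the sketch self-contained.
    return [v * 2 for v in x]


print(run_model([1, 2, 3]))  # the decorated function still runs in Python
```

The key design point the sketch shows is that the user-facing code stays an ordinary Python function; the decorator is the single point where the Qualcomm-specific build pipeline attaches.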
The Qualcomm NPUs are fast: in the benchmark linked above, they run 1.8x faster than ONNX Runtime.