QuarterBit AXIOM compresses training memory 15x. Same model. Same quality. Fraction of the hardware.
RESULTS:
Llama 70B: 840GB → 53GB (11 GPUs → 1 GPU) = 90% savings
Llama 13B: 156GB → 9GB (FREE on Kaggle T4) = 100% savings
91% energy reduction vs standard training. 100% trainable weights (not LoRA/adapters). 3 lines of code.

HOW IT WORKS:
from quarterbit import axiom
model = axiom(model)  # wrap your existing model in place
model.cuda()
TRY IT: pip install quarterbit
Demo (FREE): https://www.kaggle.com/code/kyleclouthier/quarterbit-axiom-1...
Benchmarks: https://quarterbit.dev
AXIOM uses a novel weight representation that combines lossless compression with a built-in optimizer. Weights are stored at 0.62 bytes/param vs 4 bytes/param in FP32, and gradient updates happen directly in compressed space.
Not quantization-aware training or LoRA — every parameter fully trainable, convergence matches AdamW.
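To make "updates in compressed space" concrete, here's a toy sketch in plain Python. It is not AXIOM's actual format or algorithm (which isn't described here; 0.62 bytes/param is far below the int8 used below) — it only illustrates the general idea that the full-precision tensor never persists between steps: weights live in a compact code form and are decompressed only transiently during each update.

```python
# Toy illustration (NOT QuarterBit's real method): SGD on weights kept
# in a compressed form (int8 codes + one shared float scale).

def quantize(ws, bits=8):
    """Map floats to signed integer codes with a shared per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in ws) / qmax or 1.0
    return [round(w / scale) for w in ws], scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

def sgd_step_compressed(codes, scale, grads, lr=0.1):
    """Decompress, apply the gradient, recompress — the FP32 view is transient."""
    ws = dequantize(codes, scale)
    ws = [w - lr * g for w, g in zip(ws, grads)]
    return quantize(ws)

# Minimize f(w) = (w - 3)^2 element-wise; grad = 2(w - 3).
codes, scale = quantize([0.0, 5.0, -2.0])
for _ in range(200):
    ws = dequantize(codes, scale)
    grads = [2 * (w - 3.0) for w in ws]
    codes, scale = sgd_step_compressed(codes, scale, grads)

final = dequantize(codes, scale)
print(final)  # all entries land near 3.0, within quantization error
```

The point of the sketch: training converges even though only the compressed codes survive between steps, so resident memory scales with the compressed size rather than 4 bytes/param.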
Solo founder from Canada. Self-taught CUDA/ML. Applying to YC S26.
Happy to answer questions.