Cloudflare Images requires you to pre-define "variants" in the dashboard before you can use them. Need an 800x600 crop? Go to the dashboard, create a variant, name it, then reference it in your URL like imagedelivery.net/<account_hash>/<image_id>/my-variant. Every time you need a new size or quality combination, you're back in the dashboard. It's a configure → register → use flow.
img-src.io lets you specify transformations directly in the URL query string — no pre-registration needed. img-src.io/i/john/photo.webp?w=800&h=600&q=85&fit=cover just works on the first request. You can freely combine 20+ parameters (blur, sharpen, rotate, grayscale, etc.) without touching any dashboard. It's an upload → use flow.
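To make that concrete, a transform URL is just the base image URL plus a query string. A minimal sketch (w/h/q/fit are the parameter names from the example above; other names like blur or grayscale are illustrative stand-ins here):

```
// Minimal sketch: a transform URL is the image URL plus a query string.
// w/h/q/fit match the example above; additional parameter names are illustrative.
function transformUrl(
  user: string,
  file: string,
  params: Record<string, string | number>
): string {
  const qs = new URLSearchParams(
    Object.entries(params).map(([k, v]) => [k, String(v)])
  ).toString();
  return `https://img-src.io/i/${user}/${file}?${qs}`;
}

// -> https://img-src.io/i/john/photo.webp?w=800&h=600&q=85&fit=cover
transformUrl("john", "photo.webp", { w: 800, h: 600, q: 85, fit: "cover" });
```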
We do offer presets (?p:thumbnail) which work similarly to Cloudflare's variants — named parameter sets you can reuse across URLs. The difference is that presets are optional and managed via API, not a prerequisite. You can go ad-hoc when prototyping and introduce presets later when your transformation needs stabilize, rather than being forced to define every combination upfront in a dashboard.
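Registering a preset is a single API call. Roughly (the endpoint path and body fields below are simplified, not the literal API shape; the docs have the real one):

```
// Simplified sketch of registering a preset via the REST API.
// Endpoint path and body fields are illustrative; see docs.img-src.io
// for the exact request shape.
const res = await fetch("https://api.img-src.io/v1/presets", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.IMG_SRC_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: "thumbnail",
    params: { w: 200, h: 200, fit: "cover", q: 80 },
  }),
});
// Afterwards any image can reference it: .../photo.webp?p:thumbnail
```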
In fact, img-src.io's infrastructure runs on Cloudflare, and I initially considered wrapping Cloudflare Images for the transformation layer. However, the cost of scaling became prohibitive—if 100k free-tier users each transform 1k images, I'd be bankrupt. So I built a custom transformation service using libvips (which Cloudflare Images itself uses under the hood) and deployed it on Cloudflare Containers.
You can see the architecture here: https://docs.img-src.io/introduction#how-it-works
To be clear, this service isn't targeting folks like yourself who can spin up image transformation infrastructure in their sleep. It's for developers who find infrastructure setup tedious or daunting and just want simple image transformation with CDN caching out of the box.
Thanks for the feedback
After years of building web services on AWS, I got tired of setting up the same image optimization stack over and over: CloudFront + Lambda@Edge + S3, IAM policies, cache invalidation — 2-3 hours of config per project, every single time.
All I wanted was: upload an image, get a URL, add ?w=800 to resize it. Why did that require stitching together 3 AWS services?
So I built img-src (https://img-src.io).
Upload an image, get a global CDN URL. Transform on the fly with query params:
- https://img-src.io/i/user/photo.jpg ← original
- https://img-src.io/i/user/photo.jpg?w=800 ← resize
- https://img-src.io/i/user/photo.webp?q=85 ← format + quality
- https://img-src.io/i/user/photo.avif?w=400&h=400&fit=cover
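That also makes responsive images trivial: a srcset is just the same URL with different w values. A quick sketch:

```
// Sketch: build a srcset by varying only ?w= on the same image URL.
const base = "https://img-src.io/i/user/photo.avif";
const widths = [400, 800, 1200];

const srcset = widths.map((w) => `${base}?w=${w} ${w}w`).join(", ");

const imgTag =
  `<img src="${base}?w=800" srcset="${srcset}" ` +
  `sizes="(max-width: 800px) 100vw, 800px" alt="">`;
```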
No CloudFront. No Lambda. No S3 bucket config.
How it works:
- Upload via dashboard or REST API
- Images stored on Cloudflare R2 (zero egress fees = unlimited bandwidth)
- CDN Worker serves from 200+ edge locations
- Rust/libvips container handles transcoding (WebP, AVIF, JPEG, PNG)
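In Worker terms, the serving path looks roughly like this (a schematic sketch: binding names, cache keys, and the container wiring are all simplified):

```
// Schematic CDN Worker: edge cache -> R2 -> libvips transcoder.
interface Env {
  BUCKET: R2Bucket;    // original images on R2
  TRANSCODER: Fetcher; // Rust/libvips container (binding wiring simplified)
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    const cache = caches.default;

    // 1. Serve from the edge cache if this exact URL was already transformed.
    const cached = await cache.match(request);
    if (cached) return cached;

    // 2. Pull the original from R2 (zero egress cost).
    const key = url.pathname.replace(/^\/i\//, "");
    const original = await env.BUCKET.get(key);
    if (!original) return new Response("Not found", { status: 404 });

    let response: Response;
    if (!url.search) {
      // 3. No query params: serve the original as-is.
      response = new Response(original.body, {
        headers: { "Cache-Control": "public, max-age=31536000" },
      });
    } else {
      // 4. Otherwise hand the bytes + params to the transcoder container.
      response = await env.TRANSCODER.fetch(url.toString(), {
        method: "POST",
        body: original.body,
      });
    }

    // 5. Cache at the edge for the next request.
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};
```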
Tech stack:
- Frontend: React 19 + Vite + TailwindCSS
- Backend: TypeScript + Hono on Cloudflare Workers
- CDN: Cloudflare Workers + R2 + KV
- Transcoder: Rust + Axum + libvips (Docker container)
- Auth: Clerk (JWT/JWKS)
- Docs: Mintlify (https://docs.img-src.io)
Pricing:
- Free: 10GB storage, 1K transforms/month, unlimited bandwidth
- Pro ($5/month): 50GB storage, 10K transforms/month, unlimited bandwidth
The unlimited bandwidth is possible because R2 has zero egress costs. Pro plan infrastructure cost is ~$0.85/user/month.
I built this solo, pair-programming with Claude Code ~3 hours/day. My role was mostly testing and providing feedback on the generated code.
API docs: https://docs.img-src.io
OpenAPI spec available for SDK generation (TypeScript SDK in progress).
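Until the SDK lands, calling the API directly is a few lines. A rough sketch (endpoint path and response fields simplified; the OpenAPI spec has the exact contract):

```
// Rough sketch of an upload via the REST API. Endpoint path and response
// fields are simplified; the OpenAPI spec has the exact contract.
import { readFile } from "node:fs/promises";

async function upload(localPath: string): Promise<string> {
  const form = new FormData();
  form.append("file", new Blob([await readFile(localPath)]), "photo.jpg");

  const res = await fetch("https://api.img-src.io/v1/images", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.IMG_SRC_API_KEY}` },
    body: form,
  });

  const { url } = await res.json(); // CDN URL, e.g. https://img-src.io/i/user/photo.jpg
  return url; // append ?w=800 etc. to transform
}
```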
Would love feedback from HN, especially on:
- Pricing: is $5/month the right price point vs Cloudinary ($99+), ImageKit ($49)?
- Missing features: what would make you switch from your current setup?
- API design: anything you'd change?
- Agent workflows: if your AI agent needed to handle images, what would the ideal integration look like?