I’m sharing an open-source project out of our lab called InkSight (code-named inco). Like many of you, we found ourselves constantly distracted by notifications, glowing monitors, and endless doomscrolling on our phones. We wanted a way to consume high-quality, low-frequency information (like a Stoic quote, a minimalist daily briefing, or a quick recipe) without the cognitive load of a traditional screen.
So, we built InkSight—an open-source "slow tech" desktop companion. It uses an ESP32-C3 and an e-ink display to fetch customized LLM-generated content.
The Tech Stack & Architecture:
Hardware: ESP32-C3 running firmware written in C/C++ (Arduino framework). It supports common 2.13" / 1.54" e-ink panels.
Backend: Python & FastAPI. It acts as the brain, parsing user-defined JSON prompt templates and calling any OpenAI-compatible LLM (OpenAI, DeepSeek, Kimi, etc.); there's a rough sketch of this flow just below the list.
Web Dashboard: Pure HTML/JS/CSS for easy configuration without digging into the code.
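To make the template idea concrete, here is a minimal sketch of what the backend does with a prompt template, assuming the openai Python SDK pointed at any OpenAI-compatible endpoint. The field names (id, system, user, model), the daily_stoic example, and the LLM_BASE_URL / LLM_API_KEY env vars are illustrative, not InkSight's actual schema:

    # Hypothetical prompt template + backend call; field names are illustrative,
    # not the repo's actual schema. Works against any OpenAI-compatible endpoint.
    import os
    from openai import OpenAI

    template = {
        "id": "daily_stoic",  # illustrative template id
        "system": "You are a concise Stoic philosopher.",
        "user": "Give me one short Stoic reflection for {date}, under 40 words.",
        "model": "gpt-4o-mini",
    }

    client = OpenAI(
        base_url=os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1"),
        api_key=os.environ.get("LLM_API_KEY", ""),
    )

    def render(template: dict, **fields) -> str:
        """Fill the user prompt, call the configured LLM, return plain text for the e-ink screen."""
        resp = client.chat.completions.create(
            model=template["model"],
            messages=[
                {"role": "system", "content": template["system"]},
                {"role": "user", "content": template["user"].format(**fields)},
            ],
        )
        return resp.choices[0].message.content.strip()

    # e.g. render(template, date="2024-06-01")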
What we think HN might find interesting (The Optimizations): Instead of the ESP32 waking up, calling the LLM directly, and waiting 5-10 seconds for a response (which drains the LiPo battery significantly), we built a caching layer in the backend. The backend pre-generates and caches the content. When the ESP32 wakes up, it fetches the cached payload in well under a second, updates the e-ink screen, and immediately goes back to deep sleep. This lets it run for 3 to 6 months on a single charge and stay resilient against temporary network drops.
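Conceptually, the cache is nothing exotic. Here's a simplified sketch of the idea, not the repo's actual implementation (the endpoint path, refresh interval, and generate_content() are made up): serve whatever is cached immediately, and regenerate in the background when it's stale.

    # Simplified sketch of the caching idea; not the real InkSight code.
    # Endpoint path, refresh interval, and generate_content() are illustrative.
    import time
    from fastapi import BackgroundTasks, FastAPI

    app = FastAPI()
    CACHE = {"text": "booting...", "generated_at": 0.0}
    REFRESH_SECONDS = 3600  # regenerate at most once an hour (made-up value)

    def generate_content() -> str:
        # In the real backend this is where the slow (5-10 s) LLM call lives.
        return "placeholder content for the e-ink panel"

    def refresh_cache() -> None:
        try:
            CACHE["text"] = generate_content()
            CACHE["generated_at"] = time.time()
        except Exception:
            pass  # keep serving the last good payload if the LLM or network is down

    @app.get("/api/display")
    def get_display_payload(background_tasks: BackgroundTasks):
        # Serve the cached payload immediately so the ESP32 can update the
        # screen and go back to deep sleep; refresh in the background if stale.
        if time.time() - CACHE["generated_at"] > REFRESH_SECONDS:
            background_tasks.add_task(refresh_cache)
        return CACHE

Serving stale-but-instant content and refreshing after the response is what keeps the wake window short: the device never pays the LLM latency itself, and a failed refresh just means it shows the last good payload.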
Deployment: We wanted to make it accessible to everyone, so the backend can be deployed to Vercel with one click (for free), and the hardware uses a captive portal for zero-code WiFi and API configuration. Of course, it’s completely self-hostable if you prefer to run it locally.
Repo: https://github.com/datascale-ai/inksight
I’d love to hear your thoughts on the "slow tech" philosophy, any feedback on the architecture, or ideas for new content templates! I'll be hanging around the thread to answer any questions.