  • visioninmyblood2 hours ago
    LLM-based agents handle text incredibly well, but images, videos, or PDFs with visual content are hard to interpret. mm-ctx gives your CLI agent multi-modal skills.

    Try it interactively in Spaces: vlm-run/mm-ctx

    Readme: https://vlm-run.github.io/mm/
    PyPI: https://pypi.org/project/mm-ctx
    SKILL.md: https://github.com/vlm-run/skills/blob/main/skills/mm-cli-sk...

    mm-ctx is meant to feel familiar: the UNIX tools we already love (find/cat/grep/wc), rebuilt for file types LLMs can't read natively and designed to work with agents via the CLI.

    - mm grep "invoice #1234" ~/Downloads searches across PDFs and returns line-numbered matches
    - mm cat <document>.pdf returns a metadata description of the file
    - mm cat <photo>.jpg returns a caption of the photo
    - mm cat <video>.mp4 returns a caption of the video
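    Because the matches are line-numbered, an agent can post-process them the same way it would grep -n output. A minimal sketch, assuming a hypothetical path:line:text record format (the actual output shape may differ; check the README):

```python
# Hypothetical: assumes `mm grep` emits grep -n style "path:line:text"
# records, one match per line. The real format may differ.

def parse_matches(output: str) -> list[dict]:
    """Split line-numbered match records into structured dicts."""
    records = []
    for raw in output.strip().splitlines():
        # Split on the first two colons only, so match text may contain ":".
        path, line, text = raw.split(":", 2)
        records.append({"path": path, "line": int(line), "text": text})
    return records

sample = "~/Downloads/acme.pdf:12:Invoice #1234 due 2024-07-01"
print(parse_matches(sample))
```

    Structured records like these are easy to feed back into an agent loop (e.g. "open the file at the matched line").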

    A few things we obsessed over:

    - Speed: Rust core for the hot paths
    - Local-first, BYO model: uses any OpenAI-compatible endpoint (Ollama, vLLM/SGLang, LMStudio) with any multimodal LLM (Gemma4, Qwen3.5, GLM-4.6V)
    - Composable: stdin + structured outputs
    - Drops into any agent via mm-cli-skills: Claude Code, Codex, Gemini CLI, OpenClaw
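    "Any OpenAI-compatible endpoint" means the backend only needs to accept the standard chat-completions payload with inline image content. A hedged sketch of what such a request looks like (payload construction only, no network call; the model name is a placeholder, and for Ollama the base URL would typically be http://localhost:11434/v1):

```python
import base64
import json

def vision_request(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build an OpenAI-style chat-completions payload with an inline image."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,  # placeholder; any multimodal model served locally
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

payload = vision_request("qwen-vl", "Caption this photo.", b"\xff\xd8...")
print(json.dumps(payload)[:80])
```

    Any client that speaks this wire format (Ollama, vLLM, SGLang, LMStudio) can serve as the model behind the CLI.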

    We’d love to hear your feedback, especially on the CLI and on which file types and workflows you’d like to see next.