    Hi HN,

I built this because I mainly wanted an AI summary feature to pull out the action items discussed in a meeting, a one-on-one conversation, or even a voice memo to yourself. The core design constraint is that everything works offline.

    Fission runs entirely on-device:

Transcription: Uses Vosk for offline STT (see the first sketch after this list).

Summarization: Runs a quantized Qwen 0.6B LLM locally to summarize the transcript (second sketch below).

    Stack: React Native / Expo.
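
For anyone curious what the transcription side can look like in a React Native app, here's a minimal sketch. I'm assuming the community react-native-vosk binding and a loadModel/start/stop/onResult surface; the names and model path are illustrative and may not match what Fission actually does:

    // Hypothetical sketch: on-device STT via a Vosk binding.
    // Assumes react-native-vosk; method names may differ in the real API.
    import Vosk from 'react-native-vosk';

    const vosk = new Vosk();

    export async function transcribeLive(onText: (text: string) => void) {
      // A small acoustic model ships with the app; path is illustrative.
      await vosk.loadModel('model-en-us');

      // Recognized text arrives through an event subscription.
      const sub = vosk.onResult((result: string) => onText(result));

      await vosk.start();

      // Return a cleanup function the UI can call when recording ends.
      return async () => {
        await vosk.stop();
        sub.remove();
      };
    }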
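
And the summarization step, sketched with llama.rn (a React Native binding for llama.cpp) driving a quantized Qwen GGUF. The initLlama/completion calls are my assumption about the binding's API, and the file name, prompt, and sampling parameters are made up:

    // Hypothetical sketch: local summarization with a quantized Qwen model.
    // Assumes the llama.rn binding; all parameters are illustrative.
    import { initLlama } from 'llama.rn';

    export async function summarize(transcript: string): Promise<string> {
      const context = await initLlama({
        model: 'file:///models/qwen-0.6b-q4_k_m.gguf', // hypothetical path
        n_ctx: 2048,
      });

      const { text } = await context.completion({
        prompt:
          'List the action items from this conversation:\n\n' + transcript,
        n_predict: 256,
        temperature: 0.3,
      });

      // Free the model's memory once the summary is produced.
      await context.release();
      return text.trim();
    }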

It’s open source (GPLv3). I’d love feedback on how inference performs on older devices. I'm currently working on better options for transcription.