35 points by meetpateltech 4 hours ago | 12 comments
  • sync 3 hours ago
    Unfortunate, significant price increase for a 'lite' model: $0.25 IN / $1.50 OUT vs. Gemini 2.5 Flash-Lite $0.10 IN / $0.40 OUT.
  • vlmutolo 2 hours ago
    Lots of comments about the price change, but Artificial Analysis reports that 3.1 Flash-Lite (reasoning) used fewer than half the tokens of 2.5 Flash-Lite (reasoning).

    This will likely bring the cost below 2.5 Flash-Lite for many tasks (depending on the ratio of input to output tokens).

    That said, AA also reports that 3.1 FL was 20% more expensive to run for their complete Intelligence index benchmark.

    The overall point is that cost is extremely task-dependent, and it doesn't work to just compare per-token prices: reasoning can burn enormous numbers of tokens, reasoning token usage varies by both task and model, and input/output ratios likewise vary by task.
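    A back-of-the-envelope sketch of that point (prices are the ones quoted in this thread; the token counts are invented for illustration):

```python
# Rough per-task cost model. Prices are $ per million tokens, taken from
# this thread; the token counts below are hypothetical examples.

def cost(tokens_in, tokens_out, price_in, price_out):
    return (tokens_in * price_in + tokens_out * price_out) / 1_000_000

FL_25 = (0.10, 0.40)   # Gemini 2.5 Flash-Lite ($ IN, $ OUT)
FL_31 = (0.25, 1.50)   # Gemini 3.1 Flash-Lite ($ IN, $ OUT)

# Reasoning-heavy task: 1k input tokens; suppose 3.1 emits half the
# reasoning/output tokens that 2.5 does.
old = cost(1_000, 8_000, *FL_25)   # $0.00330
new = cost(1_000, 4_000, *FL_31)   # $0.00625

print(f"2.5 FL: ${old:.5f}  3.1 FL: ${new:.5f}")
```

    With these made-up numbers, halving output tokens does not offset a 3.75x output price; whether it does for a real workload depends entirely on the input/output mix and how far reasoning tokens actually drop.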

    • msp26 8 minutes ago
      many tasks don't need any reasoning
  • guerython 3 hours ago
    Flash-Lite’s $0.25/$1.50 price finally lets us run the translation+compliance queue without ripping through tokens. We push 400 req/s but keep a 20-second fuzzy cache of hashed prompts and only send the de-duplicated, heuristically filtered text so the model never re-processes the same boilerplate. The thinking-level knob is huge: level 1 by default gives us sub-200ms TTFB and we only bump to level 3 for flagged QA summaries. Anyone else pairing path-specific thinking levels with caches to keep high-frequency workloads sane?
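    A minimal sketch of the two ideas described above, with hypothetical names (this is not guerython's actual system): a short-TTL cache keyed on a hash of the normalized prompt, plus a thinking level chosen per request path before the call goes out.

```python
# Minimal sketch: short-TTL cache keyed by a hash of the normalized
# prompt, plus a per-path "thinking level". All names are hypothetical.

import hashlib
import time

CACHE_TTL = 20.0                 # seconds, matching the 20s window above
_cache: dict[str, tuple[float, str]] = {}

# Default to the cheapest thinking level; escalate only for flagged paths.
THINKING_LEVEL = {"translate": 1, "compliance": 1, "qa_summary": 3}

def normalize(prompt: str) -> str:
    # Cheap "fuzzy" normalization so trivially different boilerplate
    # hashes to the same key; a real system would filter more aggressively.
    return " ".join(prompt.lower().split())

def cached_call(path: str, prompt: str, model_call) -> str:
    key = hashlib.sha256(normalize(prompt).encode()).hexdigest()
    now = time.monotonic()
    hit = _cache.get(key)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]            # de-duplicated: never re-sent to the model
    level = THINKING_LEVEL.get(path, 1)
    result = model_call(prompt, thinking_level=level)
    _cache[key] = (now, result)
    return result
```

    `model_call` stands in for whatever client you use; the point is only that the hash lookup happens before any tokens are spent, and the thinking level is decided by the request path rather than per prompt.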
    • zacksiri 3 hours ago
      Yes, my workflows use caching intensively. It's the only way to keep things fast / economical.
  • k9294 3 hours ago
    You can test Gemini 3.1 Lite transcription capabilities in https://ottex.ai — the only dictation app supporting Gemini models with native audio input.

    We benchmarked it for real-life voice-to-text use cases:

                    <10s    10-30s   30s-1m    1-2m    2-3m
      Flash         2548     2732     3177     4583    5961
      Flash Lite    1390     1468     1772     2362    3499
      Faster by    1.83x    1.86x    1.79x   1.94x   1.70x
    
      (latency in ms, median over 5 runs per sample, non-streaming)
    
    Key takeaways:

    - 1.8x faster than Gemini 3 Flash on average

    - ~1.4 sec transcription time for short to medium recordings

    - ~$0.50/mo for heavy users (10h+ transcription)

    - Close to SOTA audio understanding and formatting instruction following

    - Multilingual: one model, 100+ languages

    Gemini is slowly making $15/month voice apps obsolete.

    • simianwords 3 hours ago
      You know what would be great? A light weight wrapper model for voice that can use heavier ones in the background.

      That much is easy but what if you could also speak to and interrupt the main voice model and keep giving it instructions? Like speaking to customer support but instead of putting you on hold you can ask them several questions and get some live updates

      • k9294 2 hours ago
        It's actually a nice idea - an always-on micro AI agent with voice-to-text capabilities that listens and acts on your behalf.

        Actually, I'm experimenting with this kind of stuff and trying to find a nice UX to make Ottex a voice command center - to trigger AI agents like Claude, open code to work on something, execute simple commands, etc.

    • stri8ted 2 hours ago
      Can you show some WER comparisons against other ASR models? Especially for non-English.
      • k9294 2 hours ago
        I've been experimenting with Gemini 3.1 Flash Lite and the quality is very good.

        I haven't found official benchmarks yet, but you can find Gemini 3 Flash word error rate benchmarks here: https://artificialanalysis.ai/speech-to-text/models/gemini — they are close to SOTA.

        I speak daily in both English and Russian and have been using Gemini 3 Flash as my main transcription model for a few months. I haven't seen any model that provides better overall quality in terms of understanding, custom dictionary support, instruction following, and formatting. It's the best STT model in my experience. Gemini 3 Flash has somewhat uncomfortable latency though, and Flash Lite is much better in this regard.

  • zacksiri 3 hours ago
    This is going to be a fun one to play with. I've been conducting tests on various models for my agentic workflow.

    I was just wishing they would make a new flash-lite model, these things are so fast. Unfortunately 2.5-flash and therefore 2.5-flash-lite failed some of my agentic workflows.

    If 3.1-flash-lite can do the job, this solves basically all latency issues for agentic workflows.

    I publish my benchmarks here in case anyone is interested:

    https://upmaru.com/llm-tests/simple-tama-agentic-workflow-q1...

    P.S: The pricing bump is quite significant, but still stomachable if it performs well.

  • rohansood15 3 hours ago
    For the last 2 years, startup wisdom has been that models will keep getting cheaper and better. Claude first, and now Gemini, have shown that's not the case.

    We priced an enterprise contract using Flash 1.5 pricing last summer, and today that contract would be unit-economics negative if we used Flash 3. Flash 2.5, and now Flash 3.1 Lite, barely break even.

    I predict open-source models and fine-tuning are going to make a real comeback this year for economic reasons.

    • xnx 2 hours ago
      > We priced an enterprise contract using Flash 1.5 pricing last summer,

      Interesting. Flash 1.5 was already a year old at that point.

    • simianwords 3 hours ago
      Not true. You just measure cost by the amount of money spent per task. I would argue that this lite version is equivalent to the older flash.
      • rohansood15 2 hours ago
        Yea, but there is a whole world of tasks for which Flash 2.5-lite was sufficiently intelligent. Given Google's deprecation policy, there will soon be no way to get that intelligence at that price.
        • simianwords 2 hours ago
          I hope they release models at every intelligence resolution, although the thinking-effort setting can be a good alternative.
    • dktp 2 hours ago
      Opus 4.5 became significantly cheaper than Opus 4.1
    • typs 3 hours ago
      I mean the same level of intelligence does get cheaper. People just care about being on the frontier. But if you track a single level of intelligence the price just drops and drops.
      • rohansood15 2 hours ago
        What's the cheaper alternative from Gemini for Flash-2.5-lite level intelligence when it gets deprecated on 22nd July 2026?
  • sh4jid 3 hours ago
    The Gemini Pro models just don't do it for me. But I still use 2.5 Flash Lite for a lot of my non-coding jobs, super cheap but great performance. I am looking forward to this upgrade!
    • simianwords 3 hours ago
      same - pro is usually a miss for me.
  • msp26 28 minutes ago
    What the fuck is this price hike? It was such a nice low end, fast model. Who needs 10 years of reasoning on this model size??

    I'm gonna switch some workflows to qwen3.5.

    There's a lot of tasks that benefit from just having a mildly capable LLM and 2.5 Flash Lite worked out of the box for cheap.

    Can we get flash lite lite please?

    Edit: Logan said: "I think open source models like Gemma might be the answer here"

    Implying that they're not interested in serving lower end Gemini models?

  • GodelNumbering 2 hours ago
    That's a 150% increase in input cost and a 275% increase in output cost over the same-sized previous-generation model (2.5-flash-lite).
  • xnx 2 hours ago
    I'm still clinging to gemini-2.0-flash, which I think is free for API use(?!).