130 points by steveharing1, 2 hours ago, 16 comments
  • dash2, 14 minutes ago
    Nah, I ain't reading that. If they can't be bothered to get a human to write it, it can't be that important. I'm glad for them though. Or sorry that happened.
  • pjmalandrino, 4 minutes ago
    Very impressive series of SLMs by IBM here.

    I have been using it with their Chunkless RAG concept and it fits very well! (For the curious: https://github.com/scub-france/Docling-Studio)

    I'm convinced that SLMs are a real part of the solution for truly integrated AI in processes...
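    A minimal sketch of the Docling side of that pipeline, using the docling Python package (the file name is a placeholder; the retrieval logic and the Granite model call are left out):

      from docling.document_converter import DocumentConverter

      # Parse a source document (PDF, DOCX, HTML, ...) into a structured
      # representation, so downstream RAG can work from the document layout
      # rather than from blind fixed-size chunks.
      converter = DocumentConverter()
      result = converter.convert("report.pdf")  # placeholder path or URL

      # Export the parsed document as Markdown for indexing or prompting.
      print(result.document.export_to_markdown())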

  • 2ndorderthought, 2 hours ago
    I test drove it yesterday. It's pretty impressive at 8b. Runs on commodity hardware quickly.

    Qwen3.6 35b-a3b is still my local champion, but I may use this for autocomplete and small tasks. Granite has recent training data, which is nice. If the other small models were fine-tuned on recent data I don't know if I would use this at all, but that alone makes it pretty decent.

    The 4b they released was not good for my needs, but it could probably handle tool calls or something.

    • vessenes, an hour ago
      Have you tried the Gemma 4 series, out of curiosity? I haven’t run a local model in a while, but the benchmarks look good. I’d take a free local tool-use model if it was relatively consistent.
      • v3ss0n, 30 minutes ago
        Qwen 3.6 burns it to the ground; it wasn't even a challenge. Gemma 4 seriously fails at tool calls and agentic work. It got all messed up after 2-3 turns of vibe coding.
        • xrd, 3 minutes ago
          How do you run it? vllm? llama.cpp?

          Can you share the parameters you use to enable tool calling and agentic usage?

          Or, at a higher level, some philosophies on what approaches you are using for tuning to get better tool calling and/or agentic usage?

          I'm having surprisingly good success with unsloth/Qwen3.6-27B-GGUF:Q4_K_M (love the unsloth guys) on my RTX3090/24GB using opencode as the orchestrator.

          It concocts some misleading paths, but the code often compiles, and I consider that a victory.

          You have to watch it like you would watch a 14-year-old boy who says he is doing his homework but you hear the sound effects of explosions.
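          For anyone else poking at this, a minimal sketch of tool calling against a local OpenAI-compatible endpoint (llama.cpp's llama-server, vllm, etc.); the port, model name, and tool definition are all placeholders:

            from openai import OpenAI

            # Point the client at the local server instead of the cloud.
            client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

            # A toy tool definition; the model decides whether to call it.
            tools = [{
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }]

            resp = client.chat.completions.create(
                model="qwen3.6-27b",  # placeholder; use whatever name the server reports
                messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
                tools=tools,
            )
            print(resp.choices[0].message.tool_calls)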

        • lambda, 3 minutes ago
          Gemma 4 31b was working OK for me, but it was consuming tons of memory on SWA checkpoints; I had to turn them way down, and as a 31b dense model it's fairly slow on a Strix Halo. I did have a lot of tool-calling issues on 26b-a4b, though.

          The Qwen models are quite solid though.

        • 2ndorderthought, 19 minutes ago
          Gemma 4 is definitely not for vibe/agentic coding. Not even worth trying. But it's a different weight class.
      • 2ndorderthought, an hour ago
        I tried the Gemma 4, I think the 2b and 4b. The 2b was not useful for me at all; a little too weak for my use cases.

        The 4b was okay. It didn't get all of my small math questions right and didn't know about some of the libraries I use, but it was able to do some basic autocomplete-type stuff. For microscopic models I like the llama 3.2 3b more right now; it's a little faster and seems a little stronger for what I do. But everyone is different, and I don't think I'll use it anymore; this past month has been crazy for local model releases.

        • throwaw12, 16 minutes ago
          Can you share your use cases for 2b and 4b models?

          Curious how people are leveraging these models.

    • cyanydeez, 17 minutes ago
      Qwen3-Coder-Next seems to be perfectly sized for coding. I tried the new Granite and just found the verbosity not really useful for coding, but probably fine for more analytical tasks or writing docs.
    • steveharing1, 2 hours ago
      Yea, no doubt the Qwen 3.6 open weights are far stronger.
      • rnadomvirlabe, 2 hours ago
        Why no doubt?
        • captainbland, an hour ago
          The lack of comparisons with competitor models other than the previous Granite version strongly implies that it does not compete well with other comparable models. At least, that is the most reasonable assumption until data comes out to the contrary.
        • 2ndorderthought, an hour ago
          Qwen 3.6 is effectively a pocket-sized frontier model. It's really surprising, to me anyway.
        • steveharing1, an hour ago
          Because Qwen 3.6 punches way above its weight. Granite 8B is impressive, but Qwen still wins on raw capability, especially for coding.
          • rnadomvirlabe, an hour ago
            You just asserted the same thing again. Why do you say this is the case?
            • 2ndorderthought, 43 minutes ago
              Qwen scores above Sonnet in coding benchmarks. It runs locally. In personal use it's really good. Anecdotally, others have used it to vibe-code or agentic-code successfully. Not toy problems. Not a toy model.

              Qwen3.6 raises the bar for models of its size. There really isn't a comparison in my opinion.

            • noodletheworld, an hour ago
              Having tried it.

              Qwen is really good.

              Also, generally, it makes sense. 8B models are generally not very good^.

              That this 8B model is decent is impressive, but that it could perform on par with a good model 4 times as large is a daydream.

              ^ - To be polite. Small models + tool use for coding agents are almost universally ass. Proof: my personal experience. I've tried many of them.

              • irishcoffee, 42 minutes ago
                So it’s just like, your opinion, man?
                • Terretta, 27 minutes ago
                  College SAT scores do not tell you how the dev applying for your open back-end systems engineering job is going to do once they're in your workplace harness.

                  Nor do class standings, nor HackerRank and the like.

                  What will tell you is asking them to fix a thing in your codebase. Once you ask an LLM to do that a dozen times, I'd argue it's no longer "just your opinion, man"; it's a context-engineered performance x applicability assessment.

                  And it is very predictive.

                  But it's also why someone doing well at job A isn't necessarily going to be great at B, and being bad at A doesn't mean they will necessarily be bad at B.

                  I've often felt we should normalize a sort of mutual try-buy period where the job-change seeker and the company can spend a series of days together without harming the seeker's existing employment, to derisk the mutual learning. ESPECIALLY to derisk the career change for the applicant, who only gets one timeline to manage, as opposed to the company, which considers the applicant fungible.

                  But back to the LLM: yeah, the only valid opinion on whether it works for you is not a benchmark, it's an informed opinion from using it in anger.

                • robotmaxtron, 12 minutes ago
                  The (dead) internet is full of opinions exactly like this.
                  • brazukadev, 8 minutes ago
                    You tried Qwen3.6 and you think it is not good?
            • steveharing1, an hour ago
              [dead]
          • actionfromafar, an hour ago
            Way above its weights.
            • drittich, an hour ago
              Nanobanana for scale.
          • locknitpicker, 41 minutes ago
            [dead]
  • cbg0, 2 hours ago
    The real "sleeper" might be https://huggingface.co/ibm-granite/granite-vision-4.1-4b if the benchmarks hold up for such a small model against frontier models for table & semantic k:v extraction.
    • uf00lme, 24 minutes ago
      Woah, is this part of the future of models? Basically little models you can use as tools.
      • 2ndorderthought, 15 minutes ago
        It's looking to me like running your own mini ecosystem is the way of the future. No data centers, just a decent GPU with 16-24 GB of VRAM, a CPU, and 32 GB of RAM.
      • SecretDreams, 11 minutes ago
        Eventually we'll have models small enough to do a single thing really well and we'll call them functions.
      • cyanydeez, 14 minutes ago
        I'm pretty sure there's someone somewhere who'll create a proper harness that's equivalent to one giant model. The difficulty is mostly that local hardware has a lot of memory constraints. Targeting 128 GB would seem to be the current sweet spot. If we could get away from the corporate market movers buying up all the memory, we could maybe have more.

        Regardless, the kind of pruning people did in the 80s to fit programs onto small devices is likely happening now. I'd bet most of the Chinese firms are doing it because of the US's silly GPU games, among other constraints.

  • Havoc, 2 hours ago
    Interesting to see a pivot away from MoE by both IBM and Mistral while the larger classes of SOTA models all seem to be sticking with it.

    Quick vibe check of it (8B @ Q6) seems promising. Bit of a clinical tone, but I can see that being useful for data processing and similar. Sometimes you don't really want an LLM that spams you with emojis...

    • embedding-shape, an hour ago
      Makes sense: dense for small models, dense or MoE for larger ones. They end up fitting various hardware setups pretty neatly; there's no need for MoE at smaller scale, and dense is too heavy at large scale.
    • npodbielski, 34 minutes ago
      I never want an LLM to spam me with emojis. What is the use case for that? I find it highly annoying.
      • 2ndorderthought, 14 minutes ago
        Shh, people are paying for each token. Don't get them asking too many questions.
      • Havoc, 30 minutes ago
        I think it can be a plus in moderation, e.g. in openclaw it can add some character.

        But yeah, I dislike that style where each heading and bullet point gets an emoji.

  • 100ms, 2 hours ago
    > Full stop.

    Why do people not edit out obvious sloppification and still expect to have readers left?

    • wewewedxfgdf, an hour ago
      Third line into the article: "But there's one result in the benchmarks I keep coming back to."

      I hear this sort of thing all the time now on YouTube from media/news personalities:

      “And that’s the part nobody seems to be talking about.”

      "And here's what keeps me up at night."

      “This is where the story gets complicated.”

      “Here’s the piece that doesn’t quite fit.”

      “And this is where the usual explanation starts to break down.”

      “Here’s what I can’t stop thinking about.”

      “The part that should worry us is not the obvious one.”

      “And that’s where the real problem begins.”

      “But the more interesting question is the one no one is asking.”

      “And this is where things stop being simple.”

      It doesn't really worry me, but I think it's interesting that LLM-speak sounds so distinctive, and how willing these media personalities are to so obviously read out on TV what the LLM spat out.

      I've never studied what LLMs say in depth, but it is interesting that my brain recognises the speech pattern so easily.

      • frereubu, an hour ago
        I think this kind of language predates widespread LLM use, and has been picked up from that kind of writing. It's an "and here's where it gets interesting" pattern that people like Malcolm Gladwell and Freakonomics have used, even if the same thing could be said in a way that makes it sound much less intriguing.
      • jmbwell, an hour ago
        The language of drama and import without meaningful substance. Words statistically likely to be used in a segue, regardless of the preceding or subsequent point. Particularly effective when it seems like you're getting let in on a secret. Really fatiguing to read.

        A writing teacher once excoriated me for saying that something was important. “Don’t tell me it’s important, show me, and let me decide, and if you do your job I’ll agree”

        I don’t know how a completion can tell when it needs to do this. Mostly so far it doesn’t seem capable

        • MarsIronPI, 35 minutes ago
          Maybe the solution is to cull the bad, cliché writing from the training data.
          • wewewedxfgdf, 17 minutes ago
            You can just instruct the LLM not to write like an LLM.
      • MarsIronPI, 36 minutes ago
        Ugh, you're making me remember the last time I listened to NPR. It's so bad.
        • stuff4ben, 14 minutes ago
          I listen to NPR daily and I don't think I've ever heard any of them use that phrasing.
      • bambax, an hour ago
        I notice this very often in LinkedIn posts, and it's annoying, but I had not realized it was LLM-speak? Isn't it possible that people write like this naturally?
        • spicyusername, an hour ago
          Arguably it's exactly because it was used naturally so often that the LLMs parrot it so frequently.
        • wewewedxfgdf, an hour ago
          I think LLMs have that sort of "summarise, wrap it in a bow tie, give a little dramatic punch as a preview of the next few points" style.
        • trvz, an hour ago
          Yes. Some people are very trigger happy in attributing human slop to LLMs.
        • steveharing1, an hour ago
          [dead]
      • Lerc, 42 minutes ago
        Apparently John Oliver was an LLM before they were even invented.
    • cbg0, 2 hours ago
      So are we saying it's fine that the article is written by an LLM as long as it doesn't have the tell-tale signs of LLMs?
      • ramon156, an hour ago
        It's more about curating the things you're publishing. Why would I bother reading what you couldn't bother to read?
      • 100ms, an hour ago
        I don't really see a reason to complain about tool use, so long as the result is cohesive and accurate, and that ultimately means a human has at least read their own output before publishing. It's a bit like receiving a supposedly personal letter that starts "Dear [INSERT_FIRST_NAME_FIELD],": are you really going to read such a thing?
      • HighGoldstein, an hour ago
        An article without telltale signs of an LLM is indistinguishable from an article written by a human, so yes.
      • spicyusername, an hour ago
        My opinion is that literature and art will continue pushing the envelope in the places they always pushed the envelope. LLMs will not change this: humans love making art, and they love doing it in new ways.

        Corporate announcements were never the places that literature and art were pushing the envelope. They were slop before, and they're slop now.

    • crunis, an hour ago
      Are you referring to the literal use of the expression "full stop"? I don't see it in the article anymore; maybe they edited it out?
  • agunapal, an hour ago
    If you really think about why MoE came into existence, it's to save significant cost during training. I don't think there was any concrete evidence of performance gains for comparable MoE vs. dense models. Over the years, I believe all the new techniques being employed in post-training have made the models better.
    • vessenes, an hour ago
      I think you mean inference compute? I believe all expert weights are updated in each backward pass during MoE training. The first benefit was getting a sort of structured pruning of weights through the mechanism of expert selection so that the model didn’t need to go through ‘unnecessary’ parts of the model for a given token. This then let inference use memory more efficiently in memory constrained environments, where non-hot or less common experts could be put into slow RAM, or sometimes even streamed off storage.

      But I don’t think it necessarily saved training cost; if it did, I’d be interested to learn how!

      • bjourne, 25 minutes ago
        Each token is only routed through a few chosen (top-k) experts during training, so not all expert weights are updated in the backward pass. OTOH, you may need more training to ensure all experts see enough tokens!

        I doubt MoE is actually worth it, given how complicated high-performance expert routing and training is. But who knows, I don't.
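        A toy top-k router makes the point concrete: only the experts a token is routed to appear in its forward graph, so only those experts get gradients for it (minimal PyTorch sketch, nothing like production MoE kernels):

          import torch
          import torch.nn as nn

          class ToyMoE(nn.Module):
              def __init__(self, dim=64, n_experts=8, k=2):
                  super().__init__()
                  self.router = nn.Linear(dim, n_experts)
                  self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
                  self.k = k

              def forward(self, x):  # x: (tokens, dim)
                  # Route each token to its top-k experts by softmax score.
                  weights, idx = self.router(x).softmax(-1).topk(self.k, dim=-1)
                  out = torch.zeros_like(x)
                  for slot in range(self.k):
                      for e, expert in enumerate(self.experts):
                          mask = idx[:, slot] == e  # tokens picking expert e
                          if mask.any():
                              out[mask] += weights[mask, slot, None] * expert(x[mask])
                  return out

          moe = ToyMoE()
          moe(torch.randn(4, 64)).sum().backward()
          # Experts that received no tokens still have grad == None this step.
          print([e.weight.grad is not None for e in moe.experts])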

    • zozbot234, an hour ago
      MoE models will have far more world knowledge than dense models with the same amount of active parameters. MoE is a no-brainer if your inference setup is ultimately limited by compute or memory throughput - not total memory footprint - or alternately if it has fast, high-bandwidth access to lower-tier storage to fetch cold model weights from on demand.
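      Back-of-envelope arithmetic (a hypothetical 30B-total / 3B-active model at 8-bit weights, with an assumed 100 GB/s of memory bandwidth) shows why:

        # Hypothetical MoE: 30B total params, 3B active per token, 1 byte/param.
        total_params, active_params = 30e9, 3e9
        bandwidth = 100e9  # bytes/s; assumed desktop-class memory bandwidth

        # Both a 30B dense model and this MoE need ~30 GB resident...
        print(f"footprint: {total_params / 1e9:.0f} GB")
        # ...but per decoded token, only the active weights are streamed.
        print(f"dense: {bandwidth / total_params:.1f} tok/s")
        print(f"MoE:   {bandwidth / active_params:.1f} tok/s")
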
  • theblazehen, 10 minutes ago
    > models are judged by GPT-4

    An interesting choice

  • dissahc, 25 minutes ago
    Qwen3.5 9b outperforms Granite 4.1 30b by a huge amount (32 vs. 15 on the Artificial Analysis benchmark)... I have no idea what made the writer of this article say so many demonstrably incorrect things.
  • robotmaxtron, 17 minutes ago
    "open source"

    show me.

  • RugnirViking, 2 hours ago
    Sounds interesting. Here's hoping they release a 32B model; that's a pretty good sweet spot for feasibility of home setups.

    edit: I just realised they do actually have a 30b release alongside this. Haven't tried it yet.

  • mdp2021, 2 hours ago
    Wish they had also released an embedding model, in the line of their previous ones: compact (while good)...
  • tokenhub_dev, 23 minutes ago
    [flagged]
  • whalesalad, an hour ago
    [flagged]