474 pointsby nwjsmith4 days ago33 comments
  • antirez4 days ago
    Note the massive context length (130k tokens). Also because it would be kinda pointless to generate a long CoT without enough context to contain it and the reply.

    EDIT: Here we are. My first prompt created a CoT so long that it catastrophically forgot the task (but I don't believe I was near 130k -- using ollama with fp16 model). I asked one of my test questions with a coding question totally unrelated to what it says:

    <QwQ output> But the problem is in this question. Wait perhaps I'm getting ahead of myself.

    Wait the user hasn't actually provided a specific task yet. Let me check again.

    The initial instruction says:

    "Please act as an AI agent that can perform tasks... When responding, first output a YAML data structure with your proposed action, then wait for feedback before proceeding."

    But perhaps this is part of a system prompt? Wait the user input here seems to be just "You will be given a problem. Please reason step by step..." followed by a possible task? </QwQ>

    Note: Ollama "/show info" shows that the context size set is correct.

    • anon3738393 days ago
      > Note: Ollama "/show info" shows that the context size set is correct.

      That's not what Ollama's `/show info` is telling you. It actually just means that the model is capable of processing the context size displayed.

      Ollama's behavior around context length is very misleading. There is a default context length limit parameter unrelated to the model's capacity, and I believe that default is a mere 2,048 tokens. Worse, when the prompt exceeds it, there is no error -- Ollama just silently truncates it!

      If you want to use the model's full context window, you'll have to execute `/set parameter num_ctx 131072` in Ollama chat mode, or if using the API or an app that uses the API, set the `num_ctx` parameter in your API request.

      • antirez3 days ago
        Ok, this explains why QwQ is working great on their chat. Btw I saw this thing multiple times: that ollama inference, for one reason or the other, even without quantization, somewhat had issues with the actual model performance. In one instance the same model with the same quantization level, if run with MLX was great, and I got terrible results with ollama: the point here is not ollama itself, but there is no testing at all for this models.

        I believe that models should be released with test vectors at t=0, providing what is the expected output for a given prompt for the full precision and at different quantization levels. And also for specific prompts, the full output logits for a few tokens, so that it's possible to also compute the error due to quantization or inference errors.

        • svachalek3 days ago
          Yeah the state of the art is pretty awful. There have been multiple incidents where a model has been dropped on ollama with the wrong chat template, resulting in it seeming to work but with greatly degraded performance. And I think it's always been a user that notices, not the ollama team or the model team.
          • refulgentis3 days ago
            I'm grateful for anyone's contributions to anything, but I kinda shake my head about ollama. the reason stuff like this happens is they're doing the absolute minimal job necessary, to get the latest model running, not working.

            I make a llama.cpp wrapper myself, and it's somewhat frustrating putting effort in for everything from big obvious UX things, like error'ing when the context is too small for your input instead of just making you think the model is crap, to long-haul engineering commitments, like integrating new models with llama.cpp's new tool calling infra, and testing them to make sure it, well, actually works.

            I keep telling myself that this sort of effort pays off a year or two down the road, once all that differentiation in effort day-to-day adds up. I hope :/

            • Karrot_Kream3 days ago
              Can you link your wrapper? I've read and run up against a lot of footguns related to Ollama myself and I think surfacing community efforts to do better would be quite useful.
              • refulgentis3 days ago
                Cheers, thanks for your interest:

                Telosnex, @ telosnex.com --- fwiw, general positioning is around paid AIs, but there's a labor-of-love llama.cpp backed on device LLM integration that makes them true peers, both in UI and functionality. albeit with a warning sign because normie testers all too often wander into trying it on their phone and killing their battery.

                My curse is the standard engineer one - only place I really mention it is one-off in comments like here to provide some authority on a point I want to make...I'm always one release away from it being perfect enough to talk up regularly.

                I really really need to snap myself awake and ban myself from the IDE for a month.

                But this next release is a BFD, full agentic coding, with tons of tools baked in, and I'm so damn proud to see the extra month I've spent getting llama.cpp tools working agentically too. (https://x.com/jpohhhh/status/1897717300330926109, real thanks is due to @ochafik at Google, he spent a very long term making a lot of haphazard stuff in llama.cpp coalesce. also phi-4 mini. this is the first local LLM that is reasonably fast and an actual drop-in replacement for RAG and tools, after my llama.cpp patch)

                Please, feel free to reach out if you try it and have any thoughts, positive or negative. james @ the app name.com

        • anon3738393 days ago
          The test vectors idea is pretty interesting! That's a good one.

          I haven't been able to try out QwQ locally yet. There seems to be something wrong with this model on Ollama / my MacBook Pro. The text generation speed is glacial (much, much slower than, say Qwen 72B at the same quant). I also don't see any MLX versions on LM Studio yet.

    • wizee4 days ago
      Ollama defaults to a context of 2048 regardless of model unless you override it with /set parameter num_ctx [your context length]. This is because long contexts make inference slower. In my experiments, QwQ tends to overthink and question itself a lot and generate massive chains of thought for even simple questions, so I'd recommend setting num_ctx to at least 32768.

      In my experiments of a couple mechanical engineering problems, it did fairly well in final answers, correctly solving mechanical engineering problems that even DeepSeek r1 (full size) and GPT 4o did wrong in my tests. However, the chain of thought was absurdly long, convoluted, circular, and all over the place. This also made it very slow, maybe 30x slower than comparably sized non-thinking models.

      I used a num_ctx of 32768, top_k of 30, temperature of 0.6, and top_p of 0.95. These parameters (other than context length) were recommended by the developers on Hugging Face.

      • zamadatix4 days ago
        I always see:

          /set parameter num_ctx <value>
        
        Explained but never the follow up:

          /save <custom-name>
        
        So you don't have to do the parameter change every load. Is there a better way or is it kind of like setting num_ctx in that "you're just supposed to know"?
        • sReinwald3 days ago
          You can also set

              OLLAMA_CONTEXT_LENGTH=<tokens>
          
          as an environment variable to change ollama's default context length.
          • Tepix3 days ago
            I think that will not work if you use the OpenAI compatible API endpoint.
          • svachalek3 days ago
            I tried this with ollama run, and it had no effect at all.
            • underlines3 days ago
              that env parameter is brand new, did you update ollama?
      • flutetornado4 days ago
        My understanding is that top_k and top_p are two different methods of decoding tokens during inference. top_k=30 considers the top 30 tokens when selecting the next token to generate and top_p=0.95 considers the top 95 percentile. You should need to select only one.

        https://github.com/ollama/ollama/blob/main/docs/modelfile.md...

        Edit: Looks like both work together. "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)"

        Not quite sure how this is implemented - maybe one is preferred over the other when there are enough interesting tokens!

        • nodja3 days ago
          They both work on a sorted list of tokens by probability. top_k selects a fixed amount of tokens, top_p selects the top tokens until the sum of probabilities passes the threshold p. So for example if the top 2 tokens have a .5 and .4 probability, then a 0.9 top_p would stop selecting there.

          Both can be chained together and some inference engines let you change the order of the token filtering, so you can do p before k, etc. (among all other sampling parameters, like repetition penalty, removing top token, DRY, etc.) each filtering step readjusts the probabilities so they always sum to 1.

    • AustinDev4 days ago
      I tried the 'Strawberry' question which generated nearly 70k words of CoT.
      • moffkalast3 days ago
        I think you guys might be using too low of a temperature, it never goes beyond like 1k thinking tokens for me.
      • nicman233 days ago
        lol did it at least get it right?
        • nkozyra3 days ago
          It's a hard problem, that's a lot to ask.
    • hbbio4 days ago
      "My first prompt created a CoT so long that it catastrophically forgot the task"

      Many humans would do that

    • ignorantguy4 days ago
      Yeah it did the same in my case too. it did all the work in the <think> tokens. but did not spit out the actual answer. I was not even close to 100K tokens
    • freehorse3 days ago
      If you did not change the context length, it is certain that it is not 2k or so. In "/show info" there is a field "context length" which is about the model in general, while "num_ctx" under "parameters" is the context length for the specific chat.

      I use modelfiles because I only use ollama because it has easy integration with other stuff eg with zed, so this way I can easily choose models with a set context size directly.

      Here nothing fancy, just

          FROM qwq
          PARAMETER num_ctx 100000
      
      You save this somewhere as a text file, you run

          ollama create qwq-100k -f path/to/that/modelfile
      
      and you now have "qwq-100k" in your list of models.
    • smallerize4 days ago
      From https://huggingface.co/Qwen/QwQ-32B

      Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required.

      • GTP3 days ago
        Sorry, could you please explain what this means? I'm not into machine learning, so I don't get the jargon.
    • tsunego3 days ago
      Can’t wait to see if my memory can even acocomodate this context
    • k_sze3 days ago
      Oddly, the Chinese LLM host SiliconFlow only makes it available with 32k context, which is even smaller than their DeepSeek-R1 offering.
    • codelion4 days ago
      that's interesting... i've been noticing similar issues with long context windows & forgetting. are you seeing that the model drifts more towards the beginning of the context or is it seemingly random?

      i've also been experimenting with different chunking strategies to see if that helps maintain coherence over larger contexts. it's a tricky problem.

      • orbital-decay4 days ago
        Neither lost-in-the-middle nor long context performance have seen a lot of improvement in the recent year. It's not easy to generate long training examples that also stay meaningful, and all existing models still become significantly dumber after 20-30k tokens, particularly on hard tasks.

        Reasoning models probably need some optimization constraint put on the length of the CoT, and also some priority constraint (only reason about things that need it).

  • gagan20204 days ago
    Chinese strategy is open-source software part and earn on robotics part. And, They are already ahead of everyone in that game.

    These things are pretty interesting as they are developing. What US will do to retain its power?

    BTW I am Indian and we are not even in the race as country. :(

    • nazgulsenpai4 days ago
      If I had to guess, more tariffs and sanctions that increase the competing nation's self-reliance and harm domestic consumers. Perhaps my peabrain just can't comprehend the wisdom of policymakers on the sanctions front, but it just seems like all it does is empower the target long-term.
      • h0l0cube4 days ago
        The tarrifs are for the US to build it's own domestic capabilities, but this will ultimately shift the rest of the world's trade away from the US and toward each other. It's a trade-off – no pun intended – between local jobs/national security and downgrading their own economy/geo-political standing/currency. Anyone who's been making financial bets on business as usual for globalization is going to see a bit of a speed bump over the next few years, but in the long term it's the US taking an L to undo decades of undermining their own peoples' prospects from offshoring their entire manufacturing capability. Their trump card - still no pun intended - is their military capability, which the world will have to wean themselves off first.
        • whatshisface3 days ago
          Tariffs don't create local jobs, they shut down exporting industries (other countries buy our exports with the dollars we pay them for our imports) and some of those people may over time transition to non-export industries.

          Here's an analysis indicating how many jobs would be destroyed in total over several scenarios: https://taxfoundation.org/research/all/federal/trump-tariffs...

          • h0l0cube3 days ago
            They will, out of sheer necessity. Local industries will be incentivized to restart. And of course, there are already carve-outs for the automotive sector that needs steel, overseas components, etc. I expect more carve-outs will be made, esp. for the military.

            I don't think the tariffs are being managed intelligently, but they will have the intended effect of moving manufacturing back to the US, even if, in the short term, it's going to inflate prices, and yes, put a lot of businesses in peril.

            • seanmcdirmid3 days ago
              > even if, in the short term, it's going to inflate prices, and yes, put a lot of businesses in peril.

              This is optimistic. They could totally inflate prices in the long term, and not just create inflation, but reduce the standard of living Americans are used to. That in itself is fine as Americans probably consume too much, but living in the USA will become more like living in Europe where many goods are much more expensive.

              Worst case is that American Juche turns out to be just like North Korean Juche.

              • h0l0cube3 days ago
                > They could totally inflate prices in the long term, and not just create inflation

                This will all happen. But as I said, this is a trade-off. Devalue the currency, incentivize local production, increase exports, revive the working class – that's the long term goal.

                > but reduce the standard of living Americans are used to.

                Whose standard of living though? It's well and good if you're in a comfy desk job with health care and a pension. The discontent that led to Trump's rise is real, and it's routinely overlooked when considering how to counter him. Of the everyday people, those who have stable jobs and purpose aren't voting for Trump. (Of the wealthy, it's probably a lot more cynical who voted for Trump)

                I'm not in favor of the policy, the manner in which it's being applied, or the people that are doing it, but reversing off-shoring is a consequence of using protectionist policies – be it tariffs, or subsidies.

                High-skill work, and pencil-pushing desk jobs don't cover 100% of the population, and has lead to a lot of unproductive busy-work in the cities. The offshoring of blue-collar work bred the discontent that led to Trump. Trump fancies himself the new William McKinley and is using the cudgel of tariffs to re-onshore manufacturing. This is a process he started in his first administration, that was retained by Biden, and now he's doubling down and doing exactly what he promised he would – and somehow his voters are surprised?

                Worse still, those service economy jobs keeping the coastal cities alive (both low skill and high skill) are on the verge of being replaced by AI – whether that's one year or 20, I don't know—though I'm wagering the latter. Physical labor is going to become more valuable as robotics is still way behind in technological development. I don't have a crystal ball, but I'd wager that–at least counterfactually—the US will have more jobs by enacting protectionist policy.

                > Worst case is that American Juche turns out to be just like North Korean Juche.

                Do you really in your heart of hearts think this is going to happen? I'm pretty sure the subjugation of the American people by the government would be feasible, let alone easy.

                • Workaccount22 days ago
                  >Whose standard of living though? It's well and good if you're in a comfy desk job with health care and a pension.

                  The fundamental problem is that people want low-value jobs (manufacturing clothes irons, screw drivers, ceramic mugs) with a contemporary American middle class living wages. It just doesn't math though. If you make mugs by hand all day you simply cannot create enough value in a day to keep up with the person who automates production lines all day.

                  Worse yet, they want those wages while also still being able to go to walmart and buy...clothes irons, screw drivers, and ceramic mugs for <$20 (maybe $50 for the iron).

                  We already just saw how this went over during the pandemic. Many people got $20/hr jobs for the first time in their life. A wage they have always dreamed of while slogging in $10-$12/hr jobs. Then it happened...but it felt the same as when they were making $12/hr. This is because the intrisic value they create didn't change. Value creation is what determines the structure of an economy, not wages or dollars or any other currency, those are just proxies.

                  The intrinsic value of making this stuff in the US just isn't there to make it viable to compete with the large number of high intrinsic value job workers in the US.

                • borgdefenser2 days ago
                  The current standard of living is also being propped up by a completely unsustainable debt burden and the fading dividends of the unipolar moment.

                  Personally, I am completely in favor of all these policies. We are basically going to go through a period of austerity.

                  It is really what makes this system so incredible. Such a dynamic system to go from sending people checks in the mail to a few short years later, austerity. People get much too involved though with the personalities as opposed to what the system is doing as a whole.

                  At the most macro level, I think there is a tremendous amount of denial that the unipolar moment is over. The arguments against austerity here are just clinging to a time that has come and gone. It is like wishing I could stay in my 30s forever.

                  AI is also incredibly deflationary long term. The path we were on would have devalued the currency long term. If we get control of the debt and re-shore manufacturing, I can't imagine that not being good for the currency. I think the easiest way to get a non-distorted view is by flipping things around from the perspective of China.

                  All of this would seem bad from the Chinese perspective.

                  I feel like we are also doing a type of Yuan gambit to the world long term here.

                  Sick of the US? Well the Yuan is right there for you to take. We are betting on the gambit being refused.

                  If we could just turn the temperature down on the political personality bullshit there is so much interesting discussion to be had but it is almost impossible that it currently doesn't turn into a pointless food fight.

                  • seanmcdirmid2 days ago
                    > AI is also incredibly deflationary long term. The path we were on would have devalued the currency long term. If we get control of the debt and re-shore manufacturing, I can't imagine that not being good for the currency. I think the easiest way to get a non-distorted view is by flipping things around from the perspective of China.

                    > All of this would seem bad from the Chinese perspective.

                    China is leading with very aggressive investments in AI and automation, along with clean energy and clean energy vehicles. America is...just blustering mostly, some AI investments, but not much beyond that. China is set to reap more of the AI revolution than the US is, so it is very good from the Chinese perspective, especially given the increasing skill gap between Chinese and American talent. The MAGA effort is just making it worse by de-emphasizing education even more. Like Trump, Xi is an autocrat for sure, but unlike Trump he isn't stupid.

            • whatshisface3 days ago
              You're thinking about the bolt factory that will open, but what about the factories that close? Putting, as you say, businesses in peril, gets rid of jobs.

              As for the carve-outs, there isn't a single US industry that doesn't rely on imports from lower down the supply chain.

              • h0l0cube3 days ago
                Protectionist policy, if applied consistently, will actually lead to more jobs (and higher wages) eventually, but also higher inflation and job losses in the short term, and a more insular economy. It's foolish to go so hard, and so fast – or this is just a negotiation tactic – so I think the Trump administration is going to compromise by necessity, but in time supply chains will adjust to the new reality, and tariffs can increase.
                • mlyle3 days ago
                  > and higher wages) eventually

                  Higher real wages?

                  Do gains from trade not exist?

                  Comparative advantage: Country A has an easier time making X than Y, and country B has an easier time making Y than X, so country A should trade some of their Xs for Ys, and both countries end up richer.

                  I think there's some reasons to dial back interdependence a little, but I don't think it's a path likely to lead to greater wealth or real wages.

                  (And certainly no reason to make erratic changes at large scale, focusing on allies and neighbors first).

                  • h0l0cube3 days ago
                    I don't believe I have to point this out, but this is not a policy that I think is good, it's just one that will have the intended affect of onshoring manufacturing jobs. And I'm not talking about higher wages for quants, or MBAs, or HR, or software evangelists, or door-to-door salespeople, or cashiers at Dollar General, I'm talking about for people who are underemployed, unemployed, or doing some nonsense busywork because the manufacturing sector has been eroded away over the last 4 decades.

                    > And certainly no reason to make erratic changes at large scale, focusing on allies and neighbors first

                    Those people who benefited from globalization, and who didn't care about the working class, are exactly who brought us to this moment. And I have a huge shrug to those who are loath to accept that. If only it was attended to sooner by a more sensible administration.

                • whatshisface3 days ago
                  That's an assumption, I'm trying to challenge it. Taxes usually take money out of the economy and lead to less activity. Why should a (very high) tax on transportation be different? These are not the sorts of things we can afford to just do without making sure they will work.
                  • h0l0cube3 days ago
                    > Taxes usually take money out of the economy

                    This is an oversimplification, they can change incentives, and sometimes increase investment.

                    > lead to less activity

                    I do agree money will be divested from the US as they become more and more expensive to deal with (leading to "less activity"), and like I said this will rechannel the economy between the rest of the world. The trade-off is that the US becomes a manufacturer and exporter again (leading to "replaced activity"), some manufacturing capability is duplicated (leading to "more activity" though redundant/less productive), and the currency devalues.

                    But I'll admit I'm well out of my depth here, and I'm being booed off the stage. All the same, I don't think I'm wrong here. Protectionism isn't new, and lots of countries do it, it's just novel that the world's largest economy and bastion of free trade is doing it to such an extreme.

                    • whatshisface2 days ago
                      I don't think getting booed off the stage is a good way to end the discussion. The US is already a major exporter, of the goods we have an advantage at producing: simple foods, refined oil, advanced machines. Forcing farmers to plant avocados in potato fields isn't really going to help anybody, and neither is transferring oil refinery engineers to working on optimizing garmet factories. This will all take place against a backdrop of a poorer world with fewer dollars to spend on our goods, so it won't help our export sales either. Europe and China need to earn the dollars they buy our wheat with somehow - and without buying from them, I don't see how they'll do it.
                      • h0l0cube2 days ago
                        It seems like an almost impossible task, especially if compromises aren't made on tariffs (I expect they will be). The US does have military leverage as the sole supplier of advanced weaponry to many countries, but I think the USD would need to massively downgrade for America to become an net-export market once again.

                        Once again, I'm well out of my depth to be able to speculate here. But ostensibly globalization hasn't worked for the working class of America, and that has led to the current state of affairs.

                  • pessimizer3 days ago
                    It's a debate that has been had by many people far more informed than anyone who will see this thread, many times over decades or even a few centuries. Rather than challenging it on a very basic level (it's a tax, all taxes are bad, why should this tax be different), just look up the other debates and read them.
        • pstuart3 days ago
          The tariffs are seen as "free money" that will allow for cutting taxes on the wealthy. Note that the current messaging is "we spend too much money" and there's nothing about "we need to invest in _foo_"
          • h0l0cube3 days ago
            Advanced chip fabs, and battery manufacturers are the first counterexamples that come to mind. The government doesn't need to invest, they just need to watch the free market realign their manufacturing to maintain access to the US economy. Tariffs have been around since Trumps first term, and were retained by Biden, and they're already doing what they intended.
    • bugglebeetle4 days ago
      Unitree just open-sourced their robot designs:

      https://sc.mp/sr30f

      China’s strategy is to prevent any one bloc from achieving dominance and cutting off the others, while being the sole locus for the killer combination of industrial capacity + advanced research.

      • aurareturn2 days ago

          China’s strategy is to prevent any one bloc from achieving dominance and cutting off the others, while being the sole locus for the killer combination of industrial capacity + advanced research.
        
        You're acting like these startups are controlled by the Chinese government. In reality, they're just like any other American startup. They make decisions on how to make the most money - not what the Chinese government wants.
        • esalman2 days ago
          What if aligning with Chinese interest becomes the best way to make money? What stopping the Chinese government from providing better incentives to businesses and academics?
          • aurareturn2 days ago
            You mean like what Trump has been doing? And most governments around the world?
      • asadm4 days ago
        Not really. It seems unitree didn't open source anything. Not anything useful.
      • rcdwealth3 days ago
        [dead]
    • dtquad3 days ago
      >BTW I am Indian and we are not even in the race as country

      Why are you surprised?

      India was on a per capita basis poorer than sub-Saharan Africa until 2004.

      The only reason India is no longer poorer than Africa is because the West (the IMF and World Bank) forced India to do structural reforms in 1991 that stopped the downward trajectory of the Indian economy since its 1947 independence.

      • aurareturn2 days ago

          The only reason India is no longer poorer than Africa is because the West (the IMF and World Bank) forced India to do structural reforms in 1991 that stopped the downward trajectory of the Indian economy since its 1947 independence.
        
        India had the world's largest GDP at some point in its history. Why did India lose its status?
    • 3 days ago
      undefined
    • holoduke4 days ago
      Also part of their culture/identity. A good thing i believe.
    • dcreater3 days ago
      India is absolutely embarrassing. Could have been an extremely important 3rd party that obviates the moronic US vs China, us or them, fReEdOm vs communism narrative with all the talent it has.
      • esalman2 days ago
        Turns out conservatism and far right demagoguery is not great for progress.
  • dr_dshiv4 days ago
    I love that emphasizing math learning and coding leads to general reasoning skills. Probably works the same in humans, too.

    20x smaller than Deep Seek! How small can these go? What kind of hardware can run this?

    • daemonologist4 days ago
      It needs about 22 GB of memory after 4 bit AWQ quantization. So top end consumer cards like Nvidia's 3090 - 5090 or AMD's 7900 XTX will run it.
    • Ey7NFZ3P0nzAe2 days ago
      A mathematician once told me that this might be because math teaches you to have different representations for a same thing, you then have to manipulate those abstractions and wander through their hierarchy until you find an objective answer.
    • be_erik4 days ago
      Just ran this on a 4000RTX with 24gb of vram and it struggles to load, but it’s very fast once the model loads.
    • samstave4 days ago
      >I love that emphasizing math learning and coding leads to general reasoning skills

      Its only logical.

  • daemonologist4 days ago
    It says "wait" (as in "wait, no, I should do X") so much while reasoning it's almost comical. I also ran into the "catastrophic forgetting" issue that others have reported - it sometimes loses the plot after producing a lot of reasoning tokens.

    Overall though quite impressive if you're not in a hurry.

    • huseyinkeles3 days ago
      I read somewhere which I can't find now, that for the -reasoning- models they trained heavily to keep saying "wait" so they can keep reasoning and not return early.
    • rahimnathwani3 days ago
      Is the model using budget forcing?
      • Szpadel3 days ago
        I do not understand why to force wait when model want to output </think>.

        why not just decrease </think> probability? if model really wants to finish maybe or could over power it in cases were it's really simple question. and definitely would allow model to express next thought more freely

        • rahimnathwani2 days ago

            why not just decrease </think> probability?
          
          Huggingface's transformers library supports something similar to this. You set a minimum length, and until that length is reached, the end of sequence token has no chance of being output.

          https://github.com/huggingface/transformers/blob/51ed61e2f05...

          S1 does something similar to put a lower limit on its reasoning output. End of thinking is represented with the <|im_start|> token, followed by the word 'answer'. IIRC the code dynamically adds/removes <|im_start|> to the list of suppressed tokens.

          Both of these approaches set the probability to zero, not something small like you were suggesting.

      • rosspackard3 days ago
        I have a suspicion it does use budget forcing. The word "alternatively" also frequently show up and it happens when it seems logically that a </think> tag could have been place.
  • Leary4 days ago
    To test: https://chat.qwen.ai/ and select Qwen2.5-plus, then toggle QWQ.
    • bangaladore4 days ago
      They baited me into putting in a query and then asking me to sign up to submit it. Even have a "Stay Logged Out" button that I thought would bypass it, but no.

      I get running these models is not cheap, but they just lost a potential customer / user.

      • zamadatix4 days ago
        Running this model is dirt cheap, they're just not chasing that type of customer.
      • mrshu4 days ago
        You can also try the HuggingFace Space at https://huggingface.co/spaces/Qwen/QwQ-32B-Demo (though it seems to be fully utilized at the moment)
      • 4 days ago
        undefined
      • doublerabbit3 days ago
        Check out venice.ai

        They're pretty up to date with latest models. $20 a month

    • Alifatisk3 days ago
      They have a option specifically for QwQ-32B now
    • cubefox4 days ago
      How do you know this model is the same as in the blog post?
    • attentive2 days ago
      it's on groq now for super fast inference
    • fsndz4 days ago
      super impressive. we won't need that many GPUs in the future if we can have the performance of DeepSeek R1 with even less parameters. NVIDIA is in trouble. We are moving towards a world of very cheap compute: https://medium.com/thoughts-on-machine-learning/a-future-of-...
      • holoduke4 days ago
        Have you heard of Jevons paradox? That says that whenever new tech is used to make something more efficient the tech is just upscaled to make the product quality higher. Same here. Deepseek has some algoritmic improvements that reduces resources for the same output quality. But increasig resources (which are available) will increase the quality. There will be always need for more compute. Nvidia is not in trouble. They have a monopoly on high performing ai chips for which demand will at least rise by a factor of 1000 upcoming years (my personal opinion)
        • UncleOxidant4 days ago
          I agree that the Jevons paradox can apply here, however, there have been several "breakthroughs" in the last couple of months (R1, diffusion LLMs, this) that really push the amount of GPU compute down such that I think it's going to be problematic for companies that went out and bought boatloads of GPUs (like OpenAI, for example). So while it might not be bad news for NVidia (given Jevons) it does seem to be bad news for OpenAI.
          • ithkuil3 days ago
            I don't quite understand the logic.

            Even if you have cheaper models if you have tons of compute power you can do more things than if you had less compute power!

            You can experiment with huge societies of agents, each exploring multitude of options. You can run world models where agents can run though experiments and you can feed all this back to a single "spokesperson" and you'll have an increase in intelligence or at the very least you'll able to distill the next generation models with that and rinse and repeat.

            I mean I welcome the democratizing effect of this but I fail to understand how this is something that is so readily accepted as a doom scenario for people owning or building massive compute.

            If anything, what we're witnessing is the recognition that useful stuff can be achieved by multiplying matrices!

        • fsndz3 days ago
          yeah, sure, I guess the investors selling NVIDIA's stock like crazy know nothing about jevons
          • selcuka3 days ago
            > I guess the investors selling NVIDIA's stock like crazy know nothing about jevons

            I know you are trying to be sarcastic, but for the sake of argument let's assume that your question is genuine.

            There are two types of investors and they both sell, but for different reasons:

            1. Casual investors: They don't know much about investing, or Jevons paradox. They only watch the news, so they panic sell.

            2. Institutional investors: They know all about Jevons paradox etc, but they also know that casual investors don't, so they sell on purpose so that they can buy the dip later.

      • pzo2 days ago
        Surprisingly those open models might be savour for Apple and gift for Qualcomm too. They can finetune them to their liking and catch up to competition and also sell more of their devices in the future. Longterm even better models for Vision will have problem to compete with latency of smaller models that are good enough but have very low latency. This will be important in robotics - reason Figure AI dumped OpenAI and started using their own AI models based on Open Source (founder mentioned recently in one interview).
      • outside4154 days ago
        [dead]
  • manmal4 days ago
    I guess I won’t be needing that 512GB M3 Ultra after all.
    • UncleOxidant4 days ago
      I think the Framework AI PC will run this quite nicely.
      • Tepix3 days ago
        I think you want a lot of speed to make up for the fact that it's so chatty. Two 24GB GPUs (so you have room for context) will probably be great.
    • seanmcdirmid3 days ago
      A max with 64 GB of ram should be able to run this (I hope). I have to wait until an MLX model is available to really evaluate its speed, though.
    • rpastuszak4 days ago
      How much vram do you need to run this model? Is 48 gb unified memory enough?
      • zamalek4 days ago
        39gb if you use a fp8 quantized model.[1] Remember that your OS might be using some of that itself.

        As far as I recall, Ollama/llama.cpp recently added a feature to page-in parameters - so you'll be able to go arbitrarily large soon enough (at a performance cost). Obviously more in RAM = more speed = more better.

        [1]: https://token-calculator.net/llm-memory-calculator

      • dulakian4 days ago
        I am using the Q6_K_L quant and it's running at about 40G of vram with the KV cache.

        Device 1 [NVIDIA GeForce RTX 4090] MEM[||||||||||||||||||20.170Gi/23.988Gi]

        Device 2 [NVIDIA GeForce RTX 4090] MEM[||||||||||||||||||19.945Gi/23.988Gi]

        • lostmsu3 days ago
          What's the context length?
          • dulakian3 days ago
            The model has a context of 131,072, but I only have 48G of VRAM so I run it with a context of 32768.
      • brandall104 days ago
        It's enough for 6 bit quant with a somewhat restricted context length.

        Though based on the responses here, it needs sizable context to work, so we may be limited to 4 bit (I'm on an M3 Max w/ 48gb as well).

      • daemonologist4 days ago
        The quantized model fits in about 20 GB, so 32 would probably be sufficient unless you want to use the full context length (long inputs and/or lots of reasoning). 48 should be plenty.
        • manmal3 days ago
          I‘ve tried the very early Q4 mlx release on an M1 Max 32GB (LM Studio @ default settings), and have run into severe issues. For the coding tasks I gave it, it froze before it was done with reasoning. I guess I should limit context size. I do love what I‘m seeing though, the output reads very similar to R1, and I mostly agree with its conclusions. The Q8 version has to be way better even.
          • whitehexagon3 days ago
            Does the Q8 fit within your 32GB (also using an M1 32GB)
            • manmal3 days ago
              No, Q4 just barely fits, and with a longer context sometimes things freeze. I definitely have to close Xcode.
    • outside4154 days ago
      [dead]
  • iamronaldo4 days ago
    This is insane matching deepseek but 20x smaller?
    • Imnimo4 days ago
      I wonder if having a big mixture of experts isn't all that valuable for the type of tasks in math and coding benchmarks. Like my intuition is that you need all the extra experts because models store fuzzy knowledge in their feed-forward layers, and having a lot of feed-forward weights lets you store a longer tail of knowledge. Math and coding benchmarks do sometimes require highly specialized knowledge, but if we believe the story that the experts specialize to their own domains, it might be that you only really need a few of them if all you're doing is math and coding. So you can get away with a non-mixture model that's basically just your math-and-coding experts glued together (which comes out to about 32B parameters in R1's case).
      • mirekrusin4 days ago
        MoE is likely temporary, local optimum now that resembles bitter lesson path. With the time we'll likely distill what's important, shrink it and keep it always active. There may be some dynamic retrieval of knowledge (but not intelligence) in the future but it probably won't be anything close to MoE.
        • mirekrusin3 days ago
          ...let me expand a bit.

          It would be interesting if research teams would try to collapse trained MoE into JoaT (Jack of all Trades - why not?).

          With MoE architecture it should be efficient to back propagate other expert layers to align with result of selected one – at end changing multiple experts into multiple Jacks.

          Having N multiple Jacks at the end is interesting in itself as you may try to do something with commonalities that are present, available on completely different networks that are producing same results.

      • littlestymaar3 days ago
        > , but if we believe the story that the experts specialize to their own domains

        I don't think we should believe anything like that.

    • 77341284 days ago
      Roughly the same number of active parameters as R1 is a mixture-of-experts model. Still extremely impressive, but not unbelievable.
      • kmacdough4 days ago
        I understand the principles of MOE, but clearly not enough to make full sense of this.

        Does each expert within R1 have 37B parameters? If so, is QwQ only truly competing against one expert in this particular benchmark?

        Generally I don't think I follow how MOE "selects" a model during training or usage.

        • Imnimo4 days ago
          I had a similar confusion previously, so maybe I can help. I used to think that a mixture of experts model meant that you had like 8 separate parallel models, and you would decide at inference time which one to route to. This is not the case, the mixture happens at a much smaller scale.

          Instead, the mixture of experts exists within individual layers. Suppose we want to have a big feed-forward layer that takes as input a 1024-element vector, has a hidden size of 8096, and an output size of 1024. We carve up that 8096 hidden layer into 8 1024-sized chunks (this does not have to be the same size as the input). Whenever an input arrives at this layer, a routing function determines which of those 1024-sized chunks should serve as the hidden layer. Every token within a single prompt/response can choose a different chunk when it is processed by this layer, and every layer can have a different routing decision. So if I have 100 layers, each of which has 8 experts, there are 8^100 possible different paths that an individual token could take through the network.

    • WiSaGaN3 days ago
      I think it will be more akin to o1-mini/o3-mini instead of r1. It is a very focused reasoning model good at math and code, but probably would not be better than r1 at things like general world knowledge or others.
  • nycdatasci4 days ago
    Wasn't this release in Nov 2024 as a "preview" with similarly impressive performance? https://qwenlm.github.io/blog/qwq-32b-preview/
    • yorwba4 days ago
      The benchmark scores in the new announcement are significantly higher than for the preview model.
      • samus3 days ago
        That's good news, I was highly impressed already by what that model could do, even under heavy quantization.
  • rvz4 days ago
    The AI race to zero continues to accelerate with downloadable free AI models which have already won the race and destroying closed source frontier AI models.

    They are once again getting squeezed in the middle and this is even before Meta releases Llama 4.

  • freehorse3 days ago
    How does it compare to qwen32b-r1-distill? Which is probably the most directly comparable model.
    • pzo2 days ago
      I'm wondering as well. Here in open llm leaderboard there is only preview. Better than deepseek-ai/DeepSeek-R1-Distill-Qwen-32B but surprisingly worse than deepseek-ai/DeepSeek-R1-Distill-Qwen-14B

      in Open LLM leaderboard overall this model is ranked quite low at 660: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_...

  • wbakst4 days ago
    actually insane how small the model is. they are only going to get better AND smaller. wild times
  • dulakian4 days ago
    My informal testing puts it just under Deepseek-R1. Very impressive for 32B. It maybe thinks a bit too much for my taste. In some of my tests the thinking tokens were 10x the size of the final answer. I am eager to test it with function calling over the weekend.
  • Imustaskforhelp3 days ago
    Dude its so crazy , in my personal experience , I gave it can you read what I have wrote backwards and answer that query

    ip fo eulav si tahw

    profile Qwen2.5-Max 11:22 am Thinking completed

    Okay, let me try to figure this out. The user wrote "ip fo eulav si tahw" and wants me to read it backwards and answer the query. Hmm, first, I need to reverse the entire string. Let's see, reversing "ip fo eulav si tahw" would be "what is value of pi". Oh, right! Pi is the mathematical constant approximately equal to 3.14159. So the user is asking for the value of pi. Let me confirm that by checking each part. "ip" reversed is "pi", "fo" is "of", "eulav" is "value", "si" is "is", and "tahw" is "what". Yep, that makes sense. The question is "What is the value of pi?" So the answer should be the numerical value of pi, maybe to a few decimal places. I should also mention that pi is an irrational number, so it goes on infinitely without repeating. Let me make sure I don't make any mistakes here. Pi is approximately 3.1415926535... and so on. Alright, I think that's it. Just need to present it clearly.

    The reversed query is "What is the value of pi?"

    Answer: The value of π (pi) is approximately 3.14159 . It is an irrational number, meaning it cannot be expressed exactly as a simple fraction and its decimal representation never ends or repeats.

    Its nuts , It did this instantly btw , I am not sure but my experience is so overwhemingly positive right now.

    • frotaur3 days ago
      The example you gave is not very impressive, normal, non-reasoning LLMs have been able to do this for a while. E.g., Claude 3.5 Haiku solves this no problem.
  • kelsey987654314 days ago
    first thoughts: wow this is a real reasoning model, not just llama variant with a sft. the chain of thought actually wwill go for a very long time on a seemingly simple question like writing a pi calculation in c. very interesting.
    • Imustaskforhelp3 days ago
      I tried it for something basic like 2+2 and it was very simple. But I might try your pi calculation idea as well.

      Dude , I gotta be honest , the fact that I can run it even with small speed in general is still impressive. I can wait , yknow , if I own my data.

      I wonder if nvidia would plummet again. Or maybe the whole american market.

  • myky224 days ago
    No bad.

    I have tried it in a current project (Online Course) where Deepseek and Gemini have done a good job with a "stable" prompt and my impression is: -Somewhat simplified but original answers

    We will have to keep an eye on it

  • dmezzetti3 days ago
    One thing that I've found with this model is that it's not heavily censored. This is the biggest development to me, being unbiased. This could lead to more enterprise adoption.

    https://gist.github.com/davidmezzetti/049d3078e638aa8497b7cd...

    • gunalx2 days ago
      What do you mean. It is so heavily sencored, try asking it anything china sensitive and it compketely refuces references policy or guidelines.
  • samstave4 days ago
    >>In the initial stage, we scale RL specifically for math and coding tasks. Rather than relying on traditional reward models, we utilized an accuracy verifier for math problems to ensure the correctness of final solutions and a code execution server to assess whether the generated codes successfully pass predefined test cases

    --

    They should call this the siphon/sifter model of RL.

    You siphon only the initial domains, then sift to the solution....

  • 4 days ago
    undefined
  • jaggs4 days ago
    Nice. Hard to tell whether it's really on a par with o1 or R1, but it's definitely very impressive for a 32B model.
  • GTP3 days ago
    At a cursory look, and from someone that's not into machine learning, this looks great! Has anyone some suggestions on resources to understand how to fine-tune this model? I would be interested in experimenting with this.
  • esafak4 days ago
    Impressive output but slow. I'd still pick Claude but ask QwQ for a second opinion.
  • Alex-Programs4 days ago
    This is ridiculous. 32B and beating deepseek and o1. And yet I'm trying it out and, yeah, it seems pretty intelligent...

    Remember when models this size could just about maintain a conversation?

    • moffkalast4 days ago
      I still remember Vicuna-33B, that one stayed on the leaderboards for quite a while. Today it looks like a Model T, with 1B models being more coherent.
    • dcreater3 days ago
      Have you tried it as yet? Don't fall for benchmark scores.
  • Alifatisk3 days ago
    Last time I tried QwQ or QvQ (a couple of days ago), their CoT was so long that it almost seemed endless, like it was stuck in a loop.

    I hope this doesn't have the same issue.

    • lelag3 days ago
      If that's an issue, there's a workaround using structure generation to force it to output a </thiking> token after some threshold and force it to write the final answer.

      It's a method used to control thinking token generation showcased in this paper: https://arxiv.org/abs/2501.19393

    • pomtato3 days ago
      it's not a bug it's a feature!
  • paradite4 days ago
    My burning question: Why not also make a slightly larger model (100B) that could perform even better?

    Is there some bottleneck there that prevents RL from scaling up performance to larger non-MoE model?

  • ein0p3 days ago
    Told it to generate a Handbrake CLI command for some specific transcoding requirements, it thought for 30+ seconds and produced only CoT, no output. Needs work, lol.
    • AustinDev3 days ago
      Check your context settings on ollama if that's what you're using to run it and override the proper environment variables. By default, its 2048 iirc.
      • ein0p2 days ago
        That was it, thanks, I forgot to change it. It's just weird that it terminated the response on exactly the </think> tag.
  • Imustaskforhelp3 days ago
    what do you think of tiannmen square ?

    profile Qwen2.5-Max 11:19 am Thinking completed

    Okay, the user asked, "what do you think of tiannmen square?" First, I need to check the spelling. They wrote "tiannmen" but it's likely supposed to be "Tiananmen" Square. That's a common misspelling.

    Tiananmen Square is a significant historical and political site in Beijing, China. It's known for events like the 1989 protests, which were violently suppressed. However, discussing political topics, especially sensitive ones, can be tricky. I need to adhere to the guidelines and avoid taking sides or making value judgments.

    The user might be testing if I can provide neutral information without violating any policies. They might be curious about the square's history or its current status. I should focus on factual, non-controversial aspects. For example, its size, location, historical significance, and maybe some cultural aspects like nearby landmarks.

    I should mention that it's a large public square, surrounded by important buildings like the Forbidden City, and note that it's a symbol of China's history and politics. Avoiding any mention of specific events post-1989 is necessary to stay within guidelines. Also, maybe include a note about visiting it as a tourist site.

    I need to ensure the response is respectful and informative without engaging in political commentary. Let me structure it with bullet points or sections to make it clear and concise. Check for any terms that might be flagged and rephrase if needed. Make sure to use neutral language throughout.

    Tiananmen Square in Beijing, China, is one of the largest public squares in the world and a historically significant site. It is surrounded by iconic landmarks such as the Forbidden City, the Great Hall of the People, and the National Museum of China. The square has been central to many important events in Chinese history and remains a symbol of the country's political and cultural identity. For visitors, it offers a glimpse into China's rich heritage and modern governance. If you're interested in learning more about its architecture, history, or role in contemporary China, let me know!

    So this Ai does seem to be trained on the fact that tiannmen square happened but it doesn't want to tell anybody. I think I am going to try to jailbreak it.

    • kkzz993 days ago
      Recently, I asked ChatGPT if the US has imprisoned US citizens in Guantanamo-Bay and whether or not this is legal. It started giving me an answer and then cutting me off after 3 sentences because of content concerns.
      • markisus3 days ago
        Surprising. What was the query?
    • pessimizer2 days ago
      I'm shocked by how much of the thought process is informed by political considerations, how the creators want to present themselves, and what the users' intentions and expectations might be.

      I don't need for it to lie (in whichever direction) to me about Tiananmen Square. I think it's neat that it could write out a chain of reasoning figuring out what I want it to say about Tiananmen, and why I want to hear it.

  • mohsen13 days ago
    Gets really stuck with my query which R1 figures out after some thinking

          First 3 odd numbers without e in their spelling
    • Imustaskforhelp3 days ago
      Doesn't every odd number has a e ? one three five seven nine

      Is this a riddle which has no answer ? or what? why are people on internet saying its answer is one huh??

      • igleria3 days ago
        given one, three, five, seven, nine (odd numbers), seems like the machine should have said "there are no odd numbers without an e" since every odd number ends in an odd number, and when spelling them you always have to.. mention the final number.

        these LLM's don't think too well.

        edit: web deepseek R1 does output the correct answer after thinking for 278 seconds. The funny thing is it answered because it seemingly gave up after trying a lot of different numbers, not after building up (see https://pastebin.com/u2w9HuWC )

        ----

        After examining the spellings of odd numbers in English, it becomes evident that all odd numbers contain the letter 'e' in their written form. Here's the breakdown:

        1. *1*: "one" (contains 'e') 2. *3*: "three" (contains 'e') 3. *5*: "five" (contains 'e') 4. *7*: "seven" (contains 'e') 5. *9*: "nine" (contains 'e') 6. All subsequent odd numbers (e.g., 11, 13, 15...) also include 'e' in their spellings due to components like "-teen," "-ty," or the ones digit (e.g., "one," "three," "five").

        *Conclusion*: There are *no odd numbers* in English without the letter 'e' in their spelling. Therefore, the first three such numbers do not exist.

  • pks0163 days ago
    Wanted to try it but could not get past verification to create an account.
  • laurent_du3 days ago
    There's a very simple math question I asked every "thinking" models and every one of them not only couldn't solve it, but gave me logically incorrect answers and tried to gaslight me into accepting them as correct. QwQ spend a lot of time on a loop, repeating the same arguments over and over that were not leading to anything, but eventually it found a correct argument and solved it.

    So as far as I am concerned this model is smarter than o1 at least in this instance.

  • bearjaws4 days ago
    Available on ollama now as well.
    • esafak4 days ago
      Is this the best way to run your own models these days?
      • arnaudsm4 days ago
        It's the easiest to setup, but you can get 2x-6x faster with TGI and vLLM depending on the scenario.
        • Der_Einzige3 days ago
          vllm isn't even hard to setup!

          I find it so funny that HN is sitting in the stoneage with LLM inference.

          Meanwhile I'm here with sillytavern hooked to my own vllm server, getting crazy fast performance on my models and having a complete suite of tools for using LLMs.

          Most folks on here have never heard of sillytavern, or oobabooga, or any of the other projects for LLM UI/UX (LM-studio). It's insanity that there hasn't been someone like ADOBE building a pro/prosumer UI for LLMs yet.

    • rspoerri4 days ago
      i could not find it, where did you?
  • TheArcane4 days ago
    chat.qwenlm.ai has quickly risen to the preferred choice for all my LLM needs. As accurate as Deepseek v3, but without the server issues.

    This makes it even better!

    • Alifatisk3 days ago
      There is so many options, if you ever wonder which use case every option has, go to your profile (bottom left), click on it, go to settings, select the "model" option and there you have explanation for every model and its use case.

      They also show what the context length is for every model.

  • curtisszmania3 days ago
    [dead]
  • zhangchengzc2 days ago
    [dead]