78 points by iyaja 12 hours ago | 13 comments
  • paxys 10 hours ago
    Lots of words and weird analogies to say basically nothing.

    What is the status of the project? What can it do? What has it achieved in 5 years?

    But no, let's highlight how we follow the "Elon process".

    As a side note, whenever someone incessantly focuses on lines of code as a metric (in either direction), I immediately start to take them less seriously.

    • dewey 9 hours ago
      Using lines of code as a metric for productivity is bad. Using it to show how simple something is, or how a refactor removed x lines of code that no longer need to be maintained, isn’t such a bad thing I’d say.
      • alphazard 9 hours ago
        Yeah, this is exactly right: if you can trust the contributors not to code-golf or otherwise Goodhart the LoC metric, then it's a reasonable measure of complexity.

        It doesn't work as well when you start mixing languages, or generating code.

      • whilenot-dev 9 hours ago
        TFA includes a time measurement though, and 5 years for 18'935 SLOC doesn't quite scream "how simple something is".
      • selkin 9 hours ago
        Fewer LOC also doesn't imply simplicity: just look at the demoscene, which often has the former but not the latter.
    • jszymborski 10 hours ago
      From [0]:

      "When we can reproduce a common set of papers on 1 NVIDIA GPU 2x faster than PyTorch. We also want the speed to be good on the M1. ETA, Q2 next year."

      [0] https://tinygrad.org/#tinybox

    • piskov 9 hours ago
      He was able to run an Nvidia GPU on a Mac via Thunderbolt with tinygrad.

      https://www.tomshardware.com/pc-components/gpus/tiny-corp-su...

      Check tinygrad’s Twitter account for specifics if you want to catch up on progress.

    • JoeDohn 3 hours ago
      Making things less dumb is not Elon's process. If it were, we'd be saying that everything Elon isn't involved with is dumb!
  • still-learning 9 hours ago
    >People get hired by contributing to the repo. It’s a very self directed job, with one meeting a week and a goal of making tinygrad better

    I find this organizational structure compelling; it's probably the closest you can get to 100% productivity in a week.

    • ttul 9 hours ago
      I wonder what happened to George’s old policy of requiring everyone to move to San Diego?
      • georgehotz 9 hours ago
        That's comma.ai's policy since they make hardware and solve physical problems. The tiny corp has been hybrid (remote-first) since day 1 because it primarily writes open source software, and there's a long track record of success with remote for this kind of task.

        We have a few whole-team meetups in Hong Kong each year for 2-4 weeks, and there's a San Diego or Hong Kong office that anyone can work from as they choose. We also have a wide array of fancy multi-GPU boxes that everyone on the team gets full access to (known external contributors can get some access also).

        I think many companies that were quick to embrace remote have walked it back; not everyone is capable of working productively remotely, nor are all types of work amenable to remote.

  • pa7ch 10 hours ago
    Very weird to market this as subscribing to the "Elon process for software".

    I remember when the DEF CON CTF would play Geohot's PlayStation rap video on the wall every year.

    • spiderfarmer 10 hours ago
      I hate it when ‘inspirational’ quotes are attributed to the person with the largest audience rather than the people who came up with them, like in this case, the engineers at Lockheed’s Skunk Works.
      • ramesh31 8 hours ago
        It's an apocryphal quote.

        "A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away."

        - Antoine de Saint-Exupéry

  • geremiiah 7 hours ago
    The risk for Tinygrad is that PyTorch will create a new backend for Inductor, plug in their AMD codegen stuff, and voilà, PyTorch is still king. I mean, they could have easily just taken that route themselves instead of bothering with a new ML framework and AD engine. 99% of the work is just the AMD codegen part of the compiler.
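
    For context, torch.compile already accepts a custom backend as a callable that receives the captured FX graph, which is the plug-in point being described here. A minimal sketch (my_backend is a made-up name; a real backend like Inductor would emit device code instead of falling back to eager):

      import torch

      # A torch.compile backend is any callable taking an FX GraphModule
      # and example inputs, returning a compiled callable. Here we just
      # inspect the graph and fall back to eager execution.
      def my_backend(gm: torch.fx.GraphModule, example_inputs):
          print(gm.graph)        # show the captured graph
          return gm.forward      # no codegen; run the graph as-is

      @torch.compile(backend=my_backend)
      def f(x):
          return (x @ x).relu().sum()

      print(f(torch.randn(8, 8)))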

    Either way, super cool project and I wish them the best.

    • ellis0n 7 hours ago
      The main risk is that an LLM will rewrite it itself and programmers will no longer be needed. I worked a bit with tinygrad and it looks quite amusing; I managed to run it right away and make fixes in one of the tasks, but I decided not to commit because I was afraid of rejection. The tasks are strange, for example: $500 for two months of optimizing H.265, something that only a small group of people in the world can do.

      SV is a unique place where you can meet George and get $5M, maintain a bunch of hardware, build a framework in 20,000 LOC, and have everything work well.

  • measurablefunc 9 hours ago
    Is it really "Complex"? Or did we just make it "Complicated"? - https://www.youtube.com/watch?v=ubaX1Smg6pY
    • alphazard 9 hours ago
      Programming a GPU in 2025 is complex. That might be because it has been made complicated, but regardless, it is not complexity that this project can control.

      The fact that it competes with PyTorch in so few lines speaks to the incredibly low incidental complexity imposed by Tinygrad.
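
      For a sense of what that looks like in practice, here's a minimal sketch of tinygrad's PyTorch-like frontend (illustrative only; exact import paths and keyword names can vary between tinygrad versions):

        from tinygrad import Tensor

        # Same autograd shape as PyTorch: build a graph, reduce to a
        # scalar, call backward, read gradients off the leaves.
        x = Tensor.rand(4, 3)
        w = Tensor.rand(3, 2, requires_grad=True)
        loss = x.matmul(w).relu().sum()
        loss.backward()
        print(w.grad.numpy())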

  • alphazard 9 hours ago
    > To fund the operation, we have a computer sales division that makes about $2M revenue a year.

    What's the margin on that? Do 5 software engineers really subsist on the spread from moving $2M/yr in hardware?

    • piskov 9 hours ago
      George raised $5.1M in 2023 for Tinygrad.
  • mika6996 11 hours ago
    What would tinygrad replace if it continues to proceed like this?
    • spiderfarmer 10 hours ago
      Potentially PyTorch and TensorFlow.
    • cyberax 8 hours ago
      I think it has great potential for deployments on edge systems.
      • piskov 5 hours ago
        It is already used in comma.ai’s openpilot hardware.
  • deburo 10 hours ago
    So this is all Python? I bet Chris Lattner has approached them.
    • zephen 10 hours ago
      Lattner is a smart guy, but I think Mojo might be the wrong direction.

      Time will tell.

      History has so far not been kind to projects which attempt to supplant CPython, whether they are other Python variants such as PyPy, or other languages such as Julia.

      Python has a lot of detractors, but (despite some huge missteps with the 2-3 transition) the core team keeps churning out stuff that people want to use.

      Mojo is being positioned "as a member of the Python family" but, like Pyrex/Cython, it has special syntax, and even worse, its calling convention is both different from Python's and depends on the type of the variable being passed. And the introspection is completely missing.

      • tucnak 6 hours ago
        Honestly, I feel like Julia might well beat Mojo or something like it to the punch, sooner or later. It has facilities and supporting infrastructure for a lot of scientific and data-handling tasks surrounding ML, if not for compiling and dispatching kernels (where XLA reigns supreme over anything in the CUDA ecosystem!). For example, Bayesian programming with Turing.jl is virtually unmatched in Python. It's been a while since I looked at Lux.jl for XLA integration, but I reckon it could be incredibly useful. As long as LLMs and RLVR training thereof continue to improve, we may eventually be able to translate loads of existing PyTorch code.
        • zephen 30 minutes ago
          I dunno. This sort of thing gives me pause:

          https://danluu.com/julialang/

          But the first thing that gave me pause about Julia? They sort of pivoted to say "we're general purpose" but the whole index-starting-at-one thing really belies that -- these days, that's pretty much the province of specialty languages.

        • eli_gottlieb 3 hours ago
          > For example, Bayesian programming like Turing.jl is virtually unmatched in Python.

          What about numpyro?

          Disclaimer: I contribute to numpyro occasionally.
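
          For anyone comparing the two: a toy numpyro model looks roughly like this (an illustrative sketch; the data and priors here are made up):

            import jax.numpy as jnp
            from jax import random
            import numpyro
            import numpyro.distributions as dist
            from numpyro.infer import MCMC, NUTS

            # Toy model: infer the mean and scale of a few observations.
            def model(y):
                mu = numpyro.sample("mu", dist.Normal(0.0, 10.0))
                sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
                numpyro.sample("obs", dist.Normal(mu, sigma), obs=y)

            y = jnp.array([1.1, 0.9, 1.3, 0.7])
            mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
            mcmc.run(random.PRNGKey(0), y=y)
            mcmc.print_summary()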

  • semiquaver 6 hours ago
    Is this the guy who talked a big game about all the things he was going to fix at Twitter, then utterly failed when confronted with a real-world codebase and gave up having done nothing of use?
    • piskov 6 hours ago
      He left after realizing nothing was going to change (it’s not like he was in the leadership).

      He also half-joked about how the good food went away.

      George is many things, but not a quitter (see comma.ai, for example).

      If someone could pull this off, it’s him, thanks to his “never give up, never surrender” attitude.

      The shit with Nvidia just needs to stop.

  • timzaman 10 hours ago
    Feel bad for geohot. Such a lovely guy, I hope he strikes it right soon.
    • still-learning 9 hours ago
      Seems like he's doing fine, why do you feel bad for him?
  • peter_d_sherman 7 hours ago
    >"We also have a contract with AMD to get MI350X on MLPerf for Llama 405B training."

    Anything that helps AMD (and potentially other GPU/NPU/IPU etc. chip makers) catch up with Nvidia/CUDA is potentially worth money, potentially a lot of money, potentially up to Billion$...

    Why?

    If we have

    a) Market worth Billion$

    and

    b) A competitive race in that Market...

    then

    c) We have VALUE in anything (product, service, ?, ???) that helps any given participant capture more of that market than their competitors...

    (AMD and the other lesser-known GPU/NPU/IPU etc. chip vendors are currently lagging behind Nvidia's CUDA AI market dominance, so anything that helps them advance in this area should, generally speaking, be beneficial for all technology users, and potentially profitable (if the correct deals can be struck!) for those that have the skills to do such assisting...)

    Anyway, wishing you well in your endeavors, Tinygrad!

  • piskov 9 hours ago
    > tinygrad is following the Elon process for software. Make the requirements less dumb. The best part is no part.

    That’s not Elon’s. See the Russian TRIZ methodology:

    https://en.wikipedia.org/wiki/TRIZ

  • vileain 11 hours ago
    [flagged]
    • dang 10 hours ago
      "Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."

      https://news.ycombinator.com/newsguidelines.html

    • mycodendral 10 hours ago
      The value is the directness, not any implied origination.

      Not everyone cares about playing Voldemort.

      • vileain 9 hours ago
        What is so aggrandizingly 'direct' about calling the system you are attempting to improve 'dumb'?
    • spiderfarmer 10 hours ago
      There are lots of bubbles where Elon is still king. Those bubbles are often devoid of deodorant.
      • vileain 10 hours ago
        Based on the response it appears HN is one such bubble.
        • spiderfarmer 9 hours ago
          Elon spent billions to buy a platform and promote his tweets. He spent billions more to create a tweaked AI model that praised him like a mad king.

          He only has to spend a couple thousand a month to influence comment ranking on HN.