When Soumith Chintala co-launched PyTorch, and for many years after, the alternatives for fast, interactive, convenient development were much worse. There was no Jax.
Every single AI researcher I know, including me, who tried PyTorch back then immediately wanted to switch to it, because it was so much better. Andrej Karpathy described what PyTorch felt like back then when he tweeted, in May 2017, "I've been using PyTorch a few months now and I've never felt better. I have more energy. My skin is clearer. My eyesight has improved."[a]
THANK YOU SOUMITH for your hard work over all these years! Your hard work has made a difference for a huge number of people, including many of us here on HN.
We wish you success in your future endeavors, whatever they turn out to be!
Please ignore all the petty criticism.
---
My recollection is that when I looked at Chainer back then, it didn't offer a comprehensive library of preexisting components for deep learning. When I tried PyTorch, on the other hand, I vividly remember it as already having lots of prebuilt components (common layers, activation functions, etc.) in `torch.nn`, so it was easier and faster to get going.
These memories are vague, so I could be wrong.
[1] https://pypi.org/project/autograd/#history
[2] https://www.cs.toronto.edu/~rgrosse/courses/csc421_2019/read...
[3] https://web.archive.org/web/20170422051747/http://pytorch.or...
...but PyTorch felt friendlier and more Pythonic, and it came with a comprehensive library of prebuilt components for deep learning in `torch.nn`.
As his longtime colleague, the one thing I would want people to know about him and this decision is that Soumith has always viewed PyTorch as a community project. He consistently celebrated the contributions of his co-creators Adam and Sam, and he extended the same view towards Yangqing and the Caffe2 crew that we merged into PyTorch. At the very beginning, by Soumith's highly intentional design, PyTorch was aimed at being truly developed by and for the AI research community, and for many years that was the key way in which we grew the framework, the FB PT team, and the wider community. At every single stage of PT's lifecycle, he always ensured that our conception of PT and its community grew to include and celebrate the new people and organizations growing what was possible with PT. He's an incredible talent magnet, and thus more and more smart people kept dedicating their blood, sweat, and tears to making PT bigger and better for more people.
I've worked with some very well known and highly compensated leaders in tech, but *no one* has done the job he has done ameliorating the bus factor problem with his baby. PT has a unique level of broad support that few other open source technologies can reach. In a world of unbounded AI salaries, people who want to move AI research methods forward still freely give their time and attention to PyTorch and its ecosystem. It's the great lever of this era of AI that is moving the world, *due in large part* to the strength of the community he fostered and can now let continue without his direct involvement.
His departure is the end of an era, but it's also operationally a true non-event. PyTorch is going strong and can afford to let one of its creators retire from stewardship. This is precisely what success looks like in open source software.
He deserves our congratulations and our thanks. Enjoy your PT retirement, man.
Wishing him the best!
I mention this because it feels analogous to military research, where people "dream" of how advanced the military is, how forward they are compared to public research... and yet, it seems to be a recurring myth they love to sustain.
So the signal I get here is AI "labs" in BigTech have nothing worth waiting for around the corner, it's just more of the same and boring for people who stick there.
He’s been with Meta for 11 years and is likely in a very comfortable financial position, given the substantial stock options he’s received over that time.
He also mentioned the arrival of a new child, and it’s well known that Meta's work-life balance isn’t always ideal.
On top of that, Meta, like many major tech companies, has been shifting its focus toward LLM-based AI, moving away from more traditional PyTorch use cases.
Considering all of this, it seems like a natural time for him to move on and pursue new, more exciting opportunities.
This is very wrong. Meta is on the forefront of recommendation algorithms and that's all done with traditional ML models made using PyTorch.
Wait, are LLMs not built with PyTorch?
Building AI, and building with AI.
What's modern about LLMs is the training infrastructure and the single-coordinator pattern, which PyTorch has only just started on and which is inferior to many internal implementations: https://pytorch.org/blog/integration-idea-monarch/
It's not dominant in self-hosting, where llama.cpp wins, but there's also not really that much self-hosting going on (at least compared with the number of requests that hosted models are serving)
Whether or not this is the case, I don't get this as being the reason for Soumith leaving - it sounds as if he is just ready for a change.
Still, it is noticeable that with many of the AI companies claiming that their version of "AGI" is just around the corner, developers and staff don't appear to be particularly excited about this (I assume they realize it is just hype, not some momentous advance around the corner), and leave to pursue different things, such as Mira Murati starting a fine-tuning company, Karpathy going back to education, others switching ship (typically from OpenAI to Anthropic), etc.
Why would they be excited about it? There's little in it for them.
Nobody that has to work for a living should be excited for AI; they should be genuinely afraid of it. AGI will have vast, deeply negative consequences for almost everyone that has to work for a living.
There are a lot of things I don't like about my current job, but not enough for it to make sense to gamble on a new place. It's easier to push for change from my current position than to count on any new place being any better.
But if it gets worse and I do leave, I'll definitely be telling the interviewer, "I was just ready for a change."
As far as I know, we all get one life. If one can help it (modulo other constraints), one should not get trapped by prestige, achievement, short-term admiration by others, impact and external facing factors. To see an alternate reality, it helps to escape the bubble, for example, by spending time in a completely different culture or environment where no one knows or cares about what one did.
I admire people taking such decisions. It's easy to be on autopilot in life. People who wear their success lightly are rare, but they are more philosophically aware, in my opinion at least. I wish him good luck!
I'm in a similar position now and need to make a decision. The problem is that after leaving the IT world for a while, it will be hard to get back. I'll have to change my life completely and discard all the knowledge and expertise I have. That will be fun, interesting, eye-opening, etc., but there's no way back.
Now, the current job market makes this significantly harder than it was in the 2010's, but that's floating over all of us- if your company does an Amazon tomorrow, would you get a job as nice as you currently have? Maybe, maybe not.
Can be*, that's not necessarily always true. I've quit jobs plenty of times without having any plan for the future or particular drama-reason for leaving, just "It's not as fun here anymore, despite this being a great place to work", I'm sure I'm not the only one who does so.
What I've never done though, is leaving a place without being 100% honest exactly why I'm leaving. I won't say "I was just ready for change" if that wasn't the reason, I have no reason not to be honest about why I'm leaving.
I do disagree, though: unless there's some actionable change that would specifically benefit me, like more money, my answer outside of private conversations with people I know well is going to be some variant of "time for a change." Anything else just invites arguments and conversations I don't want to have.
In fact everything secret tends to be behind. Secrecy is a huge burden, and seriously limits all forms of collaboration.
In addition, because military projects are often big and highly politicized, you get all the inefficiencies that go with that. Classification is also convenient for hiding screwups and corruption.
I think that the major difference about deployed military technologies, in contrast to both military R&D and the entire commercial side, is that they are, by and large, incredibly rock solid and reliable. If they aren't, they don't actually get used. It takes a lot of effort to get them that way. I remember once, at a testing ground for our robot tanks of the far future, right next door was an outdoor test track. And they were testing a kitchen trailer (a kitchen for ~200 men that can be towed by a Humvee). And they drove it around the track continuously for three weeks, stopping only long enough to change drivers/vehicles, and four times a day they would halt and make meals for 200 people, then pack up and get back to driving. This was one of several reliability tests that the kitchen trailer had to get through before it was accepted for service.
Our R&D stuff couldn't handle that (it needed 3-4 engineers to carefully monitor it at all times), but the stuff that needed to be in the hands of some random 18 year old with a two week training course had to be rock solid to use, do regular maintenance on, and fix, even when they were only getting four hours of sleep a night. If it wasn't up to that level, then the troops ended up ignoring it, leaving it behind when they went out to do their job. And by and large, from what I could tell, most of the stuff they had was that reliable. There were some cool things that we were doing in the R&D space, but we were a long way from that level.
Secret Squirrel projects (which I was near but never read into) can get away with lower reliability because they can count on the users to be much better trained and prepared, though again, from my brief encounters with these sorts, they will ignore anything they don't trust to be completely reliable. Reliability matters far more than cutting edge for like 99.9% of military gear.
Case in point: firearms. The standard-issue M4A1 is actually pretty good on that front already, but for civilian ARs, there's a whole cottage industry around making improved components that can handle even more abuse.
Knives, as well. Your average military field knife is something like 80 years behind the curve on materials, especially steel. Which isn't necessarily a bad thing - it's "good enough" (given what they're realistically used for) and cheap at that. But civilians can and do drop 10x money for knives that you can baton wood with and still have a shave after, even though there's no practical use for that kind of thing.
Excuse you, I just came back from a 6 month backpacking trip where I had to split my own kindling along the way AND shave regularly and I didn't have weight for a knife/axe AND razor blade /s
> a 318-page report [...] said the SAIC software was incomplete, inadequate and so poorly designed that it would be essentially unusable under real-world conditions. Even in rudimentary tests, the system did not comply with basic requirements
I figured the reason Palantir was so successful was because it was a SV software company instead of a defense contractor dabbling in IT or specialized government consultancy.
Ironically, corporations can afford to take more risks of failure (financially and project-wise) than militaries because failure for them doesn't mean actual human death (and when it can, you see processes come in that look a lot more like military processes).
The military should have very reliable systems, and they often know the point at which their systems will fail (MTBF calculations are easier to develop with their record keeping). However, the military also has an almost unlimited budget and body count to keep just reliable enough things working much better than they should. It's also really bad about actually competing companies against each other.
The commercial sector, targeting consumers, is where you actually get reliable systems. Why? Because consumers will go towards either the cheapest option (reliability is replaced with ubiquity in the market, it's replaceable) or the more reliable but more expensive options. They (individuals) don't have an unlimited budget or unlimited time to maintain everything in their life. There's competition in the commercial world that's completely absent in the military world
The two major exceptions are where COTS products have taken over (definitionally, DOD is using commercial, often consumer-targeted, products instead of military specific products) and special forces. Special forces often bypasses normal acquisitions processes and so ends up having a better chance to compete vendors against each other than other parts of the military.
This doesn't mean everything the DOD procures through normal acquisitions is inherently unreliable, but reliability is only one of many factors and often only really discovered after selection and full-rate production has started. By that point, the DOD is committed to it for years to come. Each DOD procurement is separate enough from others that you don't even get huge opportunities for reuse. The F-35, to pick something from this century, didn't get components that were shared with other aircraft in the DOD fleet. It's almost all new, which means a lot of things were learned about its reliability after it started flying. It has new comms, new radar, new almost everything. Even the engine (though that probably used many subcomponents shared with other engines) was a new engine just used by the F-35.
It's MUCH cheaper and quicker.
Also absolutely unknown if the "new thing" is AI-related at all!
If anything, the reverse seems to be true, if you want to work on something big, you want to be in a small company, sufficiently funded, filled with great people, yet not "big", that's when "something big" seems to be more likely to happen.
In contrast, as far as I can think, the bigger a company gets, the less likely they are to actually come up with "something big", it seems like most of the times you need (creative) constraints in order for the results to end up being actually innovative, otherwise you end up like IBM and Meta, throwing money on stuff and getting some results, but nothing really out of the ordinary considering what's happening elsewhere in their ecosystems.
Edit: to be clear, I didn't mean to imply their next thing is AI related, solely that they obviously know more about AI at Meta than e.g. XR at Meta, just because that's their expertise.
If he has just one other priority in that set (which could still include a robotic min/max of AI impact), then your assumption fails.
It looks like he'd already been transferred once (to Infra) and maybe didn't want to do it again.
If you've ever worked on "advanced military grade" equipment, you'd know better.
It tends to be what you'd euphemistically call "well-proven technology", built down to a price by the lowest bidder, by comparatively unskilled labour.
The most shocking thing about the "captured" Russian drones is they use name-brand Raspberry Pis inside. I'm prepared to bet the American versions use whatever AliExpress crap is on special this week. The UK stuff definitely does.
"Big Army" doesn't see that stuff for decades, if ever, and mostly never due to cost. And I'm not even getting into classified submarine and nuclear tech, fixed wing drones and aircraft flying at night out of known test facilities, etc.
There's tons of actually advanced tech out there in military circles.
I don't think that you can read this from the blog post at all, but it gives me a chuckle to think how the quest for AGI at Meta may be "The Men Who Stare at Goats" all over again.
It just makes me think of all the staff, technical staff, that left OpenAI recently. Altman was making grand claims about what was coming next.
Well, we know what followed; I don't think any researcher who left, knowing what was in the pipeline, feels like they missed much in terms of access.
Having friends who are at or near both FAIR and other AI parts of Meta, resources are not the issue, anymore at least (there had been a massive squeeze for the last two years though). But PyTorch and FAIR use(d) an AWS-based cluster. (However, PyTorch is used everywhere else inside Facebook. Well, not everywhere...)
There is/are plenty of interesting things happening at big tech, and Meta specifically. If you like computer vision, then Meta is pretty much still the world leader. Much as it pains me to say it.
Unlimited compute resources aren’t literally unique but there are only a small handful of places in the world that have that.
Vast quantities of private data, especially text communications and images. Very few places have that. Coupled with a culture that puts zero privacy protections on that data. Even Google likes to think they’re doing the right thing, so I think that makes Meta unique.
It's not what you or I believe, it's what he believes.
The fact that he's ready to give up on something unique means that, from what he knows internally, he can't professionally predict anything interesting enough arriving in a timeframe sufficient for him to want to stay.
Pytorch and old Lua Torch were a pleasure to work with compared to the contemporary Tensorflow. Lots of S.C's code was copied around liberally, it had its quirks (I remember the DCGAN code had a pretty odd way of doing parameter passing) but it was also really easy to understand and made random people like me feel like we had suddenly stumbled onto something crazy powerful (which we had!). It was wonderfully hackable.
can you explain why you think TensorFlow fumbled?
In my University we had to decide between both libraries so, as a test, we decided to write a language model from scratch. The first minor problem with TF was that (if memory serves me right) you were supposed to declare your network "backwards" - instead of saying "A -> B -> C" you had to declare "C(B(A))". The major problem, however, was that there was no way to add debug messages - either your network worked or it didn't. To make matters worse, the "official" TF tutorial on how to write a Seq2Seq model didn't compile because the library had changed but the bug reports for that were met for years with "we are changing the API so we'll fix the example once we're done".
PyTorch, by comparison, had the advantage of a Python-based interface - you simply defined classes like you always did (including debug statements!), connected them as variables, and that was that. So when I and my beginner colleagues had to decide which library to pick, "the one that's not a nightmare to debug" sounded much better than "the one that's more efficient if you have several billions training datapoints and a cluster". Me and my colleagues then went on to become professionals, and we all brought PyTorch with us.
Also the API changed constantly so examples from docs or open source repos wouldn't work.
They also had that weird thing about all tensors having a unique global name. I remember I tried to evaluate a DQN network twice in the same script and it errored because of that.
It's somewhat vindicating to see many people in this thread shared my frustrations. Considering the impact of these technologies I think a documentary about why TensorFlow failed and PyTorch took off would be a great watch.
Another consequence of this was that PyTorch let you use regular old Python for logic flow.
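For anyone who never touched the TF1-era API, here is a minimal sketch of the define-by-run style the comments above describe (the layer sizes and debug prints are purely illustrative): ordinary Python classes, ordinary control flow, and print statements that fire as the forward pass runs.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self, n_layers: int = 3):
        super().__init__()
        # Plain Python builds the layer list; nothing is "declared" up front.
        self.layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(n_layers)])
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        for i, layer in enumerate(self.layers):   # ordinary Python loop
            x = torch.relu(layer(x))
            if torch.isnan(x).any():              # ordinary Python branch
                print(f"NaNs after layer {i}")
            print(f"layer {i}: mean={x.mean().item():.4f}")  # debug as it runs
        return self.head(x)

model = TinyNet()
out = model(torch.randn(4, 16))   # values exist immediately, no session.run()
```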
TensorFlow (while a huge step on top of Theano) had issues with a strange API, mixing needlessly complex parts (even for the simplest layers) with magic-box-like optimization.
There was Keras, which I liked and used before it was cool (when it still supported the Theano backend), and it was the right decision for TF to incorporate it as the default API. But it was 1–2 years too late.
At the same time, I initially looked at PyTorch as some intern’s summer project porting from Lua to Python. I expected an imitation of the original Torch. Yet the more it developed, the better it was, with (at least to my mind) the perfect level of abstraction. On the one hand, you can easily add two tensors, as if it were NumPy (and print its values in Python, which was impossible with TF at that time). On the other hand, you can wrap anything (from just a simple operation to a huge network) in an nn.Module. So it offered this natural hierarchical approach to deep learning. It offered building blocks that can be easily created, composed, debugged, and reused. It offered a natural way of picking the abstraction level you want to work with, so it worked well for industry and experimentation with novel architectures.
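A rough sketch of that "pick your abstraction level" idea, with plain tensor arithmetic at the bottom and made-up nn.Module wrappers composed on top:

```python
import torch
import torch.nn as nn

# Low level: tensors behave much like NumPy arrays and are printable right away.
a, b = torch.randn(3, 3), torch.randn(3, 3)
print((a + b).sum())

# Wrap a tiny operation in a Module...
class Residual(nn.Module):
    def __init__(self, inner: nn.Module):
        super().__init__()
        self.inner = inner

    def forward(self, x):
        return x + self.inner(x)

# ...then compose it into a larger network, reusing the same building block.
block = Residual(nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8)))
net = nn.Sequential(block, nn.LayerNorm(8), Residual(nn.Linear(8, 8)))
print(net(torch.randn(2, 8)).shape)   # torch.Size([2, 8])
```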
So, while in 2016–2017 I was using Keras as the go-to for deep learning (https://p.migdal.pl/blog/2017/04/teaching-deep-learning/), in 2018 I saw the light of PyTorch and didn’t feel a need to look back. In 2019, even for the intro, I used PyTorch (https://github.com/stared/thinking-in-tensors-writing-in-pyt...).
> There is a handful of popular deep learning libraries, including TensorFlow, Theano, Torch and Caffe. Each of them has Python interface (now also for Torch: PyTorch)
> [...]
> EDIT (July 2017): If you want a low-level framework, PyTorch may be the best way to start. It combines relatively brief and readable code (almost like Keras) but at the same time gives low-level access to all features (actually, more than TensorFlow).
> EDIT (June 2018): In Keras or PyTorch as your first deep learning framework I discuss pros and cons of starting learning deep learning with each of them.
This new PyTorch approach was eventually supported by TensorFlow as well ("eager execution"), but the PyTorch approach was such a huge improvement that there had been an immediate shift by many developers from TF to PyTorch, and TF never seemed able to regain the momentum.
TF also suffered from having a confusing array of alternate user libraries built on top of the core framework, none of which had great documentation, while PyTorch had a more focused approach and fantastic online support from the developer team.
Back in the day, having completed Andrew Ng's ML course, I then built my own C++ NN framework copying this graph-mode Lua Torch API. One of the nice things about explicitly building a graph was that my framework supported having the model generate a GraphViz DOT representation of itself so I could visualize it.
Maybe TF has gotten better since but at the time it really felt like an internal tool that Google decided to just throw into the wild. By contrast PyTorch offered a more reasonable level of abstraction along with excellent API documentation and tutorials, so it's no wonder that machine learning engineers (who are generally more interested in the science of the model than the technical implementation) ended up favoring it.
[1] The worst part was that Google only hosted the docs for the latest version of TF, so if you were stuck on an older version (because, oh I don't know, you wanted a stable environment to serve models in production), well tough luck. That certainly didn't gain TF any favors.
The few people I know back then used keras instead. I switched to PyTorch for my next project which was more "batteries included".
If their folder of 10,000 labelled images contains one image that's a different size to the others, the training job will fail with an error about unexpected dimensions while concatenating.
But it won't be able to say the file's name, or that the problem is an input image of the wrong size. It'll just say it can't concatenate tensors of different sizes.
An experienced user will recognise the error immediately, and will have run a data cleansing script beforehand anyway. But it's not experienced users who bounce from frameworks, it's newbies.
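A minimal reproduction of that failure mode, assuming the batch is assembled with a stack the way a default collate step effectively does; the exact message varies by version, but it reports shapes, not file names:

```python
import torch

good = torch.zeros(3, 224, 224)   # a normal image tensor
bad = torch.zeros(3, 224, 225)    # the one odd-sized image in the folder

try:
    batch = torch.stack([good, bad])   # roughly what default collation does
except RuntimeError as e:
    # The error names the mismatched shapes, but says nothing about which file
    # produced them; tracing it back to the offending image is left to you.
    print(e)
```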
Even seasoned developers will bounce away from frameworks or libraries - no matter if old dogs or the next hot thing - if the documentation isn't up to speed or simple, common tasks require wading through dozens of pages of documentation.
Writing good documentation is hard enough, writing relevant "common usage examples" is even harder... but keeping them up to date and working is a rarely seen art.
And the greatest art of all of it is logging. Soooo many libraries refuse to implement detailed structured logging in internal classes (despite particularly Java and PHP offering very powerful mechanisms), making it much more difficult to troubleshoot problems in the field.
I believe some years after the TF1 release, they realized the learning curve was too steep, they were losing users to PyTorch. I think also the Cloud team was attempting to sell customers on their amazing DL tech, which was falling flat. So they tried to keep the TF brand while totally changing the product under the hood by introducing imperative programming and gradient tapes. They killed TF1, upsetting those users, while not having a fully functioning TF2, all the while having plenty of documentation pointing to TF1 references that didn't work. Any new grad student made the simple choice of using a tool that was user-friendly and worked, which was PyTorch. And most old TF1 users hopped on the bandwagon.
I also like that jax.jit forces you to write "functional" functions free of side effects or inplace array updates. It might feel weird at first (and not every algorithm is suited for this style) but ultimately it leads to clearer and faster code.
I am surprised that JIT in PyTorch gets so little attention. Maybe it's less impactful for PyTorch's usual usecase of large networks, as opposed to general scientific computing?
It's not weird. It's actually the most natural way of doing things for me. You just write down your math equations as JAX and you're done.
It's natural when your basic unit is a whole vector (tensor), manipulated by some linear algebra expression. It's less natural if your basic unit is an element of a vector.
If you're solving sudoku, for example, the obvious 'update' is in-place.
In-place updates are also often the right answer for performance reasons, such as writing the output of a .map() operation directly to the destination tensor. Jax leans heavily on compile-time optimizations to turn the mathematically-nice code into computer-nice code, so the delta between eager-Jax and compiled-Jax is much larger than the delta between eager-Pytorch and compiled-Pytorch.
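For readers who haven't used JAX, a small sketch of the functional update style under discussion; the sudoku-flavoured function is made up for illustration:

```python
import jax
import jax.numpy as jnp

@jax.jit
def place_digit(grid, row, col, digit):
    # No in-place mutation: .at[...].set(...) returns a new array, and the
    # compiler is free to lower it to an actual in-place update under jit.
    return grid.at[row, col].set(digit)

grid = jnp.zeros((9, 9), dtype=jnp.int32)
grid = place_digit(grid, 0, 0, 5)
print(grid[0, 0])   # 5
```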
JAX seems well engineered. One could argue so was TensorFlow. But the ideas behind JAX were built outside Google (autograd), so it has struck the right balance by staying close to idiomatic Python / NumPy.
PyTorch is where the tailwinds are, though. It is a wildly successful project which has acquired a ton of code over the years, so it is a little harder to figure out how something works (say torch.compile) from first principles.
Shades of Siddhartha. Back to the forest.
PyTorch of course has the benefit of being dynamically debuggable. Can't forget the first time I breakpointed my PyTorch model and wrote PyTorch calls inside the terminal to inspect the behavior. That's still something I miss a lot now that I'm working with only "fast" compiled code.
A simple feeling has such power. May he get an opportunity to create one more powerful tool before retiring.
The second I stop being curious I stop finding new and exciting things to do, and I stop feeling fulfillment. It’s one of the biggest signs saying “it’s time to move on”.
I feel so strongly for the people who can't afford the luxury. I've been there: unfulfilling jobs for years because of bills or résumé building.
Other decisions follow from this one.
TensorFlow started with static graphs and had to move to dynamic ones at version 2.0, which broke everything. Fragmentation between TensorFlow 1, TensorFlow 2, Keras, JAX.
Pytorch's compilation of this computation graph erased the remaining edge of Tensorflow.
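Presumably this refers to torch.compile from PyTorch 2.x; a minimal sketch of how it wraps an eager-mode model (the model here is arbitrary):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 10))

# The eager model stays debuggable; the compiled wrapper captures the dynamic
# graph and generates fused kernels, falling back to eager where it cannot.
compiled = torch.compile(model)

x = torch.randn(32, 64)
print(torch.allclose(model(x), compiled(x), atol=1e-5))   # outputs should match
```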
Is the battle over? From a purely computational point of view, PyTorch's solution is very far from optimal, and billions of dollars of electricity and GPUs are burned every year, but major players are happy with circular deals to entrench their positions. So at the pace of current AI code development, probably one or two years before PyTorch is old history.
[1] https://www.geeksforgeeks.org/deep-learning/dynamic-vs-stati...
Ehhh, I don’t know about that.
Sure, new AI techniques and new models are coming out pretty fast, but when I go to work with a new AI project, they’re often using a version of PyTorch or CUDA from when the project began a year or two ago. It’s been super annoying having to update projects to PyTorch 2.7.0 and CUDA 12.8 so I can run them on RTX 5000 series GPUs.
All this to say: If PyTorch was going to be replaced in a year or two, we’d know the name of its killer by now, and they’d be the talk of HN. Not to mention that at this point all of the PhDs flooding into AI startups wrote their grad work in PyTorch, it has a lot of network lock-in that an upstart would have to overcome by being way better at something PyTorch can never be good at. I don’t even know what that would be.
Bear in mind that it took a few years for Tensorflow to die out due to lock in, and we all knew about PyTorch that whole time.
The cost of migrating higher-level code to a newer framework is going to 0. You ask your favorite agent (or intern) to port it and check that the migration is exact. We already see this across the multitude of deep-learning frameworks.
The day an optimization trick appears that PyTorch can't do but another framework can, one that reduces your training cost 10x, PyTorch is going the way of the dodo.
The day an architecture that can't be implemented in PyTorch gets superior performance, it's bye-bye Python.
We see this with architectures which require real-time rendering like Gaussian Splatting (Instant Nerf), or the caching strategies for LLM sequence generation.
PyTorch has 3 main selling points:
- Abstracting away the GPU (or device) specific code, which exists because of Nvidia's mess: custom optimized kernels that you are forced to adopt if you don't want to write your own.
This matters less if you don't mind writing optimized kernels because the machine writes them, or if you don't need CUDA because you can't use Nvidia hardware (for example, you are in China), or if you use custom silicon, like Groq, and need your own kernels anyway.
- Automatic differentiation. It's one of its weak points, because they went for easy instead of optimal, and shut themselves off from some architectures. Some languages, like Julia, can do things PyTorch won't even dream about thanks to dynamic low-level compilation (though Julia has its own problems, mainly related to memory allocations). Here, with PyTorch's introduction of the "scan" function [2], we have come full circle back to Theano, TensorFlow's/Keras's ancestor, where scan was usually the pain point of the bad automatic differentiation strategy that PyTorch also chose.
The optimal solution, as all physics PhDs who have written simulations know, is writing custom adjoint code, either via source code transformation or symbolically: it's not hard but very tedious, so it's now a great fit for your LLM (or intern, or PhD candidate running "student gradient descent"), provided you prove or check that the gradient calculation is correct.
- Cluster orchestration and serialization: a model can be shared with fewer security risks than arbitrary source code, because you only share weights, and a model can be split between machines dynamically. But this is also a big weakness, because your code rusts as you become dependent on versioning: you are locked to the specific version your model was trained on.
- A soft stop is when the dynamic graph computation overhead is too much. You can still calculate, but if you were to write the function manually or with a better framework, you could be 10x faster.
Typical examples involve manually unrolling a loop or doing kernel fusion. Other typical examples are when you have lots of small objects or need to do loops in Python because the problem doesn't vectorize well, or when you could exploit sparsity by ignoring the zeros.
- A hard stop is when computing the function becomes impossible, because the memory needed to do the computation in a non-optimal way explodes. Sometimes you can get away with just writing customized kernels.
The typical example where you can get away with it is custom attention layers.
Typical examples where you can't get away with it are physics simulations. For example, the force is the gradient of the energy, but you have n^2 interactions between the particles, so if you keep anything more than zero memory per interaction during the forward pass, your memory consumption explodes. And with things like Lagrangian or Hamiltonian neural networks, where you try to discover the dynamics of an energy-conserving system, you need to be able to differentiate at least three times in a row (see the sketch after this list).
There are also energy-expending stops, where you need to find workarounds to make things work, for example when your parameters change shape during the optimization process (like learning point clouds of growing size), and these spread you thin, so they won't be standardized.
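As a concrete illustration of the hard-stop example above, here is a toy sketch (the pairwise energy is made up) of computing the force as the negative gradient of an O(n^2) energy while keeping the graph so it can be differentiated again:

```python
import torch

def pairwise_energy(pos):
    # Toy O(n^2) interaction: sum of inverse pairwise distances.
    diff = pos.unsqueeze(0) - pos.unsqueeze(1)        # (n, n, 3)
    dist = diff.norm(dim=-1) + torch.eye(len(pos))    # keep the diagonal nonzero
    return (1.0 / dist).triu(diagonal=1).sum()

pos = torch.randn(8, 3, requires_grad=True)
energy = pairwise_energy(pos)

# create_graph=True keeps the backward pass itself differentiable, so the force
# can be differentiated again (e.g. for Hessian-vector products), at the cost
# of holding the n^2 intermediates in memory.
(grad_e,) = torch.autograd.grad(energy, pos, create_graph=True)
force = -grad_e
print(force.shape)   # torch.Size([8, 3])
```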
I have an ironic sense that there are classrooms in rural India with better pedagogy and lower barriers to entry than some of our elite engineering programs.
This isn't to say engineering programs in the US can't be improved, but there seems to be widespread consensus that they don't suffer from the kinds of serious problems that ones in India commonly do.
And this isn't some personal opinion of mine -- I've never set foot in an Indian classroom. It's just something I've heard repeatedly from professional educators and from hiring departments, and was under the impression this was common knowledge.
SP500: tripled over 10 years i.e. ~12% a year. Reinvesting dividends gives ~14% a year
Meta: 8x over 10 years i.e. ~23% a year.
If growth was uniform over 10 years and compensation/savings was uniform over 10 years, total portfolio would be:
((1+r)^11-1)/r (geometric series since each year's contributions grow for different amount of times)
1 (this year) + (1+r) (previous year) + (1+r)^2 (previous-to-previous year) and so on
SP500: 14% -> $23M
Meta: 23% -> $38M
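A quick check of those figures, assuming a flat $1M saved per year for 11 years (the "nice even $1 million/yr savings" mentioned further down) and a constant annual return:

```python
def portfolio(rate: float, yearly: float = 1.0, years: int = 11) -> float:
    # Geometric series: each year's contribution compounds for a different time.
    return yearly * ((1 + rate) ** years - 1) / rate

print(round(portfolio(0.14)))   # 23 -> ~$23M at ~14%/yr (S&P with dividends)
print(round(portfolio(0.23)))   # 38 -> ~$38M at ~23%/yr (Meta)
```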
Now, it's entirely possible the compensation for a position like this runs into the tens of millions, and one can easily account for non-uniform compensation.
Even in NYC, actually even in Manhattan, $10M is more than comfortable for retirement. It lets you draw $300-$400K (3-4% per year adjusted for inflation annually). If one is taking a short sabbatical, then it's a no-brainer.
I only included meta because he works/worked at meta and it's not unusual for people to just leave their rsus in their accounts after they vested. I agree though that one shouldn't pick stocks that happened to explode (e.g. nvda).
There are several unrealistic assumptions I did make:
* Presumably when someone starts, they earn less than in recent years. He probably wasn't making huge amounts his first few years. Amounts invested in earlier years are smaller but have more time to compound and amounts invested in recent years are larger but have had less time to compound.
* Returns aren't constant.
* I pulled the $2 million/yr out of thin air. It could be $1 million/yr or even $10 million/yr. I have no idea what the overall head of a project like PyTorch would make.
* Everyone's expenses are different. In and around NYC, one can live on $80k/year, $120-150k/year, as well as on $1 million/yr. I assumed zero since I wanted a nice even $1 million/yr of savings. Maybe it was $500k/yr of savings, in which case all the numbers should be halved.
In any case, I can't see how one wouldn't end up with at least $10 million in a position like this with 10 years at meta. Unless one buys a $5 million unit in Manhattan and is burdened by a high mortgage.
He is an investor in Anthropic; didn't know you could do that while working for Meta.
In any case, I ended up sticking with PT and am extremely grateful for all the work put into it. Thank you.
Also, looking at the contribution history for a long career is very interesting; reflects the changing roles over time https://github.com/soumith
My memory is that Soumith was really open to other people's contributions and questions, no matter their credentials. He was a great leader who felt approachable to the open-source community.
I reached out to him myself years ago and was surprised at getting a response.
And the response was incredibly generous. I probably wouldn't have had the confidence to do my switch if it wasn't for Olah.
And as I got further into this path, I learned that Olah had done the same for some of my mentors and colleagues.
Every time Olah speaks, I listen.
If you take advice from reformed Internet trolls, consider turning off all your devices and trying to give yourself at least a week, but ideally a month offline staring at your new baby. You'll never get that time back and there's nothing your brain will appreciate more than loading up those memories as they grow.
Good luck.
If there's a soul to silicon valley, that's it. However many jerks and political/power players I encounter, I remain inspired by the few like this.
Ironic, but one HN front page item today is this: "Meta projected 10% of 2024 revenue came from scams and banned goods, Reuters reports"
Glad you're leaving, hopefully you're in a good place financially. Take a page from Bill Gates and work on something that attempts to improve society. Stay away from surveillance capitalism and enshittification.
I don't know why this is celebrated so much, a big company rebranding a non-profit for profit.
But I guess that's the norm for AI now.
<style class="fallback">body{visibility:hidden;white-space:pre;font-family:monospace}</style>
which is then unset by JS, with no <noscript> anywhere, is just... I just get a white page. Changing it to
<style class="fallback">body{white-space:pre-wrap;font-family:monospace}</style>
gives a perfectly readable page, so it seems a bit... pointless.