Building the Linux kernel with LLVM: https://github.com/ClangBuiltLinux/linux/issues
LLVM itself: https://github.com/llvm/llvm-project/issues?q=is%3Aissue%20s...
I'm a bit shocked that it would take significant effort/creativity for an MIT grad with relevant course/project work to get a job in the niche
I would have thought the recruiting pipeline is kinda smooth
Although maybe it's a smaller niche than I think -- I imagine compiler engineers skew more senior. Maybe it's not a common first or second job
I graduated at the bottom of a bear market (2001), and it was hard to get a job. But this seems a bit different.
That bit was heartbreaking to me too. I knew the economy was bad for new grads but if a double major from MIT in SF is struggling, then the economy is cooked.
Compiler development is (for better or worse) a niche that favours people who've got real-world experience doing this. The traditional ways to get in have either been through high-quality, high-profile open-source contribs, or because your existing non-compiler-dev job let you inch closer to compiler development up until the point you could make the jump.
As the author noted, a lot of modern-day compiler work involves late-life maintenance of huge, nigh-enterprise-type code bases with thousands of files, millions of LOC, and no one person who has a full, detailed view of the entire project. This just isn't experience you get right out of school, or even a year or two on.
Honestly, I'd say that as a 2023 grad with no mentors in the compiler dev space, she's incredibly lucky to have gotten this job at all (and to be clear, I hope she makes the most of it, compiler dev can be a lot of fun).
It has never been a huge niche. It's fun work if you can get it. There were often MIT grads around, but I don't think it made you an automatic hire?
Once in a blue moon, for old times' sake, I send a bug fix PR for someone's optimizer, or build a small transpiler for something.
It makes sense now, doesn't it?
Beyond that, I've definitely interviewed people who seemed like they could have been smart + capable but who couldn't cut it when it came to systems programming questions. Even senior developers often struggle with things like memory layouts and hardware behavior.
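To make the memory-layout point concrete, here's a hypothetical interview-style snippet (my own illustration, not from the article): reordering struct fields changes the size because of alignment padding, which is exactly the sort of detail I've seen otherwise capable candidates miss.

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical example: field order changes struct size because the
       compiler inserts alignment padding (numbers assume a typical 64-bit ABI). */
    struct BadOrder {
        char   tag;    /* 1 byte, then 7 bytes of padding so `value` is 8-aligned */
        double value;  /* 8 bytes */
        int    count;  /* 4 bytes, then 4 bytes of tail padding */
    };                 /* sizeof is typically 24 */

    struct GoodOrder {
        double value;  /* 8 bytes */
        int    count;  /* 4 bytes */
        char   tag;    /* 1 byte, then 3 bytes of tail padding */
    };                 /* sizeof is typically 16 */

    int main(void) {
        printf("BadOrder:  %zu bytes\n", sizeof(struct BadOrder));
        printf("GoodOrder: %zu bytes\n", sizeof(struct GoodOrder));
        printf("offset of value in BadOrder: %zu\n", offsetof(struct BadOrder, value));
        return 0;
    }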
Compilers are just programs like anything else. All the compiler developers I know were trained up by working on compilers. Just like people writing B2B e-commerce software learned how to do so by working on B2B e-commerce software, and embedded software developers learned how to do so by working on embedded software.
Heck, a typical CS degree probably covers more of the basics for compilers than for B2B e-commerce software or embedded software!
Anyways, the "Who .. hires compiler engineer?" section is fairly vague in my opinion, so: AMD, Nvidia, Intel, Apple, Google definitely hire for compiler positions. These hire fairly 'in-open' so probably the best bets all around. Aside from this, Jane Street and Bloomberg also do hire at the peak tier but for that certain language. The off beat options are: Qualcomm, Modular, Amazon (AWS) and ARM. Also see, https://mgaudet.github.io/CompilerJobs/
I seriously attempted getting into compilers last year before realising it was not for me, but during that time it felt like the people who want to be compiler devs far outnumber the jobs that exist (yes, exist, not vacant).
The common way to get going is to do LLVM. Making a compiler is great and all, but too many people already exist with a Lox interpreter/compiler or something taken from the two Go books. Contributing to LLVM (or friends like Carbon, Swift, Rust), or at least some real usage experience, is the way. The other side of this is doing GNU GCC and friends, but I have seen only one opening that mentions that path as relevant. University-level courses are rarely of any use.
Lastly, LLVM meetups/conferences are fairly common at most tech hubs and usually have a jobs section listing all requirements.
A few resources since I already made this comment too long (sorry!):
[0]: https://bernsteinbear.com/pl-resources/ [1]: https://lowlevelbits.org/how-to-learn-compilers-llvm-edition... [2]: https://www.youtube.com/@compilers/videos
Damn, you don’t hold back, do you?
Looking through the domains in the LLVM mailing list or the git commits should get you a longer list of "off beat" options.
Definitely worth some self-study, however, if only for the healing effect of being exposed to a domain where the culture is largely one of quality instead of...everything except that. :)
What do you mean by that?
Bloomberg also does use OCaml by the way, although probably not to the extent of Jane Street.
Do tell
Which is to say that all it takes is an interest in compilers. That alone will set you apart. There's basically no one in the hiring pipeline despite the tech layoffs. I'm constantly getting recruiting ads. Current areas of interest are AI (duh) but also early-stage quantum compute companies and fully-homomorphic encryption startups. In general, you will make it farther in computer science the more niche and hard you go.
The cultural importance of education in Jewry, its preservation in Christianity, and the Christian conviction that you can never take any understanding of something for granted and must always question everything, because the Universe (being created by God) will always be more complex than any current knowledge of it, is the origin of the Western concept of empiricism and formalized (natural) science, even if a lot of modern atheists like to sweep that under the rug.
A lot of early (and also later) scientists did their research in order to understand the world their God created, meaning they understood it as an approach to worshipping God.
https://docs.google.com/document/d/1pPE6tqReSAXEmzuJM52h219f...
Stories of "asian face" actresses with eyes taped back, prominent pieces of anti asian grafitti on walls and drawn in bathrooms are common tropes in asian communities, etc.
The examples of plagiarism are examples of common story arcs, with an educated asian female twist, and use of examples that multiple writers in a shared literary pool would have all been exposed to; eg: it could be argued that they all drew from a similar well rather thn some were original and others copied.
There's a shocked article: https://www.halfmystic.com/blog/you-are-believed that may indeed be looking at more evidence than was cited in the Google Docs link above, which would explain the shock and the dismissal of R.W. as a plagiarist.
The evidence in the link, though, amounts to what is common with many pools of proto-writers: lots of similar passages, some of which have been copied and morphed from others. It's literally how writers evolve and become better.
I'm on the fence here, to be honest, I looked at what is cited as evidence and I see similar stories from people with similar backgrounds sharing common social media feeds.
That’s pretty damning evidence. If a publisher was on the fence they might pull her books quietly, but they wouldn’t make such a public attack without very good evidence that they thought would hold up in court. There was no equivocation at all.
The evidence, at least the evidence that I found cited as evidence, appears less damning.
Perhaps there is more damning evidence.
What I found was on the order of the degree of cross copying and similar themes, etc. found in many pools of young writers going back through literary history.
Rona Wang, whom I've never previously heard of, clearly used similar passages from her peers in a literary group and was called out for it after receiving awards.
I would raise two questions: 1) was this a truly significant degree of actual plagiarism, and 2) did any of her peers in this group use passages from any of Rona's work?
On the third hand, Kate Bush was a remarkable singer/songwriter/performer, almost utterly unique and completely unlike any contemporary.
That's ... highly unusual.
The majority of writers, performers, singers, et al. emerge from pools that differ from those of their prior generations, but pools nonetheless that are filled with similarity.
The arc of the careers of those who rise from such origins is really the defining part of many creators.
Do you consider the announcement from her publisher that she admitted that she plagiarized passages as a damning response or damning evidence?
It doesn’t prove anything, but it supports the theory that they have seen additional evidence.
After researching this a bit, it looks like someone from the publisher says she admitted it to them. That certainly explains why they weren't afraid to publicly condemn her.
I wonder whether the similar ones were the result of something innocent, like a shared writing prompt within the workshop both writers were in, or maybe from a group exercise of working on each others' drafts.
Or I suppose some could be the result of a questionable practice, of copying passages of someone else's work for "inspiration", and rewriting them. And maybe sometimes not rewriting a passage enough.
(An aside relevant to HN professions: in software development, we are starting to see many people do worse than copy-and-revise-a-passage plagiarism. Not even rewriting the text copy-pasted from an LLM, but simply putting our names on it internally, and company copyrights on it publicly. And the LLM is arguably just laundering open source code, albeit often with more obfuscation than a human copier would do.)
But for a lot of the examples of evidence of plagiarism in that document, I didn't immediately see why that passage was suspect. Fiction writing I've seen is heavily full of tropes and even idiomatic turns of phrase.
Also, many stories are formulaic, and readers know that and even seek it out. So the high-powered business woman goes back to her small town origins for the holidays, has second-chance romance with man in a henley shirt, and she decides to stay and open a bakery. Sprinkle with an assortment of standard subgenre trope details, and serve. You might do very original writing within that framework, but to someone who'd only ever seen two examples of that story, and didn't know the subgenre convention, it might look like one writer totally ripped off the other.
(I have no knowledge / context of this situation - no idea if she did or what happened here)
So if I copy paste Harry Potter that's ok?
What kind of argument is that
I just don't see how this could possibly work - how would slapping Harry Potter in the middle of the book you're writing work?
Read the evidence document another poster linked for actual examples.
Well plagiarism by definition means passing the work off as your own without crediting the author, so in that case it isn’t plagiarism.
References to pop culture are the same as lifting sentences from other books and pretending you wrote them.
> And at some level of famousness, passages and ideas lose their exclusive tie to the original book and become part of the list of common cultural sayings
In the actual case being examined the copied references certainly hadn’t reached any such level of famousness.
Also there’s a difference between having a character tell another “not all those who wander are lost” as a clear reference to a famous quote from LOTR and copying multiple paragraph length deep cuts to pass off as your own work.
Of course, but I wrote 'could' and not 'should' for a reason; I won't expect it. A book isn't a paper, and the general expectation is that the book will be interesting or fun to read, not that it is original. That means the general expectation is not that it is never a rehash of existing ideas; I think every book, including all the good ones, is. A book that invents its world from scratch might be novel, but it's unlikely to be what people want to read.
> copying multiple paragraph length deep cuts to pass off as your own work.
If that is true, it certainly sounds fishy, but that is a case of violation of copyright and intellectual property, not of plagiarism.
There’s a difference between rehashing existing ideas and passing off copied passages as your own.
> If that is true, it certainly sounds fishy, but that is a case of violation of copyright and intellectual property, not of plagiarism.
What exactly do you think plagiarism is? Here’s one common definition:
“An instance of plagiarizing, especially a passage that is taken from the work of one person and reproduced in the work of another without attribution.”
Both are about passing off something as your own. Plagiarism is about passing ideas or insights off as your own. It doesn't really matter whether you copy them verbatim, present them in your own words, or just use the concept. It does, however, matter how important that idea/concept/topic is in your work and in the work you took it from without attribution, and whether it is novel or generally available/common knowledge.
For violation of intellectual property it is basically the opposite. It doesn't matter whether the idea or concept is fundamental to your work or to the other work you took it from, but it does matter whether it is a verbatim quote or only the same basic idea.
Intellectual property rights are enforced by the legal system, while plagiarism is an issue of honor: it affects reputation, and universities revoke titles over it.
> There’s a difference between rehashing existing ideas and passing off copied passages as your own.
Yes and that's the difference between plagiarism and violating intellectual property/copyright.
But all this is arguing about semantics. I don't have the time to research whether the claims are true or not, and I honestly don't care. I took from the comments that it was only the case that she rehashed ideas from other books, and I wanted to point out that while this is a big deal for academic papers, it is not for books, where it is basically expected. (Publishers might have different ideas, but that is not an issue of plagiarism.) If it is indeed the case that she copied other authors verbatim, then that is something illegal she can be sued for, but whether that is the case is for the legal system to determine, not something I should do.
In addition to near verbatim quotes, she is also accused of copying stories beat for beat. That's much different than rehashing a few ideas from other works. It is not expected and it is very much considered plagiarism by fiction writers.
As for the quotes she copied: that is likely both a copyright violation and plagiarism.
Plagiarism isn't just about ideas but about expressions of those ideas in the form of words.
Webster's definition:
"to steal and pass off (the ideas or words of another) as one's own : use (another's production) without crediting the source"
"to commit literary theft : present as new and original an idea or product derived from an existing source"
Oxford learner's dictionary:
"to copy another person’s ideas, words or work and pretend that they are your own"
Copying verbatim or nearly verbatim lines from a work of fiction and passing them off as your own is both plagiarism and copyright violation.
> copying stories beat for beat. That's much different than rehashing a few ideas from other works. It is not expected and it is very much considered plagiarism by fiction writers.
Some operas are Greek plays. There are rehashes of Faust, the Beggar's Opera is a copy of a play from Shakespeare, there are modern versions of Pride and Prejudice, and there are tons of stories that are a copy of West Side Story, which is itself a copy of Romeo and Juliet, which I think comes from an even older story. These often don't come with any attribution at all, although the listener is sometimes expected to know that the original exists. They change the settings, but the plot is basically the same. Do you consider all of that to be plagiarism? These would all be reasons to call something plagiarism when considering a paper, but for books nobody bats an eye. This is because authors don't sell abstract ideas or a plot; they sell concrete stories.
The stories this author copied were either unpublished manuscripts she got access to in writers' groups or very obscure works that her readers were unlikely to have read.
Second, the examples you gave were extremely transformative. Just look at the differences between West Side Story and Romeo and Juliet. It's a musical, for goodness' sake. It subverts expectations by letting Maria live through it.
The writings at issue are short stories, so there’s less room for transformation in the first place. And there was clearly not even a strong attempt at transformation. The author even kept some of the same character names.
There was no attempt to subvert expectations, largely because the audience had no expectations, since they weren't aware of the originals.
>change settings
She didn’t even do that.
> for books nobody bats an eye
If a popular book were revealed to be a beat for beat remake of an obscure novel with the same setting, similar dialogue, some of the same character names, and few significant transformative elements, you can bet your life there would be a scandal.
Doesn’t need to be a YT channel, a blog where you talk about this very complex and niche stuff would be awesome for many.
...And honestly it seems that I'm screwed. And I need about 6 months of study to learn all this stuff. What I'd do right now is finish Crafting Interpreters, then grab that other book on Interpreters that was recommended here recently[2] and written in Go because I remember it had a followup book on Compilers, and THEN start going through the technical stuff that Rona suggested in the article.
And my interview is on Monday, so that's not happening. I have other, more general interviews that should pay better, so I'm not too upset. If only I hadn't been too lazy during my last position and had kept learning while working. If the stars align and somehow I get that Compiler Engineer position, I will certainly reach out to Rona. And thank you again, lalitkale, for sharing this post with HN!
There are three versions (C, ML, and Java). The language isn’t all that important; the algorithms are described in pseudo-code.
I also find the traditional Dragon Book to be somewhat helpful, but you can mostly skip the parsing/frontend sections.
Just search for any of the words "Triton", "CUDA", "JAX", "SGLang", or "LLVM" (not LLM) on "Who wants to be hired?" and it filters almost everyone out, leaving 1 or 2 results.
Whereas if you search "JavaScript": 200+ results.
This tells me that there is little to no interest in compiler engineering here (and especially in startups) unless you are at a big tech company or at one of the biggest AI companies that use these technologies.
Of course, the barrier is meant to be high, but if a recruiter has to sift through 200+ CVs on a page for a certain technology (JavaScript), then your chances of getting selected against the competition for a single job are vanishingly small.
I've said this before and it works all the time: for compilers, open-source contributions to production-grade compiler projects, with links to commits, are the most straightforward differentiator and proof one can use to stand out against the rest.
YMMV, I guess, but you're better off demonstrating experience with what they're hiring for, not random tech that they aren't and never will use.
The world needs maybe what, 5000, 10000 of these people maximum? In a world with 8 billion people?
- multiple large companies contribute to each of the larger AoT compiler projects; think AMD's contributions to LLVM and GCC, and multiple have their own internal team for handling compiler work based on some OSS upstream (eg Apple clang)
- various companies have their own DSLs, eg Meta's Hack, the Python spinoff Goldman has going on, etc.
- DBs have query language engineers which are effectively compiler roles
- (most significantly) most hardware companies and accelerators need people to write toolchains; Triton, Pytorch, JAX, NVCC, etc. all have a lot of people working on them
Hands-on balanced with theory.
We need more compilers (and interoperability of course) and less dependence on LLVM.
But writing smart contracts and whatnot directly in bytecode sucks; so, you make a compiler so you can write them in an (invented) higher-level language. For which you might as well hire a "compiler engineer" as any other kind. :)
I would like to read in the future about the usual day of a compiler engineer: what you usually do, and what the most enjoyable and most annoying tasks are.
0: https://www.packtpub.com/en-us/product/llvm-techniques-tips-...
- most enjoyable: fiddling with new optimizations
- least enjoyable: root-causing bugs from customer crash stacks
> I’m not involved in any open-source projects, but they seem like a fantastic way of learning more about this field and also meeting people with shared interests. I did look into Carbon and Mojo but didn’t end up making contributions.
This sounds like the best way to learn and get involved with compilers, but something that's always been a barrier for me is just getting started in open source. Practical experience is far more valuable than taking classes, especially when you really need to know what you're doing for a real project versus following along directions in a class. Open source projects aren't usually designed to make it easy for anyone to contribute with the learning curve.
> So how the hell does anybody get a job?
> This is general advice for non-compilers people, too: Be resourceful and stand out. Get involved in open-source communities, leverage social media, make use of your university resources if you are still in school (even if that means starting a club that nobody attends, at least that demonstrates you’re trying). Meet people. There are reading groups (my friend Eric runs a systems group in NYC; I used to go all the time when it was held in Cambridge). I was seriously considering starting a compilers YouTube channel even though I’m awkward in front of the camera.
There's a lot of advice and a lot of different ways to try to find a job, but if I were to take away anything from this, it's that the best way is to do a lot of different meaningful things. Applying to a lot of jobs or doing a lot of interview prep isn't very meaningful, whereas the most meaningful things have value in themselves and often aren't oriented towards finding a job. You may find a job sooner if you prioritize looking for a job, similar to how you may get better grades by cramming for a test in school, but you'll probably get better outcomes by optimizing for the long term in life and taking a short-term loss.
There are probably hundreds of other brilliant engineers with insane impact who never got any popularity.
>In 2023, I graduated from MIT with a double major in math and computer science.
You certainly don't need to be from a 'top 3 college in the US' at all. Neither the creator of Zig (Andrew Kelley) nor the creator of LLVM, Swift, and Mojo (Chris Lattner) was.
All you need is a genuine interest in compilers and the basic CS fundamentals, including data structures and algorithms. Start by building a toy compiler, then study the existing open-source ones and make some contributions to understand how it all works.
The GNU GCC group's win is rarely considered these days, but kids who were stuck with MASM, RISC asm, or BASIC thought they were heroes of the hobby.
Slowly, the FOSS compilers grew in features, and commercial entities realized it was less risky to embrace an established compiler ecosystem, and paid people to port it, bringing in an experienced user base.
Starting from scratch is hard, as people often make the same logical mistakes. =3
"There are two mistakes one can make along the road to truth...not going all the way, and not starting."(Prince Gautama Siddhartha)
Then there are the people building compilers accidentally, like in the <xyz>-as-code space. Infrastructure automation deals with grammars, symbol tables, (hopefully) module systems, IRs, and so forth. Only the output is very different.
And of course the toolchain space is larger than just compilers. Someone needs to maintain the assemblers, linkers, debuggers, core runtime libraries. If you are building a Linux distribution, someone has to figure out how the low-level pieces fit together. It's not strictly a compiler engineering role, but it's quite close. Pure compiler engineering roles (such as maintaining a specific register allocator) might be quite rare.
It's a small field, but probably not that obscure. Despite the efficiency gains from open-source compilers, I don't think it's shrinking.
Seriously? She's posting on her personal blog (that's what her substack is). Up front she says that one of the two kinds of people she's writing for are those who are interested in her personally.
> she needn't use
There are a lot of things that people needn't do, like absurdly talk about "attack vectors".
If it was her who posted it, she should've made it clearer that this is a 'personal journey' post. If it was someone else without her permission, then they shouldn't have done so.
If you look at most of the discussion that emerged around it, it's about the relative technical merits of various compiler frameworks and techniques, and getting a job working with compilers, which I think most people expected to read about.
And do you really have no sense of how utterly absurd
> she needn't use the title of 'Becoming a Compiler Engineer' as an attack vector
is? Her title is perfectly reasonable and beyond any rational sensible intellectually honest good faith criticism.
The fault you seek lies entirely within.
Then you turn around and claim that this was a technical article after all. And no, the advice of 'you have to be among the three dozen people who get accepted to MIT to do math' is an unnecessarily elitist piece of advice, and not even necessary: as I (and a lot of others) pointed out, there are a ton of good books and resources you can just get started with.
I know HN disallows editorializing, but the title reads like it'll be a useful resource on compilers. The writer obviously chose that title in the context of her personal blog, where it makes sense, but I maintain that a stranger's life update is not interesting to the rest of us, and if I were the writer, I wouldn't be too happy if someone decided to direct a ton of eyeballs to my personal post.
I'd implore you to look around in the thread, most people had different expectations and complained about the same thing I did. Still the post was useful, since an interesting conversation happened around the topic of compilers.
And I'd like you to realize that you are just one person sharing his personal opinion, not some arbiter of moral goodness. Please stop attacking me; I feel bad for having to defend myself.
> First you chew me out for pointing out that this isn't a very good resource for compiler stuff as its more of an autobiographical post.
I never did any such thing. I didn't even bother to read past that false claim as I expect the rest to be similarly false and dishonest, and won't respond to anything else from that source.
I also checked out Cranelift which was recommended to me as a battle-tested but easier to use alternative, and honestly I haven't found it to be that much simpler.
Anything else, you're better off starting off with LLVM. Especially if you want a compiler job.
E.g. with GCC you can write a JIT in pure C. With LLVM you'd still need C++ for name lookup hooks. With QBE you won't need any of this. With Lisp, much less.
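For the curious, here's a minimal sketch of the pure-C route, modeled on the upstream libgccjit tutorial (assumes libgccjit is installed and you link with -lgccjit; a real JIT would also check for errors):

    /* Sketch: JIT-compile `int square(int i) { return i * i; }` at runtime
       using libgccjit's C API, following the upstream tutorial's example. */
    #include <libgccjit.h>
    #include <stdio.h>

    int main(void) {
        gcc_jit_context *ctxt = gcc_jit_context_acquire();
        gcc_jit_type *int_type = gcc_jit_context_get_type(ctxt, GCC_JIT_TYPE_INT);

        gcc_jit_param *i = gcc_jit_context_new_param(ctxt, NULL, int_type, "i");
        gcc_jit_function *fn = gcc_jit_context_new_function(
            ctxt, NULL, GCC_JIT_FUNCTION_EXPORTED, int_type, "square", 1, &i, 0);

        /* One basic block: return i * i; */
        gcc_jit_block *body = gcc_jit_function_new_block(fn, NULL);
        gcc_jit_rvalue *i_times_i = gcc_jit_context_new_binary_op(
            ctxt, NULL, GCC_JIT_BINARY_OP_MULT, int_type,
            gcc_jit_param_as_rvalue(i), gcc_jit_param_as_rvalue(i));
        gcc_jit_block_end_with_return(body, NULL, i_times_i);

        /* Compile in-process and call the generated code. */
        gcc_jit_result *result = gcc_jit_context_compile(ctxt);
        int (*square)(int) = (int (*)(int))gcc_jit_result_get_code(result, "square");
        printf("square(5) = %d\n", square(5));

        gcc_jit_context_release(ctxt);
        gcc_jit_result_release(result);
        return 0;
    }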
Doesn't mean it's good advice for someone getting into a field where LLVM is the technology used for 90% of the jobs.
Still I think once you get over the hurdle of SSA-ifying your procedural code, LLVM is all right, and seems to be a lot more feature rich (I might be stupid, but I don't see how to make stack maps in GCC).
Also GCC is very lightly documented while LLVM seems to have quite good docs and tutorials.
Just emit load and store instructions, and it'll be converted automatically.
What SSA gives you is substantially easier analysis and optimization of the intermediate representation, so much so that essentially all modern optimizing compilers use it (yes, even GCC).
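A rough illustration of what "emit loads and stores and let the compiler build SSA" means in practice (my own hand-written sketch of LLVM-flavoured IR and mem2reg-style promotion, not actual compiler output):

    /* A variable reassigned under a branch: the classic case where a naive
       frontend just spills to a stack slot and lets the compiler build SSA. */
    #include <stdio.h>

    int clamp_negative_to_zero(int x) {
        int r = x;
        if (x < 0)
            r = 0;
        return r;
    }

    /* A frontend can lower `r` to a stack slot using plain loads and stores
       (roughly LLVM-flavoured pseudo-IR, hand-written for illustration):

         entry:
           %r = alloca i32
           store i32 %x, ptr %r
           %cmp = icmp slt i32 %x, 0
           br i1 %cmp, label %then, label %end
         then:
           store i32 0, ptr %r
           br label %end
         end:
           %ret = load i32, ptr %r
           ret i32 %ret

       mem2reg-style promotion then replaces the slot with SSA values and
       inserts a phi where the two definitions of `r` meet:

         end:
           %ret = phi i32 [ %x, %entry ], [ 0, %then ]
           ret i32 %ret

       Once every value has exactly one definition, analyses like constant
       propagation and dead-code elimination become simple def-use walks. */

    int main(void) {
        printf("%d %d\n", clamp_negative_to_zero(-7), clamp_negative_to_zero(3));
        return 0;
    }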
How do you know this? Just because some people have written dissertations on compilers and didn't find work? OP seems very talented. And doing compiler work for a blockchain is not the most competitive position. I did the same thing for a while, and I don't even have a computer science degree (nor do I "check HR boxes"). And her article has lots of information that would be helpful for someone looking for a compiler engineering role
Yes, I said exactly that. It's because virtually nobody who studies compilers ever has a job doing that, and this is a girl literally from MIT we're talking about. It's not as much about expertise as about the hiring end of things IMO. I could be wrong but I'm just connecting dots and making educated guesses here.
>OP seems very talented.
As a human being, or perhaps even as a generic new grad, sure. She might also be more studious than I'm expecting. She said she wrote a novel, which is cool. How much time did she spend doing that versus actually studying compilers? I'm comparing her to the kinds of people I would expect to be doing compiler work.
>And doing compiler work for a blockchain is not the most competitive position. I did the same thing for a while, and I don't even have a computer science degree (nor do I "check HR boxes")
There's always an exception to the rule. For what it's worth I would not expect you to be a competent compiler engineer either, especially without a degree or killer experience. I might believe it if given enough evidence but first impressions say no.
> For what it's worth I would not expect you to be a competent compiler engineer either, especially without a degree or killer experience.
Which indicates to me that you think a degree would be evidence of me being a competent compiler engineer. But when OP gets a degree from MIT (and does part of a graduate program in compilers), and then gets hired, you suggest that she has substandard skills compared to what you would expect from someone in her role. I'm not sure why me not having a degree would count against me, but her having gotten a degree doesn't seem to count in her favor
The comment is not merely rude, it's blatant bigotry.
You must be referring to something other than your own comment.
What's your point? Maybe those people aren't as good as her at turning their academic knowledge into real-world skills. Clearly the main reason she ever got hired anywhere was because she passed the interviews, which should tell you that she knows her stuff. Being charming and having contacts only gets you so far; nobody will hire you to work on compilers just because you smile and drop names.
From the article: "I am super-lucky in a lot of ways".
> I'm just not going to expect much good advice about to get hired from a fresh MIT grad who checks off a bunch of HR checkboxes.
What exactly is wrong with the advice she does give? And exactly what "HR checkboxes" does she check off? C'mon, don't be coy.
Despite not basing your comments on the content of the article, you seem to have a lot of knowledge about her skills. Or perhaps it's just a common bias against people referred to as "her".
what exactly are those even? that she went to MIT? from her linkedin she's at some blockchain startup (for only 4 months) doing "compiler work" - i put it in quotes because these jobs actually happen to be a dime-a-dozen and the only thing you have to do to get them is pass standard filters (LC, pedigree, etc.).
> I was a compiler engineer at a startup in New York City. In that role, I worked on extending a preexisting open-source programming language.
> I am now a compiler engineer at a large, post-IPO tech company in the San Francisco Bay Area. I work on making programming languages run faster.
The 4 month startup is the former.
At least up until 5 years ago, the bar to join compiler teams was relatively low and all it required was some demonstration of effort and a few commits.
(Disclosure: am retired now)
No, LLMs are not going to replace compiler engineers. Compilers are probably one of the least likely areas to profit from extensive LLM usage in the way that you are thinking, because they are principally concerned with correctness, and LLMs cannot reason about whether something is correct — they only can predict whether their training data would be likely to claim that it is correct.
Additionally, each compiler differs significantly in the minute details. I simply wouldn't trust the output of an LLM to be correct, and the time wasted on determining whether it's correct is just not worth it.
Stop eating pre-chewed food. Think for yourself, and write your own code.
Actually, your whole point about LLMs not being able to detect correctness is just demonstrably false if you play around with LLM agents a bit.
First, LLMs should be happy to use made-up languages described in a couple thousand tokens without issues (you just have to have a good LLM-friendly description and some examples). That, and having a compiler it can iterate with and get feedback from.
Second, LLMs heavily reduce ecosystem advantage. Before LLMs, presence of libraries for common use cases to save myself time was one of the main deciding factors for language choice.
Now? The LLM will be happy to implement any utility / API client library I want, given the API I want. It may even be more thoroughly tested than the average open-source library.
And this is for generic backend stuff, like a CRUD server with a Rest API, the same thing with an Express/Node backend works no trouble.
To be clear, I mean specifically using Claude Code, with preloaded sample context and giving it the ability to call the compiler and iterate on it.
I’m sure one-shot results (like asking Claude via the web UI and verifying after one iteration) will go much worse. But if it has the compiler available and writes tests, shouldn’t be an issue. It’s possible it causes 2-3 more back and forths with the compiler, but that’s an extra couple minutes, tops.
In general, even if working with Go (what I usually do), I will start each Claude Code session with tens of thousands of tokens of context from the code base, so it follows the (somewhat peculiar) existing code style / patterns, and understands what’s where.
> even those languages haven't taken over much of the professional world.
Non sequitur goalpost moving ... this has nothing to do with whether language development is a dead-end "in the real world", which is a circular argument when we're talking about language development. The claim is simply false.