https://github.com/anthropics/claudes-c-compiler/issues/1#is...
I am grateful to be able to witness all this amazing progress play out, but I am also concerned about the wide-ranging implications.
I thought about it, and the outlook doesn't seem that bright. The problem is not that LLMs generate inferior code faster; it's that at some point some people will be convinced that this code is good enough and can be used in production. At that point, the programming skills of the population will devolve and fewer people will understand what's going on. Human programmers will only work in financial institutions and the like; the rest will be a mess. Why? Because generated code is starting to become a commodity, and the buyer doesn't understand how bad it is.
So we're at the equivalent of the stage when global companies decided it was a fantastic idea to outsource the production of everything to China, and individuals started buying Chinese plastic gadgets en masse. Why? Because they're very cheap compared to the real thing.
> Not the kind of insecurity you get from your parents mind you, but the kind where you’re not sure you’re going to be able to preserve your way of life.
I don't get this part. My experience, at least, is the opposite: it's the basic function of parents to give their child a sense of security.
The only safety lies in staying ahead of LLMs or migrating to a field that's out of their reach.
The ones against it understand full well what the tech means for them and their loved ones. Even if the tech doesn't deliver on all of its original promises (a scenario that is looking more and more unlikely), it still has enough capabilities to severely affect the lives of a large portion of the population.
I would argue that the ones who are inhaling "copium" are the ones who are hyping the tech. They are coping/hoping that if the tech partially delivers what it promises, they get to continue to live their lives the same way, or even an improved version. Unless they already have underground private bunkers with a self-sustained ecosystem, they are in for a rude awakening. Because at some point they are going to need to go out and go grocery shopping.
In business, as a product, results are all that matter.
As a research and development effort, it's exciting, and it's interesting as a milestone on the path to something revolutionary.
But I don't think it's ready to deliver value. Building a compiler that almost works is of no business value.
They are very capable, but it's very hard to explain to what degree. It is even harder to quantify what they will be able to do in the future and what inherent limits exist, which again leads the people benefiting from the tech to claim that there are no limits.
The truth is that we just don't know. And there are too few good folks out there who are actually reasonable about it, because the ones who know are working on the tech and benefit from more hype. Karpathy is one of the few who have left the rocket and still offer an optimistic but reasonable perspective.
I think lofty claims ultimately hurt the perception of AI. If I wanted to believe AI was going nowhere, I would listen to people like Sam Altman, who seem to believe in something more akin to a religion than a pragmatic approach. That, to me, does not breed confidence. Surely, if the product is good, it would not require evangelism or outright deceit? For example, claiming this implementation was 'clean room'. Words have meaning.
This feat was very impressive, no doubt. But with each exaggeration, people lose faith. They begin to wonder: what is true, and what is marketing? What is real, and what is a cheap attempt by companies to rake in whatever cold hard AI cash they can? Is this opportunistic, like viral pneumonia, or something we should really be looking at?
Many of the comments here are reactions to other comments:
Some people hype up LLMs without admitting any downsides. So, naturally, others get irritated with that.
Some people anti-hype LLMs without admitting any upsides. So, naturally, others get irritated with that.
I want people to write comments which are measured and reasonable.
We already have determinism in all machines without this wasteful layer of slop and indirection, and we're all sick and tired of the armchair philosophy.
It's very clear where LLMs will be used and it's not as a compiler. All disagreements with that are either made in bad faith or deeply ignorant.
I’ve seen posts with 500+ upvotes that were still flagged. I think the balance and automation around flagging is completely off and too easily abused.
I wonder if it feels the same embarrassment and shame I do, too.
> Works if you supply the correct include path(s)
> Can confirm, works fine:
> You could arguably fault ccc's driver for not specifying the include path to find the native C library on this system.
> (I followed the instructions in the BUILDING_LINUX.txt file in the repo and got the kernel built for RISC-V. You can find the build I made here if someone is just interested in the binaries)
The location of Standard C headers does not need to be supplied to a conforming compiler.
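For a concrete illustration, a conforming hosted implementation must be able to build a program like this with a bare driver invocation, locating <stdio.h> through its own built-in search paths (the file name and the cc driver name here are placeholders, not anything ccc-specific):

    /* hello.c -- uses only standard headers */
    #include <stdio.h>  /* must be found without any -I flag */

    int main(void) {
        puts("hello");
        return 0;
    }

    /* cc hello.c && ./a.out   -- no include path supplied */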
>> You could arguably fault ccc's driver for not specifying the include path to find the native C library on this system.
This is not a good implementation decision for a compiler that is not the C compiler distributed with the OS. Even though Standard C headers have well-defined names and public contracts, how they are defined is very much compiler-specific.
So this defect is a "somethingburger."
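To make the compiler-specificity concrete, here is a sketch (paraphrased, not the verbatim header) of how GCC's own <stddef.h> defines size_t: it delegates to a predefined, compiler-internal macro, so another compiler that borrows this header must also predefine __SIZE_TYPE__, or the typedef will not compile.

    /* Paraphrase of GCC's bundled <stddef.h>: the "public contract"
     * (a type named size_t) is satisfied via compiler-internal plumbing. */
    typedef __SIZE_TYPE__ size_t;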
Wake me up when a model trained only on data through the year 1950 can write a C compiler.