I still remember that even when seeing GPT-3.5, I thought what it could do must be impossible, that there had to be some sort of trickery involved. But no.
I'm still impressed and amazed daily by what AI can do now.
LLMs gained meaningful capabilities very quickly: one week they were not that useful; the next, they were.
A function that takes text and returns text isn't that useful until it's integrated into products, and that takes time.
The next 12-24 months will be the AIfication of many workflows: discovering and integrating LLM-based reasoning into business processes. Assuming even gradual improvement in LLM capabilities over time, all of these AI-enhanced business processes will simply get better.
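As a hedged sketch of what that integration work looks like in practice; the ticket-triage use case, the function names, and the model string are all illustrative assumptions here, not a specific product:

```python
# Hypothetical sketch: wiring a text-in/text-out LLM call into a
# business process (here, support-ticket triage). Names and the model
# string are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_ticket(ticket_text: str) -> str:
    """Ask the model to classify a ticket into one of a few fixed queues."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[
            {"role": "system",
             "content": "Classify the ticket as one of: billing, bug, other. "
                        "Reply with the label only."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content.strip()

# The real integration work is everything around this call: routing the
# label into the existing ticketing system, validating outputs, retries,
# logging, and monitoring.
```

The model call is the easy part; the "takes time" part is the plumbing around it.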
Diffusion of technology is slow, slow, slow, and then fast. As I become more capable with AI (e.g. learning which of my engineering tasks it actually helps with), I'm getting better and better at it. So there's a non-linear learning curve: as you learn to use the technology better, you unlock more productivity.
The successful uses of LLMs don't seem to depart too far from the basic chatbot that started the whole hype. And the truly 'magic' uses seem to fail in practice because even a small error rate is way too high for a system that cannot learn from its mistakes (quickly).
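A back-of-envelope illustration of that error-rate point (the 1% per-step error rate and the 50-step workflow are assumed numbers, purely for scale):

$$P(\text{all 50 steps correct}) = 0.99^{50} \approx 0.61$$

So a per-step error rate that sounds tiny already fails a multi-step workflow roughly two times in five, and without fast learning from mistakes that gap never closes.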
Is ChatGPT-3.5 a basic chatbot now? It's been less than two years since it was SOTA.
Hardly quietly. Thorp published "Beat the Market" in 1967, detailing his formulae six years before the Black-Scholes paper was published.
I wish I had a better heuristic, but the best I’ve found on Twitter is pseudonymous users with anime profile pics. These are people who don’t care about boosting a product. They’re possibly core contributors to a lesser-known but essential Python library. They deeply understand a single thing very well. They don’t post all day because they are busy producing.
X is a very good microcosm of Dalio's producer/promoter model, except that the promoters seem to be the entirety, and they are so extremely loud that it trumps all common sense and reasoning.
It's also very tiring to scroll through "I made $XXXX in 30 days with AI and I'm only a 17-year-old high school student" or "we shipped a ChatGPT wrapper and used dark patterns for subs".
On LinkedIn it's far worse: everybody is a genius and everybody needs to pay attention to me, on the remote chance a recruiter from big tech will reach out and pay me a large salary for my impression management.
All in all, it really feels like the American economy is running on pure hopium and fumes. This cannot be good for it in the long run.
Right. So much content, but it feels so empty. Do people actually network there?
For people working in the field, deep learning has already proven itself to be self-funding. It’s the main source of Google’s profits. It’s TikTok’s algorithm. Et cetera.
And AGI is science fiction with no credible plan of how to get there. If you can even get everyone to agree on the same definition.
An AI winter is something that can be measured and is factual, e.g. lacklustre spending on AI products and the drying up of VC funding.
The fact that VCs aren’t throwing millions of dollars after every CS undergrad who figured out how to make an API call to OpenAI means they are wising up. The main question is why it took this long.
Amazon was a company that was around for and survived the dot-com bomb (founded in 1994, roughly at the beginning of the bubble), though its stock took about seven years to recover.
Facebook was post-dot-com bomb (founded 2004).
I mean... you can't really have a (strict) plan for how to build something that nobody knows how to build (yet). But that doesn't necessarily mean it's "science fiction". There are credible reasons[1] to believe that AGI will happen - eventually. To me, the biggest question is around timeline, not "will it happen or not". Now granted, that allows for anything from "tomorrow" up to "the heat death of the universe", so you can accuse me of dodging the issue if you'd like. But I'd bet money on it happening closer to "tomorrow" than "the heat death of the universe".
[1]: among others - the progress on AI that's already been made. And while we may not have AGI, it's hard to deny that we have AI that's a far sight better than what we had in 1956. The other is that, unless you believe in magic, the human brain is an existence proof that human-level AGI is achievable on a deterministic machine that operates according to the physical laws of the universe. It would seem to follow, then, that it should be possible (albeit perhaps very difficult) to achieve that same level of intelligence on some other deterministic machine. And note that even if "Penrose is right" about the brain relying on quantum mechanical phenomena, there's no particular reason to think that those can't also be mirrored on a human-created machine.
> But beneath the surface, there are rampant issues: citation rings, reproducibility crises, and even outright cheating. Just look at the Stanford students who claimed to have fine-tuned LLaMA3 to be multimodal with vision at the level of GPT-4V, only to be exposed for faking their results. This incident is just the tip of the iceberg, with arXiv increasingly resembling BuzzFeed more than a serious academic repository.
Completely agreed. Academia is terminally broken. The citation rings don't bother me. Bibliometrics are the OG karma -- basically, fake internet points. Who cares?
The much bigger problem is that those totally corrupt circular influence rings extend into program director positions and grant review committees at federal funding agencies. Most of those people are themselves academics (on leave, visiting, etc.) who depend on money from the exact sources they are reviewing for. So this time it's their friends' turn, and next time it's their turn. And don't dare tell me that this isn't how it works. I've been in too many of those rooms.
It's gotten incredibly bad in ML in particular. Our government needs to cut these people off. I am sick of my tax money going to these assholes (via the NSF, DARPA, etc.). Just stop funding the entire subfield for a few years, tbh. It's that bad.
On the private sector side, I think that the speculative AI bubble will deflate, but also that some real value is being created and many large institutions are actually behaving quite reasonably compared to previous nonsense cycles. You just have to realize we're mid-late cycle, and companies/groups that aren't finding PMF with LLM tech in the next 2-3 years are probably not great bets.
--
There was a small bubble.
There were 1980s AI startups: IntelliCorp and Teknowledge. IntelliCorp pivoted from expert systems to UML and was acquired. Teknowledge seems to have disappeared. (The outsourcing company called Teknowledge today seems to be unrelated.) There were the LISP machine companies, Symbolics and LMI. There were a few others, mostly forgotten now.
For all the valid criticisms of "AI"[1] today, it's creating too much value to disappear completely and there's no particular reason[2] to expect progress to halt.
[1]: scare quotes because a lot of people today are misusing the term "AI" to exclusively mean "LLMs", and that's just wrong. There's a lot more to AI than LLMs.
[2]: yes, I'm aware of neural scaling laws and some related charts showing a slow-down in progress, and the arguments around not having enough (energy|data|whatever) to continue to scale LLMs. But see [1] above - there is more to AI than LLMs. (The canonical scaling-law form is sketched below for reference.)
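For anyone unfamiliar with those scaling laws, the commonly cited Kaplan et al. fit expresses test loss as a power law in model size (the constants are empirical fits, not something asserted here):

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}$$

where $L$ is the test loss, $N$ the parameter count, and $N_c$, $\alpha_N$ fitted constants. Loss keeps falling with scale, but only as a power law, which is where the slow-down arguments come from.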
The fall of data science??? When did that happen? I’m not squarely in the field, but I thought I would have heard about it.
It didn't. "Data science" may not be the latest, trendy, catchy "buzzword of the day", but nothing holds onto that title forever. Losing that crown to the trendy tech du jour isn't the same as "falling off", IMO.
It started with Heroku, but now it has gained VC attention in the form of Next/Vercel, Laravel Cloud, Void(0), Deno Deploy, and Bun's yet-to-be-announced solution. I'm probably forgetting one or two.
Don't get me wrong, they are legit solutions. But the VC money currently being poured into influencers to push these solutions makes them seem much more appealing than they otherwise would be.
I cannot vouch for Laravel Cloud or Void(0), since I've never used them, nor will I comment on Deno/Bun, since they are far more recent.
“Meanwhile, data scientists and statisticians who oftentimes lack engineering skills are now being pushed to write Python and “do AI,” often producing nothing more than unscalable Jupyter Notebooks”
Most data scientists are already well versed in Python. There are so many platforms emerging that abstract away a lot of the infra required to build semi-scalable applications.
Very powerful, albeit sad, statement.
Has data science or the modern data stack actually fallen? And what relevance does crypto (I assume currency) have to an AI winter?
> the real producers will keep moving forward, building a more capable future for AI.
This is one of many signal flares going up.
Do something or cash out of the AI space. Engineers are tired.
Broadly agree, but I think predicting an AI winter isn't as useful as handicapping how deep it will be - and still building useful things regardless.
If anyone had this knowledge, they wouldn't tell us; they'd keep their market edge and make a bet for their own selfish greed.
Anything else is PR
Discuss amongst yourselves: Rhode Island, neither a road nor an island.
But I've been hearing the refrain of this article for a decade now. I just don't believe it anymore.
Assuming you mean this[1] bitter lesson... sharing the link for anyone who isn't familiar with the term in this context.
[1]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html