I think everyone is starting to see this as a middleman problem to solve. Look at ERP systems, for instance: when they popped up, the industry had some growing pains. (Or even early Windows/Microsoft, with its 'developers, developers, developers' target audience.)
I think OpenAI sees that it will take a lot of third-party devs to take what OpenAI has and run with it. So they want to build a good developer and startup network, to make sure there is a good, solid ecosystem of options that corporations and people can use, AI-wise.
The gap was that workers were using their own implementation instead of the company's implementation.
https://www.linkedin.com/feed/update/urn:li:activity:7365026...
These third-party apps generate huge token usage with agentic patterns. So losing out on them, and being forced to build more internal products tuned to specific use cases, is not something they want to build out or explore.
The fact that it is mid is why they really need all the other lines of business to work: AKA selling tokens to AI apps that specialize in other mid products, and limiting the snake-oil AI products that are littering the market and ruining AI's image as the new catch-all solution.
Then I discovered LLMs.
If you think IntelliSense is comparable to what LLMs can do, you really, really need to try giving an AI higher-level problems to solve. Throwaway example I gave in a similar thread a few weeks ago: https://news.ycombinator.com/item?id=44892576
I think a big part of simonw's shtick is trying to get people to give LLMs a proper try, and TBH that's what I end up doing a lot too, including right now! The problem is a "proper try" takes dedicated effort, because it's not obvious where the AI will excel or fail for your specific context, and people legitimately don't have enough time for that.
But once you figure it out, it feels like when you first discovered IntelliSense, except you already know IntelliSense, so it's like... IntelliSense raised to the power of IntelliSense.
Then you have Java and C#, where you need a whole IDE if you're writing more than 10 lines, because using anything brings the whole jungle with it.
Seems like languages like Java and C# that encourage more complexity just aim to provide richer context to mine. Simple example: given an incomplete line like "TypeA foo = bar.", the IDE can very easily figure out you want "bar.getBlah(baz)", because getBlah has a return type of "TypeA" and "baz" is the only variable available in the scope. But having all that context at that point requires a whole bunch of setup beforehand, like fine-grained types supported by a rich type system, function signatures, and so on, which incentivizes verbosity that usually scales with the complexity of the app.
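To make that concrete, here's a minimal Java sketch of the kind of context the IDE is mining. The names TypeA, Bar, getBlah, and baz are just the hypothetical placeholders from the example above, not any real API:

    // Hypothetical types echoing the example: nothing here is a real library.
    class TypeA {}

    class Bar {
        TypeA getBlah(int baz) { return new TypeA(); }   // returns TypeA
        String getOther()      { return "unrelated"; }   // returns something else
    }

    class Demo {
        TypeA complete(Bar bar, int baz) {
            // At the point of typing "TypeA foo = bar.", the IDE can rank
            // getBlah first: it is the only member of Bar whose return type
            // matches the declared TypeA, and baz is the only in-scope value
            // matching its parameter type.
            TypeA foo = bar.getBlah(baz);
            return foo;
        }
    }

All of that ranking falls out of declarations the programmer had to write anyway, which is exactly the verbosity-for-context trade being described.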
So yes, that's a lot of verbosity, but also a lot of context. To your point, I feel like the philosophy of languages like Java and C# is deliberately based on providing enough context for sophisticated tooling like IntelliSense and IntelliJ.
Unfortunately, the languages came before such sophisticated tooling existed; when good tools did appear they were expensive, and even now that those tools are widely and freely available, many people still don't use them. (Plus, in retrospect, the language designs themselves genuinely turned out to be more complex than ideal in some respects.)
So the current reputation of these languages encouraging undue complexity is probably due to their philosophies being grounded in sound reasoning but based on predictions that didn't quite pan out as expected.
- OpenAI needs talent, and it's generally hard to find. Money will buy you smart PhDs who want to be on the conveyor belt, but not people who want to be at the centre of a project of their own. This at least puts them in the orbit of OpenAI - some will fly away, some will set up something to be acquihired, some will just give up and try to join OpenAI anyway
- the amount of cash they will put into this is likely minuscule compared to their mammoth raises. It doesn't fundamentally change their funding needs
- OpenAI's biggest danger is that someone out there finds a better way to do AI. Right now they have a moat made of cash - to replicate them, you generally need a lot of hardware and cash for the electricity bill. Remember the blind panic when DeepSeek came out? So, anything they can do to stop that sprouting elsewhere is worth the money. Sprouting within OpenAI would be a nice-to-have.
Imagining one negative spin doesn’t an imagination make. Imagine harder.
My guess is this is as much about talent acquisition as it is about talent retention. Give the bored, overpaid top talent outside problems to mentor for/collaborate on that will still have strong ties to OpenAI, so they don't have the urge to just quit and start such companies on their own.
I don't think there is any money given, except travel costs for the first and last week.
This is the first time I am hearing this term. It is a euphemism, like "pre-owned cars" instead of "used cars".
What does this mean? People who do not yet have any idea? Weird.
I find this disturbing. How can someone be useful to others without an idea of what that even means? How can one provide a novel offering without even caring about it? It's an expression of missing craft and bad taste. These aspirations are reactive, not generated by something beautiful (like kindness, or optimism).
Fortunately it is not hopeless; aspiring entrepreneurs can find deeper motivation if they look for it.
(I like to give the following advice: it is easier to first be useful to others and become rich than it is to be rich and then become useful to others. This almost certainly requires sufficient empathy and care to have a hypothesis and be "post-idea".)
It seems that there is a constant impulse on this forum to view any decision made by any big AI company with, at best, extreme cynicism, and at worst, virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain of the moment.
OpenAI had a lot of goodwill and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.
What's even scarier is that if they actually had the direct line of sight to AGI that they had claimed, it would have resulted in many businesses and lines of work immediately being replaced by OpenAI. They knew this and they wanted it anyway.
Thank god they failed. Our legislators had enough of a moment of clarity to take the wait and see approach.
First, when they thought they had a big lead, OpenAI argued for AI regulations (targeting regulatory capture).
Then, when that lead was evaporated by Anthropic and others, OpenAI argued against AI regulations (so that they can catch up, and presumably argue for regulations again).
Most regulations that have been suggested would put restrictions mostly on the largest, most powerful models, so they would likely affect OpenAI/Anthropic/Google primarily, before smaller upstarts would be affected.
Their prerogative is to make money via closed-source offerings so they can afford safety work and their open-source offerings. Ilya noted this near the beginning of the company. A company can't muster the capital needed to make SOTA models giving away everything for free when their competitor is Google, a huge for-profit company.
As for your claim that they are scammy: what about them is scammy?
Not sure specifically what the commenter is referring to re: scammy, but things like the Scarlett Johansson / Her voice imitation and copyright infringement come to mind for me.
GPT-OSS is not a near-state-of-the-art model: it is a model deliberately trained so that it appears great in evaluations, but it is unusable and far underperforms the actual open-source models you can run through tools like Ollama. That's scammy.
[1] https://www.lesswrong.com/posts/pLC3bx77AckafHdkq/gpt-oss-is...
[2] https://huggingface.co/openai/gpt-oss-20b/discussions/14
Isn't that a good thing? The comments here are not sponsored, nor endorsed by YC.
I've been posting here for over a decade, and I have absolutely no interest in YC in any way, other than a general strong negative sentiment towards the entire VC industry YC included.
Lots of people come here for the forum, and leave the relationship with YC there.
Big tech (not just AI companies) have been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.
Regardless of where you stand on copyright law, it is an indisputable fact that, in order to get where they are today, these companies deliberately HOOVERED up terabytes of copyrighted material without the consent, or even the knowledge, of the original authors.
Skepticism is healthy. Cynicism is exhausting.
Thank you for posting this.
Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.
I’d do some self-reflection and ask yourself why you need to carry water for them.
I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.
This feels like a program to see what sticks.
We'll invest in your baby even before it's born! Simply accept our $10,000 now, and we'll own 30% of what your child makes in its lifetime. The womb is a hostile environment where the fetus needs to fight for survival, and a baby that actually manages to be born has the kind of can-do attitude and fierce determination and grit we're looking for in a founder.
What better than companies whose central purpose is putting their API to use creatively? Rather than just waiting and hoping every F500 can implement AI improvements that aren't cut during budget crunches.
Would this have been viewed with skepticism if any other API-selling startup from 5+ years ago had done it? If so, then how is it not even worse when it's done by a startup that is supposed to be providing access to what is pushed as a technical marvel, a panacea or something?
Sometimes I feel like I'm taking crazy pills...
I literally help companies implement AI systems. So I'm not denying there being any value... just... I don't understand how we can say with a straight face that they need to "build and grow demand for their product and API" while the same company was just reported to have inked a $300B deal with Oracle for infra... like, come on... the demand isn't there yet?!
Isn't that how we got (and eventually lost) most Google products?
I suspect, but could be wrong, that in OpenAI’s case it is because they believed they will reach AGI imminently and then “all problems are solved”, in other words the ultimate product. However, since that isn’t going to happen, they now have to think of more concrete products that are hard to copy and that people are willing to pay for.
I'm working on a prototype right now, guess I'll toss my hat in the ring.
Fortune favors the bold.
Next up, we're funding prenatal individuals.
Holy crap, I thought that term existed purely in the realm of satire skits:
https://www.tiktok.com/@techroastshow/video/7341240131015445...
If ideas are a dime a dozen, what even is a pre-idea startup?
To me, it sounded like, "let's find all the idea guys who can't afford a tech founder. Then we'll see which ones have the best ideas, and move forward with those. As a bonus, we'll know exactly where we'd be able to acquihire a product manager for it!"
I'm highly capable of building some great things, but at my day job I'm filled to the brim with things to do and a never-ending list of tasks in front of me.
I've built cool stuff before, and if given a little push and some support could probably come up with something useful - and I can implement much of it myself.
Put me in the room with cool people, throw out some conversation starters, shake it up and I'll come up with something.
This is probably purely a pivot in market strategy toward profitability, meant to increase token usage and the public's trust, more than it is farming ideas for internal projects.
Alas, such a grove is impossible.
Most will submit applications with dime-a-dozen ideas. (Or, at internet scale, a dime a few hundred thousand, I guess?) No need to even consider those guys.
But it will be a pyramid. There will likely be 20-30 submissions that are at once truly novel and "why didn't I think of that!"-type ideas.
Finally, a handful of the submissions will be groundbreaking.
Et voilà. Right there you've identified the guys and gals thinking outside the LLM box about LLMs. Or even AI in general.
The world really benefits from well funded institutions doing research and development. Medicine has also largely advanced due in part to this.
What’s lost is the recapture. I don’t think governments are typically the best candidate to bring a new technology to marketable applications, but I do think they should be able to force terms of licensure and royalties. Keeping both those costs predictable and flat across industry would drive even more innovation to market.
What happens instead is private entities take public research and capture it almost entirely in as few hands as possible.
In short, the loss of civic pride and shared responsibility to society has created the nickel-and-dime-you-to-death capitalism we are seeing on the rise today: externalize every cost possible and capture as much profit as possible, with no thought to second-order effects, or to how the very system they dodge contributing back to is what gave people the ability to so grossly take advantage of it in the first place.
^ This is the secret sauce. For decades the arrangement was exactly that: defense projects would create new technologies, then once those were finished, they were handed to private industry to figure out how to make a $20,000 MIL-spec LCD screen cheap enough and in vast enough quantities that you can buy 3 of them for less than $1,000 while the manufacturer, distributor, and retailer make a solid profit each. That's not an easy thing to do and it's what corporations have historically been good at. And it makes things better for the defense industry too, because they can then apply those lessons to their own hardware where appropriate. Win/win.
But we don't fund research anymore, or at least not that sort of research. Or perhaps there's just not much left to find. I think it's a bit of both. But in any case, nothing new is getting made, which is why technology feels so dull right now. The most innovative products today are just thinner, dumber, lighter versions of things we already have, and that's not nothing, but it isn't very interesting either.
Edit: if you don't think this is true, look at the history of truly any country and see what happens when subsistence farmers and indigenous communities refuse to work for capitalists