While I think LLMs can improve the interface and help users learn/generate domain-specific languages, I don't see how a professional can trust an LLM to get a technical request like this correct without verification. Wouldn't a financial professional place more trust in a Bloomberg LLM agent that translates their request into a set of Bloomberg commands?
It's like the people who talk about how LLMs can't count the r's in "raspberry" and don't seem to understand that GPT-5 can reliably, e.g., work out a transformed probability density function from a given PDF by integration and differentiation, in part because frontier models are smarter, but more importantly because they're all presumably just calling into CAS tooling.
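For what it's worth, that change-of-variables computation is exactly the kind of thing CAS tooling handles; here is a minimal sketch with sympy (the distribution and transform are my own illustrative choices, not taken from the comment above):

    # Sketch: given the PDF of X, work out the PDF of Y = sqrt(X) by change of
    # variables, using sympy as a stand-in for the CAS a model might call into.
    import sympy as sp

    x, y, lam = sp.symbols("x y lam", positive=True)
    f_X = lam * sp.exp(-lam * x)             # X ~ Exponential(lam) on (0, oo)

    # Monotone transform y = sqrt(x)  =>  x = y**2, so f_Y(y) = f_X(x(y)) * |dx/dy|
    x_of_y = y**2
    f_Y = sp.simplify(f_X.subs(x, x_of_y) * sp.diff(x_of_y, y))
    print(f_Y)                               # 2*lam*y*exp(-lam*y**2)

    # Sanity check: the transformed density still integrates to 1.
    print(sp.integrate(f_Y, (y, 0, sp.oo)))  # 1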
SaaS stocks are currently the buying opportunity of a lifetime.
Now an LLM can do most of that, more easily, and effectively for free. The first pass on a new problem isn't googling to see if there's a SaaS for that; it's prompting an LLM to see if it can do it, or if it can build a tool that can.
Case in point: in my job we have to data-enter invoices. I have dealt with (or worked in) this industry for 30-ish years. I worked on various projects trying to get computers to read invoices, with varying degrees of success. It's a hard problem; there's no standard format or layout. Every company does its invoices differently. Some are Excel files, some are PDFs, some are Word docs, etc.
This entire problem vanished this year. You get an LLM to read the invoice. It does this more accurately than humans do. Job done.
There are entire SaaS businesses that read invoices that are now obsolete and have no moat.
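For concreteness, a rough sketch of that "get an LLM to read the invoice" flow; the field list, prompt wording, and model choice are illustrative assumptions on my part, not the commenter's actual pipeline:

    # Extract structured fields from a PDF invoice with an LLM.
    import json
    from pypdf import PdfReader      # plain-text extraction for PDF invoices
    from openai import OpenAI

    def extract_invoice_fields(pdf_path: str) -> dict:
        text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
        client = OpenAI()            # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o-mini",     # any capable model; this choice is an assumption
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Extract invoice data. Reply with JSON containing: "
                            "vendor, invoice_number, invoice_date, currency, total, line_items."},
                {"role": "user", "content": text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

Word and Excel invoices would need a different text-extraction step (python-docx, openpyxl, or similar), but the LLM half of the pipeline stays the same.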
However, the hypothesis priced into the SaaS market is that LLMs have made software itself worth nothing, and that SaaS companies will therefore be less profitable. That's like expecting home builders to go out of business because wood suddenly became free. If anything, home builders are going to do better, because they can apply their expertise while deploying capital elsewhere. We should expect software companies to be more profitable, not less.
Of course, there are exceptions. Sometimes AI replaces the product itself, e.g. image generation models vs. contractors on fiverr.
I don't object to using LLMs to parse PDFs but over the long run it's going to be less efficient and reliable than other options.
But for my startup I still use a ton of SaaS services for things that I could probably do just fine myself. (Clerk/StackAuth, Supabase/PlanetScale, Cloudflare STUN/TURN, Clickhouse, Vercel, Calendly, Google Workspace, ngrok, Tailscale).
Spiritually, I hate using these. Any one of these would be dead-simple to replace. But my time is genuinely better spent on my startup’s particular value-add. Maybe I’ll replace these some day when we can hire someone to manage internal replacement services - some of which are as easy as “a postgres database” or “wireguard on some VPS instances”. But it’s just not worth my time right now when I’m focused on building revenue.
Even if they all cost $300/mo in total, and we’re bootstrapped, it’s a lot easier to cut back on UberEats or shiny nerdy toys than it is to replace all of these SaaS offerings. I recognize there’s a lot of “I don’t know what I don’t know” and I’m liable to subtly misconfigure something in a potentially disastrous way.
(Though: you can do a lot with IB, right? Same deal.)
But replacing Bloomberg terminals with "Chat with NYSE", that is no exaggeration one of the most out of touch ideas I think I have ever heard in my life!
Considering how Zipf's law works, there might be a huge discrepancy coming as we see products deteriorating from all the AI and H1B spaghetti code, to the point where LibreOffice appears quite competent by comparison. Most of the people I worked with just want to sell lamps or furniture or trumpets or whatever they do, and the inventions of modern SaaS make this a lot harder to do. Once enough small businesses stop paying their 5-user subscriptions, I think this whole thing will pivot heavily in favor of those that just maintained their product well and didn't ensloppify it in the meantime.
The strongest argument is the one about the interface. LLMs will definitely have a large impact. But under the hood, I still expect to see a lot of formally verified code, written by engineers with domain knowledge, with support by AI.
1. I don't buy that chat interfaces will replace existing user interfaces. In particular, I'm a little bit familiar with Bloomberg's user culture, and I don't know that I buy that it's going to be replaced with LLM chat prompts. But software agents are going to make faithfully reproducing those existing user interfaces much easier, so: half credit?
2. Half credit again on LLMs vaporizing the "business logic" moat, because the vertical-specific rules that justified the original software market are, I think, a lot harder to encode in Markdown than the one week they gave it suggests, and also because verification becomes a bear as more ground-truth business logic is replaced with nondeterministic AI output (see the sketch after this list). There's a thing happening here for sure, I just don't buy it's as decisive as they say.
3. Public data access: I 100% buy this. If this was a real moat, it's dead.
4. Talent scarcity: same deal. Remember, we're talking about vertical software, where the underlying technical work is fairly repetitive and best-practices driven; it's the exact slice of software development work LLMs excel at.
5. Bundling (you get IB messaging along with your charting and your news service); maybe. This point feels tautological. Work out what LLMs do to each of the bundled experiences and there's your answer for how resilient that moat is.
6. Proprietary data: I think they're just dead on right here, and it does indeed seem to be a good time to be a company like Bloomberg?
7. Regs lock-in: half credit, because AI does make regs compliance a lot easier, and I think we're at the very early stages of seeing how.
8. Network effects seems like a repetition of "bundling" and if I have a qualm about this rubric it's that they made it look like an even 10, so they could have clean wins and losses.
9. Transaction embedding (ie, being a payment processor or a loan originator) also seems tautological; it's a moat, sure, but they're begging the question of whether AI enables people to stand up viable competitors.
10. I think "system of record" and "transaction embedding" are kind of the same moat.
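On point 2 above, a toy sketch of what the verification problem looks like in practice, reusing the invoice example from earlier in the thread (the fields and tolerance are invented for illustration): once the extraction step is nondeterministic, you end up re-encoding the ground-truth rules as deterministic checks anyway.

    # Re-encode ground-truth business rules as deterministic checks on LLM output.
    from decimal import Decimal

    def verify_invoice(doc: dict) -> list[str]:
        errors = []

        # Rule 1: line items must sum to the stated total (within a rounding tolerance).
        items_total = sum(Decimal(str(item["amount"])) for item in doc.get("line_items", []))
        if abs(items_total - Decimal(str(doc["total"]))) > Decimal("0.01"):
            errors.append(f"line items sum to {items_total}, invoice says {doc['total']}")

        # Rule 2: required fields must be present and non-empty.
        for field in ("vendor", "invoice_number", "invoice_date", "currency"):
            if not doc.get(field):
                errors.append(f"missing field: {field}")

        return errors

Every rule in a validator like this is exactly the vertical-specific logic the incumbent vendor already encodes; the nondeterministic step doesn't remove that work, it just moves it into the verification layer.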
I wish people would not blog on X (I will call it X when it's used as a crappy blog platform); these ASCII charts are awful. But that's neither here nor &c.
The overall index has been pretty well flat. What sectors gained?
And surely there aren't 140 "software and services" companies in the top 500 by market cap?
Data centers and AI.
The current US economy is flat except for AI and data centers.[1]
[1] https://fortune.com/2025/10/07/data-centers-gdp-growth-zero-...
I’m also pretty sure S&P doesn’t maintain an ‘S&P 500 Software & Services Index’.
There is a ‘S&P Software & Services Select Industry Index’ with 140 constituents. That’s probably the index in question.
That's what I was trying to establish.
Either that, or there really are 140/503 software and service companies, and I learned something surprising.
> Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.
It's another ad, no?
Again: if you think they've crossed the line, the guidelines specifically ask you not to take to the threads with it. Mail Dan and Tom. They'll get back to you quickly.
Anyway, I dispute that there are "zero switching costs" to go from proprietary keyboard shortcuts to English sentences as the interface.
In the Bloomberg example, the shortcuts are precise. An LLM's responses are not always what you want.
Imagine being a vim or emacs user and having those replaced by something where you have to type entire sentences to get the same functionality.
"Public Data Access → Commoditized"
Also, no. Today I tried to have Gemini Pro give me some data from Wikipedia for a list of countries that I supplied. It gave me data and source links, but the numbers were all wrong! I would have zero confidence in this for anything serious.
This has already happened for a non-negligible number of people on this site. I still use magit to prepare commits and review diffs. Everything else is English and Claude Code or opencode.
Ironically, part of the reason I like emacs and formerly liked vim was because it reduced the amount of time my hands had to leave the keyboard. I simply look where I want to type, press the chord to jump with avy, and then begin typing. New tools are spiritually aligned with this goal. I look at the screen, I think about what I don’t like, and now instead of translating the criticism into code changes, I just stream my thoughts directly into the tool.
As a former vim user who uses cursor, I've found that as the models get better I'm typing less and less. I appreciate the vim key bindings, but eventually I can imagine not missing them.
Yelling “it’s 95% there, we’re so cooked” doesn’t show much, when anyone has been able to get 95% of the way there in any field since Google was invented.
Technology, Google, all of it, makes it easy enough to learn how to do 95% of many common roles and fields. However, you will never be hired, or replaced, until you master that last 5%. Every role, even some of the lowest ranking, has its own insurmountable 5%. How well has fully replacing cashiers with self checkout gone, after… two decades… of trying?
If we can’t fully replace a cashier yet, the most automatable role in existence, then politely stuff your pie hole about AI replacing skilled roles.