How could it possibly keep up with LLM-based search?
I believe frontier labs have no option but to go into verticals (models are getting commoditized, and the capability overhang is real and hard to overcome at scale). However, they can only go into so many verticals.
Interesting. Why wouldn't an LLM-based search provide the same thing? Just ask it to "use only trusted sources".
Oh, so they're not just helping with search but also curating data.
> They've got hundreds of thousands of physicians asking millions of questions every day. None of the labs have this sort of data coming in or this sort of focus on such a valuable niche
I don't take this too seriously because lots of physicians use ChatGPT already.
There is trust and then there is accountability.
At the end of the day, a business or practice needs to hold someone, or some entity, accountable. Until the day we can hold an LLM accountable, we need businesses like OpenEvidence and Harvey. That's not to say Anthropic/OpenAI/Google cannot do this, but there is more to this business than grounding LLMs and finding relevant answers.
More seriously, the concept of trust is extremely lossy. The LLM is gonna lean in one direction that may or may not be correct. At the extreme, it would likely refute a new discovery that went against what we currently know. In a more realistic version, certain AIs are more pro-Zionist than others.
The thing is, LLMs are quite good at search, and probably far stronger than whatever RAG setup this company has (a bare-bones sketch of what I mean by "RAG setup" is below). What failure mode are you looking at from a search perspective? Will ChatGPT just end up providing random links?
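For what it's worth, a basic RAG pipeline boils down to something like this sketch. The corpus and keyword scoring are toy stand-ins to illustrate the retrieve-then-ground pattern, not a claim about what OpenEvidence actually runs:

    # Toy RAG pipeline: retrieve a few relevant documents, then build a
    # grounded prompt for the model. Corpus and scoring are stand-ins.

    def retrieve(query, corpus, k=2):
        # Rank documents by naive keyword overlap with the query.
        q_terms = set(query.lower().split())
        ranked = sorted(corpus, key=lambda d: -len(q_terms & set(d.lower().split())))
        return ranked[:k]

    corpus = [
        "Metformin is first-line therapy for type 2 diabetes.",
        "GLP-1 agonists reduce cardiovascular risk in diabetics.",
        "Statins are indicated for primary prevention above a risk threshold.",
    ]

    question = "What is first-line therapy for type 2 diabetes?"
    context = "\n".join(retrieve(question, corpus))

    # The model is told to answer only from the retrieved context;
    # this is the "grounding" step.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(prompt)

The whole pipeline is just retrieval plus a grounded prompt, and the frontier labs already do a far more sophisticated version of this natively.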
Is that sarcasm?
I think what you'll end up with is a response that still relies on whatever random sources it likes, but it'll just attribute it to the "trusted sources" you asked for.
You started off by asking a question, and people are responding. Please, instead of assuming that everyone else is missing something, perhaps consider that you are.
Here's what I mean: LLMs can absolutely be directed to search only trustworthy sources. You can do this yourself: ask ChatGPT a question and tell it to use sources from trustworthy journals. Come up with your own rubric, maybe, along the lines of the sketch below. It will comply.
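Roughly, such a rubric expressed through the API could look like this. The journal whitelist and model name are placeholders of mine, not anything OpenAI or OpenEvidence documents:

    # Sketch of "use higher quality sources" as a system-prompt rubric.
    # The journal whitelist and model name are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    rubric = (
        "When citing evidence, only use peer-reviewed journals "
        "(e.g. NEJM, The Lancet, JAMA). Ignore blogs and forums. "
        "Attach a source to every claim."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": rubric},
            {"role": "user", "content": "Is metformin still first-line for type 2 diabetes?"},
        ],
    )
    print(response.choices[0].message.content)

The instruction is trivially expressible; how faithfully it's followed is the separate question being debated here.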
Now, do you disagree that ChatGPT can do this much? If you do, it’s almost trivially disprovable.
One of the posters said that hallucination is a problem, but if you've used ChatGPT for search, you would know that it's not. It's grounded in the search results anyway, and in the worst case the physician is going to read the sources themselves. So what's hallucination got to do with this?
The poster also asked, "can you ask it to not hallucinate?" The answer is obviously no! But that was never my claim. I simply said you can ask it to use higher-quality sources.
Since you've said I'm asserting BS, I'm asking you politely to show me exactly what part of what I said constitutes BS, given the context I've provided.
Please read what I have written clearly instead of assuming the most absurd interpretation.
For example, only 7% of pharmaceutical research is publicly accessible without paying. See https://pmc.ncbi.nlm.nih.gov/articles/PMC7048123/
Edit: seems like it is ~10M USD.
If AI tooling starts to seriously chip away at those foundations then it puts a large chunk of their business at risk.
You can be a huge, profitable data-only company... but it's likely going to be smaller than a data+interface company. And so, shareholder value will follow accordingly.
The assumption is that Claude has access to a stream of fresh, curated data. Building that would be a different focus for Anthropic. Plus, Thomson Reuters could build an integration. I'm not totally convinced that's a major threat yet.
If that happens, some software companies will struggle to find funding and collapse, and people who might consider starting a software company will do something else, too.
Ultimately that could mean less competition for the same pot of money.
I wonder.
No, it will just lead to the end of the basic CRUD+forms software engineer, as nobody will pay anyone just for doing that.
The world is relatively satisfied with "software products". Software, mostly LLM-authored, will be just an enabler for other solutions in the real world.
> The world is relatively satisfied with "software products".
You could delete all websites except TikTok, YouTube, and PH, and 90% of internet users wouldn't even notice something was wrong with the internet. We don't even need LLMs if we can learn to live without terrible products.
Capital also won't flow to people who don't have privileged or proprietary access to a market, or to non-public data or methods. Just being a good engineer with Claude Code isn't enough.
Something seems quite off. Am I the only one?