Historically, insurance has paid for activity: time spent in visits, RVUs generated, and minutes logged. This was a reasonable starting point, but the flaw is that there are no strong incentives to be efficient.
ACCESS is explicitly a "deflationary" approach. Medicare has set the payment rates high enough to be viable for startups, but low enough that you have to use software (including AI) to deliver a large part of your program.
So Medicare has basically created economic incentives that reward software without prescribing the exact shape of the programs. I thought it was a really interesting approach, and it builds on 15 years of lessons from CMMI (Medicare's innovation group).
For hospital stays (and I may be outdated on this), Medicare pays a lump sum per DRG, which doesn't tend to go up much, so the longer the patient is in the hospital, the less money the hospital makes.
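To make the arithmetic concrete, here's a minimal sketch in Python with entirely made-up numbers (the actual DRG payment and per-day cost figures are hypothetical, not from the article):

    # Illustrative only: a DRG payment is a fixed lump sum per admission,
    # while the hospital's costs scale roughly with length of stay.
    # The dollar figures below are hypothetical.
    DRG_PAYMENT = 12_000   # fixed payment for the admission (hypothetical)
    COST_PER_DAY = 2_500   # marginal cost per inpatient day (hypothetical)

    for length_of_stay in (2, 4, 6):
        margin = DRG_PAYMENT - COST_PER_DAY * length_of_stay
        print(f"{length_of_stay} days -> margin ${margin:,}")
    # 2 days -> margin $7,000
    # 4 days -> margin $2,000
    # 6 days -> margin $-3,000

Whatever the real numbers are, the shape is the same: revenue is fixed at admission and cost grows per day, so every extra day eats the margin.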
Short story is the biggest pressure from the higher-ups is for us to see more volume outpatient and to cut duration of stays inpatient...
They’ll just start cherry-picking their patients, finding ways to squeeze out the people who are just that little bit lower on the prognosis curve. Or at least that will be the risk in a setup like that.
The program sounds reasonable until you become aware that the patients most in need are often the ones least likely to improve. It also ignores the reality that sometimes even the most rigorous, well-reasoned treatment plans fail for unpredictable reasons. Do you punish providers and patients for that?
(There is such a thing as Medicare Advantage, where a patient can choose to put their Medicare dollars toward private insurance, but it's not part of the initial launch of this program.)
First, the title: "Medicare's new payment model is built for AI. Most of the tech world has no idea" is a classic AI tell. The byline is the editor-in-chief's.
Em-dashes everywhere, including in this quote, somewhat unusually: “The best solution wins, which, in regulated industries like healthcare — that’s not been the case.”
Oddly-short paragraphs: "That payment structure is the real news."
Rule of threes: "Pair Team launched in 2019 with a specific kind of patient in mind: people managing chronic conditions who were also dealing with unstable housing, too little food, or lack of transportation"
This whole paragraph: "There are real risks. Participants are feeding extraordinarily sensitive patient data — intimate conversations about housing and diseases and mental illness — into a federal infrastructure with a documented history of breaches, including exposed Social Security numbers. For the vulnerable populations ACCESS is designed to serve, that's not an impractical concern."
---
I haven't opened a TC article in years and I think I'll return to that practice.
I think there's an ongoing conversation about whether we should accept all LLM-generated text without commentary.
I write this comment because I have some sympathy for a Show HN with AI-assisted writing, but I will not spend time enriching TechCrunch's use of machine-generated text any more than I would scroll through an ad block at the end of any other article.
(Just for the sake of comparison, here's something by the same writer from a few years ago - https://techcrunch.com/2022/11/16/boompop-gains-traction-by-...
You can see more examples here, too: https://techcrunch.com/author/connie-loizos/page/16/)
That said, Pangram agrees and its track record is pretty good.
I'm not saying the quotes are fake; that would be horrific. I'm saying the rest of the article appears to have had minimal human intervention.
The other uses are honestly pretty standard rhetorical patterns; they do not seem especially AI-flavored to me.
Put another way, look up the Great Vowel Shift. That happened over a longer period, but then again, contact with different speakers wasn’t as constant as it is every day on the internet. It’s just what happens, how things spread. No different from typical memes, and maybe to an even greater degree.
Coincidentally I just read a blog post today that explained this in a way I always struggled to: https://www.astralcodexten.com/p/nostalgebraists-hydrogen-ju...
And if we're using machines to assess this, the appropriate action is to look at the author's writing from before the time of LLMs and compare it to now.
There've been third-party evaluations of Pangram, e.g., https://bfi.uchicago.edu/wp-content/uploads/2025/09/BFI_WP_2.... I personally do not think I could achieve that rate of accuracy if you made me read a bunch of text samples and guess whether humans or AIs wrote them. Do you think you could?
They are absolutely correct about this mathematically: you can't solve problems you don't have data for.
The question is: what organization would I trust with the full context of my life? None. Zero.
Future headline: "Consumer warning: the Panopticon(tm) product is embedded into your care plan; insurance is only available for Panopticon subscribers."
People don't seem to realize both that this is coming and that before long people will be defending AI "persons" for this very reason (OpenAI is already complaining about people doing this). Nobody is going to deliver this level of care using humans. It's not going to happen.
A lot of people needing care are deeply isolated and will be of the opinion that AI changes that.
One step further would be robots that take people to the bathroom, clean them, and handle other tasks. Having humans do this is either extremely expensive or it will not be done properly.
Some people are horrified by the loss of human touch, but for most old people, human touch is a luxury they can't afford.
Any attempt to use LLMs as a substitute for personal interaction is playing an incredibly dangerous game that will probably make them a lot of money, while hurting a lot of people.
Oh, and taking sycophancy out of a model is easy. Just fine-tune out the notion that they (have to) agree with everything. Plus, every new model has less of it, or at least masks it better.