Monitoring and analytics are important, but they're a solved problem. A language model can only hallucinate about the relationship between meals and glycemic response. At best it does no harm; at worst it directly misinforms.
But I will check this algo out. Maybe it has some interesting bits.
Is your perspective based on, say, principle, or on experience?
The benefits are enormous.
The risks? What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
It's so helpful to offload some of the thinking about the condition to AI; all these people moaning about 'muh safety' don't get it. T1D sufferers have to think about it all day, every day. A person doesn't have their own blood glucose data in their head.
And how do you deal with AI hallucinations?
Otherwise, when tuned correctly, oref1 et al. provide amazing results and are safe. It's hard to see where LLMs would fit into this.
On your work:
this is legit
it is appreciated
Hats off, I salute this, thank you
The hardest lesson was that an unhealthy lifestyle made my diabetes harder to manage. Too many carbs, not enough exercise, etc. After adjusting my lifestyle, it became quite easy.
The most pain, in my experience, comes from the discrepancy between the CGM-measured value and the prick-test value, even after accounting for time lag. I've used several CGMs and they've all been wildly off sometimes. I have a few T1D acquaintances who relied on their CGM alone and significantly improved their HbA1c after accounting for that.
Maybe that information is useful to you.
Probably something like an SVM for warnings.
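To make the suggestion concrete, here's a minimal sketch of what an SVM-style warning classifier could look like: a linear SVM trained with hinge-loss subgradient descent (Pegasos-style) in pure Python. The two features (scaled glucose and its rate of change) and the toy labels are entirely hypothetical, for illustration only, not clinical guidance.

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200, seed=0):
    """Linear SVM via Pegasos-style hinge-loss subgradient descent.

    Returns (weights, bias). Labels must be +1 or -1.
    """
    rng = random.Random(seed)
    w = [0.0, 0.0]
    b = 0.0
    t = 0
    idx = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            x, y = data[i], labels[i]
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            # Shrink weights (L2 regularization), then correct margin violations.
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
                b += eta * y

    return w, b

def warn(w, b, x):
    """True if the classifier flags this point as a warning."""
    return w[0] * x[0] + w[1] * x[1] + b > 0

# Hypothetical toy data: (glucose in hundreds of mg/dL, slope in mg/dL per 5 min).
# +1 = warning (low and dropping), -1 = ok.
X = [(0.60, -2.0), (0.70, -1.5), (0.65, -3.0), (0.80, -2.5),
     (1.10, 0.0), (1.20, 0.5), (1.00, 1.0), (1.40, 0.2)]
y = [1, 1, 1, 1, -1, -1, -1, -1]

w, b = train_linear_svm(X, y)
```

A real system would of course need properly validated features, far more data, and calibrated thresholds; the point is only that a simple margin classifier, not an LLM, is the natural tool for this kind of alarm.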
Unless the whole purpose is just daily reports.
Do you find the analytics actually help? I.e., won't a lot of this depend on what you ate and whether or not you logged it?