The submitter made a basic MCP function that returns the current time, so... Claude knows the current time. There is nothing about sundials and Claude didn't somehow build a calendar in any shape or form.
I thought this was something original or otherwise novel, but it's not... it's not complex code or even moderately challenging code, nor did it result in anything surprising... it's just a clickbaity title.
What’s new here isn’t just exposing `current_datetime()`. The server also gives the model tools to reason about time (rough sketch after the list):
(1) time_difference(t1, t2) – exact gaps with human wording
(2) timestamp_context(t) – “weekend evening”, “workday morning”
(3) time_since(t) – “2 h ago, earlier today”
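For concreteness, here's a minimal sketch of what such a server can look like, assuming the official `mcp` Python SDK (FastMCP). The tool names mirror the list above, but the bodies are illustrative, not the project's actual code:

```python
from datetime import datetime, timezone

from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

mcp = FastMCP("time-sense")  # hypothetical server name

@mcp.tool()
def current_datetime() -> str:
    """Current time as an ISO-8601 string (UTC)."""
    return datetime.now(timezone.utc).isoformat()

@mcp.tool()
def time_difference(t1: str, t2: str) -> str:
    """Exact gap between two ISO-8601 timestamps, in human wording."""
    delta = abs(datetime.fromisoformat(t2) - datetime.fromisoformat(t1))
    hours, rem = divmod(int(delta.total_seconds()), 3600)
    return f"{hours} h {rem // 60} min"

if __name__ == "__main__":
    mcp.run()  # stdio transport, so a desktop client can launch it locally
```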
I also ask Claude to request the time at every turn, which creates a time series parallel to our interactions.
When Claude calls these every turn, it starts noticing patterns (it independently labelled our chat as a three-act structure). That was the surprise that prompted the title. Ask Claude “what patterns do you see so far?” after a few exchanges.
If you still find it trivial after trying, happy to hear why—genuinely looking for ways to push this further. Thanks for the candid feedback.
Finding a good title is really hard. I'd appreciate any advice on that. You'll notice I wrote the article several weeks ago, and that's how long it took me to figure out how to pitch it on HN. I'd appreciate any feedback on how to improve. Thanks!
(Submitted title was "Show HN: I gave Claude a sundial and it built a calendar")
I apologize to the community for the mistake. I appreciate this feature of this community's discourse. I'll remember to use literal, precise language in the future.
Your reworded title fits perfectly — thank you!
Aside from the title, what claims do I make that you find ridiculous?
I just finished some changes to my own little project that provides MCP access to my journal stored in Obsidian, plus a few CLI tools for time tracking, and today I added recursive yearly/monthly/weekly/daily automatic retrospectives. It can be adapted for other purposes (e.g. project tracking) by tweaking the templates.
I am curious: you say "offline-first or local-first, quantified self projects"; what models do you use with your projects?
I find LLMs like the Claude and GPT families incredibly impressive for integration and metacognition — however, I am not sure yet which LMs are best for that purpose, if any are.
Your "Augmented Awareness" framework seems to be metacognition-on-demand. In practice, how has it helped you recently? Is it mostly automated, or does it require a lot of manual data transfers?
I am assuming that the MCP server is plugged into a model, and that you run prompts in that model to generate the retrospectives.
Have you written about this?
Sometimes I write about it on https://rlupi.com/
The way I use it:
- keep track of how I spent my time and my energy (that's what the Go code is for)
- daily journal (my template includes: journal and events [see above]; my thoughts, ideas, tasks; others' ideas (TIL): articles, books, links). The "my thoughts, ideas, tasks" section includes reviewing metacognition exercises I learned at a burnout retreat.
- weekly retrospectives (what went well/wrong? what was unexpected/surprising? what did I learn? what will I change/experiment with to improve my life? what am I grateful for? who did I talk to? who didn't I talk to but should?)
- monthly free-form summaries
- yearly GTD-style horizons-of-focus exercises
I feed the whole bunch into aww, and it automates recursive retrospectives to build higher-level ones. I don't have to do it that often, only when a week/month/year ends (or more often, when I tweak prompts).
Day to day, I use LM Studio with the Go program as a local MCP server to discuss, brainstorm ideas, etc.
I don't write a lot about deep themes in my daily journals, but aww seems to be able to extract quite a few high-level trends. For example, here is the result of feeding it all of that and asking:
> Read the yearly retrospective in the following pages and summarize my personal journey: [[r2021]], [[r2022]], [[r2023]], [[r2024]], [[r2025]].
From r2021 to r2025, your journey reflects profound growth across psychological resilience, identity development, technological curiosity, spiritual inquiry, and balancing ambition with self-care. Here’s a synthesis:
ADHD Management & Productivity Evolution
- Early Years: Adopted Pomodoro, time blocking.
- Key Insights: Embraced external scaffolding (tools, routines) to counter executive-function challenges. Shifted from reactive productivity ("working under stress") to sustainable rhythms rooted in Stoicism/GTD principles. Recognized ADHD traits as strengths requiring tailored strategies—structured routines, environmental adaptability.

Health & Wellness
- Physical Health: Overcame initial health anxieties (e.g. diagnoses), adopted Mounjaro/Ozempic for weight management, prioritized sleep hygiene and exercise regimes.
- Mental Health: Transitioned from medication experimentation to schema therapy, biofeedback tools, meditation/yoga practices fostering emotional equilibrium.
- Spiritual Shift: Evolved from productivity-driven focus to valuing presence, connection, and spiritual inquiry via Kriya Yoga/Samadhi practices.

Career & Identity Development
- Early Career: Focused on technical mastery/self-hosted infrastructure projects blending tech curiosity with professional growth.
- Mid-Career Shift: Transitioned toward work-life balance—part-time roles at Google, advocating neurodiversity awareness—to protect personal time.
- Recent Focus: Explored AI safety, public writing (SLO/OKR frameworks), and creative outputs balancing technical depth with accessibility.

Identity Evolution
- Shapeshifted from solo researcher to community-oriented thinker embracing interdependence.
- Redefined self-worth internally rather than externally—from promotions to "I am enough" mantras.
- Cultivated polymath identity balancing intellectual/artistic/social/physical growth, embracing yoga as spiritual-tech fusion.

Social Connections & Community
- Struggled early on with isolation and transactional relationships post-breakup.
- Focused on meaningful connections through family ties (nonna), camper-life adventures, Zurich localism experiments.
- Recognized regrets around neglecting emotional intimacy—embraced "top five regrets" awareness guiding social outreach goals.
- Key Practices: Host gatherings, join collaborative groups, practice emotional vulnerability through journaling and conversations.

Technological & Financial Pathways
- Infrastructure: Built self-hosted ecosystems emphasizing autonomy (BalenaCloud, Obsidian systems).
- AI Projects: Explored edge computing/AIOps while maintaining privacy boundaries.
- Financial Goals: FIRE vision formalized by r2025 targeting autonomy by 2028. Balanced frugality with strategic investments, automated financial tracking.
- Philosophical Shift: Framed technology as an extension of self requiring vigilance for autonomy—moving toward decentralized/self-sustained systems.

Themes Across Years
- Growth Mindset: Consistently reframed challenges as opportunities—from burnout to retreats, perfectionism to iterative publishing.
- Balance Between Kama/Moksha: Pursued pleasure (camping adventures) alongside spiritual liberation practices.
- Inner Peace as Cornerstone: Transitioned from achievement-driven goals ("doer") toward witnessing consciousness via meditation/philosophy.
- Systems Thinking: Applied habit-stacking, metadata tracking, automation to sustain growth without burnout.

Future Trajectory
Your trajectory suggests continued focus on:
- Social/Emotional Growth: Expanding community ties via language learning (German/Swiss citizenship), local Zurich connections.
- Spiritual Synthesis: Blending yoga/Advaita Vedānta with technological mastery.
- Financial Autonomy: Refining FIRE plans ahead of housing deadlines by 2028.
- Health Balance: Maintaining physical-mind integration through breathwork, exercise rituals.

Your journey illustrates resilience against cyclical challenges—addiction loops, productivity peaks—and evolving toward holistic fulfillment defined by inner peace rather than external milestones.
"We made an API for time so now the AI has the current time in it's context" is the bulk of it, yes?
With an MCP server the model can call now(), diff it against earlier turns, and notice: "you were away 3 h, shall I recap?" or "deadline is 18 h out, let’s prioritise". That continuous sense of elapsed time simply isn’t possible with a static timestamp stuffed into the initial prompt; you'd have to create a new chat to update the time, and every fresh query would require re‑injecting the entire conversation history. MCP gives the model a live clock instead of a snapshot.
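A rough illustration of that diffing step (my own sketch under the assumption that each turn's timestamp is stored as an ISO-8601 string; `away_note` is a hypothetical helper, not the project's code):

```python
from datetime import datetime, timezone

def away_note(last_turn_iso: str) -> str:
    """Compare now() against the previous turn's timestamp and phrase the gap."""
    gap = datetime.now(timezone.utc) - datetime.fromisoformat(last_turn_iso)
    hours = gap.total_seconds() / 3600
    if hours >= 3:
        return f"user was away ~{hours:.0f} h; offer a recap"
    return f"last message {gap.total_seconds() / 60:.0f} min ago; carry on"
```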
The docs are pictures, and what is a Pipfile in any context? It looks like a requirements file, but you never bothered to follow the news about pip or uv.
Every AI project is like that and I'm really scared for the future of programming.
And BTW it's already happening, it's not a fantasy.
Imagine a woodworking forum where someone shows off their little six-piece toolbox and gets called out because it doesn't adhere to residential building code, complete with hand-wringing about what this means for the profession of woodworkers...
For instance at Boeing, the fault for the software problems lies entirely with the managers: they made the decision to subcontract software engineering to a third party to cut costs, and they didn't provide the contractor with enough context and support to do a good job. It's not subcontracting that was bad — because subcontracting can be the solution in some circumstances, with proper scoping and oversight — it was the management.
The MCP protocol is changing every few weeks, it doesn't make sense (to me at least) to professionalize a technical demo, and I appreciate that LLMs allow for faster iteration and exploration.
`uv` is great but `pipenv` is a perfectly well-tested Python dependency manager (albeit slow). Down in the instructions it explicitly asks you to use `pipenv` to manage dependencies. I also do not think your assertion of "what is a Pipfile in any context" is fair, as I don't think I've ever seen a project list a dependency manager and then explicitly call out artifacts that the dependency manager may require to function.
MCP + LLMs = our solution to data integration problems, which include context awareness limitations.
It's an exciting development and I am glad you see it too!
Knowing quite a bit about sundials I was genuinely curious about how that would work, as a typical (horizontal) sundial doesn't have enough information to make a calendar. It's a time-of-day device, rather than a time-of-year device. You could teach the model about the Equation of Time or the Sun's declination, but it wouldn't need the sundial at that point. There are sundials like a spider sundial, or nodus sundial, that encode date information too. But there's overlap/ambiguity between the two solstices as the sun goes from highest to lowest, then back to its highest declination. Leap years add some challenges too. There are various ways to deal with those, but I think you can see why I was curious how producing a calendar from a sundial would work (without giving it some other information that makes the sundial unnecessary).
My only worry with these MCP "sensors" is that they add to the token cost — and more importantly to the context-window cost. It would be great to have the models regularly poll for new data and factor it into their inferences. But I think the models (at least with current attention) will always face a trade-off between how much they are given and what they can focus on. I am afraid that if I give Claude numerous senses, it will pay less attention to our conversation.
But your exciting comment (and again, I apologize for disappointing you!) makes me think about creating an MCP server that provides, say, the position of the sun in the sky for the current location, or maybe some vectorized representation of a specific sundial.
I think the digitized information we already feed models is more native to them (i.e., it requires fewer processing steps to extract insights from), but it's possible that providing this kind of input would result in unexpected insights. They may notice patterns, e.g., that I'm grumpier when the sun is in a certain position.
Thanks for your thoughtfulness!
You’re right: Stripping away all ambient context is both a bug and a feature. It lets us rebuild “senses” one at a time—clean interfaces instead of the tangled wiring in our own heads.
Pauses are the first step, but I’m eager to experiment with other low‑bandwidth signals:
• where the user is (desk vs. train)
• weather/mood cues (“rainy Sunday coding”)
• typing vs. speech (and maybe sentiment from voice)
• upcoming calendar deadlines
If you could give an LLM just one extra sense, what would you pick—and why?
1. *Hands‑free integration*: ChatGPT, Claude, etc. don’t let you auto‑append text, so you have to manually do it. Here, a server call happens behind the scenes—no copy‑paste or browser hacks.
2. *Math & reliability*: LLMs core models are provably not able to do math (without external tools), this is a theoretical limitation that will not change. The server not only returns now() but also time_difference(), time_since(), etc., so the model gets ready‑made numbers instead of trying to subtract 1710692400‑1710688800 itself.
3. *Extensibility*: Time is just one "sense." The same MCP pattern can stream location, weather, typing‑vs‑dictation mode, even heart‑rate. Each stays a compact function call instead of raw blobs stuffed into the prompt.
So the tool isn’t about fancy code—it’s about giving the model a live, scalable, low‑friction sensor instead of a manual sticky note.
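For the arithmetic in point 2, here it is worked out (using the example epochs from above), which is exactly what the server hands back pre-computed:

```python
# The subtraction from point 2: the two epoch timestamps are an hour apart.
t_earlier, t_later = 1710688800, 1710692400
gap_seconds = t_later - t_earlier      # 3600
print(f"{gap_seconds // 3600} hour")   # "1 hour": the tool returns this ready-made
```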
But there are some issues:
1. Cost and determinism: Having the model generate code is much more costly, both in tokens and in context window (generating the code takes many more tokens than making a tool call), and the generated query can vary from run to run, e.g. in how it handles timezones.
2. Portability: Not every LLM or LM environment has access to a code interpreter; a tool call is a much lower resource requirement.
3. Extensibility: This approach is extensible: it lets us expand the toolkit with additional cognitive scaffolds that contextualize how we experience time for the model. (This is a fancy way of saying: the code only gives the timestamp, but building an MCP server lets us contextualize this information — "this is the time I'm sleeping, this is the time I'm eating or commuting, etc." A rough sketch follows this list.)
4. Security: Ops teams are happier approving a read-only REST call than arbitrary code execution.
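A hedged sketch of what point 3 means by a cognitive scaffold: map a raw timestamp onto labels like "workday morning" or "weekend evening". The boundaries below are invented for illustration, not the actual tool's logic:

```python
from datetime import datetime

def timestamp_context(t: datetime) -> str:
    """Label a timestamp with the kind of moment it falls in."""
    part = ("night" if t.hour < 6 else
            "morning" if t.hour < 12 else
            "afternoon" if t.hour < 18 else
            "evening")
    day_kind = "weekend" if t.weekday() >= 5 else "workday"
    return f"{day_kind} {part}"

print(timestamp_context(datetime(2024, 3, 17, 20, 30)))  # a Sunday -> "weekend evening"
```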
In practice, this hasn't really worked. I've had to add a custom instruction to "call current_datetime" at each message to get Claude to do it consistently over time.
Still, it is meaningful that I ask the model to make a single quick query rather than generate code.
For those looking for "a calendar", here is one[0] I made from a stylized orrery. No AI. Should be printable to US Letter paper. Enjoy.
EDIT: former title asserted that the LLM built a calendar
At the same time, I understand what you mean, and I agree that no, this does not give any LLM any sense of anything in the way we conceive it. But it provides them context we take for granted, in service of further customizing their outputs.
Your "calendar" is nice, thanks for sharing. :)
I am not so concerned about the anthropomorphizing language, which is technically incorrect but forgivable in communication, but with the practical issue that incorporating words or data points about time does not actually give the model an experiential time dimension...
I would like to see timeline comprehension. Maybe this is that, but I couldn't tell and I kind of doubt it.
I just can't imagine an LLM can deal with being "one of today's lucky 10,000" but only produces explanation...
https://www.linkedin.com/posts/emollick_i-am-starting-to-thi...
It's really frustrating. I've come to loathe the agreeable tone because every time I see it I remember the times where I've hit this pain point in design.
Not chatty. Unbiased. Avoid use of emoji. Rather than "Let me know if..." style continuations, list a set of prompts to explore further topics. Do not start out with short sentences or smalltalk that does not meaningfully advance the response.
I want an intelligent agent (or one that pretends to be) that answers the question rather than something that I chat with. As an aside, I like the further prompt exploration approach.
An example of this from the other day - https://chatgpt.com/share/68767972-91a8-8011-b4b3-72d6545cc5... and https://chatgpt.com/share/6877cbe9-907c-8011-91c2-baa7d06ab4...
One part of this in comparison with the linked in post is that I try to avoid delegating choices or judgement to it in the first place. It is an information source and reference librarian (that needs to be double checked - I like that it links its sources now).
However, that's a me thing - something that I do (or avoid doing) with how I interact with an LLM. As noted with the stories of people following the advice of an LLM, it isn't something that is universal.
In both of your conversations, there is only one depth of interaction. Is that typical for your conversations? Do you have examples where you iterate?
I think your meta-cognitive take on the model is excellent:
"One part of this in comparison with the linked in post is that I try to avoid delegating choices or judgement to it in the first place. It is an information source and reference librarian (that needs to be double checked - I like that it links its sources now)."
The only thing I would add is that, as a reference librarian, it can surface template decision-making patterns.
But I think it's more like that cognitive trick where you assign outcomes to the sides of a coin, flip it, and see how your brain reacts — not because you're going to use the coin to make the decision, but because you're going to use the coin to elicit information from your brain via System 1.
In general, however, I'm looking for the sources and other things to remember the "oh yeah, it was HGS-1" that I can then go back and research outside of ChatGPT.
Flipping a coin and then considering how one feels about the outcome and using that to guide the decision is useful. Asking ChatGPT and then accepting its suggestion is problematic.
I believe that there's real danger in ascribing prophecy, decision making, or omniscience to an LLM. (Aside: here's an iterative chat that you can see leading to help picking the right wording for this bit - https://chatgpt.com/share/68794d75-0dd0-8011-9556-9c09acd34b... (first version missed the link))
I can see it as something that's really easy to do. Even back to Eliza and people chatting with that, I see people trusting the advice as a way of offloading some of their own decision-making agency to another thing. ChatGPT as a therapist is something I'd be wary of: not that it can't make those decisions, but rather that it can't push the responsibility for making those decisions back onto the person asking the question.
To an extent, being familiar with the technology, and having the problem of decision fatigue ( https://en.wikipedia.org/wiki/Decision_fatigue ) that, as a programmer, I struggle with in the evening (not wanting to think anymore since I'm all thought out from the day)... it would be so easy to let ChatGPT do its thing and make the decisions for me. "What should I have for dinner?" (Aside: this is why I've got a meal delivery subscription, so that I don't have to think about that; otherwise I snack on unhealthy food or skip dinner.)
---
One of the things that disappointed me with the Love, Death & Robots adaptation of Zima Blue ( https://youtu.be/0PiT65hmwdQ ) was that it focused on Zima and art and completely dropped the question of memory and its relation to art and humanity (and Carrie). The adaptation follows Zima's story arc without going into Carrie's.
For me, the most important part of the story, which wasn't in the adaptation, follows from the question "Red or white, Carrie?" (It goes on for several pages in a Socratic-dialogue style that would be way too much to copy here - I strongly recommend the story.)
Answer concisely when appropriate, more extensively when necessary. Avoid rhetorical flourishes, bonhomie, and (above all) cliches. Take a forward-thinking view. OK to be mildly positive and encouraging but NEVER sycophantic or cloying. Above all, NEVER use the phrase "You're absolutely right."
I just copied it into Claude's preferences field; we'll see if it helps. But for the broader question: I see sycophancy as a double‑edged sword.
• On one side, the Dunning–Kruger effect shows that unwarranted praise can reinforce over‑confidence and bad decisions.
• On the other, chronic imposter syndrome is real—many people underrate their own work and stall out. A bit of positive affect from an LLM can nudge them past that block.
So the issue isn't "praise = bad" but dose and context.
Ideally the model would:
1. mirror the user's confidence level (low → encourage, high → challenge), and
2. surface arguments for and against rather than blanket approval.
That's why I prefer treating politeness/enthusiasm as a tunable parameter—just like temperature or verbosity—rather than something to abolish.
In general, these all-or-nothing, catastrophizing narratives in AI (like in most places) often hide very interesting questions.