17 points by Garbage 6 hours ago | 5 comments
  • dinkleberg 4 hours ago
    I'd be surprised to hear that lots of people don't do just this. As soon as the memory features came out I gave them a quick try and turned them off almost immediately.

    I don't want to be held accountable to all of my previous ideas. I want each conversation to start fresh with the context that I provide. If I am exploring some library in one language stack and then I later want to look into something completely different, I don't want the conversation polluted by what it thinks I want based on the previous discussion.

    I suppose for those who use it as a companion the memory is a core element. But when used as a tool it gives a significantly worse experience IME.

    • prawn 4 hours ago
      I always use it without logging in ‡, and make a point of starting each fresh chat in a fresh window so its perspective in one conversation isn't tainted by some random line of thinking from before. Can't stand when it makes assumptions about my intentions because I mentioned having kids before, or had talked about a different property or venture or whatever.

      ‡ I'd tried logging in recently and it immediately started nagging me to upgrade. Went back to using it without an account and, bizarrely, the situation is far better.

      • OutOfHere an hour ago
        It is not necessary to log out for this. The memory feature can be permanently disabled via an easily found toggle.
        • prawn an hour ago
          Right, but when I tried that, I got constant nags to upgrade. If I stay logged out and use it anonymously, it doesn't nag me and seems to work well enough.
    • 0_____0 4 hours ago
      Yeah, account-level memory is a real mixed bag. I do like Anthropic's project-scoped memory, though; that actually is useful, because you get to decide which chats belong to a given problem space.
  • sshine 18 minutes ago
    Memories inside projects are a boon: if you can assume that all your questions are related, and you instruct the engine what memories to store and how to store them (short single statements), you get a powerful tool out of it. I use it for habit tracking. The system prompt takes the style of Atomic Habits. It will remember exactly what habits I'm trying to add, what my preferences are for apps, when I sleep, and what other goals I have unrelated to my current query.
  • sshine 23 minutes ago
    Concrete memories in a generic context suck:

    Every time I ask Claude a Kubernetes question, it relates it to that one time I was deploying Kubernetes on VMs. It's not helpful!

    Every time I ask Claude a question related to projects, it starts mentioning where I work, as if that helps.

    Asking it to forget these concrete memories sometimes results in remembering to not mention them.

    Thoroughly useless.

    I started out disabling it because it's creepy. I tried it out, and it just doesn't work well.

  • dSebastien an hour ago
    I'd rather keep memory and ideas in Markdown and pick what to include in the context than rely on platform-specific memory features.
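    That workflow can be sketched in a few lines: keep notes as Markdown files and concatenate only the ones relevant to the current task into a context block to paste at the top of a fresh chat. The directory layout, file names, and function name here are hypothetical, not anything the commenter described:

    ```python
    from pathlib import Path

    def build_context(notes_dir: str, picks: list[str]) -> str:
        """Concatenate hand-picked Markdown notes into one context block.

        notes_dir and the file names in `picks` are hypothetical examples;
        only the notes you explicitly pick end up in the prompt.
        """
        parts = []
        for name in picks:
            path = Path(notes_dir) / name
            # Each note gets a heading so the model can tell the sources apart.
            parts.append(f"## {name}\n{path.read_text()}")
        return "\n\n".join(parts)
    ```

    For example, `build_context("notes", ["k8s.md", "preferences.md"])` would yield a single string to paste into a new conversation, keeping everything else out of scope.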
  • OutOfHere an hour ago
    Leaving memory enabled risks introducing bias and baggage from past chats and experiences that are better off left behind. However, it then becomes my responsibility to tell ChatGPT everything it needs to know in every chat.

    Note that there are other sources of bias besides memory. For example, for each Custom GPT, OpenAI staff apparently type up and insert custom instructions that apply at a higher priority and are often detrimental to the output, thereby sabotaging the Custom GPT. It would be better if they didn't do any of it. I have had multiple Custom GPTs that initially worked flawlessly get sabotaged by OpenAI in this way.