I wonder if they’re related.
I was about 45 before I realised that when people said “in my mind’s eye” it was substantially more than a metaphor.
And it wasn’t until about a year later that I realised that I also didn’t have what ordinary people refer to as an inner monologue.
Realising that I had both aphantasia and anendophasia was quite a shock, but it has never felt to me like I was missing anything.
For images I literally have nothing “pictorial” or “graphical” at all, but concepts and relationships are “vivid”. And for the inner monologue, there’s no autonomic voice at all, but if I concentrate, in the same way that someone might “consciously breathe”, I can kinda sorta trigger something.
Interestingly, in periods where I have meditated for >20min per day for consecutive weeks, I can trigger what I refer to as “flyover mode” which is like a literal landscape flyover that feels like a 4K screensaver. But this is _rare_ and requires a huge amount of effort.
Weird, eh!?
I don’t know if it’s a blessing or a curse, but I generally have extremely vivid dreams. I also have a very active inner monologue and can do the whole “picture a red apple on a green lawn with 3 yellow dots on it, pick it up and rotate it and track the dots” kind of thing. As it happens I’m a visual learner and a voracious reader.
I’ve never thought about whether any of this might be connected. I should do some research.
For real though, who exactly is this for? People who want to see an AI's take on the real dream they just had?
One day, though, we will have the technology to actually record dreams. See “Reconstructing high-resolution visual perceptual images from human intracranial electrocorticography signals” (2025): https://pubmed.ncbi.nlm.nih.gov/40876481/
> Reconstruction of visual perception from brain signals has emerged as a promising research topic. Electrocorticography (ECoG) is a kind of high-quality intracranial signal with good spatiotemporal resolution that offers some new opportunities. However, according to our knowledge, there are no studies to reconstruct the perceived images from human ECoG signals at present.
> We have conducted the pioneering work and developed a novel pipeline that integrates Talairach coordinate alignment masked autoencoders (TA-MAE) with denoising diffusion probabilistic models. Our approach exploits the spatiotemporal dynamics of human ECoG signals, enabling the restoration of details in high-resolution
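To make the quoted pipeline concrete: the general recipe is “encode the brain signal into a latent, then use that latent to condition a diffusion model that denoises its way to an image.” Below is a deliberately toy PyTorch sketch of that recipe only. Every module, shape, and hyperparameter here is an assumption invented for illustration; the paper’s actual TA-MAE encoder and diffusion architecture are far more involved.

```python
# Toy sketch of "signal encoder + conditional diffusion" wiring.
# NOT the cited paper's TA-MAE pipeline; all names/shapes are illustrative.
import torch
import torch.nn as nn

class ECoGEncoder(nn.Module):
    """Stand-in for a masked-autoencoder-style encoder over ECoG channels."""
    def __init__(self, n_channels=64, n_timesteps=256, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                               # (B, C, T) -> (B, C*T)
            nn.Linear(n_channels * n_timesteps, 1024),
            nn.GELU(),
            nn.Linear(1024, latent_dim),
        )

    def forward(self, ecog):
        return self.net(ecog)                           # (B, latent_dim)

class ConditionalDenoiser(nn.Module):
    """Toy DDPM denoiser: predicts the noise in an image, conditioned on
    the diffusion timestep and the brain-signal latent."""
    def __init__(self, img_dim=3 * 32 * 32, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + latent_dim + 1, 1024),
            nn.GELU(),
            nn.Linear(1024, img_dim),
        )

    def forward(self, noisy_img, t, latent):
        # Concatenate flattened noisy image, normalized timestep, and latent.
        x = torch.cat([noisy_img, t[:, None].float() / 1000.0, latent], dim=1)
        return self.net(x)

def train_step(encoder, denoiser, ecog, images, alphas_cumprod):
    """One DDPM training step: noise the image at a random timestep,
    then ask the denoiser to predict that noise given the ECoG latent."""
    b = images.shape[0]
    latent = encoder(ecog)
    t = torch.randint(0, len(alphas_cumprod), (b,))
    a = alphas_cumprod[t][:, None]                      # (B, 1)
    noise = torch.randn_like(images)
    noisy = a.sqrt() * images + (1 - a).sqrt() * noise  # forward diffusion
    pred = denoiser(noisy, t, latent)
    return nn.functional.mse_loss(pred, noise)

# Smoke test with random tensors standing in for real ECoG/image pairs.
encoder, denoiser = ECoGEncoder(), ConditionalDenoiser()
betas = torch.linspace(1e-4, 0.02, 1000)                # standard DDPM schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
ecog = torch.randn(4, 64, 256)                          # (B, channels, time)
images = torch.randn(4, 3 * 32 * 32)                    # flattened 32x32 RGB
print(train_step(encoder, denoiser, ecog, images, alphas_cumprod).item())
```

The interesting part is just the wiring: the image decoder never sees the raw ECoG, only a learned latent, which is what lets the diffusion side stay a mostly standard conditional generator.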
This idea is also pretty obvious. Who hasn't tried describing a dream to an AI, only to be disappointed with the slop it generates? It never looks anything like what was imagined. Most of the imagery in my mind is unique to things I've experienced in my real life, offline. AI training data is biased far away from anything that candid, and if the images are wrong, they can't convey the emotions the words did.
This problem occurs again and again with damn near everything AI generates. All emotion and style get replaced with the stale, cold feeling you only get from stock photos, trashy low-effort music, and corporate speak.