Main Content: We propose an observational method to dynamically extract the "Floating Equilibrium Point (FEP)" hidden behind the stochastic token-generation process. By treating the internal state transitions of Large Language Models (LLMs) as a system mathematically isomorphic to the differential equation of a first-order RC low-pass filter, we can separate the essential semantic trajectory from statistical fluctuations (sampling noise).
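As a minimal sketch of what the filtering step could look like (not the full framework), the discrete form of the first-order equation τ·dy/dt = x(t) − y(t) can be applied to a per-token trajectory of internal states; the array shapes, the time constant, and the toy data below are illustrative assumptions rather than parameters taken from the proposal.

```python
import numpy as np

def rc_lowpass(states: np.ndarray, tau: float, dt: float = 1.0) -> np.ndarray:
    """Discrete first-order RC low-pass filter: y[t] = y[t-1] + (dt/tau) * (x[t] - y[t-1]).

    `states` is a (T, d) per-token trajectory of internal states (e.g. hidden
    vectors or logits). The smoothed output estimates the slowly varying
    equilibrium component; the residual is treated as sampling noise.
    """
    alpha = dt / tau                      # filter coefficient (assumes tau >= dt)
    y = np.empty_like(states, dtype=float)
    y[0] = states[0]
    for t in range(1, len(states)):
        y[t] = y[t - 1] + alpha * (states[t] - y[t - 1])
    return y

# Example: separate a slow drift (candidate equilibrium trajectory) from noise.
rng = np.random.default_rng(0)
T, d = 200, 8
drift = np.cumsum(rng.normal(0.0, 0.01, size=(T, d)), axis=0)  # slow semantic drift
noisy = drift + rng.normal(0.0, 0.3, size=(T, d))              # plus sampling noise
smoothed = rc_lowpass(noisy, tau=10.0)
residual = noisy - smoothed                                    # fluctuation component
```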
Using this framework, we can quantitatively measure these internal states and have observed phenomena such as Preference Mode Collapse (PMC) and Context Rigidity in real time.
To ground this diagnostic technique, we define "Information Viscosity," a resistance-like quantity derived from the token rejection rate. We have formalized these behaviors into a complete mathematical framework for your review.
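The text does not fix the functional form of either quantity, so the following is a hypothetical sketch only: it assumes the rejection rate is the fraction of candidate tokens discarded by the sampler at each step (here, by a nucleus/top-p cutoff) and maps it monotonically to a resistance-like scalar. The names `rejection_rate` and `information_viscosity` and the mapping r / (1 − r) are illustrative choices, not the framework's definitions.

```python
import numpy as np

def rejection_rate(probs: np.ndarray, top_p: float = 0.9) -> float:
    """Fraction of vocabulary tokens rejected by a nucleus (top-p) cutoff.

    `probs` is the model's next-token probability distribution for one step.
    """
    order = np.argsort(probs)[::-1]                 # sort probabilities descending
    cumulative = np.cumsum(probs[order])
    kept = int(np.searchsorted(cumulative, top_p)) + 1   # smallest set reaching top_p
    kept = min(kept, probs.size)
    return 1.0 - kept / probs.size

def information_viscosity(rates, eps: float = 1e-6) -> float:
    """Map per-step rejection rates to a single resistance-like scalar.

    A simple monotone choice: the mean of r / (1 - r), which grows without
    bound as the sampler rejects nearly everything (high "viscosity") and
    vanishes when nothing is rejected.
    """
    r = np.clip(np.asarray(rates, dtype=float), 0.0, 1.0 - eps)
    return float(np.mean(r / (1.0 - r)))

# Example with a toy, sharply peaked distribution over a 6-token vocabulary.
probs = np.array([0.6, 0.2, 0.1, 0.05, 0.03, 0.02])
r = rejection_rate(probs, top_p=0.9)        # fraction of tokens outside the nucleus
eta = information_viscosity([r, r, r])      # aggregate over several steps
```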