There is virtually no latency, because the inference is done locally, on a server at the customer's site with a big GPU.
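Roughly what I mean, as a sketch: the only round trip is over the local network to that GPU box, so there's no WAN hop to a cloud API. The endpoint URL and payload shape below are hypothetical placeholders, not any specific product's API.

    # Minimal sketch: time a completion against a local inference server.
    # The endpoint and JSON payload are made up for illustration.
    import time
    import requests

    LOCAL_ENDPOINT = "http://localhost:8000/generate"  # hypothetical local server

    def timed_completion(prompt: str) -> tuple[str, float]:
        """Send a prompt to the local model and report round-trip time."""
        start = time.perf_counter()
        resp = requests.post(LOCAL_ENDPOINT, json={"prompt": prompt}, timeout=30)
        resp.raise_for_status()
        elapsed = time.perf_counter() - start
        return resp.json().get("text", ""), elapsed

    text, elapsed = timed_completion("Suggest the next form field values")
    print(f"completion in {elapsed * 1000:.0f} ms (no WAN round trip)")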
The idea of having the elements anticipated, lowering the cognitive load of searching a giant drop-down list, scratches a good place in my brain. I instantly recognized it as a much better experience than what we have on the web.
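The core trick, as I understand it, is simple: rank the options by a predicted likelihood for the current context instead of forcing the user to scan a flat alphabetical list. Here's a toy sketch; predict_scores is a stand-in assumption for whatever model actually produces the predictions.

    def predict_scores(options, context):
        """Stand-in for a model: recently used options score highest."""
        recent = context.get("recent", [])
        return {opt: (len(recent) - recent.index(opt)) if opt in recent else 0.0
                for opt in options}

    def anticipated_dropdown(options, context, top_k=3):
        """Surface the model's top-k guesses first, the rest alphabetical."""
        scores = predict_scores(options, context)
        ranked = sorted(options, key=lambda o: -scores[o])
        top = ranked[:top_k]
        rest = sorted(o for o in options if o not in top)
        return top + rest

    options = ["Alameda", "Berlin", "Chicago", "Osaka", "Zurich"]
    context = {"recent": ["Osaka", "Zurich"]}
    print(anticipated_dropdown(options, context))
    # -> ['Osaka', 'Zurich', 'Alameda', 'Berlin', 'Chicago']

The likely picks land at the top, so the common case is one glance and one click instead of a scroll through the whole list.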
I think something like this is the long-term future of personal computing. Maybe I'm way off, but this is the type of computing I want to be doing: highly customized to my exact flow, highly malleable to improvement and feedback.