> vision-based search for comprehensive document understanding
It's not clear to me what this means. Is it just vector embeddings for each image in every document via a CLIP-like model?
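For what it's worth, "vision-based search" in these systems usually does mean something like that: page images and text queries are mapped into a shared embedding space by a CLIP-like model, and search is nearest-neighbor by cosine similarity. A minimal sketch of that retrieval step, with the encoder stubbed out (a real system would call an actual CLIP-style model here; the `embed` function and the item names below are hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
_FAKE_SPACE: dict[str, np.ndarray] = {}  # stand-in for a trained encoder

def embed(item: str) -> np.ndarray:
    """Stub encoder: returns one stable unit vector per item.
    A real implementation would run a CLIP-like model on the
    page image (or the query text) instead."""
    if item not in _FAKE_SPACE:
        v = rng.normal(size=512)
        _FAKE_SPACE[item] = v / np.linalg.norm(v)
    return _FAKE_SPACE[item]

def search(query: str, index: dict[str, np.ndarray], k: int = 3) -> list[str]:
    """Rank indexed page images by cosine similarity to the query."""
    q = embed(query)
    scores = {name: float(v @ q) for name, v in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Index some page images (identified by name here), then query the index.
index = {name: embed(name) for name in ["page-1.png", "page-2.png", "page-3.png"]}
top = search("page-2.png", index, k=1)
```

The point of the shared space is that a *text* query can retrieve *image* pages directly, without OCR in the retrieval path.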
In addition, I'd be curious about the rationale behind using such a plethora of databases. The docs on running it in production spin them all up, so I assume they're all required. For instance, what are the trade-offs between using Postgres with something like pg_search (for BM25 support, which vanilla Postgres FTS doesn't have) versus running both Postgres and Elasticsearch?
The docs are also very minimal; I'd have loved to see at least one usage example.
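For context on the BM25 point: vanilla Postgres FTS ranks with `ts_rank`, which scores on term frequency within the document but uses no corpus-wide statistics, whereas BM25 (what pg_search and Elasticsearch implement) also weights rare terms via IDF and normalizes for document length. A toy Okapi BM25 scorer, purely illustrative (the documents below are made up):

```python
import math
from collections import Counter

def bm25_scores(query: str, docs: list[str], k1: float = 1.2, b: float = 0.75) -> list[float]:
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n
    # Document frequency: in how many docs does each term appear?
    df = Counter(term for d in tokenized for term in set(d))
    scores = []
    for d in tokenized:
        tf = Counter(d)
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            # Rare terms get a higher IDF weight...
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            # ...and term frequency saturates, normalized by doc length.
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(s)
    return scores

docs = [
    "postgres full text search ranks with ts_rank",
    "bm25 weights rare terms and normalizes for document length",
    "redis is an in-memory data store",
]
scores = bm25_scores("bm25 document length", docs)
```

The trade-off question stands, though: pg_search gives you BM25 inside one Postgres instance, while Elasticsearch buys distributed indexing and aggregations at the cost of a second system to operate.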
> bash ./02-install-database.sh # Deploys PostgreSQL, Redis, Qdrant, Elasticsearch
Is this built on top of all of these databases? I'm just trying to understand.
geez
Sorry, but how much SHIT is it going to take to make AI good?