To validate what’s possible, we trained a 400M parameter embedding model—and it outperformed Gemini embeddings on some similarity search tasks.
For example, given the query "Cereal for my 4-year-old who skips breakfast", most search engines return generic cereal options. Our model understands the intent, surfacing high-protein, kid-friendly cereals designed for picky eaters.
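The retrieval step behind this kind of search can be sketched in a few lines: embed the query and every product, then rank products by cosine similarity. The vectors and product names below are made up for illustration (a real system would call the model to produce learned dense embeddings), but the ranking logic is the standard one:

```python
import math

# Toy 3-d vectors standing in for real model embeddings (illustrative only;
# an actual embedding model produces high-dimensional learned vectors).
CATALOG = {
    "classic corn flakes": [0.9, 0.1, 0.0],
    "high-protein kids cereal": [0.2, 0.8, 0.6],
    "sugary marshmallow cereal": [0.7, 0.2, 0.1],
}

def cosine(a, b):
    # Cosine similarity: dot product of the vectors over the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, catalog, top_k=1):
    # Rank catalog items by similarity to the query embedding, highest first.
    ranked = sorted(catalog.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Hypothetical embedding of "Cereal for my 4-year-old who skips breakfast" —
# close to the kid-friendly, high-protein region of this toy space.
query = [0.1, 0.9, 0.5]
print(search(query, CATALOG))  # → ['high-protein kids cereal']
```

The intelligence lives entirely in the embedding model: if it maps "skips breakfast" near "high-protein" and "picky eater" products, plain cosine ranking surfaces the right items without any query rewriting.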
This raises some big questions:
- Do we really need massive models for great search?
- Are users ready to trust search bars to think with them?
- Is search moving toward full conversational interaction, or will people be too lazy?
Retailers like Costco already support searches like "Nutritious greens tastier than kale." The trend is shifting.
Try our model here: https://vectorpath.ai/model
We’d love to hear your thoughts, especially if you’re exploring conversational search for your product. Let’s discuss!