There's an example in the pgvector-python repo that uses a cross-encoder model for re-ranking: https://github.com/pgvector/pgvector-python/blob/master/exam...
You can even use a language model for re-ranking, though it may not be as good as a model trained specifically for re-ranking purposes.
In our Azure RAG approaches, we use the AI Search semantic ranker, which uses the same model that Bing uses for re-ranking search results.
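For anyone who hasn't seen it, a cross-encoder re-rank is only a few lines with sentence-transformers. A minimal sketch (the MS MARCO MiniLM model and the candidate list are just illustrative, not necessarily what either of the above uses):

```python
from sentence_transformers import CrossEncoder

# Example cross-encoder; any MS MARCO-style re-ranking model works similarly.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "what did I do with my keys"
candidates = [  # e.g. the top-k results from a first-pass vector search
    "Where did I put my wallet",
    "I left them in my pocket",
    "The garbage truck is backing up",
]

# Score each (query, candidate) pair jointly, then sort by score.
scores = reranker.predict([[query, c] for c in candidates])
for text, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {text}")
```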
This was to avoid the problem where, when we only had vectors for "valid" sounds and the input didn't match anything in the training set (a foreign language, a garbage truck backing up, a dog barking, ...), the model would still return some word as the closest match (there's always a vector with the highest similarity), and frequently with high confidence. Even though the actual input didn't match anything in the training set, it would be "enough" closer to one known vector than to any of the others that it would pass most threshold tests, leading to a lot of false positives.
Disclaimer, I don't know shit.
I do embeddings on arbitrary websites at runtime, and had a persistent problem with the last chunk of a web page matching more things. In retrospect, it's obvious that the smaller the chunk was, the more it matched everything.
Full details: MSMARCO MiniLM L6V3, inference via ONNX on iOS/web/android/macos/windows/linux
- Use a large context LLM.
- Segment documents to 25% of the context window or so.
- With RAG, retrieve fragments from all the documents, then do a first-pass semantic re-ranking by sending something like this to the LLM (a rough sketch in code follows below):
I have a set of documents I can show you to answer the user question "$QUESTION". Please tell me, from the titles and best-matching fragments, which document IDs you want to see to better reply:
[Document ID 0]: "Some title / synopsis. From page 100 to 200"
... best matching fragment of document 0...
... second best fragment ...
[Document ID 1]: "Some title / synopsis. From page 200 to 300"
... fragments ...
LLM output: show me 3, 5, 13.
New query, with the full documents attached, filling up to 75% of the context window.
"Based on the attached documents in this chat, reply to $QUESTION".
Perhaps one could represent word embeddings as vertices, rather than vectors? Suppose you find "Python" and "scripting" in the same context. You draw a weighted edge between them. If you find the same words again you reduce the weight of the edge. Then to compute the similarity between two words, just compute the weighted shortest path between their vertices. You could extend it to pair-wise sentence similarity using Steiner trees. Of course it would be much slower than cosine similarity, but probably also much more useful.
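A toy sketch of that idea with networkx (the corpus and the weight-update rule are made up; "similarity" here is just the weighted shortest-path distance, so smaller means closer):

```python
import networkx as nx
from itertools import combinations

# Toy corpus: each "context" is a set of co-occurring words.
contexts = [
    {"python", "scripting", "language"},
    {"python", "scripting"},
    {"java", "language", "compiler"},
]

G = nx.Graph()
for ctx in contexts:
    for a, b in combinations(sorted(ctx), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] *= 0.5   # seen together again: pull them closer
        else:
            G.add_edge(a, b, weight=1.0)

print(nx.shortest_path_length(G, "python", "language", weight="weight"))
print(nx.shortest_path_length(G, "python", "compiler", weight="weight"))
```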
It is true that cosine similarity is unhelpful if you expect it to be a distance measure.
[0,0,1] and [0,1,0] are orthogonal (cosine 0) but have euclidean distance √2, and 1/3 of vector elements are identical.
It is better if embeddings encode also angles, absolute and relative distances in some meaningful way. Testing only cosine ignores all distances.
But if random embeddings are gaussian, they are distributed on a "cloud" around the hypersphere, so they are not equal.
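For the concrete vectors above, a quick numpy check (nothing beyond numpy assumed):

```python
import numpy as np

a, b = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])

cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))   # 0.0 (orthogonal)
euclidean = np.linalg.norm(a - b)                           # sqrt(2) ~ 1.414
print(cosine, euclidean)
```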
True, and quite funny. This is an excellent, well-written and very informative article, but this part is wrongly worded:
> Let's have a task that looks simple, a simple quest from our everyday life: "What did I do with my keys?" [and compare it to other notes using cosine similarity]: "Where did I put my wallet" [=> 0.6], "I left them in my pocket" [=> 0.5]
> The best approach is to directly use LLM query to compare two entries, [along the lines of]: "Is {sentence_a} similar to {sentence_b}?"
(bits in brackets paraphrased for quoting convenience)
This will result in the same, or "worse" result, as any LLM will respond that "Where did I put my wallet" is very similar to "What did I do with my keys?", while "I left them in my pocket" is completely dissimilar.
I'm actually not sure what the author was trying to get at here? You could ask an LLM 'is that sentence a plausible answer to the question' and then it would work; but if you ask for pure 'likeness', it seems that in many cases, LLMs' responses will be close to cosine similarity.
In any case, I see how the example "Is {sentence_a} similar to {sentence_b}?" breaks the flow. The original example was:
{question}
# A
{sentence_A}
# B
{sentence_B}
As I now see, I overzealously simplified that. Thank you for your remark! I edited the article. Let me know if it is clearer for you now.
> The most powerful approach
> The best approach is to directly use LLM query to compare two entries.
Cross encoders are a solution I’m quite fond of, high performing and much faster. I recently put an STS cross encoder up on huggingface based on ModernBERT that performs very well.
An STS cross encoder is a model that uses the CrossEncoder class to predict the semantic similarity between two sentences. STS stands for Semantic Textual Similarity.
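In case it's useful, usage is the standard sentence-transformers CrossEncoder API. A minimal sketch, assuming the ModernCE model linked below:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("dleemiller/ModernCE-base-sts")
# Returns a semantic-similarity score per sentence pair.
scores = model.predict([["What did I do with my keys?", "Where did I put my wallet?"]])
print(scores)
```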
That said, for many applications, we may be perfectly fine with some version of a fine-tuned BERT-like model rather than using the newest AGI-like SoTA just to check whether two products are vaguely similar and whether it is worth putting one in the suggestions for the other.
https://github.com/dleemiller/WordLlama
There’s also model2vec doing some cool things in that area, so it’s nice to see recent progress in 2024/25 on simple static embedding models.
On the computational performance note: the cross encoder I trained using ModernBERT base performs on par with the roberta large model while being about 7-8x faster. Still way more complex than static embeddings, but on benchmark datasets, much more capable too.
https://huggingface.co/dleemiller/ModernCE-base-sts
There’s also the large model, which performs a bit better.
Frankly, the LLM approach the author talks about in the end doesn’t either. What does “similar” mean here?
Given inputs A, B, and C, you have to decide whether A and B are more similar or A and C are more similar. The algorithm (or architecture, depending on how you look at it) can’t do that for you. Dual encoder, cross encoder, bag of words, it doesn’t matter.
That’s not practical for a lot of applications, but it can do it.
For the cross encoder I trained, I have a pretty good idea what similar means because I created a semi-synthetic dataset that has variants based on 4 types of similarity.
Perhaps not a perfect solution when you’re really trying to split hairs about what is more similar between texts that are all pretty similar, but not all applications need that level of specificity either.
Cosine similarity of two encrypted images would be useless; decrypt them and it becomes a bit more useful.
The strings are 'not the territory', in other words; the territory is the semantic constructs cryptically encoded into those strings. You want the similarity of the constructs, not the strings.
I think what it says is, under "Is it the right kind of similarity?":
> Consider books.
> For a literary critic, similarity might mean sharing thematic elements. For a librarian, it's about genre classification.
> For a reader, it's about emotions it evokes. For a typesetter, it's page count and format.
> Each perspective is valid, yet cosine similarity smashes all these nuanced views into a single number — with confidence and an illusion of objectivity.
Or even better, as the OP suggests, standardise the format of the chunks and generate a hypothetical answer in the same format.
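A minimal sketch of that hypothetical-answer trick, using the OpenAI client purely as an example (the model names are placeholders; any LLM plus any embedding model works the same way):

```python
from openai import OpenAI

client = OpenAI()

def hypothetical_answer(question: str) -> str:
    # Generate a fake answer in the same format as the stored chunks,
    # then embed *that* instead of the raw question.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Write a short note that would answer: {question}"}],
    )
    return resp.choices[0].message.content

query_embedding = client.embeddings.create(
    model="text-embedding-3-small",  # placeholder embedding model
    input=hypothetical_answer("What did I do with my keys?"),
).data[0].embedding
```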
The article is right to point out that cosine similarity is more of an accidental property of data than anything in most cases (but IIUC there are newer embedding models that are deliberately trained for cosine similarity as a similarity measure). The author's bootstrapping approach is interesting especially because of its ability to map relations other than the identity, but it seems like more of a computational optimization or shortcut (you could just run inference on the input) than a way to correlate unstructured data.
After trying out some RAG approaches and becoming disillusioned pretty quickly I think we need to solve the problem much deeper by structuring models so that they can perform RAG during training. Prompting typical LLMs with RAG gives them input that is dissimilar from their training data and relies on heuristics (like the data format) and thresholds (like topK) that live outside the model itself. We could probably greatly improve this by having models define the embeddings, formats, and retrieval processes (ie learn its own multi-step or "agentic" RAG while it learns everything else) that best help them model their training data.
I'm not an AI researcher though and I assume the real problem is that getting the right structure to train properly/efficiently is rather difficult.
> Has the model ever seen cosine similarity?
Yes - most of the time, at least for deep-learning-based semantic search. E.g. for semantic search of text, the majority are using SentenceTransformers [1] models, which have been trained to use cosine similarity. Or e.g. for vector representations of images, people are using models like CLIP [2], which has again been trained to use cosine similarity. (Cosine similarity is used in the training loss, so the whole model is fundamentally "tuned" for cosine similarity.)
Articles like these cause confusion, e.g. I've come across people saying: "You shouldn't use cosine similarity", when they've seen SentenceTransformers being used, and linking articles like these, when in fact you very much should be using cosine similarity with those models.
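For example (a minimal sketch; all-MiniLM-L6-v2 is just one such SentenceTransformers model):

```python
from sentence_transformers import SentenceTransformer, util

# This family of models is trained with a cosine-similarity objective,
# so cosine is the intended metric at query time.
model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(["What did I do with my keys?",
                    "Where did I put my wallet?",
                    "I left them in my pocket"])
print(util.cos_sim(emb[0], emb[1:]))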
Had the article been framed as "Improving RAG Retrieval Beyond Basic Cosine Similarity," its insights would be aligned with its actual content.
I wonder though, for cases where you genuinely are trying to match like to like, rather than question to answer, is vector embeddings with cosine similarity still the way to go?
My understanding, as stated in TFA, is that if you put careful thought (and prompt engineering) into the content before vectorisation, you can get quite far with just cosine similarity. But how far has "tool use" come along? Could it be better in some scenarios?
One fundamental problem of cosine similarity is that it works on surface level. For example, "5+5" won't embed close to "10". Or "The 5th word of this phrase" won't be similar to "this".
If there is any implicit knowledge, it won't be captured by simple cosine similarity; that is why we need to draw out those implicit deductions before embedding. Hence my approach of pre-embedding expansion of chunk semantic information.
I basically treat text like code, and have to "run the code" to get its meaning unpacked.
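Something like this, roughly (the prompt and the helper are hypothetical, just to illustrate the pre-embedding expansion idea):

```python
def expand_for_embedding(chunk: str, llm) -> str:
    # Hypothetical expansion step: ask the LLM to spell out what the chunk
    # implies before embedding it, so "5+5" also mentions "10", etc.
    prompt = ("Rewrite the following text, making explicit any facts, results, "
              "or topics it only implies:\n\n" + chunk)
    return chunk + "\n\n" + llm(prompt)

# The expanded text is what gets embedded and indexed:
# index.add(embed(expand_for_embedding(chunk, llm)))
```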
What I mean is, how can you derive topics from a chunk that refers to them only obliquely?
If you haven’t seen it, there’s a lovely overview of the idea in one of the SpaCy blog posts: https://explosion.ai/blog/coref
Interestingly, the opposite conclusion is drawn in the TFA (the article says LLMs are quite good at identifying 'like' words, or, at least, better than the cosine method, which admittedly isn't a high bar).
[0] Admittedly, some are a little obscure, but they're in famous publications by famous authors, so I'd have expected an LLM to have 'seen' them before.
The problem you're encountering is not the model being unable to determine whether a quote it knows is responsive to your prompt; it's a problem with recall in the model (which is not generally a task it's trained for). So it's not a similarity problem, it's a recall problem.
When LLMs are trained on a particular document, they don't save a perfect copy that they can somehow fish out later. They use it to update their weights via backpropagation and are evaluated on their "sentence completion" task during the main phase of training, or on a prompt-response eval set during instruction fine-tuning. Unless your quote is in that set or is part of the eval for the sentence completion task during the main training, there's no reason to suppose the LLM will particularly be able to recall it, as it's not being trained to do that.
So what happens instead is the results of training on your quote update the weights in the model and that maybe somehow in some way that is quite mysterious results in some ability to recall it later but it's not a task it's evaluated on or trained for, so it's not surprising it's not great at it and in fact it's a wonder it can do it at all.
p.s. If you want to evaluate whether it is struggling with similarity, look up a quote and ask a model whether or not it's responsive to a given question. I.e. give it a prompt like this:
I want a quote about someone living the highlife during the 1960s. Do you think this quote by George Best does the job? “I spent a lot of money on booze, birds, and fast cars. The rest I just squandered.”
This is also a case where something like perplexity might yield better results because it would try to find authoritative sources and then use the LLM to evaluate what it finds instead of relying on the LLM to have perfect recall for the quote. Which of course can fail in the exact same way my own brain is failing me (mangling words, mixing up people, etc.). It's something that works surprisingly well. I pay for Chat GPT and I don't pay for perplexity. But I find myself using that more and more.
Another approach if you're working with a local model is to ask for a summary of one word and then work with the resulting logits (wish I could find the article/paper that introduced this). You could compare similarity by just seeing how many shared words are in the top 500 of two queries, for example.
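A rough sketch of that logits trick with a local model (gpt2 here is only a stand-in; in practice you'd use something more capable, and the top-k cutoff is arbitrary):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def top_tokens(text: str, k: int = 500) -> set:
    prompt = f"{text}\n\nSummarize the above in one word:"
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]        # next-token logits
    return set(torch.topk(logits, k).indices.tolist())

# Crude similarity: overlap between the two top-k token sets.
a = top_tokens("What did I do with my keys?")
b = top_tokens("Where did I put my wallet?")
print(len(a & b) / 500)
```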
This blog post stemmed from my frustration that people use cosine distance without a second thought. In virtually all tutorials on vector databases, cosine distance is treated as if it were some obvious ground truth.
When questioned about cosine similarity, even seasoned data scientists will start talking about "the curse of dimensionality" or some geometric interpretations, but forget that (more often than not) they are working with a hack.
See https://www.sbert.net/examples/applications/cross-encoder/RE...
Something like: "Is {sentence_a} a plausible answer to {sentence_b}? Respond only with a single yes/no token" and then look at the probabilities of those.
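As a sketch with a local model (gpt2 as a stand-in; with a hosted API you'd read the yes/no token logprobs instead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def p_yes(question: str, answer: str) -> float:
    prompt = (f'Is "{answer}" a plausible answer to "{question}"? '
              "Respond only with yes or no: ")
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]        # next-token logits
    yes_id = tok(" yes", add_special_tokens=False).input_ids[0]
    no_id = tok(" no", add_special_tokens=False).input_ids[0]
    # Probability of "yes", renormalized over just the yes/no tokens.
    return torch.softmax(logits[[yes_id, no_id]], dim=0)[0].item()

print(p_yes("What did I do with my keys?", "I left them in my pocket"))
```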
Would it be beneficial to use dimensionality reduction instead of truncating? Or does “truncation” mean dimensionality reduction in this context?
Basically, do not carelessly use any similarity metric.
(The catch is that during training logistic regression is done on the word and context vectors, but they have a high degree of similarity. People would even sum the context vectors and word vectors or train with word and context vectors being the same vectors without much loss.)
The fundamental issue here is comparing apples to oranges: questions and answers.
https://openai.com/index/new-embedding-models-and-api-update...
I also find this method powerful. I see more and more software getting outsourced to LLM judgements/prompts.
Embedding models usually have fewer parameters than the LLMs, and once we index the documents, their retrieval is also pretty fast. Using LLM as a judge makes sense, but only on a limited scale.
Cosine similarity literally comes from solving the geometric formula for the dot product of two Euclidean vectors to find cos theta, so of course it's the same. That is to say
a . b = |a||b| cos theta
Where a and b are the two vectors and theta is the angle between them. Therefore
cos theta = (a . b)/(|a||b|)
TADA! cosine similarity.[1]
If the vectors are unit vectors (he calls this "normalization" in the article) then |a||b| = 1 so of course cos theta = a . b in that case.
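A quick numpy sanity check of that identity:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# With unit ("normalized") vectors the denominators are 1,
# so the plain dot product already equals cos theta:
a_hat, b_hat = a / np.linalg.norm(a), b / np.linalg.norm(b)
print(cos_theta, a_hat @ b_hat)   # identical values
```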
If you don't understand this, I really recommend you invest an afternoon in something like khan academy's "vectors" track from their precalculus syllabus. Understanding the underlying basic maths will really pay off in the long run.[2]
[1] If you have ever been confused by why it's called cosine similarity when the formula doesn't include a cosine that's why. The formula gives you cos theta. You would take the arccosine if you wanted to get theta but if you're just using it for similarity you may as well not bother to compute the angle and just use cos theta itself.
[2] Although ML people are just going to keep on misusing the word "tensor" to refer to a mere multidimensional array. I think that ship has sailed and I just need to give up on that now but there's still hope that people at least understand vectors when they work on this stuff. Here's an amazing explanation of what a tensor actually is for anyone who is interested https://www.youtube.com/watch?v=f5liqUk0ZTw
First, I am not sure whom you refer to, as (I hope) everyone who uses cosine similarity has seen a . b = |a||b| cos theta. Its very name reads "cosine (of the angle between vectors) used as a similarity measure".
Second, cos theta = (a . b)/(|a||b|) is pretty much how you define the angle between vectors, when working in Hilbert spaces.
Third, you pick a very narrow view of tensor when it is based on spatial coordinates (and so you get covariant and contravariant indices). But even in physics, this notation is broader - e.g. in quantum physics, a state of two-qubit lives in the tensor product space of two single-qubit states. Sure, both in terms of states and operators, you have a notion of covariance and contravariance (bras and kets, respectively). In mathematics, it is even broader - all you need is two vector spaces and ⊗.
In terms of deep learning (at least in most cases), there is no such notion of co- and contravariance. Yet the tensor product makes sense, as (say) we can have an outer product between the sample and channels. Quite a few operations could be understood that way, e.g., so-called 1x1 convolutions that mix channels but do not do anything spatially or channel-wise.
A few notes here:
https://github.com/stared/thinking-in-tensors-writing-in-pyt...
Could you elaborate on the difference?
I was under the impression that beyond the fact that arrays are a computer science concept and tensors are more of a math/physics concept, for all intents and purposes, they are isomorphic.
How is a tensor more than just a multidimensional array?
- The combination of components + basis vectors + operators that transform components and basis vectors in such a way as to preserve their relationship is a tensor
In ML (and computer science more broadly), people often use the word tensor just to mean a multi-dimensional array. ML people do use tensor products etc., so they maybe have more justification than some folks for using the word, but I'm not 100% convinced. Not an expert, as I say.