Consider this: when measuring the cosine similarity of embedding vectors in high-dimensional spaces, what does their similarity in lower-dimensional subspaces tell us about their overall similarity? Is there a direct, proportional relationship, or is the reality more complex with high-dimensional data?
More concretely, does high similarity between vectors in their first 256 dimensions guarantee high similarity across their full 768 dimensions? Conversely, if vectors differ significantly in some dimensions, does that imply low overall similarity? These aren't mere theoretical musings; they are crucial considerations for efficient vector retrieval, database indexing, and the performance of RAG systems.
Developers often rely on heuristics, assuming that high subspace similarity equates to high overall similarity or that notable differences in one dimension significantly affect the overall similarity. The question is: are these heuristic methods built on firm theoretical ground, or are they simply assumptions of convenience?
This post delves into these questions, examining the theory and practical implications of subspace similarity in relation to overall vector similarity.
Bounding the Cosine Similarity
Given vectors $\mathbf{u}, \mathbf{v} \in \mathbb{R}^d$, we decompose them as $\mathbf{u} = [\mathbf{u}_1; \mathbf{u}_2]$ and $\mathbf{v} = [\mathbf{v}_1; \mathbf{v}_2]$, where $\mathbf{u}_1, \mathbf{v}_1 \in \mathbb{R}^m$ and $\mathbf{u}_2, \mathbf{v}_2 \in \mathbb{R}^{d-m}$, with $1 \le m < d$.
The cosine similarity in the subspace $\mathbb{R}^m$ is given by $\cos(\mathbf{u}_1, \mathbf{v}_1) = \frac{\mathbf{u}_1 \cdot \mathbf{v}_1}{\|\mathbf{u}_1\|\,\|\mathbf{v}_1\|}$; similarly, the similarity in the subspace $\mathbb{R}^{d-m}$ is $\cos(\mathbf{u}_2, \mathbf{v}_2) = \frac{\mathbf{u}_2 \cdot \mathbf{v}_2}{\|\mathbf{u}_2\|\,\|\mathbf{v}_2\|}$.
In the original space $\mathbb{R}^d$, the cosine similarity is defined as:

$$\cos(\mathbf{u}, \mathbf{v}) = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\|\,\|\mathbf{v}\|} = \frac{\mathbf{u}_1 \cdot \mathbf{v}_1 + \mathbf{u}_2 \cdot \mathbf{v}_2}{\|\mathbf{u}\|\,\|\mathbf{v}\|}.$$

Now, let $s = \max(\cos(\mathbf{u}_1, \mathbf{v}_1), \cos(\mathbf{u}_2, \mathbf{v}_2))$. Then, we have:

$$\cos(\mathbf{u}, \mathbf{v}) = \frac{\|\mathbf{u}_1\|\,\|\mathbf{v}_1\|\cos(\mathbf{u}_1, \mathbf{v}_1) + \|\mathbf{u}_2\|\,\|\mathbf{v}_2\|\cos(\mathbf{u}_2, \mathbf{v}_2)}{\|\mathbf{u}\|\,\|\mathbf{v}\|} \;\le\; \underbrace{\frac{\|\mathbf{u}_1\|\,\|\mathbf{v}_1\| + \|\mathbf{u}_2\|\,\|\mathbf{v}_2\|}{\|\mathbf{u}\|\,\|\mathbf{v}\|}}_{=:\,c} \, s \;\le\; s.$$
End of proof.
Note that in the final step of the proof, we leverage the fact that a cosine similarity is always less than or equal to 1: the factor $c = \frac{\|\mathbf{u}_1\|\,\|\mathbf{v}_1\| + \|\mathbf{u}_2\|\,\|\mathbf{v}_2\|}{\|\mathbf{u}\|\,\|\mathbf{v}\|}$ is itself the cosine similarity between the two-dimensional vectors $(\|\mathbf{u}_1\|, \|\mathbf{u}_2\|)$ and $(\|\mathbf{v}_1\|, \|\mathbf{v}_2\|)$, so $c \le 1$ and, for non-negative $s$, $c \cdot s \le s$. This forms our upper bound. Similarly, we can show that the lower bound of $\cos(\mathbf{u}, \mathbf{v})$ is given by:

$$\cos(\mathbf{u}, \mathbf{v}) \ge c \cdot \min(\cos(\mathbf{u}_1, \mathbf{v}_1), \cos(\mathbf{u}_2, \mathbf{v}_2)),$$

where $c = \frac{\|\mathbf{u}_1\|\,\|\mathbf{v}_1\| + \|\mathbf{u}_2\|\,\|\mathbf{v}_2\|}{\|\mathbf{u}\|\,\|\mathbf{v}\|}$.
Note that for the lower bound, we cannot hastily drop the factor $c$ and conclude that $\cos(\mathbf{u}, \mathbf{v}) \ge \min(\cos(\mathbf{u}_1, \mathbf{v}_1), \cos(\mathbf{u}_2, \mathbf{v}_2))$. Cosine similarity ranges over $[-1, 1]$, and when the smaller subspace similarity is positive, shrinking it by the factor $c \le 1$ can push the bound well below it; so $\min(\cos(\mathbf{u}_1, \mathbf{v}_1), \cos(\mathbf{u}_2, \mathbf{v}_2))$ by itself is not a valid lower bound, and without $c$ we fall back to the trivial value of $-1$.
So in conclusion, we have the following loose bound

$$-1 \le \cos(\mathbf{u}, \mathbf{v}) \le \max(\cos(\mathbf{u}_1, \mathbf{v}_1), \cos(\mathbf{u}_2, \mathbf{v}_2))$$

(the upper part holding whenever the larger subspace similarity is non-negative) and a tighter bound

$$c \cdot \min(\cos(\mathbf{u}_1, \mathbf{v}_1), \cos(\mathbf{u}_2, \mathbf{v}_2)) \le \cos(\mathbf{u}, \mathbf{v}) \le c \cdot \max(\cos(\mathbf{u}_1, \mathbf{v}_1), \cos(\mathbf{u}_2, \mathbf{v}_2)),$$

where $c = \frac{\|\mathbf{u}_1\|\,\|\mathbf{v}_1\| + \|\mathbf{u}_2\|\,\|\mathbf{v}_2\|}{\|\mathbf{u}\|\,\|\mathbf{v}\|}$.
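As a quick sanity check, take the toy vectors $\mathbf{u} = (2, 0, 1, 1)$ and $\mathbf{v} = (1, 1, 1, 1)$ (chosen purely for illustration) and split each into its first two and last two dimensions. Then $\cos(\mathbf{u}_1, \mathbf{v}_1) = 1/\sqrt{2} \approx 0.71$, $\cos(\mathbf{u}_2, \mathbf{v}_2) = 1$, and $c = \frac{2\sqrt{2} + 2}{2\sqrt{6}} \approx 0.99$, so the tighter bound predicts roughly $0.70 \le \cos(\mathbf{u}, \mathbf{v}) \le 0.99$; the actual value is $2/\sqrt{6} \approx 0.82$, comfortably inside that interval.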
Connection to the Johnson–Lindenstrauss Lemma
The JL lemma asserts that for any $0 < \varepsilon < 1$ and any finite set $X$ of $n$ points in $\mathbb{R}^d$, there exists a mapping $f: \mathbb{R}^d \to \mathbb{R}^k$ (with $k = O(\varepsilon^{-2} \log n)$, so typically $k \ll d$) such that for all $\mathbf{u}, \mathbf{v} \in X$, the Euclidean distances are approximately preserved:

$$(1 - \varepsilon)\,\|\mathbf{u} - \mathbf{v}\|^2 \;\le\; \|f(\mathbf{u}) - f(\mathbf{v})\|^2 \;\le\; (1 + \varepsilon)\,\|\mathbf{u} - \mathbf{v}\|^2.$$
To make $f$ work like a subspace selection, we can use a diagonal matrix $A$ for the projection, albeit not a random one (note that the typical formulation of the JL lemma involves linear transformations built from random matrices, often with entries drawn from a Gaussian distribution). For instance, if we aim to retain the 1st, 3rd, and 5th dimensions of a 5-dimensional vector space, the matrix could be designed as follows:

$$A = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$

Multiplying by $A$ zeroes out the 2nd and 4th coordinates; dropping the zeroed coordinates then yields the 3-dimensional subvector, so the composition acts exactly as the subspace selection.
However, by specifying $A$ to be diagonal, we limit the class of functions that can be used for the projection. The JL lemma guarantees the existence of a suitable $f$ within the broader class of linear transformations, but once we restrict $f$ to such diagonal, coordinate-selecting matrices, a mapping satisfying the JL lemma's bounds may simply not exist within this restricted class.
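To make this concrete, here is a small sketch (assuming NumPy; the example vectors are deliberately adversarial and made up for illustration) contrasting a Gaussian random projection with plain coordinate selection: the random projection keeps the pairwise distance roughly intact, while keeping only the first 256 coordinates destroys it entirely when the two points differ only in the dropped dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 768, 256

# Two points whose difference lives entirely in the trailing dimensions,
# i.e. outside the coordinates that a "keep the first k dims" selection retains.
x = np.zeros(d)
y = np.zeros(d)
y[-10:] = 1.0

# JL-style projection: a random Gaussian matrix scaled by 1/sqrt(k).
R = rng.standard_normal((k, d)) / np.sqrt(k)

orig = np.linalg.norm(x - y)            # distance in the original 768-dim space
proj = np.linalg.norm(R @ x - R @ y)    # distance after the random projection
sel = np.linalg.norm(x[:k] - y[:k])     # distance after keeping only the first k dims

print(f"original: {orig:.3f}, random projection: {proj:.3f}, coordinate selection: {sel:.3f}")
# The random projection stays within the JL distortion of the original distance,
# while the coordinate selection collapses it to zero.
```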
Validating the Bounds
To empirically explore the theoretical bounds on cosine similarity in high-dimensional vector spaces, we can employ a Monte Carlo simulation. This method allows us to generate a large number of random vector pairs, compute their similarities in both the original space and subspaces, and then assess how well the theoretical upper and lower bounds hold in practice.
The following Python code snippet implements this concept. It randomly generates pairs of vectors in a high-dimensional space and computes their cosine similarity. Then, it divides each vector into two subspaces, calculates the cosine similarity within each subspace, and evaluates the upper and lower bounds of the full-dimensional cosine similarity based on the subspace similarities.
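A minimal sketch of such a simulation, assuming NumPy (the function and parameter names here, such as check_bounds, are illustrative):

```python
import numpy as np

def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_bounds(dim: int = 768, split: int = 256, trials: int = 10_000, seed: int = 0) -> bool:
    """Monte Carlo check that c*min <= cos(u, v) <= c*max holds for random pairs."""
    rng = np.random.default_rng(seed)
    ok = True
    for _ in range(trials):
        u, v = rng.standard_normal(dim), rng.standard_normal(dim)
        u1, u2, v1, v2 = u[:split], u[split:], v[:split], v[split:]

        full = cos_sim(u, v)                       # similarity in the full space
        s1, s2 = cos_sim(u1, v1), cos_sim(u2, v2)  # similarities in the two subspaces

        # Scaling factor c = (|u1||v1| + |u2||v2|) / (|u||v|)
        c = (np.linalg.norm(u1) * np.linalg.norm(v1)
             + np.linalg.norm(u2) * np.linalg.norm(v2)) / (np.linalg.norm(u) * np.linalg.norm(v))

        lower, upper = c * min(s1, s2), c * max(s1, s2)
        ok &= (lower - 1e-9) <= full <= (upper + 1e-9)
    return ok

print(check_bounds())  # expected: True
```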
Understanding the Bounds
In a nutshell, when comparing two high-dimensional vectors, the overall similarity lies between the best and worst similarities of their subspaces, adjusted for how large or important those subspaces are in the overall scheme. This is what the bounds for cosine similarity in higher dimensions intuitively represent: the balance between the most and least similar parts, weighted by their relative sizes or importance.
Imagine you're trying to compare two multi-part objects (let's say, two fancy pens) based on their overall similarity. Each pen has two main components: the body and the cap. The similarity of the whole pen (both body and cap) is what we're trying to determine:
Upper Bound ($c \cdot s_{\max}$)
Think of $s_{\max} = \max(\cos(\mathbf{u}_1, \mathbf{v}_1), \cos(\mathbf{u}_2, \mathbf{v}_2))$ as the best match between corresponding parts of the pens. If the caps are very similar but the bodies aren't, $s_{\max}$ is the similarity of the caps.
Now, $c$ is like a scaling factor based on the size (or importance) of each part. If one pen has a very long body and a short cap, while the other has a short body and a long cap, $c$ adjusts the overall similarity to account for these differences in proportions.
The upper bound $\cos(\mathbf{u}, \mathbf{v}) \le c \cdot s_{\max}$ tells us that no matter how similar some parts are, the overall similarity can't exceed this "best part similarity" scaled by the proportion factor.
Lower Bound ($c \cdot s_{\min}$)
Here, $s_{\min} = \min(\cos(\mathbf{u}_1, \mathbf{v}_1), \cos(\mathbf{u}_2, \mathbf{v}_2))$ is the similarity of the least matching parts. If the bodies of the pens are quite different but the caps are similar, $s_{\min}$ reflects the bodies' similarity.
Again, $c$ scales this based on the proportion of each part.
The lower bound $\cos(\mathbf{u}, \mathbf{v}) \ge c \cdot s_{\min}$ means that the overall similarity can't be worse than this "worst part similarity" after accounting for the proportion of each part.
Implications of the Bounds
For software engineers working with embeddings, vector search, retrieval, or databases, understanding these bounds has practical implications, particularly when dealing with high-dimensional data. Vector search often involves finding the closest (most similar) vectors in a database to a given query vector, typically using cosine similarity as a measure of closeness. The bounds we discussed can provide insights into the effectiveness and limitations of using subspace similarities for such tasks.
Using Subspace Similarity for Ranking
Safety and Accuracy: Using subspace similarity for ranking and retrieving top-k results can be effective, but with caution. The upper bound indicates that the overall similarity can't exceed the maximum similarity of the subspaces. Thus, if a pair of vectors is highly similar in a particular subspace, it's a strong candidate for being similar in the high-dimensional space.
Potential Pitfalls: However, the lower bound suggests that two vectors with low similarity in one subspace could still be quite similar overall. Therefore, relying solely on subspace similarity might miss some relevant results.
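One pattern these two observations suggest is a two-stage search: use the cheap subspace similarity only to build a generous shortlist, then re-rank that shortlist with the full-dimensional similarity, so the lower-bound caveat only costs recall when a relevant item falls outside the shortlist. The sketch below assumes NumPy and illustrative parameters (a 256-dimensional prefilter over 768-dimensional vectors; the helper two_stage_search is hypothetical, not a library API):

```python
import numpy as np

def two_stage_search(query: np.ndarray, corpus: np.ndarray,
                     sub_dim: int = 256, shortlist: int = 100, top_k: int = 10) -> np.ndarray:
    """Rank by similarity in the first `sub_dim` dimensions, then re-rank a
    shortlist with the full-dimensional cosine similarity."""
    def normalize(m):
        return m / np.linalg.norm(m, axis=-1, keepdims=True)

    # Stage 1: cheap subspace similarity over the whole corpus.
    sub_scores = normalize(corpus[:, :sub_dim]) @ normalize(query[:sub_dim])
    candidates = np.argsort(-sub_scores)[:shortlist]

    # Stage 2: exact full-dimensional cosine similarity on the shortlist only.
    full_scores = normalize(corpus[candidates]) @ normalize(query)
    return candidates[np.argsort(-full_scores)[:top_k]]

# Toy usage with random data (illustrative only).
rng = np.random.default_rng(0)
corpus = rng.standard_normal((10_000, 768))
query = rng.standard_normal(768)
print(two_stage_search(query, corpus))
```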
Misconceptions and Cautions
Overestimating Subspace Importance: A common misconception is overestimating the importance of a particular subspace. While high similarity in one subspace is a good indicator, it doesn't guarantee high overall similarity due to the influence of other subspaces.
Ignoring Negative Similarities: In cases where the cosine similarity in a subspace is negative, it indicates an opposing relationship in that subspace. Engineers should be wary of how these negative similarities drag down the overall similarity, as illustrated below.
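A toy illustration of both cautions, with vectors made up purely for this example: perfect similarity in the first subspace, an opposing direction in the second, and an overall similarity of zero.

```python
import numpy as np

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# First two dims agree perfectly; last two dims point in opposite directions.
u = np.array([1.0, 1.0,  1.0,  1.0])
v = np.array([1.0, 1.0, -1.0, -1.0])

print(cos_sim(u[:2], v[:2]))   #  1.0 -> perfect similarity in the first subspace
print(cos_sim(u[2:], v[2:]))   # -1.0 -> opposing directions in the second subspace
print(cos_sim(u, v))           #  0.0 -> the negative part cancels the positive part
```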