
What is the typical code snippet to compute the cosine similarity between two sentence embeddings using the library?

To compute cosine similarity between two sentence embeddings, you can use mathematical operations from libraries like NumPy, PyTorch, or TensorFlow. Cosine similarity measures the cosine of the angle between two vectors, producing a value between -1 (opposite directions) and 1 (identical direction), with 0 meaning the vectors are orthogonal. The formula is the dot product of the vectors divided by the product of their magnitudes (L2 norms). Libraries provide built-in functions or straightforward ways to implement this.

For example, using NumPy, you can compute it manually. Suppose embedding1 and embedding2 are NumPy arrays. Calculate the dot product with np.dot(), then divide by the product of their L2 norms (computed via np.linalg.norm()). Here’s a snippet:

import numpy as np

# Placeholder embeddings; in practice these come from your sentence-embedding model.
embedding1 = np.array([0.1, 0.3, 0.5], dtype=np.float32)
embedding2 = np.array([0.2, 0.1, 0.4], dtype=np.float32)

# Cosine similarity = dot product divided by the product of the L2 norms.
dot_product = np.dot(embedding1, embedding2)
norm_a = np.linalg.norm(embedding1)
norm_b = np.linalg.norm(embedding2)
similarity = dot_product / (norm_a * norm_b)
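
If you instead have a batch of embeddings stored as a 2D array (one embedding per row), a minimal sketch of the same computation done row by row might look like the following; the shapes and random values are placeholders, not something prescribed by NumPy:

import numpy as np

# Two batches of three 4-dimensional embeddings (placeholder values).
batch1 = np.random.rand(3, 4).astype(np.float32)
batch2 = np.random.rand(3, 4).astype(np.float32)

# Row-wise dot products and norms; the result holds one similarity per row pair.
dot_products = np.sum(batch1 * batch2, axis=1)
norms = np.linalg.norm(batch1, axis=1) * np.linalg.norm(batch2, axis=1)
similarities = dot_products / norms
print(similarities)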

The first snippet works for 1D vectors; for batches of 2D embeddings, the batched sketch above sums the element-wise products along axis=1 for the dot products and passes axis=1 to np.linalg.norm() for the norms. Libraries like PyTorch simplify this further. With PyTorch, use torch.nn.functional.cosine_similarity():

import torch
import torch.nn.functional as F

# Convert the NumPy embeddings to tensors
tensor1 = torch.tensor(embedding1)
tensor2 = torch.tensor(embedding2)
# dim=0 because these are single 1D vectors
similarity = F.cosine_similarity(tensor1, tensor2, dim=0)

The dim argument specifies the dimension along which the similarity is computed: use dim=0 for single 1D vectors, as above, and dim=1 to compare two batches row by row. PyTorch handles the normalization internally and runs efficiently on GPU tensors.
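
For example, a minimal sketch of the batched case, with random placeholder tensors standing in for real sentence embeddings, might look like this:

import torch
import torch.nn.functional as F

# Two batches of three 4-dimensional embeddings (placeholder values).
batch1 = torch.randn(3, 4)
batch2 = torch.randn(3, 4)

# Compares row i of batch1 with row i of batch2; returns a tensor of shape (3,).
similarities = F.cosine_similarity(batch1, batch2, dim=1)
print(similarities)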

A key consideration is normalization. Cosine similarity itself divides by the vector norms, but if you plan to substitute a plain dot product (for example, when using an inner-product index), the embeddings must be unit vectors. Some libraries make this easy: Hugging Face’s sentence-transformers can return pre-normalized embeddings (for instance via encode(..., normalize_embeddings=True)), so no additional normalization is needed. For custom embeddings, normalize them first using sklearn.preprocessing.normalize() or torch.nn.functional.normalize().

Avoid common mistakes like mismatched dimensions or non-floating-point data. If performance is critical, prefer library-specific functions (e.g., PyTorch’s cosine_similarity) over manual implementations, as they optimize the underlying operations. For large-scale applications, consider batch processing and GPU acceleration.
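
As an end-to-end illustration, the sketch below assumes the sentence-transformers package and the all-MiniLM-L6-v2 model; both are illustrative choices rather than requirements of the approach:

from sentence_transformers import SentenceTransformer
import torch.nn.functional as F

# Illustrative model choice; swap in whichever sentence-embedding model you use.
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = ["Milvus is a vector database.", "Milvus stores embeddings for similarity search."]
# normalize_embeddings=True returns unit vectors, so a dot product would equal cosine similarity.
embeddings = model.encode(sentences, convert_to_tensor=True, normalize_embeddings=True)

similarity = F.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(float(similarity))

Because the embeddings are already unit length here, a plain dot product of the two vectors would give the same value as the cosine similarity.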
