Developers use voyage-2 for semantic search by embedding both searchable content and user queries into the same vector space and then retrieving the most similar vectors. The key idea is simple: instead of matching keywords, the system matches meanings. This allows a query like “reset my password” to retrieve content titled “Account access recovery,” even if none of the exact words overlap. voyage-2 provides the embedding step that makes this kind of semantic matching possible.
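As a minimal sketch of that matching step (assuming the official `voyageai` Python client, installed via `pip install voyageai`, and a `VOYAGE_API_KEY` in the environment), content is embedded as a "document" and the search text as a "query", and their similarity is compared directly:

```python
import numpy as np
import voyageai

vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment

# Embed content with input_type="document" and search text with
# input_type="query"; both land in the same 1024-dimensional space.
doc = vo.embed(["Account access recovery"], model="voyage-2",
               input_type="document").embeddings[0]
query = vo.embed(["reset my password"], model="voyage-2",
                 input_type="query").embeddings[0]

# Cosine similarity: a higher score means closer in meaning, even
# though the two strings share no keywords.
doc, query = np.array(doc), np.array(query)
score = float(doc @ query / (np.linalg.norm(doc) * np.linalg.norm(query)))
print(score)
```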
In practice, the workflow has two main phases: indexing and querying. During indexing, developers preprocess their content—documents, FAQs, tickets, or notes—by splitting it into manageable chunks. Each chunk is sent to voyage-2 to generate an embedding, and the resulting vectors are stored along with metadata (such as document ID, title, or URL). During querying, the user’s search input is embedded using the same model, and a similarity search is performed to find the top-k closest vectors. The results are then mapped back to their original text chunks and shown to the user or passed to another system.
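The sketch below walks through both phases under the same assumptions as above; the `chunk_text` helper and the in-memory `index` list are illustrative placeholders (a real deployment would use a vector database, as discussed next), not part of any library:

```python
import numpy as np
import voyageai

vo = voyageai.Client()  # assumes VOYAGE_API_KEY in the environment

def chunk_text(text: str, max_chars: int = 500) -> list[str]:
    # Placeholder chunker: fixed-size character splits. Production
    # systems usually split on sentence or paragraph boundaries.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

# --- Indexing: chunk content, embed it, store vectors with metadata ---
docs = [{"id": "kb-42", "title": "Account access recovery",
         "body": "If you can no longer sign in, start by verifying..."}]
index = []  # in-memory stand-in for a vector database
for doc in docs:
    chunks = chunk_text(doc["body"])
    embs = vo.embed(chunks, model="voyage-2", input_type="document").embeddings
    for text, vec in zip(chunks, embs):
        index.append({"doc_id": doc["id"], "title": doc["title"],
                      "text": text, "vector": np.array(vec)})

# --- Querying: embed the input with the same model, return top-k ---
def search(query: str, k: int = 3) -> list[dict]:
    q = np.array(vo.embed([query], model="voyage-2",
                          input_type="query").embeddings[0])
    scored = sorted(index, reverse=True,
                    key=lambda e: float(q @ e["vector"]) /
                        (np.linalg.norm(q) * np.linalg.norm(e["vector"])))
    return scored[:k]

# Results map back to their original chunks and metadata.
for hit in search("reset my password"):
    print(hit["doc_id"], hit["title"], hit["text"][:60])
```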
This workflow is almost always implemented with a vector database such as Milvus or Zilliz Cloud. These systems are optimized for storing large numbers of vectors and performing fast nearest-neighbor searches. Developers rely on the database to handle indexing strategies, filtering, and scalability, while voyage-2 focuses on embedding quality. Together, they form a semantic search stack where relevance is driven by meaning rather than string matching, making search results more robust and user-friendly.
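A compact sketch of that stack, assuming `pymilvus` with Milvus Lite (which persists to a local file; a Zilliz Cloud deployment would take a cluster URI and token instead). The collection name, field layout, and sample texts are illustrative:

```python
import voyageai
from pymilvus import MilvusClient

vo = voyageai.Client()  # assumes VOYAGE_API_KEY in the environment

# Local-file Milvus Lite instance; swap in a URI/token for Zilliz Cloud.
client = MilvusClient("semantic_search.db")
client.create_collection(
    collection_name="kb_chunks",
    dimension=1024,  # voyage-2 produces 1024-dimensional vectors
)

# Indexing: store embedded chunks alongside their metadata.
chunks = ["Account access recovery: how to regain access to your account."]
embs = vo.embed(chunks, model="voyage-2", input_type="document").embeddings
client.insert(
    collection_name="kb_chunks",
    data=[{"id": i, "vector": v, "text": t}
          for i, (t, v) in enumerate(zip(chunks, embs))],
)

# Querying: embed with the same model, then run nearest-neighbor search.
q = vo.embed(["reset my password"], model="voyage-2",
             input_type="query").embeddings[0]
for hit in client.search(collection_name="kb_chunks", data=[q],
                         limit=5, output_fields=["text"])[0]:
    print(hit["distance"], hit["entity"]["text"])
```

Here the database handles the nearest-neighbor index and any metadata filtering, while the embedding quality, and therefore the relevance ranking, comes from voyage-2.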
For more information, see https://zilliz.com/ai-models/voyage-2.