• Integrations

Retrieval-Augmented Generation (RAG) with Milvus and Haystack


This guide demonstrates how to build a Retrieval-Augmented Generation (RAG) system using Haystack and Milvus.

A RAG system combines a retrieval system with a generative model: it first retrieves relevant documents from a corpus using a vector similarity search engine such as Milvus, and then uses a generative model to produce new text grounded in the retrieved documents.
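The retrieve-then-generate flow can be sketched with a toy in-memory corpus. Everything here is made up for illustration (the two-dimensional "embeddings", the documents, and the helper names); in the pipelines below, Milvus handles storage and similarity search, and an OpenAI model handles both embedding and generation.

```python
import math

# Toy corpus with hypothetical pre-computed embeddings. In a real system,
# an embedding model produces these vectors and Milvus stores them.
corpus = {
    "Milvus is a vector database.": [0.9, 0.1],
    "Haystack builds LLM pipelines.": [0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_embedding, top_k=1):
    # Rank documents by vector similarity to the query embedding.
    ranked = sorted(
        corpus.items(),
        key=lambda kv: cosine(query_embedding, kv[1]),
        reverse=True,
    )
    return [doc for doc, _ in ranked[:top_k]]

def build_prompt(query, documents):
    # The retrieved documents become context for the generative model.
    context = "\n".join(documents)
    return f"Answer based on the context.\nContext:\n{context}\nQuery: {query}"

docs = retrieve([0.8, 0.2])  # a query embedding close to the first document
prompt = build_prompt("What is Milvus?", docs)
```

The generative model then completes this prompt; because the context is retrieved at query time, the answer can draw on knowledge the model was never trained on.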

Haystack is an open-source Python framework by deepset for building custom apps with large language models (LLMs). Milvus is the world's most advanced open-source vector database, built to power embedding similarity search and AI applications.


Before running this notebook, make sure you have the following dependencies installed:

$ pip install --upgrade --quiet pymilvus milvus-haystack markdown-it-py mdit_plain

If you are using Google Colab, you may need to restart the runtime to enable the newly installed dependencies.

We will use models from OpenAI. You should set the API key in the OPENAI_API_KEY environment variable.

import os

os.environ["OPENAI_API_KEY"] = "sk-***********"

Prepare the data

We use online content about Leonardo da Vinci as the private knowledge store for our RAG pipeline; it is a good data source for a simple RAG pipeline.

Download it and save it as a local text file.

import os
import urllib.request

url = ""
file_path = "./davinci.txt"

if not os.path.exists(file_path):
    urllib.request.urlretrieve(url, file_path)

Create the indexing pipeline

Create an indexing pipeline that converts the text into documents, splits them into sentences, and embeds them. The documents are then written to the Milvus document store.

from haystack import Pipeline
from haystack.components.converters import MarkdownToDocument
from haystack.components.embedders import OpenAIDocumentEmbedder, OpenAITextEmbedder
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter

from milvus_haystack import MilvusDocumentStore
from milvus_haystack.milvus_embedding_retriever import MilvusEmbeddingRetriever

document_store = MilvusDocumentStore(
    connection_args={"uri": "./milvus.db"},
)
indexing_pipeline = Pipeline()
indexing_pipeline.add_component("converter", MarkdownToDocument())
indexing_pipeline.add_component(
    "splitter", DocumentSplitter(split_by="sentence", split_length=2)
)
indexing_pipeline.add_component("embedder", OpenAIDocumentEmbedder())
indexing_pipeline.add_component("writer", DocumentWriter(document_store))
indexing_pipeline.connect("converter", "splitter")
indexing_pipeline.connect("splitter", "embedder")
indexing_pipeline.connect("embedder", "writer")
indexing_pipeline.run({"converter": {"sources": [file_path]}})

print("Number of documents:", document_store.count_documents())
Converting markdown files to Documents: 100%|█| 1/
Calculating embeddings: 100%|█| 9/9 [00:05<00:00, 
E20240516 10:40:32.945937 5309095 milvus_local.cpp:189] [SERVER][GetCollection][] Collecton HaystackCollection not existed

Number of documents: 277

Create the retrieval pipeline

Create a retrieval pipeline that retrieves documents from the Milvus document store using a vector similarity search engine.

question = 'Where is the painting "Warrior" currently stored?'

retrieval_pipeline = Pipeline()
retrieval_pipeline.add_component("embedder", OpenAITextEmbedder())
retrieval_pipeline.add_component(
    "retriever", MilvusEmbeddingRetriever(document_store=document_store, top_k=3)
)
retrieval_pipeline.connect("embedder", "retriever")

retrieval_results = retrieval_pipeline.run({"embedder": {"text": question}})

for doc in retrieval_results["retriever"]["documents"]:
    print(doc.content)
    print("-" * 10)
). The
composition of this oil-painting seems to have been built up on the
second cartoon, which he had made some eight years earlier, and which
was apparently taken to France in 1516 and ultimately lost.

This "Baptism of Christ," which is now in the Accademia in Florence
and is in a bad state of preservation, appears to have been a
comparatively early work by Verrocchio, and to have been painted
in 1480-1482, when Leonardo would be about thirty years of age.

To about this period belongs the superb drawing of the "Warrior," now
in the Malcolm Collection in the British Museum.
" Although he
completed the cartoon, the only part of the composition which he
eventually executed in colour was an incident in the foreground
which dealt with the "Battle of the Standard." One of the many
supposed copies of a study of this mural painting now hangs on the
south-east staircase in the Victoria and Albert Museum.

Create the RAG pipeline

Create a RAG pipeline that combines the MilvusEmbeddingRetriever and the OpenAIGenerator to answer the question using the retrieved documents.

from haystack.utils import Secret
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator

prompt_template = """Answer the following query based on the provided context. If the context does
                     not include an answer, reply with 'I don't know'.\n
                     Query: {{query}}
                     Documents:
                     {% for doc in documents %}
                        {{ doc.content }}
                     {% endfor %}
                     Answer:
                  """

rag_pipeline = Pipeline()
rag_pipeline.add_component("text_embedder", OpenAITextEmbedder())
rag_pipeline.add_component(
    "retriever", MilvusEmbeddingRetriever(document_store=document_store, top_k=3)
)
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template))
rag_pipeline.add_component(
    "generator",
    OpenAIGenerator(
        api_key=Secret.from_env_var("OPENAI_API_KEY"),
        generation_kwargs={"temperature": 0},
    ),
)
rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "generator")

results = rag_pipeline.run(
    {
        "text_embedder": {"text": question},
        "prompt_builder": {"query": question},
    }
)
print("RAG answer:", results["generator"]["replies"][0])
RAG answer: The painting "Warrior" is currently stored in the Malcolm Collection in the British Museum.