
Building a Dual-Source RAG Agent with Exa and Milvus

This tutorial shows how to build an agent that searches both the public web (via Exa) and a private knowledge base (via Milvus), then synthesizes a unified answer. The agent uses OpenAI function calling to automatically decide which source to query based on the user's question.

Exa is a search API built for AI applications. Unlike traditional keyword-based search engines, Exa supports semantic (neural) search - you describe what you want in natural language and it understands your intent. It also offers content extraction, highlights, and category-based filtering. Milvus is an open-source vector database built for scalable similarity search. Combining them with an LLM agent lets you build a system that retrieves both internal proprietary data and up-to-date web information in a single workflow.

Prerequisites

Before running this notebook, make sure you have installed the following dependencies:

$ pip install exa_py pymilvus openai

If you are using Google Colab, you may need to restart the runtime to enable the newly installed dependencies (click the "Runtime" menu at the top of the screen and select "Restart session" from the dropdown).

You will need API keys for Exa and OpenAI. Set them as environment variables:

import os

os.environ["EXA_API_KEY"] = "***********"
os.environ["OPENAI_API_KEY"] = "sk-***********"

Initialize the Clients

Set up the Exa, OpenAI, and Milvus clients. We use OpenAI's text-embedding-3-small model to generate vector embeddings, and Milvus Lite for local vector storage with zero infrastructure setup.

import json
from openai import OpenAI
from pymilvus import MilvusClient, DataType
from exa_py import Exa

llm = OpenAI()
exa = Exa(api_key=os.environ["EXA_API_KEY"])
milvus = MilvusClient(uri="./milvus_exa_demo.db")

EMBED_MODEL = "text-embedding-3-small"
EMBED_DIM = 1536
COLLECTION = "private_kb"

As for the argument of MilvusClient:

  • Setting the uri as a local file, e.g. ./milvus.db, is the most convenient method, as it automatically utilizes Milvus Lite to store all data in this file.
  • If you have a large amount of data, say more than a million vectors, you can set up a more performant Milvus server on Docker or Kubernetes. In this setup, please use the server address and port as your uri, e.g. http://localhost:19530. If you enable the authentication feature on Milvus, use "<your_username>:<your_password>" as the token; otherwise, don't set the token.
  • If you want to use Zilliz Cloud, the fully managed cloud service for Milvus, adjust the uri and token, which correspond to the Public Endpoint and API key in Zilliz Cloud.

Define a helper function to generate embeddings. We will reuse it throughout the notebook for both indexing and querying:

def embed_text(text: str | list[str]) -> list:
    """Generate embedding vector(s) using OpenAI."""
    resp = llm.embeddings.create(
        input=text if isinstance(text, list) else [text],
        model=EMBED_MODEL,
    )
    if isinstance(text, list):
        return [item.embedding for item in resp.data]
    return resp.data[0].embedding
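To see the function's two return shapes without making an API call, here is a minimal offline sketch in which a stub (an assumption for illustration, returning tiny 3-dimensional vectors) stands in for llm.embeddings.create:

```python
from types import SimpleNamespace


def fake_embeddings_create(input, model):
    """Stub for llm.embeddings.create: one 3-dim vector per input string."""
    data = [SimpleNamespace(embedding=[0.0, 0.0, float(i)]) for i, _ in enumerate(input)]
    return SimpleNamespace(data=data)


def embed_text_offline(text):
    # Same branching as embed_text above, but wired to the stub.
    resp = fake_embeddings_create(
        input=text if isinstance(text, list) else [text],
        model="stub-model",
    )
    if isinstance(text, list):
        return [item.embedding for item in resp.data]
    return resp.data[0].embedding


print(len(embed_text_offline("hello")))     # 3 -> a single vector
print(len(embed_text_offline(["a", "b"])))  # 2 -> a list of two vectors
```

A single string yields one vector, while a list yields a list of vectors - this is why the batch call later in the notebook can embed all documents in one request.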

Build the Private Knowledge Base (Milvus)

We simulate a set of internal company documents - product specs, policies, earnings reports, and API docs - that would not appear on the public web. In a real scenario, these might come from your internal wiki, databases, or document management system.

private_docs = [
    {
        "id": 1,
        "text": (
            "Acme Widget Pro supports up to 10,000 concurrent connections. "
            "It uses a proprietary compression algorithm (AcmeZip v3) that "
            "reduces payload size by 72% compared to gzip."
        ),
        "source": "product-spec.pdf",
    },
    {
        "id": 2,
        "text": (
            "Our return policy allows customers to return any product within "
            "30 days of purchase for a full refund. After 30 days, only store "
            "credit is offered. Damaged items must be reported within 48 hours."
        ),
        "source": "return-policy.md",
    },
    {
        "id": 3,
        "text": (
            "Q3 2025 revenue was $4.2M, up 18% from Q2. The growth was "
            "primarily driven by enterprise customers adopting Widget Pro. "
            "Churn rate dropped to 3.1%."
        ),
        "source": "q3-earnings.pdf",
    },
    {
        "id": 4,
        "text": (
            "Internal API rate limits: free tier 100 req/min, pro tier "
            "5,000 req/min, enterprise tier 50,000 req/min. Rate limit "
            "headers are X-RateLimit-Remaining and X-RateLimit-Reset."
        ),
        "source": "api-docs.md",
    },
    {
        "id": 5,
        "text": (
            "Employee onboarding checklist: 1) Sign NDA, 2) Set up VPN access, "
            "3) Enroll in mandatory security training, 4) Request Jira and "
            "Confluence access from IT, 5) Schedule 1:1 with manager."
        ),
        "source": "onboarding-guide.md",
    },
]

Create the Milvus collection with an explicit schema, embed the documents, and insert them:

if milvus.has_collection(COLLECTION):
    milvus.drop_collection(COLLECTION)

schema = milvus.create_schema(auto_id=False, enable_dynamic_field=True)
schema.add_field(field_name="id", datatype=DataType.INT64, is_primary=True)
schema.add_field(field_name="vector", datatype=DataType.FLOAT_VECTOR, dim=EMBED_DIM)
schema.add_field(field_name="text", datatype=DataType.VARCHAR, max_length=65535)
schema.add_field(field_name="source", datatype=DataType.VARCHAR, max_length=512)

index_params = milvus.prepare_index_params()
index_params.add_index(
    field_name="vector", index_type="AUTOINDEX", metric_type="COSINE"
)

milvus.create_collection(
    collection_name=COLLECTION,
    schema=schema,
    index_params=index_params,
    # consistency_level="Strong",
)

# Embed all documents in one batch call
embeddings = embed_text([doc["text"] for doc in private_docs])

milvus.insert(
    collection_name=COLLECTION,
    data=[
        {
            "id": doc["id"],
            "vector": emb,
            "text": doc["text"],
            "source": doc["source"],
        }
        for doc, emb in zip(private_docs, embeddings)
    ],
)

print(f"Inserted {len(private_docs)} documents into Milvus.")
Inserted 5 documents into Milvus.

Let's verify retrieval works with a quick test query:

query = "What is the return policy?"
results = milvus.search(
    collection_name=COLLECTION,
    data=[embed_text(query)],
    limit=2,
    output_fields=["text", "source"],
)

for hit in results[0]:
    print(f"[score={hit['distance']:.3f}] ({hit['entity']['source']})")
    print(f"  {hit['entity']['text'][:120]}...")
    print()
[score=0.665] (return-policy.md)
  Our return policy allows customers to return any product within 30 days of purchase for a full refund. After 30 days, on...

[score=0.119] (q3-earnings.pdf)
  Q3 2025 revenue was $4.2M, up 18% from Q2. The growth was primarily driven by enterprise customers adopting Widget Pro. ...
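The score printed above is the cosine similarity configured on the index (metric_type="COSINE"): values near 1.0 mean the query and document vectors point in almost the same direction, while values near 0 mean they are unrelated. A minimal pure-Python sketch of the metric:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors, as used by the Milvus index."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Vectors 45 degrees apart score cos(45°) ≈ 0.707.
print(round(cosine_similarity([1.0, 0.0], [1.0, 1.0]), 3))  # 0.707
```

This is why the return-policy document scores far higher (0.665) than the earnings report (0.119) for the return-policy query.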

Explore Exa Search Capabilities

Before building the agent, let's explore Exa's search capabilities. Exa supports several search modes suited to different scenarios.

Semantic search with content extraction - Exa returns not just links but also the article body, key highlights, and AI-generated summaries in a single request:

web_results = exa.search_and_contents(
    query="latest trends in AI agents 2026",
    type="auto",
    num_results=3,
    text={"max_characters": 3000},
    highlights={"num_sentences": 3},
)

for r in web_results.results:
    print(f"[{r.title}]")
    print(f"  URL: {r.url}")
    if r.highlights:
        print(f"  Highlight: {r.highlights[0][:150]}...")
    print()
[The AI Trends Shaping 2026. A month into the new year is as good a… | by ODSC - Open Data Science | Mar, 2026 | Medium]
  URL: https://odsc.medium.com/the-ai-trends-shaping-2026-34078dad4d49
  Highlight:  ahead. January brought Claude CoWork, Anthropic’s “AI coworker” that turns agents into desktop collaborators; OpenClaw (formerly Moltbot, formerly Cl...

[AI agent trends 2026 report]
  URL: https://cloud.google.com/resources/content/ai-agent-trends-2026
  Highlight: >. The era of simple prompts is over. We're witnessing the agent leap—where AI orchestrates complex, end-to-end workflows semi-autonomously. For enter...

[The Rise of Agentic AI: Why 2026 is the Year AI Started 'Doing']
  URL: https://www.marketdrafts.com/2026/02/rise-of-agentic-ai-2026-trends.html?m=1
  Highlight:  The era of "Generative AI" (which creates content) is being superseded by "Agentic AI" (which executes actions). We are witnessing a fundamental arch...

Category-based filtering - You can restrict results to specific content types such as "research paper", "news", "company", or "tweet". This is useful when you want high-quality sources with less noise:

filtered_results = exa.search_and_contents(
    query="retrieval augmented generation real world applications",
    category="research paper",
    num_results=3,
    highlights={"num_sentences": 2},
)

for r in filtered_results.results:
    print(f"- {r.title}")
    print(f"  {r.url}\n")
- 10 RAG examples and use cases from real companies
  https://www.evidentlyai.com/blog/rag-examples

- Implementing Retrieval-Augmented Generation (RAG) with Real-World Constraints
  https://dev.to/dextralabs/implementing-retrieval-augmented-generation-rag-with-real-world-constraints-3ajm

- 
  https://www.arxiv.org/pdf/2502.14930

Finding similar articles - Given a URL, Exa can find other articles with similar content. This helps expand research from a good starting point:

if web_results.results:
    source_url = web_results.results[0].url
    similar = exa.find_similar_and_contents(
        url=source_url,
        num_results=3,
        highlights={"num_sentences": 2},
    )
    print(f"Articles similar to: {source_url}\n")
    for r in similar.results:
        print(f"- {r.title}")
        print(f"  {r.url}\n")
Articles similar to: https://odsc.medium.com/the-ai-trends-shaping-2026-34078dad4d49

- AI Trends 2026: From Agent Demos to Production Reality
  https://opendatascience.com/the-ai-trends-shaping-2026/

- The Most Important AI Trends to Watch in 2026
  https://medium.com/the-ai-studio/the-most-important-ai-trends-to-watch-in-2026-54af64d45021

Define the Agent Tools

Now we define the two tool functions the agent will use. The private knowledge base tool searches Milvus via vector similarity, while the web tool searches the public internet through Exa:

def search_private_kb(query: str) -> str:
    """Search the internal knowledge base using Milvus vector search."""
    results = milvus.search(
        collection_name=COLLECTION,
        data=[embed_text(query)],
        limit=3,
        output_fields=["text", "source"],
    )
    chunks = []
    for hit in results[0]:
        chunks.append(f"[{hit['entity']['source']}] {hit['entity']['text']}")
    return "\n\n".join(chunks) if chunks else "No relevant internal documents found."


def search_web(query: str) -> str:
    """Search the public web using Exa for up-to-date information."""
    results = exa.search_and_contents(
        query=query,
        type="auto",
        num_results=3,
        highlights={"num_sentences": 3},
    )
    items = []
    for r in results.results:
        highlight = r.highlights[0] if r.highlights else "No snippet available."
        items.append(f"[{r.title}]({r.url})\n{highlight}")
    return "\n\n".join(items) if items else "No web results found."


TOOL_FNS = {
    "search_private_kb": search_private_kb,
    "search_web": search_web,
}

Build the Agent

The agent uses OpenAI function calling to decide which tools to invoke. It follows a simple loop: the LLM receives the user query, decides which tools (if any) to call, the tools are executed, and then the LLM synthesizes a final answer from the retrieved context.

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "search_private_kb",
            "description": (
                "Search the company's internal knowledge base (product docs, "
                "policies, earnings, API docs, HR guides). Use this for any "
                "question about internal/proprietary information."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "The search query"}
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "search_web",
            "description": (
                "Search the public web for up-to-date external information - "
                "news, trends, competitor analysis, open-source projects, etc. "
                "Use this when the question is about the outside world."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "The search query"}
                },
                "required": ["query"],
            },
        },
    },
]

SYSTEM_PROMPT = """You are a helpful assistant with access to two search tools:

1. **search_private_kb** - searches the company's internal knowledge base.
2. **search_web** - searches the public internet via Exa.

Routing rules:
- Questions about internal products, policies, metrics, or processes: use search_private_kb.
- Questions about external trends, news, competitors, or general knowledge: use search_web.
- Questions that need both internal and external context: call BOTH tools, then synthesize.

Always cite your sources. For internal docs, mention the filename. For web results, include the URL."""


def run_agent(user_query: str) -> str:
    """Run the agent loop: LLM -> tool calls -> LLM -> final answer."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

    print(f"User: {user_query}\n")

    # First LLM call - may request tool calls
    response = llm.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=TOOLS,
    )
    msg = response.choices[0].message
    messages.append(msg)

    # If no tool calls, return directly
    if not msg.tool_calls:
        print(f"Agent (no tools used): {msg.content}")
        return msg.content

    # Execute each tool call
    for tc in msg.tool_calls:
        fn_name = tc.function.name
        fn_args = json.loads(tc.function.arguments)
        print(f"  -> Calling {fn_name}(query={fn_args['query']!r})")

        result = TOOL_FNS[fn_name](**fn_args)
        messages.append(
            {
                "role": "tool",
                "tool_call_id": tc.id,
                "content": result,
            }
        )

    # Second LLM call - synthesize final answer
    response = llm.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=TOOLS,
    )
    answer = response.choices[0].message.content
    print(f"\nAgent:\n{answer}")
    return answer
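The tool-dispatch step inside the loop can be exercised offline. The sketch below fabricates a tool call in the shape the API returns it (a function name plus a JSON-encoded arguments string - both invented here for illustration) and routes it through stub tools, so no network or API key is needed:

```python
import json

# Stub tools standing in for search_private_kb and search_web.
stub_tools = {
    "search_private_kb": lambda query: f"internal results for {query!r}",
    "search_web": lambda query: f"web results for {query!r}",
}

# A fabricated tool call, mirroring tc.function.name / tc.function.arguments.
fake_call = {"name": "search_private_kb", "arguments": '{"query": "return policy"}'}

fn_args = json.loads(fake_call["arguments"])       # {'query': 'return policy'}
result = stub_tools[fake_call["name"]](**fn_args)  # dispatch, as in run_agent
print(result)  # internal results for 'return policy'
```

The key detail is that tc.function.arguments arrives as a JSON string, not a dict, so it must be json.loads-ed before being unpacked into the tool function.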

Demo

Now let's test the agent with three scenarios that demonstrate different routing behaviors.

Scenario A: Internal Question (Routes to Milvus)

Ask about an internal policy - the agent should call search_private_kb and retrieve the answer from our private documents:

run_agent("What is the return policy for Acme products?")
User: What is the return policy for Acme products?

  -> Calling search_private_kb(query='return policy Acme products')

Agent:
The Acme products return policy allows customers to return any product within 30 days of purchase for a full refund. After 30 days, only store credit is offered. It's important to note that damaged items must be reported within 48 hours of receipt ([source: return-policy.md]).

"The Acme products return policy allows customers to return any product within 30 days of purchase for a full refund. After 30 days, only store credit is offered. It's important to note that damaged items must be reported within 48 hours of receipt ([source: return-policy.md])."

Scenario B: External Question (Routes to Exa)

Ask about external trends - the agent should call search_web to fetch up-to-date information from the public internet:

run_agent("What are the latest AI agent frameworks trending in 2026?")
User: What are the latest AI agent frameworks trending in 2026?

  -> Calling search_web(query='latest AI agent frameworks 2026')

Agent:
In 2026, several AI agent frameworks are trending, each offering unique features and capabilities that cater to various needs. Here are some of the most prominent ones:

1. **LangChain and LangGraph**: These frameworks remain highly popular for building large language model (LLM)-powered applications. LangGraph, in particular, models agents as state graphs, which is useful for action-oriented workflows. LangChain continues to dominate due to its comprehensive feature set for production-grade control and orchestration.

2. **LangSmith Agent Builder**: Released into general availability in 2026, this tool allows teams to create AI agents using natural language, simplifying the process of agent development.

3. **Semantic Kernel and AutoGen**: These have been integrated into Azure AI Foundry, creating a unified framework. Semantic Kernel uses a plugin-based middleware pattern, enhancing existing applications with AI capabilities efficiently.

4. **OpenClaw**: An open-source framework that operates locally, OpenClaw transforms your computer into an autonomous agent host, differing from cloud-based solutions by keeping data and operations localized. This framework supports a large community and includes extensive skills for customization.

These frameworks cater to various requirements, whether it's production-grade solutions, open-source options, or frameworks focused on local deployment. Each framework has its strengths, depending on the use case and the existing ecosystem it fits into.

Sources:
- [Agentic AI Frameworks: The Complete Guide (2026)](https://aiagentskit.com/blog/agentic-ai-frameworks/)
- [OpenClaw: The Open-Source AI Agent Framework That Runs Your Life Locally](https://www.clawbot.blog/blog/openclaw-the-open-source-ai-agent-framework-that-runs-your-life-locally)
- [The Best AI Agent Frameworks for 2026](https://medium.com/data-science-collective/the-best-ai-agent-frameworks-for-2026-tier-list-b3a4362fac0d)


"In 2026, several AI agent frameworks are trending, each offering unique features and capabilities that cater to various needs. Here are some of the most prominent ones:\n\n1. **LangChain and LangGraph**: These frameworks remain highly popular for building large language model (LLM)-powered applications. LangGraph, in particular, models agents as state graphs, which is useful for action-oriented workflows. LangChain continues to dominate due to its comprehensive feature set for production-grade control and orchestration.\n\n2. **LangSmith Agent Builder**: Released into general availability in 2026, this tool allows teams to create AI agents using natural language, simplifying the process of agent development.\n\n3. **Semantic Kernel and AutoGen**: These have been integrated into Azure AI Foundry, creating a unified framework. Semantic Kernel uses a plugin-based middleware pattern, enhancing existing applications with AI capabilities efficiently.\n\n4. **OpenClaw**: An open-source framework that operates locally, OpenClaw transforms your computer into an autonomous agent host, differing from cloud-based solutions by keeping data and operations localized. This framework supports a large community and includes extensive skills for customization.\n\nThese frameworks cater to various requirements, whether it's production-grade solutions, open-source options, or frameworks focused on local deployment. Each framework has its strengths, depending on the use case and the existing ecosystem it fits into.\n\nSources:\n- [Agentic AI Frameworks: The Complete Guide (2026)](https://aiagentskit.com/blog/agentic-ai-frameworks/)\n- [OpenClaw: The Open-Source AI Agent Framework That Runs Your Life Locally](https://www.clawbot.blog/blog/openclaw-the-open-source-ai-agent-framework-that-runs-your-life-locally)\n- [The Best AI Agent Frameworks for 2026](https://medium.com/data-science-collective/the-best-ai-agent-frameworks-for-2026-tier-list-b3a4362fac0d)"

Scenario C: Hybrid Question (Routes to Both)

Ask a question that needs both internal specs and external benchmarks - the agent should call both tools and synthesize a comparison:

run_agent(
    "How does our Widget Pro's throughput compare to "
    "open-source alternatives on the market?"
)
User: How does our Widget Pro's throughput compare to open-source alternatives on the market?

  -> Calling search_private_kb(query='Widget Pro throughput comparison')

  -> Calling search_web(query='open-source widget throughput comparison')

Agent:
The throughput of our Widget Pro is quite competitive when compared to open-source alternatives on the market. Here's a detailed comparison:

### Widget Pro

- **Concurrent Connections**: Supports up to 10,000 concurrent connections.
- **Compression**: Utilizes AcmeZip v3, a proprietary compression algorithm that reduces payload size by 72% compared to gzip (source: [product-spec.pdf]).
- **API Rate Limits**: Offers different tiers:
  - Free tier: 100 requests/minute.
  - Pro tier: 5,000 requests/minute.
  - Enterprise tier: 50,000 requests/minute (source: [api-docs.md]).

### Open-Source Alternatives

From the available resources, open-source widget solutions such as Chatwoot and Tiledesk are popular in handling customer engagement with a flexible and customizable approach (source: [ChatMaxima article](https://chatmaxima.com/blog/15-open-source-free-live-chat-widget-solutions-to-boost-your-customer-engagement-in-2024/)). However, specific throughput metrics such as maximum concurrent connections or API limits are generally not highlighted in open-source product descriptions unless directly benchmarked.

These alternatives often emphasize customization, control, and integration with AI-driven capabilities but do not always specify throughput in terms comparable with Widget Pro. They might be more suited for organizations looking to tailor solutions to specific needs rather than focusing solely on throughput efficiency.

In conclusion, Widget Pro appears to offer high throughput suitable for enterprises with robust API support, while open-source options offer flexibility and customization with varying degrees of performance metrics.


"The throughput of our Widget Pro is quite competitive when compared to open-source alternatives on the market. Here's a detailed comparison:\n\n### Widget Pro\n\n- **Concurrent Connections**: Supports up to 10,000 concurrent connections.\n- **Compression**: Utilizes AcmeZip v3, a proprietary compression algorithm that reduces payload size by 72% compared to gzip (source: [product-spec.pdf]).\n- **API Rate Limits**: Offers different tiers:\n  - Free tier: 100 requests/minute.\n  - Pro tier: 5,000 requests/minute.\n  - Enterprise tier: 50,000 requests/minute (source: [api-docs.md]).\n\n### Open-Source Alternatives\n\nFrom the available resources, open-source widget solutions such as Chatwoot and Tiledesk are popular in handling customer engagement with a flexible and customizable approach (source: [ChatMaxima article](https://chatmaxima.com/blog/15-open-source-free-live-chat-widget-solutions-to-boost-your-customer-engagement-in-2024/)). However, specific throughput metrics such as maximum concurrent connections or API limits are generally not highlighted in open-source product descriptions unless directly benchmarked.\n\nThese alternatives often emphasize customization, control, and integration with AI-driven capabilities but do not always specify throughput in terms comparable with Widget Pro. They might be more suited for organizations looking to tailor solutions to specific needs rather than focusing solely on throughput efficiency.\n\nIn conclusion, Widget Pro appears to offer high throughput suitable for enterprises with robust API support, while open-source options offer flexibility and customization with varying degrees of performance metrics."

Cleanup

When you are done, drop the collection to free resources.

milvus.drop_collection(COLLECTION)

Conclusion

In this tutorial, we built a dual-source RAG agent that combines Milvus for private knowledge retrieval with Exa for public web search. The key components:

  • Milvus stores and retrieves internal documents via vector similarity search, keeping proprietary data private yet searchable.
  • Exa provides semantic web search with features such as category filtering, content extraction, and similar-article search.
  • OpenAI function calling lets the LLM automatically route each query to the right source - or to both - based on the question's intent.

This pattern fits enterprise use cases where an AI assistant needs access to both confidential internal documents and real-time external information.