Run Milvus in Kubernetes with Helm

This page illustrates how to start a Milvus instance in Kubernetes using Milvus Helm charts.

Overview

Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. Milvus provides a set of charts to help you deploy Milvus dependencies and components.

Prerequisites

  • Install Helm CLI.

  • Create a K8s cluster.

  • Install a StorageClass. You can check the installed StorageClass as follows. If your cluster has no default StorageClass, see the patch example after this list.

    $ kubectl get sc
    
    NAME                  PROVISIONER                  RECLAIMPOLICY    VOLUMEBINDINGMODE     ALLOWVOLUMEEXPANSION     AGE
    standard (default)    k8s.io/minikube-hostpath     Delete           Immediate             false
    
  • Check the hardware and software requirements before installation.

  • Before installing Milvus, it is recommended to use the Milvus Sizing Tool to estimate the hardware requirements based on your data size. This helps ensure optimal performance and resource allocation for your Milvus installation.
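Milvus dependencies such as etcd, MinIO, and Pulsar create PersistentVolumeClaims, which cannot bind unless a default StorageClass (or an explicitly configured one) is available. If kubectl get sc shows no StorageClass marked (default), you can mark an existing one as the default. A minimal sketch, assuming your cluster already has a StorageClass named standard:

$ kubectl patch storageclass standard \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'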

If you encounter any issues pulling the image, contact us at community@zilliz.com with details about the problem, and we’ll provide you with the necessary support.

Install Milvus Helm Chart

Before installing Milvus Helm Charts, you need to add the Milvus Helm repository.

helm repo add zilliztech https://zilliztech.github.io/milvus-helm/

The Milvus Helm Charts repo at https://github.com/milvus-io/milvus-helm has been archived. We now use the new repository at https://github.com/zilliztech/milvus-helm. The archived repo is still available for charts up to 4.0.31, but use the new repo for later releases.

Then fetch Milvus charts from the repository as follows:

$ helm repo update

You can always run this command to fetch the latest Milvus Helm charts.
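To confirm that the repository was added correctly and to see which chart versions are available, you can also run:

$ helm search repo zilliztech/milvus --versions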

Online install

1. Deploy a Milvus cluster

With the Milvus Helm repository added and updated, you can start Milvus on Kubernetes. This section guides you through deploying a Milvus cluster.

Need standalone deployment instead?

If you prefer to deploy Milvus in standalone mode (single node) for development or testing, use this command:

helm install my-release zilliztech/milvus \
  --set image.all.tag=v2.6.0 \
  --set cluster.enabled=false \
  --set pulsarv3.enabled=false \
  --set standalone.messageQueue=woodpecker \
  --set woodpecker.enabled=true \
  --set streaming.enabled=true

Note: Standalone mode uses Woodpecker as the default message queue and enables the Streaming Node component. For details, refer to the Architecture Overview and Use Woodpecker.

Deploy Milvus cluster:

The following command deploys a Milvus cluster with optimized settings for v2.6.0, using Woodpecker as the recommended message queue:

helm install my-release zilliztech/milvus \
  --set image.all.tag=v2.6.0 \
  --set pulsarv3.enabled=false \
  --set woodpecker.enabled=true \
  --set streaming.enabled=true \
  --set indexNode.enabled=false

What this command does:

  • Uses Woodpecker as the message queue (recommended for reduced maintenance)
  • Enables the new Streaming Node component for improved performance
  • Disables the legacy Index Node (functionality is now handled by Data Node)
  • Disables Pulsar to use Woodpecker instead

Architecture Changes in Milvus 2.6.x:

  • Message Queue: Woodpecker is now recommended (reduces infrastructure maintenance compared to Pulsar)
  • New Component: Streaming Node is introduced and enabled by default
  • Merged Components: Index Node and Data Node are combined into a single Data Node

For complete architecture details, refer to the Architecture Overview.

Alternative Message Queue Options:

If you prefer to use Pulsar (the traditional choice) instead of Woodpecker:

helm install my-release zilliztech/milvus \
  --set image.all.tag=v2.6.0 \
  --set streaming.enabled=true \
  --set indexNode.enabled=false

Next steps: The command above deploys Milvus with the recommended configuration for getting started. Before moving to production, keep the following notes in mind:

  • Release naming: Use only letters, numbers, and dashes (no dots allowed)
  • Kubernetes v1.25+: If you encounter PodDisruptionBudget issues, use this workaround:
    helm install my-release zilliztech/milvus \
      --set pulsar.bookkeeper.pdb.usePolicy=false \
      --set pulsar.broker.pdb.usePolicy=false \
      --set pulsar.proxy.pdb.usePolicy=false \
      --set pulsar.zookeeper.pdb.usePolicy=false
    

For more information, see Milvus Helm Chart and Helm documentation.
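If you want to preview exactly what a given install command will create before running it, Helm can simulate the release without touching the cluster. A quick sanity check, reusing the cluster deployment flags from above:

$ helm install my-release zilliztech/milvus \
    --set image.all.tag=v2.6.0 \
    --set pulsarv3.enabled=false \
    --set woodpecker.enabled=true \
    --set streaming.enabled=true \
    --set indexNode.enabled=false \
    --dry-run

The rendered manifests are printed to the terminal and nothing is installed.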

2. Check Milvus cluster status

Verify that your deployment is successful by checking the pod status:

kubectl get pods

Wait for all pods to show “Running” status. With the v2.6.0 configuration, you should see pods similar to the following (this sample listing comes from a Pulsar-based deployment; with the Woodpecker configuration above, the pulsar pods will not appear):

NAME                                             READY  STATUS   RESTARTS  AGE
my-release-etcd-0                                1/1    Running   0        3m23s
my-release-etcd-1                                1/1    Running   0        3m23s
my-release-etcd-2                                1/1    Running   0        3m23s
my-release-milvus-datanode-68cb87dcbd-4khpm      1/1    Running   0        3m23s
my-release-milvus-mixcoord-7fb9488465-dmbbj      1/1    Running   0        3m23s
my-release-milvus-proxy-6bd7f5587-ds2xv          1/1    Running   0        3m24s
my-release-milvus-querynode-5cd8fff495-k6gtg     1/1    Running   0        3m24s
my-release-milvus-streaming-node-xxxxxxxxx       1/1    Running   0        3m24s
my-release-minio-0                               1/1    Running   0        3m23s
my-release-minio-1                               1/1    Running   0        3m23s
my-release-minio-2                               1/1    Running   0        3m23s
my-release-minio-3                               1/1    Running   0        3m23s
my-release-pulsar-autorecovery-86f5dbdf77-lchpc  1/1    Running   0        3m24s
my-release-pulsar-bookkeeper-0                   1/1    Running   0        3m23s
my-release-pulsar-bookkeeper-1                   1/1    Running   0        98s
my-release-pulsar-broker-556ff89d4c-2m29m        1/1    Running   0        3m23s
my-release-pulsar-proxy-6fbd75db75-nhg4v         1/1    Running   0        3m23s
my-release-pulsar-zookeeper-0                    1/1    Running   0        3m23s
my-release-pulsar-zookeeper-metadata-98zbr       0/1   Completed  0        3m24s

Key components to verify:

  • Milvus components: mixcoord, datanode, querynode, proxy, streaming-node
  • Dependencies: etcd (metadata), minio (object storage), and the message queue (pulsar in the sample output above)
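If you prefer to block until the deployment settles instead of re-running kubectl get pods, you can use kubectl wait. A convenience sketch, assuming the chart applies the standard app.kubernetes.io/instance label to its pods (verify with kubectl get pods --show-labels):

$ kubectl wait pods -l app.kubernetes.io/instance=my-release \
    --for=condition=Ready --timeout=10m

Note that one-shot job pods (such as pulsar-zookeeper-metadata above) end in Completed rather than Ready, so adjust the selector if the wait times out on them.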

You can also access the Milvus WebUI at http://127.0.0.1:9091/webui/ once port forwarding is set up (see next step). For details, refer to Milvus WebUI.

3. Connect to Milvus

To connect to your Milvus cluster from outside Kubernetes, you need to set up port forwarding.

Set up port forwarding:

kubectl port-forward service/my-release-milvus 27017:19530

This command forwards your local port 27017 to Milvus port 19530. You should see:

Forwarding from 127.0.0.1:27017 -> 19530

Connection details:

  • Local connection: localhost:27017
  • Milvus default port: 19530

Options for port forwarding:

  • Auto-assign local port: Use :19530 instead of 27017:19530 to let kubectl choose an available port
  • Listen on all interfaces: Add --address 0.0.0.0 to allow connections from other machines:
    kubectl port-forward --address 0.0.0.0 service/my-release-milvus 27017:19530
    
  • Standalone deployment: If using standalone mode, the service name remains the same

Keep this terminal open while using Milvus. You can now connect to Milvus using any Milvus SDK at localhost:27017.
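Before pointing an SDK at the forwarded port, you can verify that it is reachable. A minimal check, assuming netcat (nc) is installed:

$ nc -z 127.0.0.1 27017 && echo "Milvus port is reachable"

A zero exit status confirms the tunnel is up; you can then connect with any Milvus SDK as described above.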

(Optional) Update Milvus configurations

You can update the configurations of your Milvus cluster by editing the values.yaml file and applying it again.

  1. Create a values.yaml file with the desired configurations.

    The following assumes that you want to enable proxy.http.

    extraConfigFiles:
      user.yaml: |+
        proxy:
          http:
            enabled: true
    

    For applicable configuration items, refer to System Configuration.

  2. Apply the values.yaml file.

    helm upgrade my-release zilliztech/milvus -f values.yaml
    

  3. Check the updated configurations.

    helm get values my-release
    

    The output should show the updated configurations.
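To double-check that the extra configuration actually reached the pods, you can read the mounted file directly. A sketch, assuming the chart mounts extraConfigFiles as user.yaml under /milvus/configs (the usual location in the official images; the path may vary by chart version):

$ kubectl exec deploy/my-release-milvus-proxy -- cat /milvus/configs/user.yaml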

Access Milvus WebUI

Milvus ships with a built-in GUI tool called Milvus WebUI that you can access through your browser. Milvus WebUI enhances system observability with a simple and intuitive interface. You can use it to observe the statistics and metrics of Milvus components and dependencies, check database and collection details, and list detailed Milvus configurations. For details, see Milvus WebUI.

To access the Milvus WebUI, port-forward the Milvus service to a local port.

$ kubectl port-forward --address 0.0.0.0 service/my-release-milvus 27018:9091
Forwarding from 0.0.0.0:27018 -> 9091

Now, you can access Milvus Web UI at http://localhost:27018.
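To confirm the WebUI is being served through the tunnel, a quick request against the /webui/ path mentioned above:

$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:27018/webui/

This prints the HTTP status code; 200 indicates the WebUI is up.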

Offline install

If you are in a network-restricted environment, follow the procedure in this section to start a Milvus cluster.

1. Get Milvus manifest

Run the following command to get the Milvus manifest.

$ helm template my-release zilliztech/milvus > milvus_manifest.yaml

The above command renders chart templates for a Milvus cluster and saves the output to a manifest file named milvus_manifest.yaml. Using this manifest, you can install a Milvus cluster with its components and dependencies in separate pods.

  • To install a Milvus instance in standalone mode, where all Milvus components are contained within a single pod, run helm template my-release --set cluster.enabled=false --set etcd.replicaCount=1 --set minio.mode=standalone --set pulsarv3.enabled=false zilliztech/milvus > milvus_manifest.yaml instead.
  • To change Milvus configurations, download the values.yaml template, add your desired settings to it, and run helm template -f values.yaml my-release zilliztech/milvus > milvus_manifest.yaml to render the manifest accordingly.

2. Download image-pulling script

The image-pulling script is written in Python. You should download the script along with its dependencies listed in the requirements.txt file.

$ wget https://raw.githubusercontent.com/milvus-io/milvus/master/deployments/offline/requirements.txt
$ wget https://raw.githubusercontent.com/milvus-io/milvus/master/deployments/offline/save_image.py

3. Pull and save images

Run the following command to pull and save the required images.

$ pip3 install -r requirements.txt
$ python3 save_image.py --manifest milvus_manifest.yaml

The images are pulled into a sub-folder named images in the current directory.
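To move the saved images into the network-restricted environment, archive the folder and copy it over whatever channel you have. A sketch using scp, where user@target-host is a placeholder for one of your hosts:

$ tar -czf milvus_images.tar.gz images/
$ scp milvus_images.tar.gz user@target-host:/tmp/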

4. Load images

You can now load the images to the hosts in the network-restricted environment as follows:

$ for image in $(find . -type f -name "*.tar.gz") ; do gunzip -c $image | docker load; done
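The loop above assumes Docker is the runtime on your hosts. If your Kubernetes nodes use containerd directly, as most recent distributions do, a hedged equivalent using ctr (the k8s.io namespace is where the kubelet looks up images):

$ for image in $(find . -type f -name "*.tar.gz") ; do gunzip -c $image | ctr -n k8s.io images import - ; done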

5. Deploy Milvus

$ kubectl apply -f milvus_manifest.yaml

From here, you can follow steps 2 and 3 of the online install to check the cluster status and forward a local port to Milvus.

Upgrade running Milvus cluster

Run the following command to upgrade your running Milvus cluster to the latest version:

$ helm repo update
$ helm upgrade my-release zilliztech/milvus --reset-then-reuse-values
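Without a version constraint, helm upgrade moves the release to the newest chart in the repository. To upgrade to a specific chart version instead, pass --version with a version listed by helm search repo zilliztech/milvus --versions (4.x.y below is a placeholder):

$ helm upgrade my-release zilliztech/milvus --version 4.x.y --reset-then-reuse-values

Note that --reset-then-reuse-values requires a recent Helm release.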

Uninstall Milvus

Run the following command to uninstall Milvus.

$ helm uninstall my-release
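helm uninstall removes the release's workloads but typically leaves behind the PersistentVolumeClaims created by the dependency StatefulSets (etcd, MinIO, Pulsar), so the data survives a reinstall. To reclaim the storage as well, list the claims and delete them explicitly; a sketch, assuming no other release shares the my-release name prefix:

$ kubectl get pvc | grep my-release
$ kubectl get pvc --no-headers | grep my-release | awk '{print $1}' | xargs kubectl delete pvc

Deleting PVCs permanently removes your data, so skip this step if you plan to reinstall and keep it.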

What’s next

Having installed Milvus in Kubernetes, you can: