
Run Milvus in Kubernetes with Milvus Operator

This page illustrates how to start a Milvus instance in Kubernetes using Milvus Operator.

Overview

Milvus Operator is a solution that helps you deploy and manage a complete Milvus service stack to target Kubernetes (K8s) clusters. The stack includes all Milvus components and relevant dependencies such as etcd, Pulsar, and MinIO.

Prerequisites

  • Create a K8s cluster.

  • Install a StorageClass. You can check the installed StorageClass as follows.

    $ kubectl get sc
    
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false
    
  • Check the hardware and software requirements before installation.

  • Before installing Milvus, it is recommended to use the Milvus Sizing Tool to estimate the hardware requirements based on your data size. This helps ensure optimal performance and resource allocation for your Milvus installation.

If you encounter any issues pulling the image, contact us at community@zilliz.com with details about the problem, and we'll provide you with the necessary support.

Install Milvus Operator

Milvus Operator defines a Milvus cluster custom resource on top of Kubernetes custom resources. Once the custom resource is defined, you can use the K8s API in a declarative way and manage the Milvus deployment stack to ensure its scalability and high availability.

1. Install cert-manager

Milvus Operator uses cert-manager to provide a certificate for the webhook server.

Run the following command to install cert-manager.

$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml

You will see output similar to the following after the installation process ends.

customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
...
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

You can check whether the cert-manager pods are running as follows:

$ kubectl get pods -n cert-manager

NAME READY STATUS RESTARTS AGE
cert-manager-848f547974-gccz8 1/1 Running 0 70s
cert-manager-cainjector-54f4cc6b5-dpj84 1/1 Running 0 70s
cert-manager-webhook-7c9588c76-tqncn 1/1 Running 0 70s

2. Install Milvus Operator

You can install Milvus Operator in either of the following ways:

Install with Helm

Run the following command to install Milvus Operator with Helm.

$ helm install milvus-operator \
  -n milvus-operator --create-namespace \
  --wait --wait-for-jobs \
  https://github.com/zilliztech/milvus-operator/releases/download/v1.0.1/milvus-operator-1.0.1.tgz

You will see output similar to the following after the installation process ends.

NAME: milvus-operator
LAST DEPLOYED: Thu Jul  7 13:18:40 2022
NAMESPACE: milvus-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Milvus Operator Is Starting, use `kubectl get -n milvus-operator deploy/milvus-operator` to check if its successfully installed
If Operator not started successfully, check the checker's log with `kubectl -n milvus-operator logs job/milvus-operator-checker`
Full Installation doc can be found in https://github.com/zilliztech/milvus-operator/blob/main/docs/installation/installation.md
Quick start with `kubectl apply -f https://raw.githubusercontent.com/zilliztech/milvus-operator/main/config/samples/milvus_minimum.yaml`
More samples can be found in https://github.com/zilliztech/milvus-operator/tree/main/config/samples
CRD Documentation can be found in https://github.com/zilliztech/milvus-operator/tree/main/docs/CRD

Install with kubectl

Run the following command to install Milvus Operator with kubectl.

$ kubectl apply -f https://raw.githubusercontent.com/zilliztech/milvus-operator/main/deploy/manifests/deployment.yaml

You will see output similar to the following after the installation process ends.

namespace/milvus-operator created
customresourcedefinition.apiextensions.k8s.io/milvusclusters.milvus.io created
serviceaccount/milvus-operator-controller-manager created
role.rbac.authorization.k8s.io/milvus-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/milvus-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/milvus-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/milvus-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/milvus-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/milvus-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/milvus-operator-proxy-rolebinding created
configmap/milvus-operator-manager-config created
service/milvus-operator-controller-manager-metrics-service created
service/milvus-operator-webhook-service created
deployment.apps/milvus-operator-controller-manager created
certificate.cert-manager.io/milvus-operator-serving-cert created
issuer.cert-manager.io/milvus-operator-selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/milvus-operator-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/milvus-operator-validating-webhook-configuration created

You can check whether the Milvus Operator pod is running as follows:

$ kubectl get pods -n milvus-operator

NAME READY STATUS RESTARTS AGE
milvus-operator-5fd77b87dc-msrk4 1/1 Running 0 46s

Deploy Milvus

1. Deploy a Milvus cluster

Once the Milvus Operator pod is running, you can deploy a Milvus cluster as follows.

$ kubectl apply -f https://raw.githubusercontent.com/zilliztech/milvus-operator/main/config/samples/milvus_cluster_default.yaml

The command above deploys a Milvus cluster with its components and dependencies in separate pods using the default configuration. To customize these settings, we recommend you use the Milvus Sizing Tool to adjust the configuration based on your actual data size and then download the corresponding YAML file. To learn more about the configuration parameters, see the Milvus System Configuration Checklist.

  • The name of the release should only contain letters, numbers, and dashes. Dots are not allowed in the release name.
  • You can also deploy a Milvus instance in standalone mode, where all its components are contained within a single pod. To do so, change the configuration file URL in the above command to https://raw.githubusercontent.com/zilliztech/milvus-operator/main/config/samples/milvus_default.yaml
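
As a rough sketch of what a customized manifest might look like, the example below overrides the proxy replica count and the default component resources. The field names follow the milvus-operator sample manifests (which currently use apiVersion milvus.io/v1beta1); the replica and resource values are illustrative assumptions, not sizing recommendations.

```yaml
# Sketch of a customized Milvus custom resource.
# Values below are illustrative assumptions, not sizing recommendations.
apiVersion: milvus.io/v1beta1
kind: Milvus
metadata:
  name: my-release
spec:
  mode: cluster          # use "standalone" for a single-pod deployment
  components:
    proxy:
      replicas: 2        # scale out the proxy
    resources:           # defaults applied to the Milvus components
      limits:
        cpu: "4"
        memory: 8Gi
```

Apply the file with kubectl apply -f, and the Operator reconciles the running deployment toward the new spec.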

2. Check Milvus cluster status

Run the following command to check the status of the Milvus cluster.

$ kubectl get milvus my-release -o yaml

Once your Milvus cluster is ready, the output of the above command should be similar to the following. If the status.status field stays Unhealthy, your Milvus cluster is still under creation.

apiVersion: milvus.io/v1alpha1
kind: Milvus
metadata:
...
status:
  conditions:
  - lastTransitionTime: "2021-11-02T05:59:41Z"
    reason: StorageReady
    status: "True"
    type: StorageReady
  - lastTransitionTime: "2021-11-02T06:06:23Z"
    message: Pulsar is ready
    reason: PulsarReady
    status: "True"
    type: PulsarReady
  - lastTransitionTime: "2021-11-02T05:59:41Z"
    message: Etcd endpoints is healthy
    reason: EtcdReady
    status: "True"
    type: EtcdReady
  - lastTransitionTime: "2021-11-02T06:12:36Z"
    message: All Milvus components are healthy
    reason: MilvusClusterHealthy
    status: "True"
    type: MilvusReady
  endpoint: my-release-milvus.default:19530
  status: Healthy

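If you prefer to wait in a script rather than re-running the command by hand, you can poll the status.status field shown above. Below is a minimal sketch: the kubectl query is left as a comment and a placeholder value is used so the snippet runs standalone; in a real cluster, uncomment the query.

```shell
#!/bin/sh
# Sketch: decide readiness from the status.status field of the Milvus CR.
# In a real cluster, fetch the value with:
#   status=$(kubectl get milvus my-release -o jsonpath='{.status.status}')
status="Healthy"   # placeholder so the sketch runs standalone

if [ "$status" = "Healthy" ]; then
  echo "Milvus is ready"
else
  echo "Milvus is still creating; check again later"
fi
```

You can wrap the check in a loop with sleep to block until the cluster becomes Healthy.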
Milvus Operator creates Milvus dependencies, such as etcd, Pulsar, and MinIO, and then creates Milvus components, such as the proxy, coordinators, and nodes.

Once your Milvus cluster is ready, the status of all pods in the Milvus cluster should be similar to the following.

$ kubectl get pods

NAME READY STATUS RESTARTS AGE
my-release-etcd-0 1/1 Running 0 14m
my-release-etcd-1 1/1 Running 0 14m
my-release-etcd-2 1/1 Running 0 14m
my-release-milvus-datanode-5c686bd65-wxtmf 1/1 Running 0 6m
my-release-milvus-indexnode-5b9787b54-xclbx 1/1 Running 0 6m
my-release-milvus-proxy-84f67cdb7f-pg6wf 1/1 Running 0 6m
my-release-milvus-querynode-5bcb59f6-nhqqw 1/1 Running 0 6m
my-release-milvus-mixcoord-fdcccfc84-9964g 1/1 Running 0 6m
my-release-minio-0 1/1 Running 0 14m
my-release-minio-1 1/1 Running 0 14m
my-release-minio-2 1/1 Running 0 14m
my-release-minio-3 1/1 Running 0 14m
my-release-pulsar-bookie-0 1/1 Running 0 14m
my-release-pulsar-bookie-1 1/1 Running 0 14m
my-release-pulsar-bookie-init-h6tfz 0/1 Completed 0 14m
my-release-pulsar-broker-0 1/1 Running 0 14m
my-release-pulsar-broker-1 1/1 Running 0 14m
my-release-pulsar-proxy-0 1/1 Running 0 14m
my-release-pulsar-proxy-1 1/1 Running 0 14m
my-release-pulsar-pulsar-init-d2t56 0/1 Completed 0 14m
my-release-pulsar-recovery-0 1/1 Running 0 14m
my-release-pulsar-toolset-0 1/1 Running 0 14m
my-release-pulsar-zookeeper-0 1/1 Running 0 14m
my-release-pulsar-zookeeper-1 1/1 Running 0 13m
my-release-pulsar-zookeeper-2 1/1 Running 0 13m

3. Forward a local port to Milvus

Run the following command to get the port at which your Milvus cluster serves.

$ kubectl get pod my-release-milvus-proxy-84f67cdb7f-pg6wf --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
19530

The output shows that the Milvus instance serves at the default port 19530.

If you have deployed Milvus in standalone mode, change the pod name from my-release-milvus-proxy-xxxxxxxxxx-xxxxx to my-release-milvus-xxxxxxxxxx-xxxxx.

Then, run the following command to forward a local port to the port at which Milvus serves.

$ kubectl port-forward service/my-release-milvus 27017:19530
Forwarding from 127.0.0.1:27017 -> 19530

Optionally, you can use :19530 instead of 27017:19530 in the above command to let kubectl allocate a local port for you so that you don't have to manage port conflicts.

By default, kubectl's port-forwarding only listens on localhost. Use the --address flag if you want Milvus to listen on the selected or all IP addresses. The following command makes port-forward listen on all IP addresses on the host machine.

$ kubectl port-forward --address 0.0.0.0 service/my-release-milvus 27017:19530
Forwarding from 0.0.0.0:27017 -> 19530
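
Before pointing a client SDK at the forwarded port, you can sanity-check that the tunnel is up. The sketch below assumes the port-forward from the earlier command is running on 27017 and that nc (netcat) is installed; if either assumption does not hold, it simply reports that the port is not reachable.

```shell
# Sketch: check whether the forwarded Milvus port is reachable locally.
# Assumes `kubectl port-forward service/my-release-milvus 27017:19530`
# is running in another terminal and that `nc` (netcat) is installed.
if nc -z 127.0.0.1 27017 2>/dev/null; then
  reachable=yes
  echo "Milvus reachable on 127.0.0.1:27017"
else
  reachable=no
  echo "port 27017 not reachable; is the port-forward running?"
fi
```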

Uninstall Milvus

Run the following command to uninstall the Milvus cluster.

$ kubectl delete milvus my-release
  • Deleting the Milvus cluster with the default configuration does not delete dependencies such as etcd, Pulsar, and MinIO. Therefore, the next time you install the same Milvus cluster instance, these dependencies will be used again.
  • To delete the dependencies and persistent volume claims (PVCs) along with the Milvus cluster, see the configuration file.
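
As a sketch of what that configuration looks like, the milvus-operator CRD exposes per-dependency deletion settings under spec.dependencies. The field names below (inCluster.deletionPolicy, inCluster.pvcDeletion) are taken from the milvus-operator documentation; verify them against your Operator version before relying on them.

```yaml
# Sketch: ask the Operator to remove in-cluster dependencies and their PVCs
# when the Milvus CR is deleted. Verify field names against your Operator version.
apiVersion: milvus.io/v1beta1
kind: Milvus
metadata:
  name: my-release
spec:
  dependencies:
    etcd:
      inCluster:
        deletionPolicy: Delete
        pvcDeletion: true
    pulsar:
      inCluster:
        deletionPolicy: Delete
        pvcDeletion: true
    storage:        # MinIO
      inCluster:
        deletionPolicy: Delete
        pvcDeletion: true
```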

Uninstall Milvus Operator

There are also two ways to uninstall Milvus Operator.

Uninstall with Helm

$ helm -n milvus-operator uninstall milvus-operator

Uninstall with kubectl

$ kubectl delete -f https://raw.githubusercontent.com/zilliztech/milvus-operator/v1.0.1/deploy/manifests/deployment.yaml

What's next

Having installed Milvus in Kubernetes, you can:
