
Release Notes

Find out what’s new in Milvus! This page summarizes new features, improvements, known issues, and bug fixes in each release. You can find the release notes for each released version after v2.2.0 in this section. We suggest that you regularly visit this page to learn about updates.


v2.2.16

Release date: Nov 27, 2023


Milvus 2.2.16 represents a minor patch release following Milvus 2.2.15. This update primarily concentrates on bolstering system stability, enhancing fault recovery speed, and addressing various identified issues. Notably, the Knowhere version has been updated in this release, leading to quicker loading of DiskAnn indexes.

For an optimal experience, we highly recommend that all users currently on the 2.2.x series upgrade to this version before considering a move to 2.3.

Bug Fixes

  • Corrected the docker-compose etcd health check command (#27980).
  • Completed the cleanup of remaining meta information after dropping a collection (#28500).
  • Rectified the issue causing panic during the execution of stop logic in query coordination (#28543).
  • Resolved the problem of the cmux server failing to gracefully shut down (#28384).
  • Eliminated the reference counting logic related to the query shard service to prevent potential leaks (#28547).
  • Removed the logic of polling collection information from RootCoord during the restart process of QueryCoord to prevent startup failures (#28607).
  • Fixed parsing errors in expressions containing mixed single and double quotations (#28417).
  • Addressed DataNode panic during flushing delete buffer (#28710).


Improvements

  • Updated Knowhere to version 1.3.20 to accelerate the loading process (#28658).
  • Made etcdkv request timeout configurable (#28664).
  • Increased the timeout duration for QueryCoord to probe the query nodes via gRPC to 2 seconds (#28647).
  • Bumped milvus-proto/go-api to version 2.2.16 (#28708).


v2.2.15

Release date: Nov 14, 2023


Milvus 2.2.15, a bugfix release in the Milvus 2.2.x series, introduces significant improvements and bug fixes. This version enhances the bulkinsert functionality to support partition keys and the new JSON list format. It also substantially improves the rolling-upgrade process to 2.3.3 and resolves many critical issues. We strongly recommend that all Milvus 2.2.x users upgrade to this version before moving to 2.3.

Incompatible Update

  • Removed MySQL metastore support (#26634).


New Features

  • Enabled bulkinsert of binlog data with partitionkey (#27336).
  • Added support for bulkinsert with pure list JSON (#28127).


Improvements

  • Added -g flag for compiling with debug information (#26698).
  • Implemented a workaround to fix ChannelManager holding mutex for too long (#26870, #26874).
  • Reduced the number of goroutines resulting from GetIndexInfos (#27547).
  • Eliminated the recollection of segment stats during datacoord startup (#27562).
  • Removed flush from the DDL queue (#27691).
  • Decreased the write lock scope in channel manager (#27824).
  • Reduced the number of parallel tasks for compaction (#27900).
  • Refined RPC call in unwatch drop channel (#27884).
  • Enhanced bulkinsert to read varchar in batches (#26198).
  • Optimized Milvus rolling upgrade process, including:
    • Refined standalone components' stop order (#26742, #26778).
    • Improved RPC client retry mechanism (#26797).
    • Handled errors from new RootCoord for DescribeCollection (#27029).
    • Added a stop hook for session cleanup (#27565).
    • Accelerated shard leader cache update frequency (#27641).
    • Disabled retryable error logic in search/query operations (#27661).
    • Supported signal reception from parent process (#27755).
    • Checked data sync service number during graceful stop (#27789).
    • Fixed query shard service leak (#27848).
    • Refined Proxy stop process (#27910).
    • Fixed deletion of session key with prefix (#28261).
    • Addressed unretryable errors (#27955).
    • Refined stop order for components (#28017).
    • Added timeout for graceful stop (#27326, #28226).
    • Implemented fast fail when querynode is not ready (#28204).

Bug Fixes

  • Resolved CollectionNotFound error during describe rg (#26569).
  • Fixed issue where timeout tasks never released the queue (#26594).
  • Refined signal handler for the entire Milvus role lifetime (#26642, #26702).
  • Addressed panic caused by non-nil component pointer to component interface (#27079).
  • Enhanced garbage collector to fetch meta after listing from storage (#27205).
  • Fixed Kafka consumer connection leak (#27223).
  • Reduced RPC size for GetRecoveryInfoV2 (#27484).
  • Resolved concurrent parsing expression issues with strings (#26721, #27539).
  • Fixed query shard inUse leak (#27765).
  • Corrected rootPath issue when querynode cleaned local directory (#28314).
  • Ensured compatibility with sync target version (#28290).
  • Fixed release of query shard when releasing growing segment (#28040).
  • Addressed slow response in flushManager.isFull (#28141, #28149).
  • Implemented check for length before comparing strings (#28111).
  • Resolved panic during close delete flow graph (#28202).
  • Fixed bulkinsert bug where segments were compacted after import (#28200).
  • Solved data node panic during save binlog path (#28243).
  • Updated collection target after observer start (#27962).


v2.2.14

Release date: Aug 23, 2023


Milvus 2.2.14 is a minor bug-fix release that mainly addresses cluster unavailability issues during rolling upgrades. With this new release, Milvus deployed with Kubernetes operator can be upgraded with almost zero downtime.

Bug Fixes

This update addresses the following issues:

  • Fixed the issues that caused rolling upgrades to take longer than expected:
    • Changed the default gracefulStopTimeout and now only displays a warning when there is a failure to refresh the policy cache. (#26443)
    • Refined gRPC retries. (#26464)
    • Checked and reset the gRPC client server ID if it mismatches with the session. (#26473)
    • Added a server ID validation interceptor. (#26395) (#26424)
    • Improved the performance of the server ID interceptor validation. (#26468) (#26496)
  • Fixed the expression incompatibility issue between the parser and the executor. (#26493) (#26495)
  • Fixed failures in serializing string index when its size exceeds 2 GB. (#26393)
  • Fixed an issue where a large number of duplicate collections were re-dropped during restore. (#26030)
  • Fixed the issue where the leader view returns a loading shard cluster. (#26263)
  • Fixed an issue where the liveness check in SessionUtil could block watching forever. (#26250)
  • Fixed issues related to logical expressions. (#26513) (#26515)
  • Fixed issues related to continuous restart of DataNode/DataCoord. (#26470) (#26506)
  • Fixed issues related to being stuck in channel checkpoint. (#26544)
  • Fixed an issue so that Milvus considers the balance task with a released source segment as stale. (#26498)


Improvements

  • Refined error messages for fields that do not exist (#26331).
  • Fixed unclear error messages of the proto parser (#26365) (#26366).
  • Prohibited setting a partition name for a collection that already has a partition key (#26128).
  • Added disk metric information (#25678).
  • Fixed the CollectionNotExists error during vector search and retrieval (#26532).
  • Added a default MALLOC_CONF environment variable to release memory after dropping a collection (#26353).
  • Made pulsar request timeout configurable (#26526).


v2.2.13

Release date: Aug 9, 2023


Milvus 2.2.13 is a minor bugfix release that fixes several performance degrading issues, including excessive disk usage when TTL is enabled, and the failure to import dynamic fields via bulk load. In addition, Milvus 2.2.13 also extends object storage support beyond S3 and MinIO.


Bug Fixes

  • Resolved a crash bug in bulk-insert for dynamic fields. (#25980)
  • Reduced excessive MinIO storage usage by saving metadata (timestampFrom, timestampTo) during compaction. (#26210)
  • Corrected lock usage in DataCoord compaction. (#26032) (#26042)
  • Incorporated session util fixes through cherry-picking. (#26101)
  • Removed user-role mapping information along with a user. (#25988) (#26048)
  • Improved the RBAC cache update process. (#26150) (#26151)
  • Fixed MsgPack from mq msgstream ts not being set. (#25924)
  • Fixed the issue of sc.distribution being nil. (#25904)
  • Fixed incorrect results while retrieving data of int8. (#26171)


Improvements

  • Upgraded MinIO-go and added region and virtual-host configuration for the segcore chunk manager (#25811)
  • Reduced log volumes of DC&DN (#26060) (#26094)
  • Added a new configuration item: proxy.http.port (#25923)
  • Forced the use of DNS for Aliyun OSS because of an SDK bug (#26176)
  • Fixed indexnode and datanode num metric (#25920)
  • Disabled deny writing when the growing segment size exceeds the watermark (#26163) (#26208)
  • Fixed the performance degradation in version 2.2.12 by adding back the segment CGO pool and separating sq/dm operations (#26035).


v2.2.12

Release date: 24 July, 2023


This minor release is the last one in Milvus 2.2.x that comes with new features. Future minor releases of Milvus 2.2.x will focus on essential bug fixes.

New features in this release include:

  • A new set of RESTful APIs that simplify user-side operations.

    Note that, for now, you must set a token even if authentication is disabled in Milvus. For details, see #25873.

  • Improved ability to retrieve vectors during ANN searches, along with better vector-retrieving performance during queries. Users can now set the vector field as one of the output fields in ANN searches and queries against HNSW-, DiskANN-, or IVF-FLAT-indexed collections.

  • Better search performance with reduced overhead, even when dealing with large top-K values, improved write performance in partition-key-enabled or multi-partition scenarios, and improved CPU utilization on large machines.
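To make the vector-retrieval improvement above concrete, here is a hedged sketch of the arguments one might pass to a PyMilvus search call; the collection schema, field names, and parameter values are illustrative assumptions, not taken from this document.

```python
# Hypothetical field names; with PyMilvus, arguments like these would be passed
# to Collection.search() on an HNSW-, DiskANN-, or IVF-FLAT-indexed collection.
search_kwargs = {
    "data": [[0.1, 0.2, 0.3, 0.4]],                       # query vectors
    "anns_field": "embedding",                            # indexed vector field
    "param": {"metric_type": "L2", "params": {"ef": 64}},
    "limit": 10,
    # Listing the vector field itself among the output fields returns the
    # stored vectors with each hit, which is the capability added here:
    "output_fields": ["title", "embedding"],
}
print(sorted(search_kwargs["output_fields"]))  # ['embedding', 'title']
```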

Additionally, a large number of issues have been fixed, including excessive disk usage, stuck compaction, infrequent data deletions, object storage access failures using AWS S3 SDK, and bulk-insertion failures.
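The token requirement for the new RESTful API noted above can be sketched with the standard library; the endpoint path and the `root:Milvus` credential pair are illustrative assumptions, not values taken from this document.

```python
import json
import urllib.request

# Assumed values for illustration: the RESTful API listens on the same port
# as gRPC, and the token is a "username:password" pair.
url = "http://localhost:19530/v1/vector/search"
token = "root:Milvus"

payload = json.dumps({
    "collectionName": "demo",
    "vector": [0.1, 0.2, 0.3],
    "limit": 5,
}).encode("utf-8")

req = urllib.request.Request(url, data=payload, method="POST")
# A token must be supplied even when authentication is disabled:
req.add_header("Authorization", f"Bearer {token}")
req.add_header("Content-Type", "application/json")
# urllib.request.urlopen(req) would then send the request to a running Milvus.
```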

New Features

  • Added support for a high-level RESTful API that listens on the same port as gRPC (#24761).
  • Added support for getting vectors by IDs (#23450) (#25090).
  • Added support for json_contains (#25724).
  • Enabled bulk-insert to support partition keys (#24995).
  • Enabled the chunk manager to use GCS and OSS with an access key (#25241).
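To make the new `json_contains` expression concrete, here is a plain-Python sketch of its semantics: it matches entities whose JSON array field contains the given value. The rows and field names are made up for illustration.

```python
import json

# Toy rows standing in for entities with a JSON field named "tags".
rows = [
    {"id": 1, "tags": json.loads('["sci-fi", "action"]')},
    {"id": 2, "tags": json.loads('["drama"]')},
    {"id": 3, "tags": json.loads('["action", "comedy"]')},
]

def json_contains(field_value, target):
    """Mimics a filter like json_contains(tags, "action"): true when the
    JSON array holds the target value."""
    return isinstance(field_value, list) and target in field_value

hits = [r["id"] for r in rows if json_contains(r["tags"], "action")]
print(hits)  # [1, 3]
```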


Bug Fixes

  • Fixed the issue where Milvus used too much extra MinIO/local disk space:
    • Added constraint for compaction based on indexed segments (#25470)
    • (FastCompact) Added function to check output fields and modify cases (#25510)
  • Fixed Delete related issues
    • Fixed delete messages being unsorted (#25757)
    • Fixed deleted records being re-applied (#24858)
    • Fixed duplicate deletions making deleted records visible (#25369)
    • Fixed deleted data being returned by search/query (#25513)
  • Fixed Blob storage-related issues
    • Added error code to Minio chunkmanager exception (#25153) (#25181)
    • Fixed program crash caused by incorrect use of noexcept modifier (#25194)
    • Fixed GetObject returning a null value on macOS (#24959) (#25002) (#25107)
    • Reverted aws-sdk-cpp version (#25305)
  • Fixed etcd failure causing Milvus to crash (#25463)(#25111)
  • Fixed Bulk-load issues
    • Checked whether a segment exists before checking the import task state (#25809)
    • Added a timeout config for bulk-insert requests (#25758)
  • Fixed indexnode memory leakage when update index fails (#25460) (#25478)
  • Fixed Kafka panic when sending a message to a closed channel (#25116)
  • Fixed insert returning success but not storing dynamic fields (#25494)
  • Refined sync_cp_lag_too_behind_policy to avoid submitting sync tasks too frequently (#25441) (#25442)
  • Fixed bug of missing JSON type when sorting retrieve results (#25412)
  • Fixed possible deadlock when syncing segments to datanode (#25196) (#25211)
  • Added write lock for lru_cache.Get (#25010)
  • Fixed expression on integer overflow case (#25320, #25372)
  • Fixed data race in waitgroup for graceful stop (#25224)
  • Fixed drop index with large txn exceeding etcd limit (#25623)
  • Fixed incorrect IP distance (#25527) (#25528)
  • Prevented exclusive consumer exception in Pulsar (#25376) (#25378)
  • Made query set guarantee ts based on default consistency level (#25579)
  • Fixed rootcoord restoration missing gcConfirmStep (#25280)
  • Fixed missing db parameter (#25759)


Improvements

  • Improved monitoring metrics:
    • Fixed DataCoord consuming DataNode tt metrics (#25761)
    • Fixed monitoring metrics (#25549) (#25659)
  • Reduced Standalone CPU usage:
    • Used zstd compression after level 2 for RocksMQ (#25238)
  • Made compaction RPC timeout and parallel maximum configurable (#25654)
  • Accelerated compiling third-party libraries for AWS and Google SDK (#25408)
  • Removed DataNode time-tick MQ and used RPC reporting instead (#24011)
  • Changed default log level to info (#25278)
  • Added refunding tokens to limiter (#25660)
  • Wrote the cache file to the cacheStorage.rootpath directory (#25714)
  • Fixed inconsistency between catalog and in-memory segments meta (#25799) (#25801)
  • Added PK index for string data type (#25402)
  • Improved write performance with partition key; removed syncing segmentLastExpire on every assignment (#25271) (#25316)
  • Fixed issues to avoid unnecessary reduce phase during search (#25166) (#25192)
  • Updated default nb to 2000 (#25169)
  • Added minCPUParallelTaskNumRatio config to enable better parallelism when estimated CPU usage of a single task is higher than total CPU usage (#25772)
  • Fixed copying segment offsets twice (#25729) (#25730)
  • Added limits on the number of go routines (#25171)


v2.2.11

Release date: 29 June, 2023


We're happy to share that Milvus 2.2.11 is now available! This update includes significant bug fixes, addressing occasional system crashes and ensuring a more stable experience. We've also implemented various optimizations related to monitoring, logging, rate limiting, and interception of cross-cluster requests.


Bug Fixes

  • Fixed occasional QueryNode panic during load (#24902)
  • Fixed panic in the session module caused by uninitialized atomic variable (#25005)
  • Rectified the issue of read request throttling caused by counting the queue length twice. (#24440)
  • Fixed Flush hang after SyncSegments timeout. (#24692)
  • Fixed miss loading the same name collection during the recovery stage. (#24941)
  • Added a format check for Authorization Tokens. (#25033)
  • Fixed the issue of RemoteChunkManager not being thread-safe. (#25069)
  • Optimized internal gRPC state handling by allowing retries based on different error types. (#25042)
  • Rectified the problem of erroneously excessive logging of error messages related to the stats log. (#25094)
  • Fixed compaction stuck due to channel rebalance. (#25098)
  • Fixed the issue of coroutines staying blocked after the consumer is closed. (#25123)
  • Avoided indefinite blocking of keepAliveOnce by a timeout parameter. (#25111)
  • Fixed crash caused by incorrect use of noexcept modifier (#25194)
  • Fixed panic caused by sending the message to closed channel (#25116)
  • Optimized length verification when inserting data of VarChar type (#25183)
  • Fixed GetObject returning a null value on macOS (#25107)


Improvements

  • Optimized the panic code logic of key components. (#24859)
  • Bumped semver to development v2.2.11. (#24938) (#25075)
  • Added a cluster validation interceptor to prevent cross-cluster routing issues. (#25030)
  • Added compaction logs for better issue tracking. (#24975)
  • Added a log for confirming that GC finished in RootCoord. (#24946)
  • Prioritized checking the upper limit of collection numbers in the database. (#24951)
  • Upgraded the dependent milvus-proto/go-api to version 2.2.10. (#24885)
  • Closed the Kafka internal consumer properly. (#24997) (#25049) (#25071)
  • Restricted the concurrency of sync tasks for each flowgraph in DataNode. (#25035)
  • Updated the MinIO version. (#24897)
  • Added an error code to the MinIO chunk manager exception. (#25181)
  • Utilized a singleton coroutine pool to reduce the number of employed coroutines. (#25171)
  • Optimized disk usage for RocksMQ by enabling zstd compression starting from level 2. (#25231) (#25238)


v2.2.10

Release date: 14 June, 2023


We are excited to announce the release of Milvus 2.2.10! This update includes important bug fixes, specifically addressing occasional system crashes, ensuring a more stable experience. We have also made significant improvements to loading and indexing speeds, resulting in smoother operations. A significant optimization in this release is the reduction of memory usage in data nodes, made possible through the integration of the Go payload writer instead of the old CGO implementation. Furthermore, we have expanded our Role-Based Access Control (RBAC) capabilities, extending these protections to the database and 'Flush All' API. Enjoy the enhanced security and performance of Milvus 2.2.10!

New Features

  • Added role-based access control (RBAC) for the new interface:
    1. Added RBAC for FlushAll (#24751) (#24755)
    2. Added RBAC for Database API (#24653)

Bug Fixes

  • Fixed random crash introduced by AWS S3 SDK:
    1. Used SA_ONSTACK flag for SIGPIPE handler (#24661)
    2. Added sa_mask for SIGPIPE handler (#24824)
  • Fixed "show loaded collections" (#24628) (#24629)
  • Fixed creating a collection not being idempotent (#24721) (#24722)
  • Fixed DB name being empty in the "describe collection" response (#24603)
  • Fixed deleted data still being visible (#24796)


Improvements

  • Replaced the CGO payload writer with a Go payload writer to reduce memory usage (#24656)
  • Enabled max result window limit (#24768)
  • Removed unused iterator initialization (#24758)
  • Enabled metric type checks before search (#24652) (#24716)
  • Used go-api/v2 for milvus-proto (#24723)
  • Optimized the penalty mechanism for exceeding rate limits (#24624)
  • Allowed default params in HNSW & DISKANN (#24807)


  • Fixed build index performance downgrade (#24651)


v2.2.9

Release date: 2 June, 2023


Milvus 2.2.9 has added JSON support, allowing for more flexible schemas within collections through dynamic schemas. The search efficiency has been improved through partition keys, which enable data separation for different data categories, such as multiple users, in a single collection. Additionally, database support has been integrated into Role-Based Access Control (RBAC), further fortifying multi-tenancy management and security. Support has also been extended to Alibaba Cloud OSS, and connection management has been refined, resulting in an improved user experience.

As always, this release includes bug fixes, enhancements, and performance improvements. Notably, disk usage has been significantly reduced, and performance has been improved, particularly for filtered searches.

We hope you enjoy the latest release!

New Features

  • JSON support

    • Introduced JSON data type (#23839).
    • Added support for expressions with JSON fields (#23804, #24016).
    • Enabled JSON support for bulk insert operations (#24227).
    • Enhanced performance of filters using JSON fields (#24268, #24282).
  • Dynamic schema

  • Partition key

    • Introduced partition key (#23994).
    • Added support for imports when partition key is enabled and backup is present (#24454).
    • Added unit tests for partition key (#24167).
    • Resolved issue with bulk insert not supporting partition key (#24328).
  • Database support in RBAC

    • Added database support in Role-Based Access Control (RBAC) (#23742).
    • Resolved non-existent database error for FlushAll function (#24222).
    • Implemented default database value for RBAC requests (#24307).
    • Ensured backward compatibility with empty database name (#24317).
  • Connection management

    • Implemented the connect API to manage connections (#24224) (#24293)
    • Implemented a check for whether a database exists when Connect is called (#24399)
  • Alibaba Cloud OSS support

    • Added support for Aliyun OSS in chunk manager (#22663, #22842, #23956).
    • Enabled Alibaba Cloud OSS as object storage using access key (AK) or Identity and Access Management (IAM) (#23949).
  • Additional features

    • Implemented AutoIndex (#24387, #24443).
    • Added configurable policy for query node and user-level schedule policy (#23718).
    • Implemented rate limit based on growing segment size (#24157).
    • Added support for single quotes within string expressions (#24386, #24406).
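The partition-key feature above works by routing each entity to an internal partition derived from its key value, so data for different categories (for example, different users) stays separated inside one collection. Below is a minimal sketch of that routing idea; the hash function and partition count are illustrative assumptions, not Milvus internals.

```python
import hashlib

NUM_PARTITIONS = 16  # illustrative; Milvus manages the real partition count

def route(partition_key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a partition-key value (e.g. a user ID) to an internal partition.
    A stable hash keeps all entities sharing a key in the same partition."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# Entities with the same key always land in the same partition, so a search
# filtered on the key only needs to touch that one partition.
print(route("user_42") == route("user_42"))  # True
```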


For the use of these new features, please refer to related pages in the User Guides and the PyMilvus API reference.

Bug fixes

  • Added temporary disk data cleaning upon the start of Milvus (#24400).
  • Fixed crash issue of bulk insert caused by an invalid Numpy array file (#24480).
  • Fixed an empty result set type for Int8~Int32 (#23851).
  • Fixed the panic that occurs when balancing while a collection is being released (#24003) (#24070).
  • Fixed an error that occurs when a role removes a user that has already been deleted (#24049).
  • Fixed an issue where session stop/goingStop becomes stuck after a lost connection (#23771).
  • Fixed the panic caused by incorrect logic of getting unindexed segments (#24061).
  • Fixed the panic that occurs when a collection does not exist in quota effect (#24321).
  • Fixed an issue where refresh may be notified as finished early (#24438) (#24466).


Improvements

  • Added an error response to return when an unimplemented request is received (#24546)

  • Reduced disk usage for Milvus Lite and Standalone:

    • Refined RocksDB options (#24394)
    • Fixed RocksMQ retention not triggering at the DataCoord timetick channel (#24134)
  • Optimized quota to avoid OOM on search

  • Added consistency_level in search/query request (#24541)

  • Supported search with default parameters (#24516) (#24562)

  • Made DataNode load the stats log lazily if SkipBFStatsLog is true (#23779)

  • Made QueryNode load the stats log lazily if SkipBFLoad is true (#23904)

  • Fixed concurrent map read/write in rate limiter (#23957)

  • Improved load/release performance:

    • Implemented more frequent CollectionObserver checks to trigger during load procedure (#23925)
    • Implemented checks to trigger while waiting for collection/partition to be released (#24535)
  • Optimized PrivilegeAll permission check (#23972)

  • Fixed the "not shard leader" error when gracefully stopping (#24038)

  • Checked the overflow for inserted integer (#24142) (#24172)

  • Lowered the task merge cap to mitigate an insufficient memory error (#24233)

  • Removed constraint that prevents creating an index after load (#24415)

  • Removed index check to trigger compaction (#23657) (#23688)

  • Optimized the search performance with a high filtering ratio (#23948)

Performance improvements

  • Added SIMD support for several filtering expressions (#23715, #23781).
  • Reduced data copying during insertion into growing segments (#24492).


v2.2.8

Release date: 3 May, 2023


In this update, we fixed one critical bug.


Bug Fixes

  • Fixed RootCoord panic caused by the upgrades from v2.2.x to v2.2.7 (#23828).


v2.2.7

Release date: 28 April, 2023


In this update, we have focused on resolving various issues reported by our users, enhancing the software's overall stability and functionality. Additionally, we have implemented several optimizations, such as load balancing, search grouping, and memory usage improvements.


Bug Fixes

  • Fixed a panic caused by not removing metadata of a dropped segment from the DataNode. (#23492)
  • Fixed a bug that caused forever blocking due to the release of a non-loaded partition. (#23612)
  • To prevent the query service from becoming unavailable, automatic balancing at the channel level has been disabled as a workaround. (#23632) (#23724)
  • Canceled failed tasks in the scheduling queue promptly to prevent an increase in QueryCoord scheduling latency. (#23649)
  • Fixed a compatibility bug and recalculated segment rows to prevent service queries from being unavailable. (#23696)
  • Fixed a bug in the superuser password validation logic. (#23729)
  • Fixed the issue of shard detector rewatch failure, which was caused by returning a closed channel. (#23734)
  • Fixed a loading failure caused by unhandled interrupts in the AWS SDK. (#23736)
  • Fixed the "HasCollection" check in DataCoord. (#23709)
  • Fixed the bug that assigned all available nodes to a single replica incorrectly. (#23626)


Improvements

  • Optimized the display of RootCoord histogram metrics. (#23567)
  • Reduced peak memory consumption during collection loading. (#23138)
  • Removed unnecessary handoff event-related metadata. (#23565)
  • Added a plugin logic to QueryNode to support the dynamic loading of shared library files. (#23599)
  • Supported load balancing with replica granularity. (#23629)
  • Introduced a score-based load-balancing strategy. (#23805)
  • Added a coroutine pool to limit the concurrency of cgo calls triggered by "delete". (#23680)
  • Improved the compaction algorithm to make the distribution of segment sizes tend towards the ideal value. (#23692)
  • Changed the default shard number to 1. (#23593)
  • Improved search grouping algorithm to enhance throughput. (#23721)
  • Code refactoring: Separated the read, build, and load DiskANN parameters. (#23722)
  • Updated etcd and Minio versions. (#23765)


v2.2.6

Release date: 18 April, 2023


You are advised to refrain from using version 2.2.5 due to several critical issues that require immediate attention, one of which is the inability to recycle dirty binlog data. Version 2.2.6 addresses these issues. We highly recommend using version 2.2.6 instead of version 2.2.5 to avoid any potential complications.

If you hit the issue where data on object storage cannot be recycled, upgrade your Milvus to v2.2.6 to fix these issues.


Bug Fixes

  • Fixed the problem of DataCoord GC failure (#23298)
  • Fixed the problem that index parameters passed when creating a collection would override those passed in subsequent create_index operations (#23242)
  • Fixed the problem that a message backlog in RootCoord caused the delay of the whole system to increase (#23267)
  • Fixed the accuracy of the metric RootCoordInsertChannelTimeTick (#23284)
  • Fixed the issue that the timestamp reported by the proxy may stop in some cases (#23291)
  • Fixed the problem that the coordinator role may self-destruct by mistake during the restart process (#23344)
  • Fixed the problem that the checkpoint was left behind due to the abnormal exit of the garbage collection goroutine caused by an etcd restart (#23401)


Improvements

  • Added slow-query logging for query/search when latency is 5 seconds or more (#23274)


v2.2.5

Release date: 29 March, 2023



Fixed MinIO CVE-2023-28432 by upgrading MinIO to RELEASE.2023-03-20T20-16-18Z.

New Features

  • First/Random replica selection policy

    This policy allows for a random replica selection if the first replica chosen under the round-robin selection policy fails. This improves the throughput of database operations.
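The policy described above can be sketched in a few lines of Python; the replica names and the health-check callback below are stand-ins for illustration, not Milvus internals.

```python
import random

class ReplicaSelector:
    """Round-robin first; a random healthy replica if the round-robin pick
    is unavailable. A sketch of the policy, not Milvus's actual code."""

    def __init__(self, replicas):
        self.replicas = list(replicas)
        self._next = 0

    def pick(self, healthy):
        # Try the next replica in round-robin order.
        first = self.replicas[self._next % len(self.replicas)]
        self._next += 1
        if healthy(first):
            return first
        # Fall back to a random replica among the healthy ones.
        candidates = [r for r in self.replicas if healthy(r)]
        return random.choice(candidates) if candidates else None

sel = ReplicaSelector(["replica-0", "replica-1", "replica-2"])
print(sel.pick(lambda r: True))              # replica-0 (round-robin choice)
print(sel.pick(lambda r: r != "replica-1"))  # random fallback if replica-1 fails
```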

Bug fixes

  • Fixed index data loss during the upgrade from Milvus 2.2.0 to 2.2.3 or higher.

    • Fixed an issue to prevent DataCoord from calculating segment lines by stale log entries num (#23069)
    • Fixed DataCoord's meta that may be broken with DataNode of the prior version (#23031)
  • Fixed DataCoord Out-of-Memory (OOM) under high insert pressure.

    • Fixed an issue to make DataNode's tt interval configurable (#22990)
    • Fixed endless appending SIDs (#22989)
  • Fixed a concurrency issue in the LRU cache that was caused by concurrent queries with specified output fields.

    • Fixed an issue to use single-flight to limit the readWithCache concurrent operation (#23037)
    • Fixed LRU cache concurrency (#23041)
  • Fixed shard leader cache

    • Fixed GetShardLeader returns old leader (#22887) (#22903)
    • Fixed an issue to deprecate the shard cache immediately if a query failed (#22848)
  • Other fixes

    • Fixed query performance issue with a large number of segments (#23028)
    • Fixed an issue to enable batch delete files on GCP of MinIO (#23052) (#23083)
    • Fixed flush delta buffer if SegmentID equals 0 (#23064)
    • Fixed unassigning from resource groups (#22800)
    • Fixed load partition timeout logic still using createdAt (#23022)
    • Fixed unsub channel always removes QueryShard (#22961)


Improvements

  • Added memory protection by using the buffer size in the memory synchronization policy (#22797)
  • Added dimension checks upon inserted records (#22819) (#22826)
  • Added a configuration item to disable BF load (#22998)
  • Aligned the maximum dimension of the DiskANN index with that of a collection (#23027)
  • Added checks that all columns are aligned with the same num_rows (#22968) (#22981)
  • Upgraded Knowhere to 1.3.11 (#22975)
  • Added the user RPC counter (#22870)


v2.2.4

Release date: 17 March, 2023


Milvus 2.2.4 is a minor update to Milvus 2.2.0. It introduces new features, such as namespace-based resource grouping, collection-level physical isolation, and collection renaming.

In addition to these features, Milvus 2.2.4 also addresses several issues related to rolling upgrades, failure recovery, and load balancing. These bug fixes contribute to a more stable and reliable system.

We have also made several enhancements to make your Milvus cluster faster and consume less memory with reduced convergence time for failure recovery.

New Features

  • Resource grouping

    Milvus has implemented resource grouping for QueryNodes. A resource group is a collection of QueryNodes. Milvus supports grouping QueryNodes in the cluster into different resource groups, where access to physical resources in different resource groups is completely isolated. See Manage Resource Group for more information.

  • Collection renaming

    The Collection-renaming API provides a way for users to change the name of a collection. Currently, PyMilvus supports this API, and SDKs for other programming languages are on the way. See Rename a Collection for details.

  • Google Cloud Storage support

    Milvus now supports Google Cloud Storage as the object storage.

  • New option to the search and query APIs

    If you are more concerned with performance than data freshness, enabling this option skips searching all growing segments and offers better search performance in scenarios where searches and insertions happen concurrently. See search() and query() for details.
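Conceptually, the option above shrinks the search scope by dropping growing (not-yet-sealed) segments, trading data freshness for speed. Here is a minimal sketch with made-up segment states and a made-up flag name:

```python
segments = [
    {"id": 1, "state": "sealed"},   # flushed, indexed data
    {"id": 2, "state": "sealed"},
    {"id": 3, "state": "growing"},  # freshly inserted, not yet sealed
]

def search_scope(segments, skip_growing):
    """With skip_growing=True, only sealed segments are searched, so
    just-inserted rows may be missed but latency drops."""
    if skip_growing:
        return [s for s in segments if s["state"] == "sealed"]
    return list(segments)

print([s["id"] for s in search_scope(segments, skip_growing=True)])   # [1, 2]
print([s["id"] for s in search_scope(segments, skip_growing=False)])  # [1, 2, 3]
```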

Bug fixes

  • Fixed segment not found when forwarding delete to empty segment (#22528) (#22551)
  • Fixed possible broken channel checkpoint in v2.2.2 (#22205) (#22227)
  • Fixed entity number mismatch with some entities inserted (#22306)
  • Fixed DiskANN recovery failure after QueryNode reboots (#22488) (#22514)
  • Fixed search/release on same segment (#22414)
  • Fixed file system crash during bulk-loading files prefixed with a '.' (#22215)
  • Added tickle for DataCoord watch event (#21193) (#22209)
  • Fixed deadlock when releasing segments and removing nodes concurrently (#22584)
  • Added channel balancer on DataCoord (#22324) (#22377)
  • Fixed balance generated reduce task (#22236) (#22326)
  • Fixed QueryCoord panic caused by balancing (#22486)
  • Added scripts for rolling updates of Milvus components installed with Helm (#22124)
  • Added NotFoundTSafer and NoReplicaAvailable to retriable error code (#22505)
  • Fixed no retries upon gRPC error (#22529)
  • Fixed an issue so that component states are automatically updated to healthy after startup (#22084)
  • Added graceful-stop for sessions (#22386)
  • Added retry op for all servers (#22274)
  • Fixed metrics info panic when network error happens (#22802)
  • Fixed disordered minimum timestamp in proxy's pchan statistics (#22756)
  • Fixed an issue to ensure segment ID recovery upon failures to send time-tick (#22771)
  • Added segment info retrieval without the binlog path (#22741)
  • Added distribution.Peek for GetDataDistribution in case it is blocked by release (#22752)
  • Fixed the segment not found error (#22739)
  • Reset delta position to vchannel in packSegmentLoadReq (#22721)
  • Added vector float data verification for bulkinsert and insert (#22729)
  • Upgraded Knowhere to 1.3.10 to fix bugs (#22746)
  • Fixed RootCoord double updates TSO (#22715) (#22723)
  • Fixed confused time-tick logs (#22733) (#22734)
  • Fixed session nil point (#22696)
  • Upgraded Knowhere to 1.3.10 (#22614)
  • Fixed incorrect sequence of timetick statistics on proxy (#21855) (#22560)
  • Enabled DataCoord to handle GetIndexedSegment error from IndexCoord (#22673)
  • Fixed an issue so that Milvus writes the flushed segment key only after the segment is flushed (#22667)
  • Marked cache deprecated instead of removing it (#22675)
  • Updated shard leader cache (#22632)
  • Fixed an issue with the replica observer assigning nodes (#22635)
  • Fixed the not found issue when retrieving collection creation timestamp (#22629) (#22634)
  • Fixed time-tick running backwards during DDLs (#22617) (#22618)
  • Fixed max collection name case (#22601)
  • Fixed DataNode tickle not running by default (#22622)
  • Fixed DataCoord panic while reading timestamp of an empty segment (#22598)
  • Added scripts to get etcd info (#22589)
  • Fixed concurrent loading timeout during DiskANN indexing (#22548)
  • Fixed an issue to ensure index files do not finish early because of compaction (#22509)
  • Added MultiQueryNodes tag for resource group (#22527) (#22544)


Improvements

  • Performance

    • Improved query performance by avoiding counting all bits (#21909) (#22285)
    • Fixed dual copy of varchar fields while loading (#22114) (#22291)
    • Updated DataCoord compaction panic after DataNode update plan to ensure consistency (#22143) (#22329)
    • Improved search performance by avoiding allocating a zero-byte vector during searches (#22219) (#22357)
    • Upgraded Knowhere to 1.3.9 to accelerate IVF/BF (#22368)
    • Improved search task merge policy (#22006) (#22287)
    • Refined Read method of MinioChunkManager to reduce IO (#22257)
  • Memory Usage

    • Saved index files in batches of 16 MB to reduce memory usage while indexing (#22369)
    • Added a sync policy for when memory usage is too large (#22241)
  • Others

    • Removed the constraint that compaction happens only on indexed segments (#22145)
    • Changed RocksMQ page size to 256M to reduce RocksMQ disk usage (#22433)
    • Changed the etcd session timeout to 20s to improve recovery speed (#22400)
    • Added the RBAC for the GetLoadingProgress and GetLoadState API (#22313)


Release date: 10 February, 2023


Milvus 2.2.3 introduces the rolling upgrade capability to Milvus clusters and brings high availability settings to RootCoords. The former minimizes the impact of upgrading and restarting a Milvus cluster in production, while the latter enables coordinators to work in active-standby mode and ensures a short failure recovery time of no more than 30 seconds.

In this release, Milvus also ships with many performance improvements and enhancements, including a faster bulk-insert experience with reduced memory usage and shorter loading times.

Breaking changes

In 2.2.3, the maximum number of fields in a collection is reduced from 256 to 64. (#22030)


New Features

  • Rolling upgrade

    The rolling upgrade feature allows Milvus to respond to incoming requests during the upgrade, which is not possible in previous releases. In such releases, upgrading a Milvus instance requires it to be stopped first and then restarted after the upgrade is complete, leaving all incoming requests unanswered.

    Related issues:

    • Graceful stop of index nodes implemented (#21556)
    • Graceful stop of query nodes implemented (#21528)
    • Auto-sync of segments on closing implemented (#21576)
    • Graceful stop APIs and error messages improved (#21580)
    • Issues identified and fixed in the code of QueryNode and QueryCoord (#21565)
  • Coordinator HA

    Coordinator HA allows Milvus coordinators to work in active-standby mode to avoid single points of failure.

    Related issues:

    • HA-related issues identified and fixed in QueryCoordV2 (#21501)
    • Auto-registration on startup was implemented to prevent both coordinators from working as the active coordinator. (#21641)
    • HA-related issues identified and fixed in RootCoords (#21700)
    • Issues identified and fixed in active-standby switchover (#21747)


Improvements

  • Bulk-insert performance enhanced

    • Bulk-insert enhancement implemented (#20986 #21532)
    • JSON parser optimized for data import (#21332)
    • Stream-reading NumPy data implemented (#21540)
    • Bulk-insert progress report implemented (#21612)
    • Issues identified and fixed so that Milvus does not check indexes or flush segments before bulk-insert is complete (#21604)
    • Issues related to bulk-insert progress identified and fixed (#21668)
    • Issues related to bulk-insert report identified and fixed (#21758)
    • Issues identified and fixed so that Milvus does not seal failed segments while performing bulk-insert operations. (#21779)
    • Issues identified and fixed so that bulk-insert operations do not cause a slow flush (#21918)
    • Issues identified and fixed so that bulk-insert operations do not crash the DataNodes (#22040)
    • Refresh option added to LoadCollection and LoadPartition APIs (#21811)
    • Segment ID update on data import implemented (#21583)
  • Memory usage reduced

    • Issues identified and fixed so that loading failures do not incorrectly report insufficient memory (#21592)
    • Arrow usage removed from FieldData (#21523)
    • Memory usage reduced in indexing scalar fields (#21970) (#21978)
  • Monitoring metrics optimized

    • Issues related to unregistered metrics identified and fixed (#22098)
    • A new segment metric that counts the number of binlog files added (#22085)
    • Many new metrics added (#21975)
    • Minor fix on segment metric (#21977)
  • Meta storage performance improved

    • Improved ListSegments performance for Datacoord catalog. (#21600)
    • Improved LoadWithPrefix performance for SuffixSnapshot. (#21601)
    • Removed redundant LoadPrefix requests for Catalog ListCollections. (#21551) (#21594)
    • Added a WalkWithPrefix API for the MetaKv interface. (#21585)
    • Added GC for snapshot KV based on time-travel. (#21417) (#21763)
  • Performance improved

    • Upgraded Knowhere to 1.3.7. (#21735)
    • Upgraded Knowhere to 1.3.8. (#22024)
    • Skipped search GRPC call for standalone. (#21630)
    • Optimized some low-efficient code. (#20529) (#21683)
    • Fixed filling the string field twice when a string index exists. (#21852) (#21865)
    • Used all() API for bitset check. (#20462) (#21682)
  • Others

    • Implemented the GetLoadState API. (#21533)
    • Added a task to unsubscribe dmchannel. (#21513) (#21794)
    • Explicitly listed the triggering reasons when Milvus denies reading/writing. (#21553)
    • Verified and adjusted the number of rows in a segment before saving and passing SegmentInfo. (#21200)
    • Added a segment seal policy by the number of binlog files. (#21941)
    • Upgraded etcd to 3.5.5. (#22007)

Bug Fixes

  • QueryCoord segment replacement fixed

    • Fixed the mismatch of sealed segments IDs after enabling load-balancing in 2.2. (#21322)
    • Fixed the sync logic of the leader observer. (#20478) (#21315)
    • Fixed the issues that observers may update the current target to an unfinished next target. (#21107) (#21280)
    • Fixed the load timeout after the next target updates. (#21759) (#21770)
    • Fixed the issue that the current target may be updated to an invalid target. (#21742) (#21762)
    • Fixed the issue that a failed node may update the current target to an unavailable target. (#21743)
  • Improperly invalidated proxy cache fixed

    • Fixed the issue that the proxy does not update the shard leaders cache for some types of error (#21185) (#21303)
    • Fixed the issue that Milvus invalidates the proxy cache first when the shard leader list contains errors (#21451) (#21464)
  • CheckPoint and GC Related issues fixed

    • Fixed the issue that the checkpoint does not update after data deletion and compaction (#21495)
    • Fixed issues related to channel checkpoint and GC (#22027)
    • Added restraints on segment GC of DML position before channel copy (#21773)
    • Removed collection meta after GC is complete (#21595) (#21671)
  • Issues related to not being able to use embedded etcd with Milvus fixed

    • Added setup config files for embedded etcd (#22076)
  • Others

    • Fixed the offset panic in queries (#21292) (#21751)
    • Fixed the issue that small candidate compaction should only happen with more than one segment (#21250)
    • Fixed the issue of memory usage calculation (#21798)
    • Fixed the issue that a timestamp allocation failure blocks compaction queue forever (#22039) (#22046)
    • Fixed the issue that QueryNode may panic when stopped (#21406) (#21419)
    • Modified lastSyncTime in advance to prevent multiple binlog flushes (#22048)
    • Fixed the issue that a collection does not exist when users try to recover it (#21471) (#21628)
    • Used the tt msg stream to consume delete messages (#21478)
    • Prevented users from deleting entities by any non-primary-key field (#21459) (#21472)
    • Fixed potential nil access on segments (#22104)


Release date: 22 December, 2022


Milvus 2.2.2 is a minor fix of Milvus 2.2.1. It fixes a few loading failure issues introduced in the upgrade to 2.2.1 and the issue that the proxy cache is not cleaned upon some types of errors.

Bug Fixes

  • Fixed the issue that the proxy doesn't update the cache of shard leaders due to some types of errors. (#21320)
  • Fixed the issue that the loaded info is not cleaned for released collections/partitions. (#21321)
  • Fixed the issue that the load count is not cleared on time. (#21314)


Release date: 15 December, 2022


Milvus 2.2.1 is a minor fix of Milvus 2.2.0. It supports authentication and TLS on all dependencies, dramatically optimizes search performance, and fixes some critical issues. With tremendous contributions from the community, this release managed to resolve over 280 issues, so please try the new release and give us feedback on stability, performance, and ease of use.

New Features

  • Supports Pulsar tenants and authentication. (#20762)
  • Supports TLS in etcd config source. (#20910)


Performance

After upgrading the Knowhere vector engine and changing the parallelism strategy, Milvus 2.2.1 improves search performance by over 30%.

Optimized the scheduler and increased the probability of merging tasks. (#20931)

Bug Fixes

  • Fixed term filtering failures on indexed scalar fields. (#20840)
  • Fixed the issue that only partial data returned upon QueryNode restarts. (#21139)(#20976)
  • Fixed IndexNode panic upon failures to create an index. (#20826)
  • Fixed endless BinaryVector compaction and generation of data on MinIO. (#21119) (#20971)
  • Fixed the issue that meta_cache of proxy partially updates. (#21232)
  • Fixed slow segment loading due to stale checkpoints. (#21150)
  • Fixed concurrent write operations caused by concurrently loading the Casbin model. (#21132) (#21145) (#21073)
  • Forbade garbage-collecting index meta when creating an index. (#21024)
  • Fixed a bug that index data cannot be garbage-collected because ListWithPrefix from MinIO is called with recursive set to false. (#21040)
  • Fixed an issue that an error code is returned when a query expression does not match any results. (#21066)
  • Fixed search failures on the disk index when search_list equals limit. (#21114)
  • Filled collection schema after DataCoord restarts. (#21164)
  • Fixed an issue that the compaction handler may double release and hang. (#21019)
  • [restapi] Fixed precision loss for Int64 fields upon insert requests. (#20827)
  • Increased MaxWatchDuration and made it configurable to prevent shards with large data loads from timing out. (#21010)
  • Fixed the issue that the compaction target segment rowNum is always 0. (#20941)
  • Fixed the issue that IndexCoord deletes segment index by mistake because IndexMeta is not stored in time. (#21058)
  • Fixed the issue that DataCoord crashes if auto-compaction is disabled. (#21079)
  • Fixed the issue that Milvus searches on growing segments even though the segments are indexed. (#21215)


Improvements

  • Refined logs and set the default log level to INFO.
  • Fixed incorrect metrics and refined the metric dashboard.
  • Made TopK limit configurable (#21155)

Breaking changes

Milvus now limits each RPC to 64 MB to avoid OOM and generating large message packs.
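These limits map to the grpc section of milvus.yaml; the keys below reflect the stock configuration template, but check them against the file shipped with your release:

```yaml
# milvus.yaml (fragment): gRPC message size caps, in bytes
grpc:
  serverMaxRecvSize: 67108864   # 64 MB
  serverMaxSendSize: 67108864   # 64 MB
```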


Release date: 18 November, 2022


Milvus 2.2.0 introduces many new features, including support for the Disk-based Approximate Nearest Neighbor (ANN) algorithm, bulk insertion of entities from files, and role-based access control (RBAC) for improved security. In addition, this major release also ushers in a new era for vector search with enhanced stability, faster search speed, and more flexible scalability.

Breaking changes

Since metadata storage is refined and API usage is normalized, Milvus 2.2 is not fully compatible with earlier releases. Read this guide to learn how to safely upgrade from Milvus 2.1.x to 2.2.0.


New Features

  • Support for bulk insertion of entities from files Milvus now offers a new set of bulk insertion APIs to make data insertion more efficient. You can now upload entities in a JSON file directly to Milvus. See Insert Entities from Files for details.
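A sketch of the import flow with PyMilvus. The endpoint, collection name, and file path are placeholders, and the helper names (do_bulk_insert / get_bulk_insert_state in recent 2.2.x SDKs) should be checked against your PyMilvus version:

```python
from pymilvus import connections, utility

connections.connect("default", host="localhost", port="19530")  # placeholder endpoint

# Import a row-based JSON file already uploaded to the object storage
# bucket Milvus is configured to use ("data.json" is a placeholder).
task_id = utility.do_bulk_insert(collection_name="demo", files=["data.json"])

# The import runs asynchronously; poll its state by task ID.
state = utility.get_bulk_insert_state(task_id)
print(state.state_name, state.row_count)
```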

  • Query result pagination To avoid massive search and query results returned in a single RPC, Milvus now supports configuring offset and filtering results with keywords in searches and queries. See Search and Query for details.
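For example, to fetch the third page of ten results, an offset can be passed alongside the search parameters (names below are placeholders; the parameter layout should be verified against the search() reference for your SDK version):

```python
from pymilvus import connections, Collection

connections.connect("default", host="localhost", port="19530")  # placeholder endpoint
collection = Collection("demo")  # placeholder collection

# limit=10 with offset=20 returns hits 21-30 of the result set.
results = collection.search(
    data=[[0.1, 0.2, 0.3, 0.4]],   # placeholder query vector
    anns_field="embedding",        # placeholder vector field
    param={"metric_type": "L2", "params": {"nprobe": 16}, "offset": 20},
    limit=10,
)
```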

  • Role-based access control (RBAC) Like other traditional databases, Milvus now supports RBAC so that you can manage users, roles, and privileges. See Enable RBAC for details.
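A hedged PyMilvus sketch of the RBAC workflow (credentials, role, and collection names are placeholders; check the Role helper and privilege names against the RBAC documentation for your version):

```python
from pymilvus import connections, utility, Role

# Connect as an administrator (the root credentials below are placeholders).
connections.connect("default", host="localhost", port="19530",
                    user="root", password="Milvus")

utility.create_user("reader", "Passw0rd!")   # placeholder user/password
role = Role("read_only")                     # placeholder role name
role.create()
role.add_user("reader")
role.grant("Collection", "demo", "Search")   # allow searches on collection "demo"
```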

  • Quotas and limits Quota is a new mechanism that protects the system from OOM and crash under a burst of traffic. By imposing quota limitations, you can limit ingestion rate, search rate, etc. See Quota and Limitation Configurations for details.
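Quotas are configured in the quotaAndLimits section of milvus.yaml. The fragment below is illustrative only; key names, units, and defaults vary between releases, so consult the quota configuration reference for your version:

```yaml
# milvus.yaml (fragment): illustrative quota settings
quotaAndLimits:
  enabled: true
  dml:
    enabled: true
    insertRate:
      max: 8        # cap on ingestion rate
  dql:
    enabled: true
    searchRate:
      max: 100      # cap on search rate
```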

  • Time to live (TTL) at a collection level In prior releases, we only supported configuring TTL at a cluster level. Milvus 2.2.0 now supports configuring collection TTL when you create or modify a collection. After setting TTL for a collection, the entities in this collection automatically expire after the specified period of time. See Create a collection or Modify a collection for details.
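With PyMilvus, the TTL is passed as a collection property (the endpoint, schema, and 1800-second TTL below are placeholders):

```python
from pymilvus import (connections, Collection, CollectionSchema,
                      FieldSchema, DataType)

connections.connect("default", host="localhost", port="19530")  # placeholder endpoint

schema = CollectionSchema([
    FieldSchema("pk", DataType.INT64, is_primary=True),
    FieldSchema("embedding", DataType.FLOAT_VECTOR, dim=128),
])

# Entities expire roughly 30 minutes after insertion.
collection = Collection("ttl_demo", schema,
                        properties={"collection.ttl.seconds": 1800})

# The TTL of an existing collection can be changed the same way.
collection.set_properties(properties={"collection.ttl.seconds": 3600})
```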
  • Support for disk-based approximate nearest neighbor search (ANNS) indexes (Beta) Traditionally, you need to load the entire index into memory before search. Now with DiskANN, an SSD-resident and Vamana graph-based ANNS algorithm, you can directly search on large-scale datasets and save up to 10 times the memory.
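Building a DiskANN index looks like any other index build in PyMilvus; only the index type changes. The endpoint, field, and collection names are placeholders, and since the feature is Beta, the parameters may change:

```python
from pymilvus import connections, Collection

connections.connect("default", host="localhost", port="19530")  # placeholder endpoint
collection = Collection("demo")  # placeholder collection

collection.create_index(
    field_name="embedding",  # placeholder vector field
    index_params={"index_type": "DISKANN", "metric_type": "L2", "params": {}},
)
collection.load()

# At search time, DiskANN takes a search_list parameter (>= limit).
search_params = {"metric_type": "L2", "params": {"search_list": 30}}
```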
  • Data backup (Beta) Thanks to the contribution from Zilliz, Milvus 2.2.0 now provides a tool to back up and restore data. The tool can be used either from the command line or as an API server for data security.

Bug fixes and stability

  • Implements query coord V2, which handles all channel/segment allocation in a fully event-driven and asynchronous mode. Query coord V2 addresses all issues of stuck searches and accelerates failure recovery.
  • Root coord and index coord are refactored for more elegant handling of errors and better task scheduling.
  • Fixes the issue of invalid RocksMQ retention mechanism when Milvus Standalone restarts.
  • Meta storage format in etcd is refactored. With the new compression mechanism, etcd kv size is reduced by 10 times and the issues of etcd memory and space are solved.
  • Fixes a couple of memory issues when entities are continuously inserted or deleted.


Improvements

  • Performance

    • Fixes a performance bottleneck so that Milvus can fully utilize all cores on CPUs with more than 8 cores.
    • Dramatically improves the search throughput and reduces latency.
    • Increases load speed by processing loads in parallel.
  • Observability

    • Changes all log levels to info by default.
    • Added collection-level latency metrics for search, query, insertion, and deletion.
  • Debug tool

    • BirdWatcher, the debug tool for Milvus, is further optimized as it can now connect to Milvus meta storage and inspect part of the internal status of the Milvus system.


Others

  • Index and load

    • A collection can be loaded only after an index is created on it.
    • Indexes cannot be created after a collection is loaded.
    • A loaded collection must be released before dropping the index created on this collection.
  • Flush

    • Flush API, which forces a seal on a growing segment and syncs the segment to object storage, is now exposed to users. Calling flush() frequently may affect search performance as too many small segments are created.
    • No auto-flush is triggered by any SDK APIs such as num_entities(), create_index(), etc.
  • Time Travel

    • In Milvus 2.2, Time Travel is disabled by default to save disk usage. To enable Time Travel, configure the parameter common.retentionDuration manually.
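As a milvus.yaml fragment (the 432000-second value is only an example; 0 keeps Time Travel disabled):

```yaml
# milvus.yaml (fragment): retain deleted data for 5 days to enable Time Travel
common:
  retentionDuration: 432000   # seconds; 0 disables Time Travel
```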
