Create Import Jobs
This operation imports the prepared data files to a Milvus instance. To learn how to prepare your data files, read Prepare Data Import.
The base URL for this API is in the following format:
http://localhost:19530
export CLUSTER_ENDPOINT="http://localhost:19530"
The authentication token should be an API key with appropriate privileges.
The name of the database to which the collection belongs. Setting this to a non-existing database results in an error.
The name of the target collection. Setting this to a non-existing collection results in an error.
The name of the target partition. Setting this to a non-existing partition results in an error.
The files that contain the data to import. The files should reside within the Milvus bucket on the MinIO instance deployed along with your Milvus instance.
A sub-list that contains a single JSON or Parquet file, or a set of NumPy files.
Bulk-import options.
The timeout duration of the created import jobs. The value should be a positive number suffixed by s (seconds), m (minutes), or h (hours). For example, 300s, 1.5h, and 1h45m are all valid values.
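As a sketch, a request body that sets a five-minute timeout through the bulk-import options could look like the following. The `timeout` key under `options` is an assumption based on the description above; verify the exact option name against your Milvus version.

```json
{
    "collectionName": "quick_setup",
    "files": [
        [ "/d1782fa1-6b65-4ff3-b05a-43a436342445/1.json" ]
    ],
    "options": {
        "timeout": "300s"
    }
}
```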
export TOKEN="YOUR_API_KEY"
curl --request POST \
     --url "${CLUSTER_ENDPOINT}/v2/vectordb/jobs/import/create" \
     --header "Authorization: Bearer ${TOKEN}" \
     --header "Content-Type: application/json" \
     -d '{
         "files": [
             [
                 "/d1782fa1-6b65-4ff3-b05a-43a436342445/1.json"
             ],
             [
                 "/2a12dea7-2eff-4b34-97b6-9554063fd791/1/id.npy",
                 "/2a12dea7-2eff-4b34-97b6-9554063fd791/1/vector.npy",
                 "/2a12dea7-2eff-4b34-97b6-9554063fd791/1/$meta.npy"
             ],
             [
                 "/a6fb2d1c-7b1b-427c-a8a3-178944e3b66d/1.parquet"
             ]
         ],
         "collectionName": "quick_setup"
     }'
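The same request can be issued from Python with only the standard library. This is a minimal sketch, not an official client: the endpoint path and payload mirror the curl call above, while the `dbName` and `partitionName` keys, the placeholder endpoint, and the placeholder token are assumptions you should adjust for your deployment.

```python
import json
import urllib.request

CLUSTER_ENDPOINT = "http://localhost:19530"  # assumption: local Milvus instance
TOKEN = "YOUR_API_KEY"                       # assumption: placeholder API key


def build_import_request(collection_name, files,
                         db_name=None, partition_name=None):
    """Assemble the JSON body for /v2/vectordb/jobs/import/create.

    `files` is a list of sub-lists: each sub-list holds a single JSON
    or Parquet file, or a set of NumPy files, as described above.
    """
    body = {"collectionName": collection_name, "files": files}
    if db_name:
        body["dbName"] = db_name                # assumed key name
    if partition_name:
        body["partitionName"] = partition_name  # assumed key name
    return body


def create_import_job(body):
    """POST the request and return the parsed response dict."""
    req = urllib.request.Request(
        url=f"{CLUSTER_ENDPOINT}/v2/vectordb/jobs/import/create",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Build the body for the same single-file import shown in the curl example.
body = build_import_request(
    "quick_setup",
    [["/d1782fa1-6b65-4ff3-b05a-43a436342445/1.json"]],
)
```

On success, the returned dict carries the created job ID under `data.jobId`, matching the response sample below.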
Response code.
The response payload, which carries the IDs of the created bulk-import jobs.
A created bulk-import job.
The ID of the current bulk-import job.
Returns an error message.
Response code.
Error message.
{
    "code": 0,
    "data": {
        "jobId": "job-xxxxxxxxxxxxxxxxxxxxx"
    }
}
{
    "code": 0,
    "message": "The token is illegal."
}