Standard Tokenizer
The `standard` tokenizer in Milvus splits text based on spaces and punctuation marks, making it suitable for most languages.
Configuration
To configure an analyzer using the `standard` tokenizer, set `tokenizer` to `standard` in `analyzer_params`.
```python
analyzer_params = {
    "tokenizer": "standard",
}
```

```java
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
```

```javascript
const analyzer_params = {
    "tokenizer": "standard",
};
```

```go
analyzerParams = map[string]any{"tokenizer": "standard"}
```

```bash
# restful
analyzerParams='{
    "tokenizer": "standard"
}'
```
The `standard` tokenizer can work in conjunction with one or more filters. For example, the following code defines an analyzer that uses the `standard` tokenizer and the `lowercase` filter:
```python
analyzer_params = {
    "tokenizer": "standard",
    "filter": ["lowercase"]
}
```

```java
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter", Collections.singletonList("lowercase"));
```

```javascript
const analyzer_params = {
    "tokenizer": "standard",
    "filter": ["lowercase"]
};
```

```go
analyzerParams = map[string]any{"tokenizer": "standard", "filter": []any{"lowercase"}}
```

```bash
# restful
analyzerParams='{
    "tokenizer": "standard",
    "filter": [
        "lowercase"
    ]
}'
```
For simpler setup, you may choose to use the `standard` analyzer, which combines the `standard` tokenizer with the `lowercase` filter.
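As a minimal sketch of that simpler setup, the built-in analyzer is selected with a `type` key instead of a separate tokenizer/filter pair:

```python
# The built-in "standard" analyzer is equivalent to the standard
# tokenizer followed by the lowercase filter; it is selected via
# "type" rather than "tokenizer"/"filter".
analyzer_params = {
    "type": "standard",
}
```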
After defining `analyzer_params`, you can apply them to a `VARCHAR` field when defining a collection schema. This allows Milvus to process the text in that field using the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.
Examples
Before applying the analyzer configuration to your collection schema, verify its behavior using the `run_analyzer` method.
Analyzer configuration
```python
analyzer_params = {
    "tokenizer": "standard",
    "filter": ["lowercase"]
}
```

```java
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter", Collections.singletonList("lowercase"));
```
```javascript
const analyzer_params = {
    "tokenizer": "standard",
    "filter": ["lowercase"]
};
```
```go
analyzerParams = map[string]any{"tokenizer": "standard", "filter": []any{"lowercase"}}
```
```bash
# restful
analyzerParams='{
    "tokenizer": "standard",
    "filter": [
        "lowercase"
    ]
}'
```
Verification using `run_analyzer`
```python
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")

# Sample text to analyze
sample_text = "The Milvus vector database is built for scale!"

# Run the standard analyzer with the defined configuration
result = client.run_analyzer(sample_text, analyzer_params)
print("Standard analyzer output:", result)
```
```java
import java.util.ArrayList;
import java.util.List;

import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.service.vector.request.RunAnalyzerReq;
import io.milvus.v2.service.vector.response.RunAnalyzerResp;

ConnectConfig config = ConnectConfig.builder()
        .uri("http://localhost:19530")
        .build();
MilvusClientV2 client = new MilvusClientV2(config);

List<String> texts = new ArrayList<>();
texts.add("The Milvus vector database is built for scale!");

RunAnalyzerResp resp = client.runAnalyzer(RunAnalyzerReq.builder()
        .texts(texts)
        .analyzerParams(analyzerParams)
        .build());
List<RunAnalyzerResp.AnalyzerResult> results = resp.getResults();
```
Expected output

```
['the', 'milvus', 'vector', 'database', 'is', 'built', 'for', 'scale']
```