Length
The length filter removes tokens that do not meet specified length requirements, allowing you to control the length of tokens retained during text processing.
Configuration
The length filter is a custom filter in Milvus, specified by setting "type": "length" in the filter configuration. You can configure it as a dictionary within the analyzer_params to define length limits.
analyzer_params = {
    "tokenizer": "standard",
    "filter": [{
        "type": "length",  # Specifies the filter type as length
        "max": 10,  # Sets the maximum token length to 10 characters
    }],
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter",
        Collections.singletonList(new HashMap<String, Object>() {{
            put("type", "length");
            put("max", 10);
        }}));
const analyzer_params = {
    "tokenizer": "standard",
    "filter": [{
        "type": "length", // Specifies the filter type as length
        "max": 10, // Sets the maximum token length to 10 characters
    }],
};
analyzerParams = map[string]any{
    "tokenizer": "standard",
    "filter": []any{map[string]any{
        "type": "length",
        "max":  10,
    }},
}
# restful
analyzerParams='{
    "tokenizer": "standard",
    "filter": [
        {
            "type": "length",
            "max": 10
        }
    ]
}'
The length filter accepts the following configurable parameters.
Parameter | Description
---|---
max | Sets the maximum token length. Tokens longer than this length are removed.
The length filter operates on the terms generated by the tokenizer, so it must be used in combination with a tokenizer. For a list of tokenizers available in Milvus, refer to Tokenizer Reference.
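Filters can also be chained, with each filter in the list processing the tokens emitted by the previous stage. A minimal sketch, assuming you want to combine Milvus's built-in lowercase filter with a length limit (filters are applied in the order listed):

analyzer_params = {
    "tokenizer": "standard",
    "filter": [
        "lowercase",                    # built-in filter: lowercase all tokens first
        {"type": "length", "max": 10},  # then drop tokens longer than 10 characters
    ],
}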
After defining analyzer_params, you can apply them to a VARCHAR field when defining a collection schema. This allows Milvus to process the text in that field using the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.
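As a minimal Python sketch of that step, assuming a Milvus instance at localhost and illustrative field names (id, text); the key parts are enable_analyzer=True and passing the analyzer_params defined above:

from pymilvus import MilvusClient, DataType

client = MilvusClient(uri="http://localhost:19530")

schema = client.create_schema(auto_id=True)
schema.add_field(field_name="id", datatype=DataType.INT64, is_primary=True)
schema.add_field(
    field_name="text",
    datatype=DataType.VARCHAR,
    max_length=1000,
    enable_analyzer=True,             # enable text analysis for this field
    analyzer_params=analyzer_params,  # apply the length filter configuration
)
client.create_collection(collection_name="length_filter_demo", schema=schema)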
Examples
Before applying the analyzer configuration to your collection schema, verify its behavior using the run_analyzer method.
Analyzer configuration
analyzer_params = {
    "tokenizer": "standard",
    "filter": [{
        "type": "length",  # Specifies the filter type as length
        "max": 10,  # Sets the maximum token length to 10 characters
    }],
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter",
        Collections.singletonList(new HashMap<String, Object>() {{
            put("type", "length");
            put("max", 10);
        }}));
// javascript
const analyzer_params = {
    "tokenizer": "standard",
    "filter": [{
        "type": "length", // Specifies the filter type as length
        "max": 10, // Sets the maximum token length to 10 characters
    }],
};
analyzerParams = map[string]any{
    "tokenizer": "standard",
    "filter": []any{map[string]any{
        "type": "length",
        "max":  10,
    }},
}
# restful
analyzerParams='{
    "tokenizer": "standard",
    "filter": [
        {
            "type": "length",
            "max": 10
        }
    ]
}'
Verification using run_analyzer
from pymilvus import MilvusClient

# Connect to your Milvus server (adjust the URI for your deployment)
client = MilvusClient(uri="http://localhost:19530")

# Sample text to analyze
sample_text = "The length filter allows control over token length requirements for text processing."

# Run the standard analyzer with the defined configuration
result = client.run_analyzer(sample_text, analyzer_params)
print(result)
Expected output
['The', 'length', 'filter', 'allows', 'control', 'over', 'token', 'length', 'for', 'text', 'processing']
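The 12-character token "requirements" exceeds the configured max of 10 and is removed, while "processing" (exactly 10 characters) is retained, since only tokens longer than max are filtered out.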