Stop

The stop filter removes specified stop words from tokenized text, eliminating common words that carry little meaning. You can configure the list of stop words using the stop_words parameter.

Configuration

The stop filter accepts its stop-words list either inline via the stop_words parameter or from a registered file resource via the stop_words_file parameter.

Inline stop-words list

To use the stop filter with an inline list, specify "type": "stop" in the filter configuration, along with a stop_words parameter that provides the list of stop words.

analyzer_params = {
    "tokenizer": "standard",
    "filter":[{
        "type": "stop", # Specifies the filter type as stop
        "stop_words": ["of", "to", "_english_"], # Defines custom stop words and includes the English stop word list
    }],
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter",
        Collections.singletonList(
                new HashMap<String, Object>() {{
                    put("type", "stop");
                    put("stop_words", Arrays.asList("of", "to", "_english_"));
                }}
        )
);
const analyzer_params = {
    "tokenizer": "standard",
    "filter":[{
        "type": "stop", # Specifies the filter type as stop
        "stop_words": ["of", "to", "_english_"], # Defines custom stop words and includes the English stop word list
    }],
};
analyzerParams := map[string]any{
    "tokenizer": "standard",
    "filter": []any{map[string]any{
        "type":       "stop",
        "stop_words": []string{"of", "to", "_english_"},
    }}}
# restful
analyzerParams='{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "stop",
      "stop_words": [
        "of",
        "to",
        "_english_"
      ]
    }
  ]
}'

The stop filter accepts the following configurable parameters.

Parameter

Description

stop_words

A list of words to be removed from tokenization. By default, the filter uses the built‑in _english_ dictionary. You can override or extend it in three ways:

  • Built‑in dictionaries – supply one of these language aliases to use a predefined dictionary:

    "_english_", "_danish_", "_dutch_", "_finnish_", "_french_", "_german_", "_hungarian_", "_italian_", "_norwegian_", "_portuguese_", "_russian_", "_spanish_", "_swedish_"

  • Custom list – pass an array of your own terms, e.g. ["foo", "bar", "baz"].

  • Mixed list – combine aliases and custom terms, e.g. ["of", "to", "_english_"].

    For details on the exact content of each predefined dictionary, refer to stop_words.
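The three forms described above can be written as analyzer configurations like these (plain Python dicts; the variable names are illustrative):

```python
# 1. Built-in dictionary only: use the predefined English stop-word list.
builtin_only = {
    "tokenizer": "standard",
    "filter": [{"type": "stop", "stop_words": ["_english_"]}],
}

# 2. Custom list only: remove exactly these terms and nothing else.
custom_only = {
    "tokenizer": "standard",
    "filter": [{"type": "stop", "stop_words": ["foo", "bar", "baz"]}],
}

# 3. Mixed: custom terms merged with the full English dictionary.
mixed = {
    "tokenizer": "standard",
    "filter": [{"type": "stop", "stop_words": ["of", "to", "_english_"]}],
}
```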

The stop filter operates on the terms generated by the tokenizer, so it must be used in combination with a tokenizer. For a list of tokenizers available in Milvus, refer to Standard Tokenizer and its sibling pages.

After defining analyzer_params, you can apply them to a VARCHAR field when defining a collection schema. This allows Milvus to process the text in that field using the specified analyzer for efficient tokenization and filtering. For details, refer to Example use.

Load stop words from a file resource (compatible with Milvus 3.0.x)

For large custom stop-words lists — language-specific lists, domain vocabularies, or lists you want to share across many collections — store the words in a file and register the file as a remote file resource, then reference it from the filter via the stop_words_file parameter. You can use stop_words_file on its own or alongside inline stop_words; when both are set, the filter merges the two sources into a single stop-words list.

The file is plain UTF‑8 text with one stop word per line. For example:

the
of
for
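Such a file can be produced with a few lines of Python (the filename matches the example above):

```python
# Write a UTF-8 stop-words file, one word per line.
words = ["the", "of", "for"]
with open("stop_words.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(words) + "\n")

# Read it back to confirm the one-word-per-line format.
with open("stop_words.txt", encoding="utf-8") as f:
    loaded = [line.strip() for line in f if line.strip()]
```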

Upload the file to the object store that your Milvus cluster is configured to use, then register it:

from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")

# Register the uploaded file under a name you'll reference from analyzer configs.
client.add_file_resource(
    name="en_stop_words",
    path="file/stop_words.txt",    # full S3 object key, including the rootPath prefix
)

Reference the registered resource in the filter via stop_words_file:

analyzer_params = {
    "tokenizer": "standard",
    "filter": [{
        "type": "stop",
        "stop_words_file": {
            "type": "remote",
            "resource_name": "en_stop_words",
            "file_name": "stop_words.txt",
        },
    }],
}

The stop_words_file parameter accepts an object with the following fields:

Field

Description

type

The resource type. Use "remote" for a file registered via add_file_resource. For the "local" variant used in self-hosted deployments, refer to Manage File Resources.

resource_name

The name used when the file was registered with add_file_resource.

file_name

The filename portion of the registered resource's object-store path (for example, "stop_words.txt" if the resource was registered with path="file/stop_words.txt").
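As noted earlier, stop_words_file can be combined with an inline stop_words list, and the filter merges both sources into one stop-words list. A sketch of such a combined configuration (the inline terms are illustrative; the resource matches the registration example above):

```python
analyzer_params = {
    "tokenizer": "standard",
    "filter": [{
        "type": "stop",
        # Inline terms...
        "stop_words": ["foo", "bar"],
        # ...merged with the words from the registered file resource.
        "stop_words_file": {
            "type": "remote",
            "resource_name": "en_stop_words",
            "file_name": "stop_words.txt",
        },
    }],
}
```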

Examples

Before applying the analyzer configuration to your collection schema, verify its behavior using the run_analyzer method.

Analyzer configuration

analyzer_params = {
    "tokenizer": "standard",
    "filter":[{
        "type": "stop", # Specifies the filter type as stop
        "stop_words": ["of", "to", "_english_"], # Defines custom stop words and includes the English stop word list
    }],
}
Map<String, Object> analyzerParams = new HashMap<>();
analyzerParams.put("tokenizer", "standard");
analyzerParams.put("filter",
        Collections.singletonList(
                new HashMap<String, Object>() {{
                    put("type", "stop");
                    put("stop_words", Arrays.asList("of", "to", "_english_"));
                }}
        )
);
analyzerParams := map[string]any{
    "tokenizer": "standard",
    "filter": []any{map[string]any{
        "type":       "stop",
        "stop_words": []string{"of", "to", "_english_"},
    }}}

Verification using run_analyzer

from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")

# Sample text to analyze
sample_text = "The stop filter allows control over common stop words for text processing."

# Run the standard analyzer with the defined configuration
result = client.run_analyzer(sample_text, analyzer_params)
print("Standard analyzer output:", result)
import java.util.ArrayList;
import java.util.List;

import io.milvus.v2.client.ConnectConfig;
import io.milvus.v2.client.MilvusClientV2;
import io.milvus.v2.service.vector.request.RunAnalyzerReq;
import io.milvus.v2.service.vector.response.RunAnalyzerResp;

ConnectConfig config = ConnectConfig.builder()
        .uri("http://localhost:19530")
        .build();
MilvusClientV2 client = new MilvusClientV2(config);

List<String> texts = new ArrayList<>();
texts.add("The stop filter allows control over common stop words for text processing.");

RunAnalyzerResp resp = client.runAnalyzer(RunAnalyzerReq.builder()
        .texts(texts)
        .analyzerParams(analyzerParams)
        .build());
List<RunAnalyzerResp.AnalyzerResult> results = resp.getResults();
import (
    "context"
    "encoding/json"
    "fmt"

    "github.com/milvus-io/milvus/client/v2/milvusclient"
)

ctx := context.Background()

client, err := milvusclient.New(ctx, &milvusclient.ClientConfig{
    Address: "localhost:19530",
    APIKey:  "root:Milvus",
})
if err != nil {
    fmt.Println(err.Error())
    // handle error
}

bs, _ := json.Marshal(analyzerParams)
texts := []string{"The stop filter allows control over common stop words for text processing."}
option := milvusclient.NewRunAnalyzerOption(texts).
    WithAnalyzerParams(string(bs))

result, err := client.RunAnalyzer(ctx, option)
if err != nil {
    fmt.Println(err.Error())
    // handle error
}

Expected output

['The', 'stop', 'filter', 'allows', 'control', 'over', 'common', 'stop', 'words', 'text', 'processing']
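Note that 'The' survives even though "the" is in the _english_ dictionary: the stop filter matches tokens exactly as the tokenizer emits them, and the standard tokenizer does not change case. Chaining the built-in lowercase filter before the stop filter removes capitalized occurrences as well; a sketch of that configuration:

```python
analyzer_params = {
    "tokenizer": "standard",
    "filter": [
        "lowercase",  # normalize case first, so "The" becomes "the"
        {"type": "stop", "stop_words": ["of", "to", "_english_"]},
    ],
}
```

Filters run in the order they are listed, so the stop filter sees lowercased tokens.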
