ELK Notes 20 -- Analysis
- 1 Analysis Overview
- 1.1 Index time analysis
- 1.2 Specifying an index time analyzer
- 1.3 Search time analysis
- 1.4 Specifying a search time analyzer
- 2 Analysis Categories
- 2.1 Anatomy of an analyzer
- 2.2 Testing analyzers
- 2.3 Analyzers
- 2.4 Normalizers
- 2.5 Tokenizers
- 2.6 Token Filters
- 2.7 Character Filters
- 3 Examples
- 4 Notes
1 Analysis Overview
Analysis is the process of converting text fields into tokens or terms, which are added to the inverted index so they can be searched; converting the body of an email is a typical example. Analysis is performed by an analyzer, and the analyzer used by each index is either a built-in one or a custom one.
1.1 Index time analysis
At index time, the built-in english analyzer converts the sentence below into distinct tokens: each token is lowercased, stop words are removed, and words are reduced to their stems (plurals, past tense, and so on are stripped). The resulting terms are then added to the inverted index.
Original: "The QUICK brown foxes jumped over the lazy dog!"
Converted to:
[ quick, brown, fox, jump, over, lazi, dog ]
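This result can be reproduced directly with the analyze API; the english analyzer is built in, so no index is needed. A quick sketch:
POST _analyze
{
  "analyzer": "english",
  "text": "The QUICK brown foxes jumped over the lazy dog!"
}
The response lists the same terms shown above, together with their positions and character offsets.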
1.2 Specifying an index time analyzer
In Elasticsearch, every text field can specify its own analyzer in the mapping. At index time, if the field has no analyzer, Elasticsearch looks for an analyzer named default in the index settings; if none is found, the standard analyzer is used.
The following creates an index my_index and sets the analyzer of the title field to the standard analyzer; when documents are written, the title field is analyzed with the standard analyzer.
PUT my_index
{
"mappings": {
"properties": {
"title": {
"type": "text",
"analyzer": "standard"
}
}
}
}
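The fallback mentioned above can also be configured explicitly: if every text field without its own analyzer should share one analyzer, it can be registered under the name default in the index settings. A minimal sketch, assuming a new index (the name my_default_index and the choice of the simple analyzer are only illustrative):
PUT my_default_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": {
          "type": "simple"
        }
      }
    }
  }
}
With this in place, text fields in my_default_index that do not declare their own analyzer are analyzed with the simple analyzer instead of standard.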
1.3 Search time analysis
At search time, a full-text query such as a match query applies the same analysis process to the query string, converting its text into terms with the same analyzer that produced the terms stored in the inverted index.
For example, a user might search for the text below; the english analyzer reduces it to the two terms quick and fox.
"a quick fox"
After analysis:
[ quick, fox ]
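A sketch of how this looks in practice, reusing the my_index/title field defined in 1.2: the match query below analyzes its query string with that field's analyzer before looking up the resulting terms in the inverted index.
GET my_index/_search
{
  "query": {
    "match": {
      "title": "a quick fox"
    }
  }
}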
1.4 Specifying a search time analyzer
Usually the same analyzer should be applied at index time and at search time, and full-text queries such as the match query use the mapping to look up the analyzer of each field.
In Elasticsearch, the analyzer used to search a particular field is determined by checking the following, in order (a sketch of the search_analyzer mapping parameter follows this list):
1) An analyzer specified in the query itself;
2) The search_analyzer mapping parameter;
3) The analyzer mapping parameter;
4) The analyzer named default_search in the index settings;
5) The analyzer named default in the index settings;
6) The standard analyzer.
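A sketch of items 2) and 3): a field may declare one analyzer for index time and a different one for search time via the analyzer and search_analyzer mapping parameters. The index name below is only illustrative, and both analyzers are built in, so the request is self-contained:
PUT my_search_index
{
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "standard",
        "search_analyzer": "simple"
      }
    }
  }
}
At search time the simple analyzer (item 2) takes precedence over the standard analyzer declared for index time (item 3).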
2 Analysis Categories
2.1 Anatomy of an analyzer
Whether built-in or custom, an analyzer is simply a package of three lower-level building blocks: character filters, a tokenizer, and token filters.
The built-in analyzers pre-package these building blocks into analyzers suitable for different languages and types of text. Elasticsearch also exposes the individual building blocks so that they can be combined to define new custom analyzers.
A character filter receives the original text as a stream of characters and can transform the stream by adding, removing, or changing characters.
For example, a character filter could convert Hindu-Arabic numerals (٠١٢٣٤٥٦٧٨٩) into their Arabic-Latin equivalents (0123456789), or strip HTML elements such as <b> from the stream.
An analyzer may have zero or more character filters, which are applied in order.
A tokenizer receives the stream of characters, breaks it up into individual tokens (usually individual words), and outputs a stream of tokens. For example, the whitespace tokenizer breaks text into tokens whenever it sees whitespace: it would convert the text "Quick brown fox!" into the terms [Quick, brown, fox!].
The tokenizer also records the order or position of each term, as well as the start and end character offsets of the original word that the term represents.
An analyzer must have exactly one tokenizer.
A token filter receives the token stream and may add, remove, or change tokens. For example, a lowercase token filter converts all tokens to lowercase, a stop token filter removes common stop words (such as the) from the token stream, and a synonym token filter introduces synonyms into the token stream.
Token filters are not allowed to change the position or character offsets of each token.
An analyzer may have zero or more token filters, which are applied in order.
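As a sketch, the three building blocks can be wired together into a custom analyzer; the index and analyzer names below are illustrative, and all of the referenced blocks (html_strip, standard, lowercase, stop) are built in:
PUT my_custom_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "char_filter": [ "html_strip" ],
          "tokenizer": "standard",
          "filter": [ "lowercase", "stop" ]
        }
      }
    }
  }
}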
2.2 Testing analyzers
The analyze API is a very useful tool for viewing the terms produced by an analyzer. A built-in analyzer (or a combination of built-in tokenizer, token filters, and character filters) can be specified inline in the request.
Two analyze API test examples follow:
POST _analyze
{
"analyzer": "whitespace",
"text": "The quick brown fox."
}
Result: [The, quick, brown, fox.]
POST _analyze
{
"tokenizer": "standard",
"filter": [ "lowercase", "asciifolding" ],
"text": "Is this déja vu?"
}
Result:
{
"tokens" : [{
"token" : "is",
"start_offset" : 0,
"end_offset" : 2,
"type" : "<ALPHANUM>",
"position" : 0
},{
"token" : "this",
"start_offset" : 3,
"end_offset" : 7,
"type" : "<ALPHANUM>",
"position" : 1
},{
"token" : "deja",
"start_offset" : 8,
"end_offset" : 12,
"type" : "<ALPHANUM>",
"position" : 2
},{
"token" : "vu",
"start_offset" : 13,
"end_offset" : 15,
"type" : "<ALPHANUM>",
"position" : 3
}]
}
Alternatively, a custom analyzer can be referred to when running the analyze API on a specific index. The following example creates an index my_index, defines a custom analyzer std_folded, and sets the analyzer of the my_text field to std_folded.
PUT my_index
{
"settings": {
"analysis": {
"analyzer": {
"std_folded": { #1 定义一个自定义的分词器名称为std_folded
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"asciifolding"
]
}
}
}
},
"mappings": {
"properties": {
"my_text": {
"type": "text",
"analyzer": "std_folded" #2 字段my_text 使用自定义的分词器std_folded
}
}
}
}
GET my_index/_analyze #3 to use the custom analyzer, the request must target the index where it is defined
{
"analyzer": "std_folded", #4 通过分词器的名称使用分词器
"text": "Is this déjà vu?"
}
GET my_index/_analyze
{
"field": "my_text", #5 字段mapping指定了分词器后,也可以直接通过字段来使用分词器
"text": "Is this déjà vu?"
}
2.3 Analyzers
Elasticsearch ships with a wide range of built-in analyzers, which can be used in any index without further configuration. Elasticsearch contains the following built-in analyzers:
Standard Analyzer
The standard analyzer divides text into terms on word boundaries, as defined by the Unicode Text Segmentation algorithm. It removes most punctuation, lowercases terms, and supports removing stop words.
Simple Analyzer
The simple analyzer divides text into terms whenever it encounters a character which is not a letter. It lowercases all terms.
Whitespace Analyzer
The whitespace analyzer divides text into terms whenever it encounters any whitespace character. It does not lowercase terms.
Stop Analyzer
The stop analyzer is like the simple analyzer, but also supports removal of stop words.
Keyword Analyzer
The keyword analyzer is a “noop” analyzer that accepts whatever text it is given and outputs the exact same text as a single term.
Pattern Analyzer
The pattern analyzer uses a regular expression to split the text into terms. It supports lower-casing and stop words.
Language Analyzers
Elasticsearch provides many language-specific analyzers like english or french.
Fingerprint Analyzer
The fingerprint analyzer is a specialist analyzer which creates a fingerprint which can be used for duplicate detection.
When no built-in analyzer fits the requirement, a custom analyzer can be used; a custom analyzer combines the appropriate character filters, tokenizer, and token filters to achieve the desired analysis behavior.
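Before resorting to a custom analyzer, note that most built-in analyzers accept configuration options. A hedged sketch using the standard analyzer's documented stopwords option (the index and analyzer names are illustrative):
PUT my_std_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "std_english": {
          "type": "standard",
          "stopwords": "_english_"
        }
      }
    }
  }
}
Here std_english behaves like the standard analyzer but additionally removes English stop words.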
2.4 Normalizers
Normalizers are similar to analyzers except that they may only emit a single token. Consequently, a normalizer has no tokenizer and accepts only a subset of the available char filters and token filters (only filters that work on a per-character basis can be used). For example, a lowercasing filter can be used, but not a stemming filter, which needs to look at the keyword as a whole. As of Elasticsearch 7.2, a normalizer supports the following filters: arabic_normalization, asciifolding, bengali_normalization, cjk_width, decimal_digit, elision, german_normalization, hindi_normalization, indic_normalization, lowercase, persian_normalization, scandinavian_folding, serbian_normalization, sorani_normalization, uppercase.
Elasticsearch currently ships with no built-in normalizers, so the only way to get one is to define a custom one. A custom normalizer consists of a list of character filters and token filters.
The following defines a custom normalizer my_normalizer, which combines a custom quote character filter with the built-in lowercase and asciifolding filters.
{
"settings": {
"analysis": {
"char_filter": {
"quote": {
"type": "mapping",
"mappings": [
"« => \"",
"» => \""
]
}
},
"normalizer": {
"my_normalizer": { #1 定义一个normalizer
"type": "custom",
"char_filter": ["quote"],
"filter": ["lowercase", "asciifolding"]
}
}
}
},
"mappings": {
"properties": {
"foo": {
"type": "keyword",
"normalizer": "my_normalizer" #2 keyword中使用上述normlizer
}
}
}
}
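Assuming the settings above were applied when creating an index (called my_keyword_index here purely for illustration), the normalizer can be inspected with the analyze API, which accepts a normalizer parameter:
GET my_keyword_index/_analyze
{
  "normalizer": "my_normalizer",
  "text": "« BÀR »"
}
Because a normalizer has no tokenizer, the response contains a single token: the guillemets are mapped to plain quotes by the quote char filter, and the remaining characters are lowercased and ASCII-folded.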
2.5 Tokenizers
A tokenizer receives a stream of characters, breaks it up into individual tokens (usually individual words), and outputs a stream of tokens. For example, the whitespace tokenizer breaks text into tokens whenever it sees whitespace: it would convert the text "Quick brown fox!" into the terms [Quick, brown, fox!].
The tokenizer also records the order or position of each term and the start and end character offsets of the original word that the term represents. The relative positions are used for phrase queries and word proximity queries; the character offsets are used for highlighting search snippets.
Elasticsearch has a number of built-in tokenizers which can be used to build custom analyzers (an analyze API sketch follows the lists below).
The following word-oriented tokenizers are usually used to split full text into individual words:
Standard Tokenizer
The standard tokenizer divides text into terms on word boundaries, as defined by the Unicode Text Segmentation algorithm. It removes most punctuation symbols. It is the best choice for most languages.
Letter Tokenizer
The letter tokenizer divides text into terms whenever it encounters a character which is not a letter.
Lowercase Tokenizer
The lowercase tokenizer, like the letter tokenizer, divides text into terms whenever it encounters a character which is not a letter, but it also lowercases all terms.
Whitespace Tokenizer
The whitespace tokenizer divides text into terms whenever it encounters any whitespace character.
UAX URL Email Tokenizer
The uax_url_email tokenizer is like the standard tokenizer except that it recognises URLs and email addresses as single tokens.
Classic Tokenizer
The classic tokenizer is a grammar based tokenizer for the English Language.
Thai Tokenizer
The thai tokenizer segments Thai text into words.
Partial word tokenizers break up text or words into small fragments, for partial-word matching:
N-Gram Tokenizer
The ngram tokenizer can break up text into words when it encounters any of a list of specified characters (e.g. whitespace or punctuation), then it returns n-grams of each word: a sliding window of continuous letters, e.g. quick → [qu, ui, ic, ck].
Edge N-Gram Tokenizer
The edge_ngram tokenizer can break up text into words when it encounters any of a list of specified characters (e.g. whitespace or punctuation), then it returns n-grams of each word which are anchored to the start of the word, e.g. quick → [q, qu, qui, quic, quick].
The following structured-text tokenizers are usually used with structured text like identifiers, email addresses, zip codes, and paths, rather than with full text:
Keyword Tokenizer
The keyword tokenizer is a “noop” tokenizer that accepts whatever text it is given and outputs the exact same text as a single term. It can be combined with token filters like lowercase to normalise the analysed terms.
Pattern Tokenizer
The pattern tokenizer uses a regular expression to either split text into terms whenever it matches a word separator, or to capture matching text as terms.
Simple Pattern Tokenizer
The simple_pattern tokenizer uses a regular expression to capture matching text as terms. It uses a restricted subset of regular expression features and is generally faster than the pattern tokenizer.
Char Group Tokenizer
The char_group tokenizer is configurable through sets of characters to split on, which is usually less expensive than running regular expressions.
Simple Pattern Split Tokenizer
The simple_pattern_split tokenizer uses the same restricted regular expression subset as the simple_pattern tokenizer, but splits the input at matches rather than returning the matches as terms.
Path Tokenizer
The path_hierarchy tokenizer takes a hierarchical value like a filesystem path, splits on the path separator, and emits a term for each component in the tree, e.g. /foo/bar/baz → [/foo, /foo/bar, /foo/bar/baz ].
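As referenced above, any of these tokenizers can be tried on its own with the analyze API. A quick sketch reproducing the path_hierarchy example from the last list item:
POST _analyze
{
  "tokenizer": "path_hierarchy",
  "text": "/foo/bar/baz"
}
The response contains the terms /foo, /foo/bar, and /foo/bar/baz.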
2.6 Token Filters
Token filters accept a stream of tokens from a tokenizer and can modify tokens (e.g. lowercasing), delete tokens (e.g. removing stop words), or add tokens (e.g. synonyms).
Elasticsearch has a large number of built-in token filters which can be used to build custom analyzers. Elasticsearch 7.2 includes the following token filters (they are not described one by one here; selected filters will be covered later as needed, and a sketch showing how filters are combined in a custom analyzer follows this list):
ASCII Folding Token Filter
Flatten Graph Token Filter
Length Token Filter
Lowercase Token Filter
Uppercase Token Filter
NGram Token Filter
Edge NGram Token Filter
Porter Stem Token Filter
Shingle Token Filter
Stop Token Filter
Word Delimiter Token Filter
Word Delimiter Graph Token Filter
Multiplexer Token Filter
Conditional Token Filter
Predicate Token Filter Script
Stemmer Token Filter
Stemmer Override Token Filter
Keyword Marker Token Filter
Keyword Repeat Token Filter
KStem Token Filter
Snowball Token Filter
Phonetic Token Filter
Synonym Token Filter
Parsing synonym files
Synonym Graph Token Filter
Compound Word Token Filters
Reverse Token Filter
Elision Token Filter
Truncate Token Filter
Unique Token Filter
Pattern Capture Token Filter
Pattern Replace Token Filter
Trim Token Filter
Limit Token Count Token Filter
Hunspell Token Filter
Common Grams Token Filter
Normalization Token Filter
CJK Width Token Filter
CJK Bigram Token Filter
Delimited Payload Token Filter
Keep Words Token Filter
Keep Types Token Filter
Exclude mode settings example
Classic Token Filter
Apostrophe Token Filter
Decimal Digit Token Filter
Fingerprint Token Filter
MinHash Token Filter
Remove Duplicates Token Filter
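As referenced above, a sketch of how several token filters are chained behind a tokenizer in a custom analyzer; the index and analyzer names are illustrative, and lowercase, stop, and porter_stem are the built-in names of the Lowercase, Stop, and Porter Stem filters listed above:
PUT my_filter_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_stemmed": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "stop", "porter_stem" ]
        }
      }
    }
  }
}
The filters run in the order listed, so tokens are lowercased, stop words are dropped, and the remaining tokens are stemmed.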
2.7 Character Filters
Character filters are used to preprocess the stream of characters before it is passed to the tokenizer.
A character filter receives the original text as a stream of characters and can transform the stream by adding, removing, or changing characters.
For example, a character filter could convert Hindu-Arabic numerals (٠١٢٣٤٥٦٧٨٩) into their Arabic-Latin equivalents (0123456789), or strip HTML elements such as <b> from the stream.
Elasticsearch has a few built-in character filters which can be used to build custom analyzers. The three built-in character filters are:
HTML Strip Character Filter
The html_strip character filter strips out HTML elements such as <b> and decodes HTML entities such as &amp; and the HTML non-breaking space.
Mapping Character Filter
The mapping character filter replaces every occurrence of the specified strings with the specified replacements.
Pattern Replace Character Filter
The pattern_replace character filter replaces any text matching a regular expression with the specified replacement.
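A quick sketch of the html_strip character filter via the analyze API; the keyword tokenizer keeps the whole input as a single token, which makes the effect of the char filter easy to see:
POST _analyze
{
  "tokenizer": "keyword",
  "char_filter": [ "html_strip" ],
  "text": "<p>I&apos;m so <b>happy</b>!</p>"
}
The <p> and <b> tags are removed and the &apos; entity is decoded, leaving a single token containing the plain text.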
3 Examples
This section is intended to collect common analyzers and concrete examples.
to add
4 Notes
Reference: Elasticsearch official documentation, 7.2/analysis-lang-analyzer