Elasticsearch Learning (7): Elasticsearch Analysis

Stella981

I. Analysis

1. Analysis

  • First, tokenize a block of text into the individual terms suitable for the inverted index.
  • Then, normalize those terms into a standard form to improve their "searchability" (recall).

Analysis is performed by an analyzer.

2. Analyzer

  • Character filter: preprocesses the string (for example, stripping extra whitespace) so it is "tidier" before tokenization. An analyzer may contain zero or more character filters.
  • Tokenizer: breaks the string into individual terms (for example, splitting on whitespace into single words). An analyzer must contain exactly one tokenizer.
  • Token filters: every term passes through the token filters, which may modify, add, or remove tokens.

An analyzer is only applied to full-text fields; when a field holds an exact value, the field is not analyzed.

  • Full-text fields: e.g. string, text
  • Exact values: e.g. numbers, dates
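The three-stage pipeline described above (character filters, then one tokenizer, then token filters) can be sketched in plain Python. This is an illustrative approximation only, not Elasticsearch's actual implementation:

```python
import re

def analyze(text):
    # Character filter: collapse extra whitespace before tokenizing
    text = re.sub(r"\s+", " ", text.strip())
    # Tokenizer: split the string into individual terms
    tokens = text.split(" ")
    # Token filter: normalize each term (here, lowercasing)
    return [t.lower() for t in tokens]

print(analyze("  The  Quick   Brown FOX "))
# ['the', 'quick', 'brown', 'fox']
```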

II. Custom Analyzers

1. char_filter (character filters)

  • html_strip (strips HTML tags). Parameters:
    • escaped_tags: an array of HTML tags that should not be removed from the original text
  • mapping (custom mapping replacement). Parameters:
    • mappings: an array of mappings, each element in the form key => value
    • mappings_path: an absolute path, or a path relative to the config directory, of a UTF-8 encoded file in which each line is one key => value mapping
  • pattern_replace (matches characters with a regular expression and replaces them with the specified string). Parameters:
    • pattern: the regular expression to match
    • replacement: the replacement string
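The behavior of the mapping char filter can be mimicked in a few lines of Python (a simplified sketch; the real filter operates on the character stream before tokenization):

```python
def mapping_char_filter(text, mappings):
    # Apply each "key => value" rule as a plain string replacement,
    # mirroring what the mapping char filter does to the input text
    for rule in mappings:
        key, value = rule.split("=>")
        text = text.replace(key, value)
    return text

print(mapping_char_filter("you & I :)", ["&=>and", ":)=>happy"]))
# you and I happy
```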

2. tokenizer (tokenizers)

Only a few commonly used tokenizers are listed here; see the official documentation for more.

  • standard (the default tokenizer. It splits text on the word boundaries defined by the Unicode Consortium's text segmentation rules and removes most punctuation, which makes it a good choice for most languages). Parameters:
    • max_token_length: the maximum token length. A token longer than this is split at that length. Defaults to 255.
  • letter (splits on any character that is not a letter). Parameters: none
  • lowercase (like letter, but also lowercases every token). Parameters: none
  • whitespace (splits on whitespace). Parameters: none
  • keyword (a no-op tokenizer: it outputs exactly what it receives). Parameters:
    • buffer_size: the term buffer size. Defaults to 256. The buffer grows by this amount until all of the text has been consumed. Changing this setting is not recommended.
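The effect of max_token_length is easy to see with a rough Python approximation of the standard tokenizer (splitting on non-alphanumeric characters is a simplification of the Unicode word-breaking rules):

```python
import re

def tokenize(text, max_token_length=255):
    # Roughly approximate the standard tokenizer: split the text
    # into runs of letters and digits
    words = re.findall(r"[A-Za-z0-9]+", text)
    tokens = []
    for w in words:
        # A word longer than max_token_length is split into chunks
        for i in range(0, len(w), max_token_length):
            tokens.append(w[i:i + max_token_length])
    return tokens

print(tokenize("a banana", max_token_length=5))
# ['a', 'banan', 'a']
```

With max_token_length=5, "banana" is emitted as "banan" plus a trailing "a", which is exactly the splitting visible in the _analyze output later in this article.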

3. filter (token filters)

There are too many token filters to introduce one by one here; see the official documentation.
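As one concrete example, the stop token filter (used in the custom analyzer below) simply drops stopwords from the token stream. A minimal Python sketch of that behavior:

```python
def stop_filter(tokens, stopwords):
    # Remove stopwords from the token stream, as the "stop"
    # token filter does
    return [t for t in tokens if t not in stopwords]

print(stop_filter(["the", "people", "and", "a", "banana"], {"the", "a"}))
# ['people', 'and', 'banana']
```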

4. Defining a custom analyzer

PUT newindex

{
  "settings": {
    "analysis": {
      "char_filter": {
        "my_char_filter": {
          "type": "mapping",
          "mappings": [
            "&=>and",
            ":)=>happy",
            ":(=>sad"
          ]
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      },
      "filter": {
        "my_filter": {
          "type": "stop",
          "stopwords": [
            "the",
            "a"
          ]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": [
            "html_strip",
            "my_char_filter"
          ],
          "tokenizer": "my_tokenizer",
          "filter": [
            "lowercase",
            "my_filter"
          ]
        }
      }
    }
  }
}

Then analyze a string with the custom analyzer:

POST newindex/_analyze

{
  "analyzer": "my_analyzer",
  "text": "<span>If you are :(, I will be :).</span> The people & a banana",
  "explain": true
}

The response shows each stage of the analysis:

{
  "detail": {
    "custom_analyzer": true,
    "charfilters": [
      {
        "name": "html_strip",
        "filtered_text": [
          "if you are :(, I will be :). the people & a banana"
        ]
      },
      {
        "name": "my_char_filter",
        "filtered_text": [
          "if you are sad, I will be happy. the people and a banana"
        ]
      }
    ],
    "tokenizer": {
      "name": "my_tokenizer",
      "tokens": [
        {
          "token": "if",
          "start_offset": 6,
          "end_offset": 8,
          "type": "<ALPHANUM>",
          "position": 0,
          "bytes": "[69 66]",
          "positionLength": 1
        },
        {
          "token": "you",
          "start_offset": 9,
          "end_offset": 12,
          "type": "<ALPHANUM>",
          "position": 1,
          "bytes": "[79 6f 75]",
          "positionLength": 1
        },
        {
          "token": "are",
          "start_offset": 13,
          "end_offset": 16,
          "type": "<ALPHANUM>",
          "position": 2,
          "bytes": "[61 72 65]",
          "positionLength": 1
        },
        {
          "token": "sad",
          "start_offset": 17,
          "end_offset": 19,
          "type": "<ALPHANUM>",
          "position": 3,
          "bytes": "[73 61 64]",
          "positionLength": 1
        },
        {
          "token": "I",
          "start_offset": 21,
          "end_offset": 22,
          "type": "<ALPHANUM>",
          "position": 4,
          "bytes": "[49]",
          "positionLength": 1
        },
        {
          "token": "will",
          "start_offset": 23,
          "end_offset": 27,
          "type": "<ALPHANUM>",
          "position": 5,
          "bytes": "[77 69 6c 6c]",
          "positionLength": 1
        },
        {
          "token": "be",
          "start_offset": 28,
          "end_offset": 30,
          "type": "<ALPHANUM>",
          "position": 6,
          "bytes": "[62 65]",
          "positionLength": 1
        },
        {
          "token": "happy",
          "start_offset": 31,
          "end_offset": 33,
          "type": "<ALPHANUM>",
          "position": 7,
          "bytes": "[68 61 70 70 79]",
          "positionLength": 1
        },
        {
          "token": "the",
          "start_offset": 42,
          "end_offset": 45,
          "type": "<ALPHANUM>",
          "position": 8,
          "bytes": "[74 68 65]",
          "positionLength": 1
        },
        {
          "token": "peopl",
          "start_offset": 46,
          "end_offset": 51,
          "type": "<ALPHANUM>",
          "position": 9,
          "bytes": "[70 65 6f 70 6c]",
          "positionLength": 1
        },
        {
          "token": "e",
          "start_offset": 51,
          "end_offset": 52,
          "type": "<ALPHANUM>",
          "position": 10,
          "bytes": "[65]",
          "positionLength": 1
        },
        {
          "token": "and",
          "start_offset": 53,
          "end_offset": 54,
          "type": "<ALPHANUM>",
          "position": 11,
          "bytes": "[61 6e 64]",
          "positionLength": 1
        },
        {
          "token": "a",
          "start_offset": 55,
          "end_offset": 56,
          "type": "<ALPHANUM>",
          "position": 12,
          "bytes": "[61]",
          "positionLength": 1
        },
        {
          "token": "banan",
          "start_offset": 57,
          "end_offset": 62,
          "type": "<ALPHANUM>",
          "position": 13,
          "bytes": "[62 61 6e 61 6e]",
          "positionLength": 1
        },
        {
          "token": "a",
          "start_offset": 62,
          "end_offset": 63,
          "type": "<ALPHANUM>",
          "position": 14,
          "bytes": "[61]",
          "positionLength": 1
        }
      ]
    },
    "tokenfilters": [
      {
        "name": "lowercase",
        "tokens": [
          {
            "token": "if",
            "start_offset": 6,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "bytes": "[69 66]",
            "positionLength": 1
          },
          {
            "token": "you",
            "start_offset": 9,
            "end_offset": 12,
            "type": "<ALPHANUM>",
            "position": 1,
            "bytes": "[79 6f 75]",
            "positionLength": 1
          },
          {
            "token": "are",
            "start_offset": 13,
            "end_offset": 16,
            "type": "<ALPHANUM>",
            "position": 2,
            "bytes": "[61 72 65]",
            "positionLength": 1
          },
          {
            "token": "sad",
            "start_offset": 17,
            "end_offset": 19,
            "type": "<ALPHANUM>",
            "position": 3,
            "bytes": "[73 61 64]",
            "positionLength": 1
          },
          {
            "token": "i",
            "start_offset": 21,
            "end_offset": 22,
            "type": "<ALPHANUM>",
            "position": 4,
            "bytes": "[69]",
            "positionLength": 1
          },
          {
            "token": "will",
            "start_offset": 23,
            "end_offset": 27,
            "type": "<ALPHANUM>",
            "position": 5,
            "bytes": "[77 69 6c 6c]",
            "positionLength": 1
          },
          {
            "token": "be",
            "start_offset": 28,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 6,
            "bytes": "[62 65]",
            "positionLength": 1
          },
          {
            "token": "happy",
            "start_offset": 31,
            "end_offset": 33,
            "type": "<ALPHANUM>",
            "position": 7,
            "bytes": "[68 61 70 70 79]",
            "positionLength": 1
          },
          {
            "token": "the",
            "start_offset": 42,
            "end_offset": 45,
            "type": "<ALPHANUM>",
            "position": 8,
            "bytes": "[74 68 65]",
            "positionLength": 1
          },
          {
            "token": "peopl",
            "start_offset": 46,
            "end_offset": 51,
            "type": "<ALPHANUM>",
            "position": 9,
            "bytes": "[70 65 6f 70 6c]",
            "positionLength": 1
          },
          {
            "token": "e",
            "start_offset": 51,
            "end_offset": 52,
            "type": "<ALPHANUM>",
            "position": 10,
            "bytes": "[65]",
            "positionLength": 1
          },
          {
            "token": "and",
            "start_offset": 53,
            "end_offset": 54,
            "type": "<ALPHANUM>",
            "position": 11,
            "bytes": "[61 6e 64]",
            "positionLength": 1
          },
          {
            "token": "a",
            "start_offset": 55,
            "end_offset": 56,
            "type": "<ALPHANUM>",
            "position": 12,
            "bytes": "[61]",
            "positionLength": 1
          },
          {
            "token": "banan",
            "start_offset": 57,
            "end_offset": 62,
            "type": "<ALPHANUM>",
            "position": 13,
            "bytes": "[62 61 6e 61 6e]",
            "positionLength": 1
          },
          {
            "token": "a",
            "start_offset": 62,
            "end_offset": 63,
            "type": "<ALPHANUM>",
            "position": 14,
            "bytes": "[61]",
            "positionLength": 1
          }
        ]
      },
      {
        "name": "my_filter",
        "tokens": [
          {
            "token": "if",
            "start_offset": 6,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "bytes": "[69 66]",
            "positionLength": 1
          },
          {
            "token": "you",
            "start_offset": 9,
            "end_offset": 12,
            "type": "<ALPHANUM>",
            "position": 1,
            "bytes": "[79 6f 75]",
            "positionLength": 1
          },
          {
            "token": "are",
            "start_offset": 13,
            "end_offset": 16,
            "type": "<ALPHANUM>",
            "position": 2,
            "bytes": "[61 72 65]",
            "positionLength": 1
          },
          {
            "token": "sad",
            "start_offset": 17,
            "end_offset": 19,
            "type": "<ALPHANUM>",
            "position": 3,
            "bytes": "[73 61 64]",
            "positionLength": 1
          },
          {
            "token": "i",
            "start_offset": 21,
            "end_offset": 22,
            "type": "<ALPHANUM>",
            "position": 4,
            "bytes": "[69]",
            "positionLength": 1
          },
          {
            "token": "will",
            "start_offset": 23,
            "end_offset": 27,
            "type": "<ALPHANUM>",
            "position": 5,
            "bytes": "[77 69 6c 6c]",
            "positionLength": 1
          },
          {
            "token": "be",
            "start_offset": 28,
            "end_offset": 30,
            "type": "<ALPHANUM>",
            "position": 6,
            "bytes": "[62 65]",
            "positionLength": 1
          },
          {
            "token": "happy",
            "start_offset": 31,
            "end_offset": 33,
            "type": "<ALPHANUM>",
            "position": 7,
            "bytes": "[68 61 70 70 79]",
            "positionLength": 1
          },
          {
            "token": "peopl",
            "start_offset": 46,
            "end_offset": 51,
            "type": "<ALPHANUM>",
            "position": 9,
            "bytes": "[70 65 6f 70 6c]",
            "positionLength": 1
          },
          {
            "token": "e",
            "start_offset": 51,
            "end_offset": 52,
            "type": "<ALPHANUM>",
            "position": 10,
            "bytes": "[65]",
            "positionLength": 1
          },
          {
            "token": "and",
            "start_offset": 53,
            "end_offset": 54,
            "type": "<ALPHANUM>",
            "position": 11,
            "bytes": "[61 6e 64]",
            "positionLength": 1
          },
          {
            "token": "banan",
            "start_offset": 57,
            "end_offset": 62,
            "type": "<ALPHANUM>",
            "position": 13,
            "bytes": "[62 61 6e 61 6e]",
            "positionLength": 1
          }
        ]
      }
    ]
  }
}
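As a cross-check, the whole custom pipeline can be approximated in plain Python. Assuming simplified stand-ins for html_strip (a regex tag stripper) and the standard tokenizer (splitting on non-alphanumerics, with tokens capped at length 5), the final token stream matches the my_filter output in the response above:

```python
import re

def my_analyzer(text):
    # Char filters: strip HTML tags, then apply the custom mappings
    text = re.sub(r"<[^>]+>", "", text)
    for key, value in [("&", "and"), (":)", "happy"), (":(", "sad")]:
        text = text.replace(key, value)
    # Tokenizer: split on non-alphanumerics, max token length 5
    tokens = []
    for w in re.findall(r"[A-Za-z0-9]+", text):
        tokens += [w[i:i + 5] for i in range(0, len(w), 5)]
    # Token filters: lowercase, then drop the stopwords "the" and "a"
    return [t.lower() for t in tokens if t.lower() not in {"the", "a"}]

text = "<span>If you are :(, I will be :).</span> The people & a banana"
print(my_analyzer(text))
# ['if', 'you', 'are', 'sad', 'i', 'will', 'be', 'happy', 'peopl', 'e', 'and', 'banan']
```

Note that the stopword filter also removes the trailing "a" produced by splitting "banana", just as in the response above.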