Elasticsearch


I. Spring Data
1. Introduction
Spring Data is an open-source framework that simplifies access to relational databases, non-relational data stores, and search indices, with support for cloud services. It dramatically reduces JPA boilerplate: database access and manipulation can be implemented with almost no hand-written code. Beyond CRUD it also provides common features such as paging and sorting.

2. Official website
https://spring.io/projects/spring-data

II. Using Spring Data Elasticsearch from Java to operate Elasticsearch
1. Maven dependency (Spring Boot 2.4.1)

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

Notes (versions used in this setup):
spring-data-elasticsearch 4.1.1
elasticsearch 7.11.1
elasticsearch-rest-high-level-client 7.11.1

2. application.yml configuration

server:
  port: 8015

spring:
  application:
    name: test-15-elasticsearch
  elasticsearch:
    rest:
      # Elasticsearch node addresses; separate multiple nodes with commas
      uris: 192.168.3.56:9200,192.168.3.57:9200,192.168.3.58:9200
      username: spadger
      password: spadger
      # Connection timeout in milliseconds (default: 1s)
      connection-timeout: 1000
      # Read timeout in milliseconds (default: 30s)
      read-timeout: 1000

3. Product entity class

package com.test.elasticsearch.entity;

import lombok.Data;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

import java.io.Serializable;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

/**
 * indexName    the index name
 * shards       number of primary shards (the default is fine unless you have special requirements)
 * replicas     number of replica shards (the default is fine unless you have special requirements)
 */
@Data
@Document(indexName = "product", shards = 1, replicas = 1)
public class Product implements Serializable {

  private static final long serialVersionUID = 551589397625941751L;

  @Id
  private Long id; // unique product identifier
  @Field(type = FieldType.Text)
  private String title; // product name
  @Field(type = FieldType.Keyword)
  private String category; // product category
  @Field(type = FieldType.Double)
  private Double price; // product price
  @Field(type = FieldType.Keyword, index = false) // index = false: the field is stored but cannot be used as a search criterion
  private String images; // image path

}
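For orientation, the mapping these annotations translate to should look roughly like the following (a sketch; the exact JSON that `putMapping` generates can vary slightly between spring-data-elasticsearch versions). The `@Id` field does not appear in the mapping body; it becomes the document's `_id`:

```json
{
  "properties": {
    "title":    { "type": "text" },
    "category": { "type": "keyword" },
    "price":    { "type": "double" },
    "images":   { "type": "keyword", "index": false }
  }
}
```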

4. ElasticsearchRestTemplate client test class

package com.test.elasticsearch;

import com.test.elasticsearch.entity.Product;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import lombok.extern.slf4j.Slf4j;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
import org.elasticsearch.index.query.MatchPhraseQueryBuilder;
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.QueryStringQueryBuilder;
import org.elasticsearch.index.query.RangeQueryBuilder;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.Aggregations;
import org.elasticsearch.search.aggregations.bucket.terms.ParsedStringTerms;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.data.domain.Sort.Direction;
import org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate;
import org.springframework.data.elasticsearch.core.IndexOperations;
import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.data.elasticsearch.core.mapping.IndexCoordinates;
import org.springframework.data.elasticsearch.core.query.NativeSearchQuery;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
import org.springframework.data.elasticsearch.core.query.Query;
import org.springframework.data.elasticsearch.core.query.UpdateQuery;
import org.springframework.data.elasticsearch.core.query.UpdateResponse;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

/**
 * Working with the es index via Spring Data Elasticsearch
 * ElasticsearchRestTemplate client tests
 */
@Slf4j
@RunWith(SpringJUnit4ClassRunner.class) // without this annotation, beans cannot be autowired into the test
@SpringBootTest
public class SpringDataESIndexTest {

  @Autowired
  private ElasticsearchRestTemplate elasticsearchRestTemplate;

  /**
   * Create an index
   */
  @Test
  public void createIndex() {
    /**
     * Option 1: create the index by name (this only creates an index called product; no mapping is created)
     * Known issue: after deleting the index and creating it again, ES reports "reason":"index [product/906Vi3zzRaKb1zJaIKzvtg] already exists"
     * Reported cause: the index was first built with the standard analyzer and then rebuilt with the ik analyzer, which amounts to modifying the mapping of an existing index
     * Workaround: skip explicit index creation. Once the product index has been deleted it no longer exists in ES, but because Product declares indexName = "product", the index is created automatically the first time a document is inserted
     */
    IndexOperations indexOperations = elasticsearchRestTemplate.indexOps(IndexCoordinates.of("product"));
    boolean result = indexOperations.create();
    System.out.println("Index created: " + result);
    boolean putMappingResult = indexOperations.putMapping(Product.class);
    System.out.println("Mapping added: " + putMappingResult);
    /**
     * Option 2: create the index from the entity class (creates an index called product and also derives the mapping from the fields of Product.class)
     * Same known issue and workaround as option 1
     */
//    IndexOperations indexOperations = elasticsearchRestTemplate.indexOps(Product.class);
//    boolean result = indexOperations.create();
//    System.out.println("Index created: " + result);
  }

  /**
   * Check whether an index exists, by name
   */
  @Test
  public void existsIndex() {
    // Option 1: check purely by index name
    boolean result01 = elasticsearchRestTemplate.indexOps(IndexCoordinates.of("product")).exists();
    System.out.println("Index exists: " + result01);
    // Option 2: check via the entity class annotated with @Document(indexName = "product", shards = 1, replicas = 1)
    boolean result02 = elasticsearchRestTemplate.indexOps(Product.class).exists();
    System.out.println("Index exists: " + result02);
  }

  /**
   * Delete an index by name
   */
  @Test
  public void deleteIndex() {
    boolean result = false;
    IndexOperations indexOperations = elasticsearchRestTemplate.indexOps(Product.class);
    System.out.println("Index exists: " + indexOperations.exists());
    if (indexOperations.exists()) {
      result = indexOperations.delete();
    }
    System.out.println("Deleted: " + result);
    System.out.println("Index exists: " + indexOperations.exists());
  }

  /**
   * Create a document (insert data into the index)
   * Notes:
   * 1) If the product index does not exist, it is created automatically when the document is added, with a mapping derived from the fields of Product.class
   * 2) If an id is specified it is used as the document id; otherwise es generates one automatically
   * 3) If an id is specified and a document with that id already exists, the operation becomes an update
   */
  @Test
  public void createDocument() {
    Product product = new Product();
    product.setId(1001L);
    product.setTitle("小米 S12");
    product.setImages("F:\\pictures\\yy测试下载图片.jpg");
    product.setPrice(3688.99);
    product.setCategory("手机");
    Product saveResult = elasticsearchRestTemplate.save(product);
    System.out.println("Create document with explicit id, result: " + saveResult);
    Product product01 = new Product();
    product01.setTitle("一加 T3");
    product01.setImages("F:\\pictures\\yy测试下载图片.jpg");
    product01.setPrice(4666.88);
    product01.setCategory("手机");
    Product saveResult01 = elasticsearchRestTemplate.save(product01);
    System.out.println("Create document without id, result: " + saveResult01);
  }

  /**
   * Create documents in bulk
   * Notes: same as for single-document creation (automatic index creation, id handling, upsert semantics)
   */
  @Test
  public void createDocumentList() {
    List<Product> list = new ArrayList<>();
    Product product01 = new Product();
    product01.setId(1001L);
    product01.setTitle("小米 S12");
    product01.setImages("F:\\pictures\\yy测试下载图片.jpg");
    product01.setPrice(3688.99);
    product01.setCategory("手机");
    list.add(product01);
    Product product02 = new Product();
    product02.setId(1003L);
    product02.setTitle("一加 T3");
    product02.setImages("F:\\pictures\\yy测试下载图片.jpg");
    product02.setPrice(4666.88);
    product02.setCategory("手机");
    list.add(product02);
    Product product03 = new Product();
    product03.setId(1002L);
    product03.setTitle("S11");
    product03.setImages("F:\\pictures\\yy测试下载图片.jpg");
    product03.setPrice(2999.99);
    product03.setCategory("小米手机");
    list.add(product03);
    Iterable<Product> saveResult = elasticsearchRestTemplate.save(list);
    System.out.println("Bulk create documents, result: " + saveResult);
  }

  /**
   * Check whether a document exists, by document id
   */
  @Test
  public void existsDocument() {
    boolean result = elasticsearchRestTemplate.exists("1001", Product.class);
    System.out.println("Document exists: " + result);
  }

  /**
   * Delete a document by id
   */
  @Test
  public void deleteDocument() {
    String result = elasticsearchRestTemplate.delete("j07wvIQBG2vb63j9iSAY", Product.class);
    System.out.println("Delete by document id, result: " + result);
  }

  /**
   * Bulk-delete documents matching a query
   */
  @Test
  public void deleteDocumentByQueryCondition() {
    QueryStringQueryBuilder queryStringQueryBuilder = QueryBuilders.queryStringQuery("小米");
    NativeSearchQuery nativeSearchQuery = new NativeSearchQuery(queryStringQueryBuilder);
    elasticsearchRestTemplate.delete(nativeSearchQuery, Product.class);
    System.out.println("Deleted all documents matching the query");
  }

  /**
   * Update documents in bulk (save acts as an upsert)
   */
  @Test
  public void updateDocumentList() {
    List<Product> list = new ArrayList<>();
    Product product01 = new Product();
    product01.setId(1003L);
    product01.setTitle("一加 T3");
    product01.setImages("F:\\pictures\\yy测试下载图片.jpg");
    product01.setPrice(4688.88);
    product01.setCategory("手机");
    list.add(product01);
    Product product03 = new Product();
    product03.setId(1004L);
    product03.setTitle("一加 T3 PRO");
    product03.setImages("F:\\pictures\\yy测试下载图片.jpg");
    product03.setPrice(5499.99);
    product03.setCategory("手机");
    list.add(product03);
    Iterable<Product> saveResult = elasticsearchRestTemplate.save(list);
    System.out.println("Bulk update documents, result: " + saveResult);
  }

  /**
   * Partially update a document
   */
  @Test
  public void updateDocument() {
    // ctx._source is fixed syntax; title and price are field names in the document
    String script = "ctx._source.title='小米 S12 PRO';ctx._source.price=3999.99";
    // Update selected fields of the document identified by id
    UpdateQuery updateQuery = UpdateQuery.builder("1001").withScript(script).build();
    IndexCoordinates indexCoordinates = IndexCoordinates.of("product");
    UpdateResponse updateResponse = elasticsearchRestTemplate.update(updateQuery, indexCoordinates);
    System.out.println("Partial update, result: " + updateResponse);
  }
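For comparison, the same partial update expressed directly against the REST API would be roughly the following `_update` request (a sketch; the client builds an equivalent script-based request):

```
POST /product/_update/1001
{
  "script": {
    "source": "ctx._source.title='小米 S12 PRO'; ctx._source.price=3999.99"
  }
}
```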

  /**
   * Query documents -- match all
   */
  @Test
  public void searchDocumentMatchAll() {
    MatchAllQueryBuilder matchAllQueryBuilder = QueryBuilders.matchAllQuery();
    NativeSearchQuery nativeSearchQuery = new NativeSearchQuery(matchAllQueryBuilder);
    SearchHits<Product> searchHits = elasticsearchRestTemplate.search(nativeSearchQuery, Product.class);
    List<Product> list = new ArrayList<>();
    System.out.println("Match-all query, total hits: " + searchHits.getTotalHits());
    searchHits.forEach(sh -> {
      list.add(sh.getContent());
    });
    System.out.println("Match-all query, results: " + list);
  }

  /**
   * Query documents -- get by document id
   */
  @Test
  public void searchDocumentById() {
    Product product = elasticsearchRestTemplate.get("1001", Product.class);
    System.out.println("Get by document id, result: " + product);
  }

  /**
   * Query documents -- full-text search via query_string
   */
  @Test
  public void searchDocumentLike() {
    // QueryBuilders.queryStringQuery("小米") does not target a specific field; by default it searches across the queryable fields. The query text is analyzed, and matching is done on the analyzed terms
    QueryStringQueryBuilder queryStringQueryBuilder = QueryBuilders.queryStringQuery("小米");
    NativeSearchQuery nativeSearchQuery = new NativeSearchQuery(queryStringQueryBuilder);
    SearchHits<Product> searchHits = elasticsearchRestTemplate.search(nativeSearchQuery, Product.class);
    List<Product> list = new ArrayList<>();
    System.out.println("query_string search, total hits: " + searchHits.getTotalHits());
    searchHits.forEach(sh -> {
      list.add(sh.getContent());
    });
    System.out.println("query_string search, results: " + list);
  }
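The query_string request above corresponds to roughly this query DSL (sketch):

```json
{ "query": { "query_string": { "query": "小米" } } }
```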

  /**
   * Query documents -- match query (full-text)
   */
  @Test
  public void searchDocumentMatch() {
    // QueryBuilders.matchQuery("title", "小米") targets a specific field; the query text is analyzed and matched against the field's indexed terms
    MatchQueryBuilder matchQueryBuilder = QueryBuilders.matchQuery("title", "小米");
    NativeSearchQuery nativeSearchQuery = new NativeSearchQuery(matchQueryBuilder);
    SearchHits<Product> searchHits = elasticsearchRestTemplate.search(nativeSearchQuery, Product.class);
    List<Product> list = new ArrayList<>();
    System.out.println("match query, total hits: " + searchHits.getTotalHits());
    searchHits.forEach(sh -> {
      list.add(sh.getContent());
    });
    System.out.println("match query, results: " + list);
  }

  /**
   * Query documents -- match_phrase (phrase search)
   * Note: with a phrase search the query text is not broken into independent terms, but the document field is still analyzed according to the analyzer configured on the entity (e.g. the ik analyzer); the phrase is matched against that indexed data.
   */
  @Test
  public void searchDocumentMatchPhrase() {
    // QueryBuilders.matchPhraseQuery("title", "小米") targets the title field; the query is matched as a phrase rather than as independent terms
    MatchPhraseQueryBuilder matchPhraseQueryBuilder = QueryBuilders.matchPhraseQuery("title", "小米");
    NativeSearchQuery nativeSearchQuery = new NativeSearchQuery(matchPhraseQueryBuilder);
    SearchHits<Product> searchHits = elasticsearchRestTemplate.search(nativeSearchQuery, Product.class);
    List<Product> list = new ArrayList<>();
    System.out.println("match_phrase query, total hits: " + searchHits.getTotalHits());
    searchHits.forEach(sh -> {
      list.add(sh.getContent());
    });
    System.out.println("match_phrase query, results: " + list);
  }
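The difference between match and match_phrase is easiest to see in the raw DSL; the two tests above send roughly these two request bodies (sketches):

```json
{ "query": { "match": { "title": "小米" } } }
{ "query": { "match_phrase": { "title": "小米" } } }
```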

  /**
   * Query documents -- range query
   */
  @Test
  public void searchDocumentRange() {
    // QueryBuilders.rangeQuery("price") builds a range query on the price field. gte(): greater than or equal, lte(): less than or equal, gt(): greater than, lt(): less than
    RangeQueryBuilder rangeQueryBuilder = QueryBuilders.rangeQuery("price").gte(4688.88D).lte(6000D); // gte(4688.88D) and gte(4688.88) are equivalent
    NativeSearchQuery nativeSearchQuery = new NativeSearchQuery(rangeQueryBuilder);
    SearchHits<Product> searchHits = elasticsearchRestTemplate.search(nativeSearchQuery, Product.class);
    List<Product> list = new ArrayList<>();
    System.out.println("Range query, total hits: " + searchHits.getTotalHits());
    searchHits.forEach(sh -> {
      list.add(sh.getContent());
    });
    System.out.println("Range query, results: " + list);
  }
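The range query above corresponds to roughly this DSL (sketch):

```json
{ "query": { "range": { "price": { "gte": 4688.88, "lte": 6000 } } } }
```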

  /**
   * Query documents -- simple multi-condition query
   * Example: products whose title contains "米" or whose price is at least 2555.88 (with should(); switch to must() for "and")
   */
  @Test
  public void searchDocumentMoreCondition() {
    List<QueryBuilder> queryBuilderList = new ArrayList<>();
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    queryBuilderList.add(QueryBuilders.matchQuery("title", "米")); // title must contain "米"
    queryBuilderList.add(QueryBuilders.rangeQuery("price").gte(2555.88)); // price must be >= 2555.88
//    boolQueryBuilder.must().addAll(queryBuilderList); // logical AND -- every condition must match
    boolQueryBuilder.should().addAll(queryBuilderList); // logical OR -- matching any one condition is enough
    NativeSearchQuery nativeSearchQuery = new NativeSearchQuery(boolQueryBuilder);
    SearchHits<Product> searchHits = elasticsearchRestTemplate.search(nativeSearchQuery, Product.class);
    List<Product> list = new ArrayList<>();
    System.out.println("Simple multi-condition query, total hits: " + searchHits.getTotalHits());
    searchHits.forEach(sh -> {
      list.add(sh.getContent());
    });
    System.out.println("Simple multi-condition query, results: " + list);
  }
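With should() active, the bool query sent is roughly the following (a sketch; with must() the array key would be "must" instead):

```json
{
  "query": {
    "bool": {
      "should": [
        { "match": { "title": "米" } },
        { "range": { "price": { "gte": 2555.88 } } }
      ]
    }
  }
}
```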

  /**
   * Query documents -- compound multi-condition query
   * Example: products whose title contains "小米" or "一加", and whose category is "手机"
   */
  @Test
  public void searchDocumentMoreConditionOther() {
    // Build the first condition
    List<QueryBuilder> queryBuilderList01 = new ArrayList<>();
    BoolQueryBuilder boolQueryBuilder01 = QueryBuilders.boolQuery();
    queryBuilderList01.add(QueryBuilders.matchQuery("title", "小米")); // title must contain "小米"
    queryBuilderList01.add(QueryBuilders.matchQuery("category", "手机")); // category must be "手机"
    boolQueryBuilder01.must().addAll(queryBuilderList01); // logical AND -- every condition must match
    // Build the second condition
    List<QueryBuilder> queryBuilderList02 = new ArrayList<>();
    BoolQueryBuilder boolQueryBuilder02 = QueryBuilders.boolQuery();
    queryBuilderList02.add(QueryBuilders.matchQuery("title", "一加")); // title must contain "一加"
    queryBuilderList02.add(QueryBuilders.matchQuery("category", "手机")); // category must be "手机"
    boolQueryBuilder02.must().addAll(queryBuilderList02); // logical AND -- every condition must match
    // Wrap condition 1 and condition 2 in condition 3, which becomes the final query
    List<QueryBuilder> queryBuilderList03 = new ArrayList<>();
    BoolQueryBuilder boolQueryBuilder03 = QueryBuilders.boolQuery();
    queryBuilderList03.add(boolQueryBuilder01);
    queryBuilderList03.add(boolQueryBuilder02);
    boolQueryBuilder03.should().addAll(queryBuilderList03); // logical OR -- matching either nested condition is enough
    // Run the query with condition 3
    NativeSearchQuery nativeSearchQuery = new NativeSearchQuery(boolQueryBuilder03);
    SearchHits<Product> searchHits = elasticsearchRestTemplate.search(nativeSearchQuery, Product.class);
    List<Product> list = new ArrayList<>();
    System.out.println("Compound multi-condition query, total hits: " + searchHits.getTotalHits());
    searchHits.forEach(sh -> {
      list.add(sh.getContent());
    });
    System.out.println("Compound multi-condition query, results: " + list);
  }
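The nested bool built above corresponds to roughly this DSL (sketch): two must clauses wrapped in an outer should:

```json
{
  "query": {
    "bool": {
      "should": [
        { "bool": { "must": [ { "match": { "title": "小米" } }, { "match": { "category": "手机" } } ] } },
        { "bool": { "must": [ { "match": { "title": "一加" } }, { "match": { "category": "手机" } } ] } }
      ]
    }
  }
}
```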

  /**
   * Query documents -- paging and sorting
   * Example: match all documents, sort by id ascending, and return the first page of two results
   */
  @Test
  public void searchDocumentPageAndOrder() {
    MatchAllQueryBuilder matchAllQueryBuilder = QueryBuilders.matchAllQuery();
    Query query = new NativeSearchQuery(matchAllQueryBuilder);
    // Sorting (Direction.ASC: ascending; Direction.DESC: descending)
    query.addSort(Sort.by(Direction.ASC, "id"));
    // Paging (page numbering starts at 0)
    query.setPageable(PageRequest.of(0, 2));
    SearchHits<Product> searchHits = elasticsearchRestTemplate.search(query, Product.class);
    List<Product> list = new ArrayList<>();
    System.out.println("Paging and sorting, total hits: " + searchHits.getTotalHits() + ", hits on this page: " + searchHits.getSearchHits().size());
    searchHits.forEach(sh -> {
      list.add(sh.getContent());
    });
    System.out.println("Paging and sorting, results: " + list);
  }

  /**
   * Query documents -- deduplication (field collapsing)
   * Example: match all documents, collapsing on category so only one document per category is returned
   * Note: the collapse field must not be of type text
   */
  @Test
  public void searchDocumentCollapse() {
    MatchAllQueryBuilder matchAllQueryBuilder = QueryBuilders.matchAllQuery();
    NativeSearchQueryBuilder nativeSearchQueryBuilder = new NativeSearchQueryBuilder();
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    boolQueryBuilder.must(matchAllQueryBuilder);
    nativeSearchQueryBuilder.withQuery(boolQueryBuilder);
    // Collapse on the category field; the collapse field must not be of type text
    nativeSearchQueryBuilder.withCollapseField("category");
    SearchHits<Product> searchHits = elasticsearchRestTemplate.search(nativeSearchQueryBuilder.build(), Product.class);
    List<Product> list = new ArrayList<>();
    System.out.println("Collapse query, total hits: " + searchHits.getTotalHits() + ", hits after collapsing: " + searchHits.getSearchHits().size());
    searchHits.forEach(sh -> {
      list.add(sh.getContent());
    });
    System.out.println("Collapse query, results: " + list);
  }
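Field collapsing corresponds to roughly this DSL (sketch):

```json
{
  "query": { "match_all": {} },
  "collapse": { "field": "category" }
}
```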

  /**
   * Query documents -- terms aggregation
   * Example: group products by category and count the documents in each group
   * Note: the aggregation field must not be of type text
   */
  @Test
  public void searchDocumentAggregations() {
    MatchAllQueryBuilder matchAllQueryBuilder = QueryBuilders.matchAllQuery();
    NativeSearchQueryBuilder nativeSearchQueryBuilder = new NativeSearchQueryBuilder();
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    boolQueryBuilder.must(matchAllQueryBuilder);
    nativeSearchQueryBuilder.withQuery(boolQueryBuilder);
    // In AggregationBuilders.terms("product_count"), product_count is the key the results are returned under; field("category") names the field to group by
    nativeSearchQueryBuilder.addAggregation(AggregationBuilders.terms("product_count").field("category"));
    SearchHits<Product> searchHits = elasticsearchRestTemplate.search(nativeSearchQueryBuilder.build(), Product.class);
    Aggregations aggregations = searchHits.getAggregations(); // the aggregation results
    ParsedStringTerms product_count = aggregations.get("product_count");
    Map<String, Long> map = new HashMap<>();
    System.out.println("Terms aggregation, raw result: " + product_count);
    for (Terms.Bucket bucket : product_count.getBuckets()) {
      map.put(bucket.getKeyAsString(), bucket.getDocCount());
    }
    System.out.println("Terms aggregation, results: " + map);
  }
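The terms aggregation corresponds to roughly this DSL (a sketch; "size": 0 is optional and suppresses the hit list when only the buckets are needed -- the Java code above does not set it):

```json
{
  "size": 0,
  "aggs": {
    "product_count": { "terms": { "field": "category" } }
  }
}
```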

}

5. User entity class

package com.test.elasticsearch.entity;

import java.util.Date;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

@Data
@NoArgsConstructor
@AllArgsConstructor
@ToString
@Document(indexName = "user", shards = 1, replicas = 1)
public class User {

  @Id
  @Field(type = FieldType.Long)
  private Long id;
  @Field(type = FieldType.Text)
  private String name;
  @Field(type = FieldType.Keyword)
  private String sex;
  @Field(type = FieldType.Integer)
  private Integer age;
  @Field(type = FieldType.Keyword)
  private String role;
  @Field(type = FieldType.Date)
  private Date birthday;
  @Field(type = FieldType.Long)
  private Long worth;
  @Field(type = FieldType.Boolean)
  private boolean isDied;

}

6. RestHighLevelClient client test class

package com.test.elasticsearch;

import com.alibaba.fastjson.JSON;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.test.elasticsearch.entity.User;
import java.io.IOException;
import java.util.Date;
import lombok.extern.slf4j.Slf4j;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.client.indices.GetIndexRequest;
import org.elasticsearch.client.indices.GetIndexResponse;
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.FuzzyQueryBuilder;
import org.elasticsearch.index.query.IdsQueryBuilder;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.RangeQueryBuilder;
import org.elasticsearch.index.query.TermQueryBuilder;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.aggregations.AggregationBuilder;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.Aggregations;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.aggregations.metrics.CardinalityAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.ParsedAvg;
import org.elasticsearch.search.aggregations.metrics.ParsedCardinality;
import org.elasticsearch.search.aggregations.metrics.ParsedMax;
import org.elasticsearch.search.aggregations.metrics.ParsedMin;
import org.elasticsearch.search.aggregations.metrics.ParsedStats;
import org.elasticsearch.search.aggregations.metrics.ParsedValueCount;
import org.elasticsearch.search.aggregations.metrics.StatsAggregationBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.sort.SortOrder;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

/**
 * Working with the es index via Spring Data Elasticsearch
 * RestHighLevelClient client tests
 */
@Slf4j
@RunWith(SpringJUnit4ClassRunner.class) // without this annotation, beans cannot be autowired into the test
@SpringBootTest
public class SpringDataESIndexRestHighTest {

  @Autowired
  private RestHighLevelClient restHighLevelClient;

  /**
   * Create an index
   */
  @Test
  public void restHighCreateIndex() throws IOException {
    // Build the create-index request
    CreateIndexRequest createIndexRequest = new CreateIndexRequest("user");
    // Create the index
    CreateIndexResponse createIndexResponse = restHighLevelClient.indices().create(createIndexRequest, RequestOptions.DEFAULT);
    // Read the acknowledgement status from the response
    boolean acknowledged = createIndexResponse.isAcknowledged();
    System.out.println("Index operation, create index, result: " + acknowledged);
  }

  /**
   * Query index metadata
   */
  @Test
  public void restHighQueryIndexsInfo() throws IOException {
    GetIndexRequest getIndexRequest = new GetIndexRequest("user");
    GetIndexResponse getIndexResponse = restHighLevelClient.indices().get(getIndexRequest, RequestOptions.DEFAULT);
    // Inspect the index metadata
    System.out.println("Index aliases: " + getIndexResponse.getAliases());
    System.out.println("Index mappings: " + JSON.toJSONString(getIndexResponse.getMappings()));
    System.out.println("Index settings: " + getIndexResponse.getSettings());
    System.out.println("Index default settings: " + getIndexResponse.getDefaultSettings());
    System.out.println("Index names: " + getIndexResponse.getIndices());
    System.out.println("Data streams: " + getIndexResponse.getDataStreams());
  }

  /**
   * Delete an index
   */
  @Test
  public void restHighDeleteIndex() throws IOException {
    GetIndexRequest getIndexRequest = new GetIndexRequest("user");
    System.out.println("Index operation, exists before delete: " + restHighLevelClient.indices().exists(getIndexRequest, RequestOptions.DEFAULT));
    DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest("user");
    AcknowledgedResponse acknowledgedResponse = restHighLevelClient.indices().delete(deleteIndexRequest, RequestOptions.DEFAULT);
    System.out.println("Index operation, delete index, result: " + acknowledgedResponse.isAcknowledged());
    System.out.println("Index operation, exists after delete: " + restHighLevelClient.indices().exists(getIndexRequest, RequestOptions.DEFAULT));
  }

  /**
   * Add a document to the index
   * Notes:
   * 1) If the index does not exist it is created automatically when the document is added, provided the entity is bound to an index via @Document(indexName = "user", shards = 1, replicas = 1)
   * 2) If no mapping exists yet, one is derived automatically from the entity properties annotated with @Field(type = FieldType.Text) and the like
   */
  @Test
  public void restHighAddDocument() throws IOException {
    IndexRequest indexRequest = new IndexRequest();
    indexRequest.index("user").id("1001");
    User user01 = new User();
    user01.setId(1001L);
    user01.setName("萧炎");
    user01.setSex("男");
    user01.setAge(18);
    user01.setRole("男主");
    // Data sent to es must be serialized to JSON
    ObjectMapper objectMapper = new ObjectMapper();
    String user01Str = objectMapper.writeValueAsString(user01);
    indexRequest.source(user01Str, XContentType.JSON);
    // Insert the document into the index
    IndexResponse indexResponse = restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
    System.out.println("Document operation, add document, result: " + indexResponse.getResult());
  }

  /**
   * Get a document from the index
   */
  @Test
  public void restHighGetDocument() throws IOException {
    GetRequest getRequest = new GetRequest();
    getRequest.index("user").id("1001"); // look up by document id
    GetResponse getResponse = restHighLevelClient.get(getRequest, RequestOptions.DEFAULT);
    System.out.println("Document operation, get document, result: " + getResponse.getSourceAsString());
  }

  /**
   * Update a document in the index
   */
  @Test
  public void restHighUpdateDocument() throws IOException {
    UpdateRequest updateRequest = new UpdateRequest();
    updateRequest.index("user").id("1001");
    updateRequest.doc(XContentType.JSON, "id", 1002L);
    UpdateResponse updateResponse = restHighLevelClient.update(updateRequest, RequestOptions.DEFAULT);
    System.out.println("Document operation, update document, result: " + updateResponse.getResult());
  }

  /**
   * Delete a document from the index
   */
  @Test
  public void restHighDeleteDocument() throws IOException {
    GetRequest getRequest = new GetRequest();
    getRequest.index("user").id("1001"); // look up by document id
    System.out.println("Document operation, exists before delete: " + restHighLevelClient.exists(getRequest, RequestOptions.DEFAULT));
    DeleteRequest deleteRequest = new DeleteRequest();
    deleteRequest.index("user").id("1001"); // delete by document id
    DeleteResponse deleteResponse = restHighLevelClient.delete(deleteRequest, RequestOptions.DEFAULT);
    System.out.println("Document operation, delete document, result: " + deleteResponse.getResult());
    System.out.println("Document operation, exists after delete: " + restHighLevelClient.exists(getRequest, RequestOptions.DEFAULT));
  }

  /**
   * Bulk-add documents to the index
   */
  @Test
  public void restHighAddDocumentBatch() throws IOException {
    BulkRequest bulkRequest = new BulkRequest();
    bulkRequest.add(new IndexRequest().index("user").id("1001").source(XContentType.JSON,"name","萧炎","sex","男","age",18,"role","男主","birthday",new Date(1417251624000L),"worth",100000000L,"isDied",false));
    bulkRequest.add(new IndexRequest().index("user").id("1002").source(XContentType.JSON,"name","云韵","sex","女","age",25,"role","女二","birthday",new Date(867574824000L),"worth",55555555L,"isDied",false));
    bulkRequest.add(new IndexRequest().index("user").id("1003").source(XContentType.JSON,"name","萧熏儿","sex","女","age",18,"role","女一","birthday",new Date(1411981224000L),"worth",99999999L,"isDied",false));
    bulkRequest.add(new IndexRequest().index("user").id("1004").source(XContentType.JSON,"name","美杜莎","sex","女","age",25,"role","女一","birthday",new Date(858934824000L),"worth",99999999L,"isDied",false));
    bulkRequest.add(new IndexRequest().index("user").id("1005").source(XContentType.JSON,"name","药老","sex","男","age",55,"role","男二","birthday",new Date(-87145176000L),"worth",88888888L,"isDied",true));
    bulkRequest.add(new IndexRequest().index("user").id("1006").source(XContentType.JSON,"name","海波东","sex","男","age",45,"role","男三","birthday",new Date(240915624000L),"worth",66666666L,"isDied",false));
    bulkRequest.add(new IndexRequest().index("user").id("1007").source(XContentType.JSON,"name","石漠城萧鼎","sex","男","age",21,"role","男四","birthday",new Date(990349224000L),"worth",7895562L,"isDied",false));
    bulkRequest.add(new IndexRequest().index("user").id("1008").source(XContentType.JSON,"name","石漠城萧厉","sex","男","age",20,"role","男四","birthday",new Date(1026982824000L),"worth",6975898L,"isDied",false));
    bulkRequest.add(new IndexRequest().index("user").id("1009").source(XContentType.JSON,"name","萧战-萧族族主","sex","男","age",46,"role","男四","birthday",new Date(190198824000L),"worth",8546897L,"isDied",false));
    BulkResponse bulkResponse = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
    System.out.println("Document operation, bulk add, took: " + bulkResponse.getTook());
  }

  /**
   * Bulk-delete documents from the index
   */
  @Test
  public void restHighDeleteDocumentBatch() throws IOException {
    BulkRequest bulkRequest = new BulkRequest();
    bulkRequest.add(new DeleteRequest().index("user").id("1001"));
    bulkRequest.add(new DeleteRequest().index("user").id("1002"));
    bulkRequest.add(new DeleteRequest().index("user").id("1003"));
    bulkRequest.add(new DeleteRequest().index("user").id("1004"));
    bulkRequest.add(new DeleteRequest().index("user").id("1005"));
    bulkRequest.add(new DeleteRequest().index("user").id("1006"));
    BulkResponse bulkResponse = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
    System.out.println("Bulk delete, took: " + bulkResponse.getTook());
  }

  /**
   * Advanced search: match-all query
   */
  @Test
  public void restHighSearchDocumentAll() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Match-all query, total hits: " + hits.getTotalHits());
    System.out.println("Match-all query, took: " + searchResponse.getTook());
    SearchHit[] result = hits.getHits();
    for (SearchHit searchHit: result) {
      System.out.println("Match-all query, id: " + searchHit.getId() + ", source: " + searchHit.getSourceAsString());
    }
    }
  }

  /**
   * Advanced search: exact match (term query)
   * Note:
   * ① QueryBuilders.termQuery(String name, String value): a term query matches exactly — the query value is looked up verbatim and is not analyzed into tokens. Here name is the field in the document and value is the term to search for.
   */
  @Test
  public void restHighSearchDocumentTermQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    //query by the age field
//    searchRequest.source(new SearchSourceBuilder().query(QueryBuilders.termQuery("age",18)));
    //exact match on the name field
    searchRequest.source(new SearchSourceBuilder().query(QueryBuilders.termQuery("name","炎")));
    //exact match on the role field
//    searchRequest.source(new SearchSourceBuilder().query(QueryBuilders.termQuery("role","男主")));
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Term query, total hits: " + hits.getTotalHits());
    System.out.println("Term query, took: " + searchResponse.getTook());
    hits.forEach(p -> {
      System.out.println("Term query result, doc id: " + p.getId() + ", source: " + p.getSourceAsString());
    });
  }

  /**
   * Advanced search: bool query
   */
  @Test
  public void restHighSearchDocumentBoolQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    boolQueryBuilder.should(QueryBuilders.rangeQuery("age").gte(20));//age >= 20
    boolQueryBuilder.should(QueryBuilders.rangeQuery("age").lte(45));//age <= 45
    boolQueryBuilder.must(QueryBuilders.matchQuery("sex","女"));//sex is 女; note: once a must clause is present, the should clauses above only boost the score and no longer filter by themselves

    searchSourceBuilder.query(boolQueryBuilder);
    searchRequest.source(searchSourceBuilder);
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Bool query, total hits: " + hits.getTotalHits());
    System.out.println("Bool query, took: " + searchResponse.getTook());
    hits.forEach(p -> {
      System.out.println("Bool query result, doc id: " + p.getId() + ", source: " + p.getSourceAsString());
    });
  }
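
  /**
   * A hedged variant of the bool query above (not part of the original post): because should
   * clauses stop being required once a must clause is present, minimumShouldMatch can force at
   * least one should clause to match again. Sketch only, assuming the same "user" index and a
   * running cluster.
   */
  @Test
  public void restHighSearchDocumentBoolQueryMinimumShouldMatch() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    boolQueryBuilder.should(QueryBuilders.rangeQuery("age").gte(20));
    boolQueryBuilder.should(QueryBuilders.rangeQuery("age").lte(45));
    boolQueryBuilder.must(QueryBuilders.matchQuery("sex","女"));
    //require at least one should clause to match, instead of treating them as score boosts only
    boolQueryBuilder.minimumShouldMatch(1);
    searchRequest.source(new SearchSourceBuilder().query(boolQueryBuilder));
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    System.out.println("Bool query with minimumShouldMatch, total hits: " + searchResponse.getHits().getTotalHits());
  }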

  /**
   * Advanced search: date range query
   */
  @Test
  public void restHighSearchDocumentDateQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

    RangeQueryBuilder rangeQueryBuilder = QueryBuilders.rangeQuery("birthday");
//    rangeQueryBuilder.gt("now-21y");//now-21y means "the current time minus 21 years"
    rangeQueryBuilder.gt(1027158006000L);//an epoch timestamp in milliseconds can also be passed directly

    searchSourceBuilder.query(rangeQueryBuilder);
    searchRequest.source(searchSourceBuilder);
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Date range query, total hits: " + hits.getTotalHits());
    System.out.println("Date range query, took: " + searchResponse.getTook());
    hits.forEach(p -> {
      System.out.println("Date range query result, doc id: " + p.getId() + ", source: " + p.getSourceAsString());
    });
  }

  /**
   * Advanced search: query by multiple ids
   */
  @Test
  public void restHighSearchDocumentMoreIdsQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

    IdsQueryBuilder idsQueryBuilder = QueryBuilders.idsQuery();
    idsQueryBuilder.addIds("1001","1002","1003");

    searchSourceBuilder.query(idsQueryBuilder);
    searchRequest.source(searchSourceBuilder);
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Ids query, total hits: " + hits.getTotalHits());
    System.out.println("Ids query, took: " + searchResponse.getTook());
    hits.forEach(p -> {
      System.out.println("Ids query result, doc id: " + p.getId() + ", source: " + p.getSourceAsString());
    });
  }

  /**
   * Advanced search: combined conditions (bool query)
   */
  @Test
  public void restHighSearchDocumentByConditionsMore() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    //age must be 18; must behaves like AND
    boolQueryBuilder.must(QueryBuilders.matchQuery("age",18));
    //sex must be 男; must behaves like AND
//    boolQueryBuilder.must(QueryBuilders.matchQuery("sex","男"));
    //sex should be 男; should behaves like OR
    boolQueryBuilder.should(QueryBuilders.matchQuery("sex","男"));
    //sex should be 女; should behaves like OR
    boolQueryBuilder.should(QueryBuilders.matchQuery("sex","女"));
    searchSourceBuilder.query(boolQueryBuilder);
    searchRequest.source(searchSourceBuilder);
    //execute the search
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Combined-conditions query, total hits: " + hits.getTotalHits());
    System.out.println("Combined-conditions query, took: " + searchResponse.getTook());
    SearchHit[] result = hits.getHits();
    for (SearchHit searchHit: result) {
      System.out.println("Combined-conditions query, id: " + searchHit.getId() + ", source: " + searchHit.getSourceAsString());
    }
  }

  /**
   * Advanced search: fuzzy query
   * 1. FuzzyQueryBuilder: QueryBuilders.fuzzyQuery(String name, Object value) performs a fuzzy match, controlled by the fuzziness parameter.
   * ① the queried value is matched against the indexed terms of the field
   * ② Fuzziness sets the allowed number of single-character edits: Fuzziness.ZERO means an exact match, Fuzziness.ONE allows one edit, Fuzziness.TWO allows two, and Fuzziness.AUTO (the recommended default) picks the limit based on term length
   * 2. What exactly counts as an edit?
   * Fuzziness measures the number of single-character edits needed to turn one term into another. The edit operations are:
   * ① substituting one character for another: fox -> box
   * ② inserting a new character: sic -> sick
   * ③ deleting a character: black -> back
   * ④ transposing two adjacent characters: star -> tsar
   * Of course, a sensible edit budget depends on term length: two edits turn hat into mad, so allowing two edits on a 3-character term is far too loose. With Fuzziness.AUTO the maximum edit distance is chosen as follows:
   * ① 0 edits for terms of 1 or 2 characters
   * ② 1 edit for terms of 3, 4 or 5 characters
   * ③ 2 edits for terms longer than 5 characters
   * If an edit distance of 2 still returns results that look unrelated, setting fuzziness to Fuzziness.ONE usually gives both better relevance and better performance.
   */
  @Test
  public void restHighSearchDocumentByConditionsLike() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
    //fuzzy match on the name field; without calling fuzziness() the default applies — a name containing "萧" matches
//    FuzzyQueryBuilder fuzzyQueryBuilder = QueryBuilders.fuzzyQuery("name", "萧");
    //fuzzy match on the name field; Fuzziness.AUTO is the default — a name containing "萧" matches
//    FuzzyQueryBuilder fuzzyQueryBuilder = QueryBuilders.fuzzyQuery("name", "萧").fuzziness(Fuzziness.AUTO);
    //fuzzy match on the name field; Fuzziness.ZERO means a term must contain "萧" with no edits at all
//    FuzzyQueryBuilder fuzzyQueryBuilder = QueryBuilders.fuzzyQuery("name", "萧").fuzziness(Fuzziness.ZERO);
    //fuzzy match on the name field; Fuzziness.ONE matches if one single-character edit turns a term into "萧炎"
//    FuzzyQueryBuilder fuzzyQueryBuilder = QueryBuilders.fuzzyQuery("name", "萧炎").fuzziness(Fuzziness.ONE);
    //fuzzy match on the name field; Fuzziness.TWO matches if two single-character edits turn a term into "石漠"
    FuzzyQueryBuilder fuzzyQueryBuilder = QueryBuilders.fuzzyQuery("name", "石漠").fuzziness(Fuzziness.TWO);
    searchSourceBuilder.query(fuzzyQueryBuilder);
    searchRequest.source(searchSourceBuilder);
    //execute the search
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Fuzzy query, total hits: " + hits.getTotalHits());
    System.out.println("Fuzzy query, took: " + searchResponse.getTook());
    SearchHit[] result = hits.getHits();
    for (SearchHit searchHit: result) {
      System.out.println("Fuzzy query, id: " + searchHit.getId() + ", source: " + searchHit.getSourceAsString());
    }
  }
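
  /**
   * To make the edit-distance rules above concrete, here is a minimal sketch (not part of the
   * original post) of the classic Levenshtein distance. Note that Elasticsearch's fuzzy matching
   * additionally counts a transposition of two adjacent characters as a single edit
   * (Damerau-Levenshtein), which this plain version does not.
   */
  static int editDistance(String a, String b) {
    int[][] d = new int[a.length() + 1][b.length() + 1];
    for (int i = 0; i <= a.length(); i++) d[i][0] = i;//delete every character of a's prefix
    for (int j = 0; j <= b.length(); j++) d[0][j] = j;//insert every character of b's prefix
    for (int i = 1; i <= a.length(); i++) {
      for (int j = 1; j <= b.length(); j++) {
        int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;//substitution cost
        d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1), d[i - 1][j - 1] + cost);
      }
    }
    return d[a.length()][b.length()];
  }
  //editDistance("fox", "box") == 1 (substitution), editDistance("sic", "sick") == 1 (insertion),
  //editDistance("black", "back") == 1 (deletion), editDistance("hat", "mad") == 2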

  /**
   * Advanced search: range query
   */
  @Test
  public void restHighSearchDocumentRange() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
    //range query on the age field
    RangeQueryBuilder rangeQueryBuilder = QueryBuilders.rangeQuery("age");
    //gte(): greater than or equal to, lte(): less than or equal to, gt(): greater than, lt(): less than
    rangeQueryBuilder.gte(18);
    rangeQueryBuilder.lte(25);
    searchSourceBuilder.query(rangeQueryBuilder);
    searchRequest.source(searchSourceBuilder);
    //execute the search
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Range query, total hits: " + hits.getTotalHits());
    System.out.println("Range query, took: " + searchResponse.getTook());
    SearchHit[] result = hits.getHits();
    for (SearchHit searchHit: result) {
      System.out.println("Range query, id: " + searchHit.getId() + ", source: " + searchHit.getSourceAsString());
    }
  }

  /**
   * Advanced search: highlighting
   */
  @Test
  public void restHighSearchDocumentHighLight() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

    TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("sex", "女");
    searchSourceBuilder.query(termQueryBuilder);
    //build the highlight configuration
    HighlightBuilder highlightBuilder = new HighlightBuilder();
    //highlight markup
    highlightBuilder.preTags("<font color='red'>");//opening tag
    highlightBuilder.postTags("</font>");//closing tag
    //fields to highlight
    highlightBuilder.field("name");
    highlightBuilder.requireFieldMatch(false);//must be set to false when highlighting multiple fields
    highlightBuilder.field("role");
    searchSourceBuilder.highlighter(highlightBuilder);

    searchRequest.source(searchSourceBuilder);
    //execute the search
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Highlight query, total hits: " + hits.getTotalHits());
    System.out.println("Highlight query, took: " + searchResponse.getTook());
    System.out.println("Highlight query, raw response: " + searchResponse);
    hits.forEach(p -> {
      System.out.println("Highlight query, source: " + p.getSourceAsString() + ", highlight fields: " + p.getHighlightFields());
    });
  }

  /**
   * Advanced search: filtering the fields returned in results (source filtering)
   */
  @Test
  public void restHighSearchDocumentResultFieldFilter() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

    MatchAllQueryBuilder matchAllQueryBuilder = QueryBuilders.matchAllQuery();
    searchSourceBuilder.query(matchAllQueryBuilder);
    //fields to exclude
    String[] excludes = {"age","role"};
    //fields to include (empty means all)
    String[] includes = {};
    searchSourceBuilder.fetchSource(includes,excludes);

    searchRequest.source(searchSourceBuilder);
    //execute the search
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Source filtering, total hits: " + hits.getTotalHits());
    System.out.println("Source filtering, took: " + searchResponse.getTook());
    System.out.println("Source filtering, raw response: " + searchResponse);
    SearchHit[] result = hits.getHits();
    for (SearchHit searchHit: result) {
      System.out.println("Source filtering, id: " + searchHit.getId() + ", source: " + searchHit.getSourceAsString());
    }
  }

  /**
   * Advanced search: sorting query results
   */
  @Test
  public void restHighSearchDocumentResultOrder() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

    MatchAllQueryBuilder matchAllQueryBuilder = QueryBuilders.matchAllQuery();
    searchSourceBuilder.query(matchAllQueryBuilder);
    //fields to exclude
    String[] excludes = {"sex","role"};
    //fields to include
    String[] includes = {"name","age"};
    searchSourceBuilder.fetchSource(includes,excludes);
    //sort by the age field, SortOrder.ASC = ascending
//    searchSourceBuilder.sort("age", SortOrder.ASC);
    //sort by the age field, SortOrder.DESC = descending
    searchSourceBuilder.sort("age", SortOrder.DESC);

    searchRequest.source(searchSourceBuilder);
    //execute the search
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Sorted query, total hits: " + hits.getTotalHits());
    System.out.println("Sorted query, took: " + searchResponse.getTook());
    System.out.println("Sorted query, raw response: " + searchResponse);
    SearchHit[] result = hits.getHits();
    for (SearchHit searchHit: result) {
      System.out.println("Sorted query, id: " + searchHit.getId() + ", source: " + searchHit.getSourceAsString());
    }
  }

  /**
   * Advanced search: distinct counts over query results (cardinality aggregations)
   */
  @Test
  public void restHighSearchDocumentResultCollapse() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

    MatchAllQueryBuilder matchAllQueryBuilder = QueryBuilders.matchAllQuery();
    searchSourceBuilder.query(matchAllQueryBuilder);
    //distinct count of the age field
    CardinalityAggregationBuilder cardinalityAggregationBuilderAgeCard = AggregationBuilders.cardinality("ageCard").field("age");
    searchSourceBuilder.aggregation(cardinalityAggregationBuilderAgeCard);
    //distinct count of the role field
    CardinalityAggregationBuilder cardinalityAggregationBuilderRoleCard = AggregationBuilders.cardinality("roleCard").field("role");
    searchSourceBuilder.aggregation(cardinalityAggregationBuilderRoleCard);

    searchRequest.source(searchSourceBuilder);
    //execute the search
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Cardinality query, total hits: " + hits.getTotalHits());
    System.out.println("Cardinality query, took: " + searchResponse.getTook());
    Aggregations aggregations = searchResponse.getAggregations();
    ParsedCardinality parsedCardinalityAge = aggregations.get("ageCard");
    System.out.println("Cardinality query, distinct ages, key: " + parsedCardinalityAge.getName() + ", distinct count: " + parsedCardinalityAge.getValue());
    ParsedCardinality parsedCardinalityRole = aggregations.get("roleCard");
    System.out.println("Cardinality query, distinct roles, key: " + parsedCardinalityRole.getName() + ", distinct count: " + parsedCardinalityRole.getValue());
    SearchHit[] result = hits.getHits();
    for (SearchHit searchHit: result) {
      System.out.println("Cardinality query, id: " + searchHit.getId() + ", source: " + searchHit.getSourceAsString());
    }
  }
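
  /**
   * The cardinality aggregations above return (approximate) distinct counts; to actually
   * deduplicate the returned hits themselves, field collapsing can be used instead. A minimal
   * sketch (not part of the original post), assuming the same "user" index: keep one top hit per
   * distinct age value via org.elasticsearch.search.collapse.CollapseBuilder.
   */
  @Test
  public void restHighSearchDocumentCollapseByAge() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
    searchSourceBuilder.query(QueryBuilders.matchAllQuery());
    //collapse the hit list so that only one document per distinct age is returned
    searchSourceBuilder.collapse(new org.elasticsearch.search.collapse.CollapseBuilder("age"));
    searchRequest.source(searchSourceBuilder);
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    for (SearchHit searchHit : searchResponse.getHits().getHits()) {
      System.out.println("Collapsed by age, id: " + searchHit.getId() + ", source: " + searchHit.getSourceAsString());
    }
  }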

  /**
   * Advanced search: aggregation queries
   * Note:
   * ① fields of type Text cannot be aggregated on, so they cannot be used for grouped statistics
   */
  @Test
  public void restHighSearchDocumentAggregation() throws IOException {
    SearchRequest searchRequest = new SearchRequest();
    searchRequest.indices("user");
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();

    //max of the age field, stored under the key maxAge
    AggregationBuilder aggregationBuilderMax = AggregationBuilders.max("maxAge").field("age");
    //min of the age field, stored under the key minAge
    AggregationBuilder aggregationBuilderMin = AggregationBuilders.min("minAge").field("age");
    //average of the age field, stored under the key avgAge
    AggregationBuilder aggregationBuilderAvg = AggregationBuilders.avg("avgAge").field("age");
    //count of age values, without deduplication
    AggregationBuilder aggregationBuilderAgeCount = AggregationBuilders.count("ageCount").field("age");
    //count of distinct age values
    CardinalityAggregationBuilder cardinalityAggregationBuilderAgeCard = AggregationBuilders.cardinality("ageCard").field("age");
    //stats on the age field: max, min, average, count, sum, ...
    StatsAggregationBuilder statsAggregationBuilder = AggregationBuilders.stats("ageStatsResult").field("age");
    //group by age, stored under the key ageGroup
    AggregationBuilder aggregationBuilderAgeGroup = AggregationBuilders.terms("ageGroup").field("age");
    //group by sex, stored under the key sexGroup
    AggregationBuilder aggregationBuilderSexGroup = AggregationBuilders.terms("sexGroup").field("sex");

    searchSourceBuilder.aggregation(aggregationBuilderMax);
    searchSourceBuilder.aggregation(aggregationBuilderMin);
    searchSourceBuilder.aggregation(aggregationBuilderAvg);
    searchSourceBuilder.aggregation(aggregationBuilderAgeCount);
    searchSourceBuilder.aggregation(cardinalityAggregationBuilderAgeCard);
    searchSourceBuilder.aggregation(statsAggregationBuilder);
    searchSourceBuilder.aggregation(aggregationBuilderAgeGroup);
    searchSourceBuilder.aggregation(aggregationBuilderSexGroup);
    searchSourceBuilder.sort("age", SortOrder.DESC);

    searchRequest.source(searchSourceBuilder);
    //execute the search
    SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits hits = searchResponse.getHits();
    System.out.println("Aggregation query, total hits: " + hits.getTotalHits());
    System.out.println("Aggregation query, took: " + searchResponse.getTook());
    Aggregations aggregations = searchResponse.getAggregations();
    ParsedMax parsedMax = aggregations.get("maxAge");
    System.out.println("Aggregation query, max age: " + parsedMax.getValue());
    ParsedMin parsedMin = aggregations.get("minAge");
    System.out.println("Aggregation query, min age: " + parsedMin.getValue());
    ParsedAvg parsedAvg = aggregations.get("avgAge");
    System.out.println("Aggregation query, average age: " + parsedAvg.getValue());
    ParsedValueCount parsedAgeCount = aggregations.get("ageCount");
    System.out.println("Aggregation query, age count (with duplicates): " + parsedAgeCount.getValue());
    ParsedCardinality parsedCardinalityAge = aggregations.get("ageCard");
    System.out.println("Aggregation query, distinct age count: " + parsedCardinalityAge.getValue());
    ParsedStats ageStatsResult = aggregations.get("ageStatsResult");
    System.out.println("Aggregation query, age stats, count (with duplicates): " + ageStatsResult.getCount());
    System.out.println("Aggregation query, age stats, average: " + ageStatsResult.getAvg());
    System.out.println("Aggregation query, age stats, max: " + ageStatsResult.getMax());
    System.out.println("Aggregation query, age stats, min: " + ageStatsResult.getMin());
    System.out.println("Aggregation query, age stats, sum: " + ageStatsResult.getSum());
    System.out.println("############# age group results ###############");
    Terms termsAgeGroup = aggregations.get("ageGroup");
    for (Terms.Bucket bucket : termsAgeGroup.getBuckets()) {
      System.out.println("Aggregation query, age group, key: " + bucket.getKeyAsString() + ", doc count: " + bucket.getDocCount());
    }
    System.out.println("############# age group results ###############");
    System.out.println("------------- sex group results ----------------");
    Terms termsSexGroup = aggregations.get("sexGroup");
    for (Terms.Bucket bucket : termsSexGroup.getBuckets()) {
      System.out.println("Aggregation query, sex group, key: " + bucket.getKeyAsString() + ", doc count: " + bucket.getDocCount());
    }
    System.out.println("------------- sex group results ----------------");
    SearchHit[] result = hits.getHits();
    for (SearchHit searchHit: result) {
      System.out.println("Aggregation query, id: " + searchHit.getId() + ", source: " + searchHit.getSourceAsString());
    }
  }

}
