1. Pull the images
1.1 For storing and searching data
docker pull elasticsearch:7.4.2
1.2 For visualizing search results
docker pull kibana:7.4.2
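To confirm both images are now present locally, you can list them:
docker images | grep -E "elasticsearch|kibana"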
2. Create the elasticsearch instance
Create the local config directory for the bind-mounted volume
mkdir -p /software/elasticsearch/config
Create the local data directory for the bind-mounted volume
mkdir -p /software/elasticsearch/data
Write the setting that allows access from any remote host
echo "http.host: 0.0.0.0" >> /software/elasticsearch/config/elasticsearch.yml
Parameters of the docker run command below:
9200: port for HTTP requests
9300: port for inter-node communication when ES runs as a distributed cluster
"discovery.type=single-node": run in single-node mode
ES_JAVA_OPTS="-Xms64m -Xmx256m": set the minimum and maximum JVM heap for ES
chmod -R 777 /software/elasticsearch/ (set permissions on the mounted directories before starting the container)
Without the permissions set, startup fails with: "Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes"
docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx256m" -v /software/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /software/elasticsearch/data:/usr/share/elasticsearch/data -v /software/elasticsearch/plugins:/usr/share/elasticsearch/plugins --name=elasticsearch7.4.2 elasticsearch:7.4.2
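Once the container is up, a simple way to verify ES is answering (replace localhost with the host IP when calling from another machine):
curl http://localhost:9200
# returns a JSON document with the cluster name and "number" : "7.4.2" in the version block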
3. Create the Kibana instance
ELASTICSEARCH_HOSTS: the address of the ES host
docker run -d -p 5601:5601 -e ELASTICSEARCH_HOSTS=http://192.168.179.101:9200 --name=kibana7.4.2 kibana:7.4.2
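Kibana takes a short while to connect to ES; you can follow its logs and then open http://192.168.179.101:5601 in a browser (the IP is the example host used above):
docker logs -f kibana7.4.2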
4. Install the IK analyzer plugin
1. Extract the archive
elasticsearch-analysis-ik version 7.4.2 (must match the ES version)
Extract the downloaded analyzer archive into the plugins directory of the local bind-mounted volume; after extraction it shows up inside the docker elasticsearch container under /usr/share/elasticsearch/plugins, for example:
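A sketch of the extract step, assuming the 7.4.2 release zip from the medcl/elasticsearch-analysis-ik project (the archive name is an assumption; use the file you actually downloaded):
mkdir -p /software/elasticsearch/plugins/ik
cd /software/elasticsearch/plugins/ik
unzip elasticsearch-analysis-ik-7.4.2.zip    # archive name assumed
rm elasticsearch-analysis-ik-7.4.2.zip       # optional: remove the archive after extracting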
2. Enter the elasticsearch container to check whether the plugin has been synced
docker exec -it elasticsearch7.4.2 /bin/bash
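Inside the container, the plugins directory should now contain the ik folder:
ls /usr/share/elasticsearch/plugins/
# expected to include: ik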
3. Change the permissions on the ik directory
chmod -R 777 ik/
4. Verify that the ik analyzer is installed
In the elasticsearch bin directory, run elasticsearch-plugin list (lists the installed ES plugins), for example:
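From inside the container (the paths are the defaults of the official elasticsearch image):
cd /usr/share/elasticsearch/bin
./elasticsearch-plugin list
# expected output: ik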
5. Restart the ES container instance:
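Using the container name from above:
exit                               # leave the container shell first
docker restart elasticsearch7.4.2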
6. Use the ik analyzer in Kibana
POST _analyze
{
  "analyzer": "ik_smart",
  "text": "我的项目"
}
The response:
{
  "tokens" : [
    {
      "token" : "我",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "的",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "项目",
      "start_offset" : 2,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 2
    }
  ]
}
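The same analysis request can also be sent without Kibana, for example with curl against the example host used above:
curl -X POST "http://192.168.179.101:9200/_analyze?pretty" -H 'Content-Type: application/json' -d '{ "analyzer": "ik_smart", "text": "我的项目" }'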