1. Install ZooKeeper
ZooKeeper download: apache-zookeeper-3.7.1-bin.tar.gz — https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.7.1/apache-zookeeper-3.7.1-bin.tar.gz
1.1 Extract and install the ZooKeeper package (run on all three nodes)
cd /root
tar zxf apache-zookeeper-3.7.1-bin.tar.gz
mv apache-zookeeper-3.7.1-bin /usr/local/zookeeper-3.7.1
cd /usr/local/zookeeper-3.7.1/conf/
cp zoo_sample.cfg zoo.cfg
1.2 Edit the configuration file (on the first node)
vim zoo.cfg
Copy zoo.cfg to the other two nodes:
scp zoo.cfg 172.25.23.8:/usr/local/zookeeper-3.7.1/conf/
scp zoo.cfg 172.25.23.9:/usr/local/zookeeper-3.7.1/conf/
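For reference, a minimal zoo.cfg for this three-node cluster might look like the following (the dataDir/dataLogDir paths match the directories created in 1.3; the timing values and 2888/3888 quorum ports are the usual defaults and are assumptions here):

```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.7.1/data
dataLogDir=/usr/local/zookeeper-3.7.1/logs
clientPort=2181
server.1=172.25.23.7:2888:3888
server.2=172.25.23.8:2888:3888
server.3=172.25.23.9:2888:3888
```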
1.3 Assign each machine its node ID
The myid value must match the server.N entry for that host in zoo.cfg:
# Node 1 (172.25.23.7)
cd /usr/local/zookeeper-3.7.1
mkdir data logs
echo 1 > data/myid
# Node 2 (172.25.23.8)
cd /usr/local/zookeeper-3.7.1
mkdir data logs
echo 2 > data/myid
# Node 3 (172.25.23.9)
cd /usr/local/zookeeper-3.7.1
mkdir data logs
echo 3 > data/myid
1.4 Start ZooKeeper (on all three nodes)
cd /usr/local/zookeeper-3.7.1/bin
./zkServer.sh start
1.5 After starting, check the ZooKeeper status on all three nodes
./zkServer.sh status
2. Install Kafka
2.1 Install Kafka (run on all three nodes)
cd /root
tar zxf kafka_2.13-2.7.1.tgz
mv kafka_2.13-2.7.1 /usr/local/kafka
2.2 Edit the configuration file (run on all three nodes)
cd /usr/local/kafka/config/
vim server.properties
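For reference, the key settings in server.properties differ per node roughly as follows (broker.id and the listener IP must be unique on each node; the log.dirs path is an assumption):

```properties
# Node 1 (172.25.23.7); use broker.id=1/2 and the matching IP on the other nodes
broker.id=0
listeners=PLAINTEXT://172.25.23.7:9092
log.dirs=/usr/local/kafka/logs
zookeeper.connect=172.25.23.7:2181,172.25.23.8:2181,172.25.23.9:2181
```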
2.3 Add the Kafka commands to the system environment (run on all three nodes)
vim /etc/profile
export KAFKA_HOME=/usr/local/kafka
export PATH=$PATH:$KAFKA_HOME/bin
source /etc/profile
2.4 Start Kafka
cd /usr/local/kafka/config/
kafka-server-start.sh -daemon server.properties
netstat -antp | grep 9092
Start Kafka on all three nodes.
Kafka command-line operations
Create a topic
kafka-topics.sh --create --zookeeper 172.25.23.7:2181,172.25.23.8:2181,172.25.23.9:2181 --replication-factor 2 --partitions 3 --topic test
--zookeeper: the ZooKeeper cluster addresses; separate multiple IPs with commas (a single IP is usually enough)
--replication-factor: the number of replicas per partition; 1 means a single copy, 2 is recommended
--partitions: the number of partitions
--topic: the topic name
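The partition count matters because keyed messages are routed by a hash of the key. The sketch below illustrates the idea only: Kafka's default partitioner actually uses murmur2, which is replaced here with MD5 as a deterministic stand-in.

```python
import hashlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    # Stand-in for Kafka's murmur2-based partitioner: hash the key,
    # then take the result modulo the partition count.
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# The same key always maps to the same partition, so per-key ordering holds.
assert pick_partition(b"user-42", 3) == pick_partition(b"user-42", 3)
```

Note that this also shows why growing the partition count (see "Change the number of partitions" below) reshuffles where keys land.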
List all topics on the current servers
kafka-topics.sh --list --zookeeper 172.25.23.7:2181,172.25.23.8:2181,172.25.23.9:2181
Show the details of a specific topic
kafka-topics.sh --describe --zookeeper 172.25.23.7:2181,172.25.23.8:2181,172.25.23.9:2181 --topic test
Publish messages
kafka-console-producer.sh --broker-list 172.25.23.7:9092,172.25.23.8:9092,172.25.23.9:9092 --topic test
Consume messages
kafka-console-consumer.sh --bootstrap-server 172.25.23.7:9092,172.25.23.8:9092,172.25.23.9:9092 --topic test --from-beginning
--from-beginning: reads all existing data in the topic from the start
Change the number of partitions
kafka-topics.sh --zookeeper 172.25.23.7:2181,172.25.23.8:2181,172.25.23.9:2181 --alter --topic test --partitions 6
Delete a topic
kafka-topics.sh --delete --zookeeper 172.25.23.7:2181,172.25.23.8:2181,172.25.23.9:2181 --topic test
2.5 Create a topic
cd /usr/local/kafka/bin
kafka-topics.sh --create --zookeeper 172.25.23.7:2181,172.25.23.8:2181,172.25.23.9:2181 --partitions 3 --replication-factor 2 --topic wyn
kafka-topics.sh --describe --zookeeper 172.25.23.7:2181
2.6 Test the topic
Publish messages
kafka-console-producer.sh --broker-list 172.25.23.7:9092,172.25.23.8:9092,172.25.23.9:9092 --topic wyn
Consume messages
kafka-console-consumer.sh --bootstrap-server 172.25.23.7:9092,172.25.23.8:9092,172.25.23.9:9092 --topic wyn --from-beginning
3. Set up the data-collection layer: Filebeat (172.25.23.8)
Install nginx
cd /root
rpm -ivh nginx-1.20.0-1.el7.ngx.x86_64.rpm
3.1 Define a custom log format
vim /etc/nginx/nginx.conf
Add the following inside the http {} block:
log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":$body_bytes_sent,'
                '"responsetime":$request_time,'
                '"referer": "$http_referer",'
                '"ua": "$http_user_agent"'
                '}';
access_log /var/log/nginx/access.log json;
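With this format every access-log line is a single JSON object, so downstream consumers (Filebeat, Logstash) can parse it without grok patterns. A quick sanity check in Python (the sample line is illustrative, not taken from a real server):

```python
import json

# An illustrative line in the log_format defined above.
sample = ('{"@timestamp":"2024-01-01T12:00:00+08:00","@version":"1",'
          '"client":"172.25.23.1","url":"/index.html","status":"200",'
          '"domain":"example.com","host":"172.25.23.8","size":612,'
          '"responsetime":0.003,"referer":"-","ua":"curl/7.61.1"}')

entry = json.loads(sample)
# size and responsetime were written unquoted, so they parse as numbers.
print(entry["status"], entry["size"])
```

Note that $body_bytes_sent and $request_time are deliberately unquoted in the nginx format so they arrive as numbers rather than strings.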
3.2 Upload and install the Filebeat package
cd /opt
rpm -ivh filebeat-5.5.0-x86_64.rpm
3.3 Edit the configuration file filebeat.yml
vim /etc/filebeat/filebeat.yml
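For reference, a minimal filebeat.yml for shipping the nginx JSON log into the nginx-es topic might look like this (Filebeat 5.x prospector syntax; the log path and topic follow this guide, the rest are assumptions):

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.log
  # Lines are already JSON (see 3.1); lift the fields to the top level.
  json.keys_under_root: true

output.kafka:
  hosts: ["172.25.23.7:9092", "172.25.23.8:9092", "172.25.23.9:9092"]
  topic: nginx-es
```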
3.4 Start Filebeat
systemctl restart filebeat
systemctl status filebeat
Check the Filebeat network connections:
netstat -antp|grep filebeat
4. With all components deployed, wire them together
4.1 Create a topic named nginx-es in Kafka
cd /usr/local/kafka/bin
./kafka-topics.sh --create --zookeeper 172.25.23.7:2181,172.25.23.8:2181,172.25.23.9:2181 --replication-factor 1 --partitions 1 --topic nginx-es
./kafka-topics.sh --list --zookeeper 172.25.23.7:2181,172.25.23.8:2181,172.25.23.9:2181
4.2 Edit the Logstash configuration
cd /etc/logstash/conf.d/
vim nginxlog.conf
input {
  kafka {
    topics => ["nginx-es"]
    #codec => "json"
    decorate_events => true
    bootstrap_servers => "172.25.23.7:9092,172.25.23.8:9092,172.25.23.9:9092"
  }
}
output {
  elasticsearch {
    hosts => ["172.25.23.7:9200"]
    index => "nginx-%{+YYYY-MM-dd}"
  }
}
systemctl restart logstash
systemctl status logstash
4.3 Verify in the browser (elasticsearch-head)
cd /usr/local/src/node_modules/grunt/bin
./grunt server