Spark Performance Monitoring + Cluster Configuration


spark-dashboard

Reference links

Architecture diagram

The official Spark documentation provides a set of interfaces for viewing various metrics while a job is running.

Running

Uninstall Docker

https://blog.csdn.net/wangerrong/article/details/126750198

sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

Install Docker

# Default yum repo (Docker official)
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# 1. Update yum packages to the latest versions
yum update
# 2. Install the required packages; yum-utils provides yum-config-manager, the other two are dependencies of the devicemapper driver
yum install -y yum-utils device-mapper-persistent-data lvm2
# 3. Set the yum repo (Aliyun mirror)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# 4. Install Docker; answer y to every prompt
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# 5. Check the Docker version to verify the installation succeeded
docker -v
docker -v

Docker.pdf

# Start the Docker service as root
systemctl start docker
# Enable the Docker service to start at boot
systemctl enable docker

Use a domestic (China) Docker registry mirror

https://blog.csdn.net/ximaiyao1984/article/details/128595319

vim /etc/docker/daemon.json

# Add the following content
{
    "registry-mirrors": [
        "http://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn",
        "https://registry.docker-cn.com"
    ]
}

service docker restart
docker info

Download and run the Spark dashboard Docker image (first use)

# Create a Docker container named monitor; -d runs it in the background
docker run --restart=always --network=host --name=monitor -d lucacanali/spark-dashboard:v01

Subsequent use

# Make the container restart automatically
docker update --restart=always monitor
docker start monitor

Configure Spark parameters

The goal here is to forward the metric data that Spark exposes to the Spark dashboard.

Copy metrics.properties.template to metrics.properties (this step is critical).

Edit metrics.properties and add the following:

*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
# use the host the Docker container runs on, e.g. hadoop102 or its public IP 43.143.103.171
*.sink.graphite.host=hadoop102
*.sink.graphite.port=2003 # do not change this port
# the default collection period is 10s, which is too coarse; 2s works better
*.sink.graphite.period=2
*.sink.graphite.unit=seconds
# user name shown in the dashboard
*.sink.graphite.prefix=jaken
*.source.jvm.class=org.apache.spark.metrics.source.JvmSource

Remember to distribute the configuration file to all nodes!

xsync
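
If you prefer not to edit and distribute metrics.properties, the same Graphite sink can reportedly be configured per job through Spark configuration keys prefixed with spark.metrics.conf. — a minimal PySpark sketch, assuming Spark 3.x and the host/port/prefix values used above:

# Sketch: configure the Graphite sink via SparkConf instead of metrics.properties.
# Assumes Spark 3.x accepts metrics settings under the "spark.metrics.conf." prefix;
# host and prefix are the values from this document and may differ on your cluster.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("graphite-metrics-demo")
    .config("spark.metrics.conf.*.sink.graphite.class",
            "org.apache.spark.metrics.sink.GraphiteSink")
    .config("spark.metrics.conf.*.sink.graphite.host", "hadoop102")
    .config("spark.metrics.conf.*.sink.graphite.port", "2003")
    .config("spark.metrics.conf.*.sink.graphite.period", "2")
    .config("spark.metrics.conf.*.sink.graphite.unit", "seconds")
    .config("spark.metrics.conf.*.sink.graphite.prefix", "jaken")
    .getOrCreate()
)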

Log in to the console

http://hadoop102:3000/

In Dashboards → Browse, find the dashboard ending in _v04

Run a sample job

/opt/module/spark3.1/bin/spark-submit --master yarn --deploy-mode cluster /opt/module/spark3.1/examples/src/main/python/pi.py 3000

Results: remember to set the time range in the upper-right corner.

Hover over a panel and press v to zoom in.

Change the minimum time interval.

Run HiBench

/opt/module/Hibench/hibench-master/HiBench-master/bin/workloads/micro/wordcount/prepare/prepare.sh

/opt/module/Hibench/hibench-master/HiBench-master/bin/workloads/micro/wordcount/spark/run.sh

turbostat

Command line

# sample every 1 second, report energy in Joules, and save the record to test_record
turbostat -i 1 -J -o ./test_record

Column descriptions

Package: processor package number; not shown on systems with only one processor package

CPU: Linux CPU (logical processor) number

TSC_MHz: average MHz at which the TSC ran during the whole interval

PkgWatt: watts consumed by the entire processor package

CorWatt: watts consumed by the core portion of the processor package

IRQ: number of interrupts serviced by this CPU during the measurement interval

SMI: number of system management interrupts serviced during the measurement interval (all CPUs)

CPU%c1 CPU%c3 CPU%c6 CPU%c7: residency percentages in hardware core idle states, taken from hardware residency counters

Totl%C0 Any%C0 GFX%C0 CPUGFX%

Pkg%pc8 Pkg%pc9 Pk%pc10: residency percentages in hardware package idle states, taken from hardware residency counters

GFXWatt: watts consumed by the graphics portion of the package

RAMWatt: watts consumed by the DRAM DIMMs

PKG_%: percentage of the interval during which RAPL throttling was active on the package

RAM_%: percentage of the interval during which RAPL throttled DRAM
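
A small post-processing sketch for the recorded file, assuming it keeps turbostat's default layout (a repeated header row followed by data rows) and that a PkgWatt column is present; column names vary across turbostat versions:

# Hypothetical helper: average the PkgWatt column of a log produced with
# `turbostat -i 1 -J -o ./test_record`. Header rows are detected by the
# presence of the "PkgWatt" column name and may repeat throughout the file.
def average_pkg_watt(path="./test_record"):
    values = []
    header = None
    for line in open(path):
        fields = line.split()
        if not fields:
            continue
        if "PkgWatt" in fields:            # (re)locate the header row
            header = fields
            idx = header.index("PkgWatt")
            continue
        if header is None or len(fields) != len(header):
            continue
        try:
            values.append(float(fields[idx]))
        except ValueError:
            continue
    return sum(values) / len(values) if values else 0.0

print(f"average package power: {average_pkg_watt():.2f} W")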

lscpu

Shows detailed CPU information.

psutil

psutil is a Python library for measuring CPU, memory, I/O, and network usage.

It can also monitor the resource utilization of a single process.

Reference documentation

Reference blog
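
Before the full collection script below, a minimal per-process sketch; the PID is just an example (on a Spark worker you would pass an executor's PID), and io_counters may require sufficient privileges:

# Minimal per-process monitoring with psutil; the PID argument is an example.
import psutil

def sample_process(pid, interval=1.0):
    p = psutil.Process(pid)
    cpu = p.cpu_percent(interval=interval)   # % of one core over the interval
    mem = p.memory_percent()                 # % of total RAM
    io = p.io_counters()                     # cumulative read/write bytes
    return {"name": p.name(),
            "cpu_percent": cpu,
            "memory_percent": round(mem, 2),
            "read_bytes": io.read_bytes,
            "write_bytes": io.write_bytes}

print(sample_process(1))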

#!/usr/bin/env python
# coding:utf-8

import psutil
import datetime
import time
import platform
import socket
import sys
import os
import json
import redis
from multiprocessing import Process

# process types to monitor
monitor_process_types = ['python', 'java', 'scrapy', 'you-get']

# aggregate per-process information into process_all_msg
def cal_process_msg(process_all_msg,process):
    # total number of processes
	process_all_msg['process_num'] += 1
	for process_type in monitor_process_types:
		if process_type in process['name'] or process_type in process['cmdline'] or process_type in process['exe']:
			process_all_msg[process_type] += 1
	if "run" in process['status']:
		process_all_msg['process_running_num'] += 1
		process_all_msg["process_running_mem_percent"] += process.get("memory_percent")

	else:
		if "stop" in process['status']:
			process_all_msg['process_stopped_num'] += 1
			process_all_msg["process_stopped_mem_percent"] += process.get("memory_percent")
		else:
			process_all_msg['process_sleeping_num'] += 1
			process_all_msg["process_sleeping_mem_percent"] += process.get("memory_percent")



def get_disk_speed(interval):
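	# Sample disk I/O counters twice, `interval` seconds apart, and return
	# human-readable read/write IOPS and throughput strings.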
	disk_msg = psutil.disk_io_counters()
	read_count, write_count = disk_msg.read_count, disk_msg.write_count
	read_bytes, write_bytes = disk_msg.read_bytes, disk_msg.write_bytes
	read_time, write_time = disk_msg.read_time, disk_msg.write_time
	time.sleep(interval)
	disk_msg = psutil.disk_io_counters()
	read_count2, write_count2 = disk_msg.read_count, disk_msg.write_count
	read_bytes2, write_bytes2 = disk_msg.read_bytes, disk_msg.write_bytes
	read_time2, write_time2 = disk_msg.read_time, disk_msg.write_time
	read_count_speed = str(int((read_count2 - read_count) / interval)) + " 次/s"
	write_count_speed = str(int((write_count2 - write_count) / interval)) + " 次/s"

	read_bytes_speed = (read_bytes2 - read_bytes) / interval
	read_bytes_speed = str(round((read_bytes_speed / 1048576), 2)) + " MB/s" if read_bytes_speed >= 1048576 else str(
		round((read_bytes_speed / 1024), 2)) + " KB/s"
	write_bytes_speed = (write_bytes2 - write_bytes) / interval
	write_bytes_speed = str(round((write_bytes_speed / 1048576), 2)) + " MB/s" if write_bytes_speed >= 1048576 else str(
		round((write_bytes_speed / 1024), 2)) + " KB/s"
	return read_count_speed, write_count_speed, read_bytes_speed, write_bytes_speed


def get_net_speed(interval):
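	# Sample network I/O counters twice, `interval` seconds apart, and return
	# human-readable upload/download speed strings.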
	net_msg = psutil.net_io_counters()
	bytes_sent, bytes_recv = net_msg.bytes_sent, net_msg.bytes_recv
	time.sleep(interval)
	net_msg = psutil.net_io_counters()
	bytes_sent2, bytes_recv2 = net_msg.bytes_sent, net_msg.bytes_recv
	sent_speed = (bytes_sent2 - bytes_sent) / interval
	sent_speed = str(round((sent_speed / 1048576), 2)) + " MB/s" if sent_speed >= 1048576 else str(
		round((sent_speed / 1024), 2)) + " KB/s"
	recv_speed = (bytes_recv2 - bytes_recv) / interval
	recv_speed = str(round((recv_speed / 1048576), 2)) + " MB/s" if recv_speed >= 1048576 else str(
		round(recv_speed / 1024, 2)) + " KB/s"

	return sent_speed, recv_speed


def main():


	server_info = {}
	print('-----------------------------系统信息-------------------------------------')

	os_info = {}
	os_name = platform.platform()
	pc_name = platform.node()
	processor = platform.processor()
	processor_bit = platform.architecture()[0]
	myname = socket.gethostname()
	myaddr = socket.gethostbyname(myname)

	print(f"{'系统信息:':<15s}{os_name}")
	print(f"{'机器名称:':<15s}{pc_name}")
	print(f"{'处理器:':<15s}{processor}")
	print(f"{'处理器位数:':<15s}{processor_bit}")
	print(f"{'IP地址:':<15s}{myaddr}")

	# print(f"系统信息:{os_name:>6s}\n机器名称:{pc_name}\n处理器:{processor}\n处理器位数:{bit_msg}\nIP:{myaddr}")
	now_time = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))
	boot_time = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(int(psutil.boot_time())))

	users_count = len(psutil.users())
	users_list = ",".join([u.name for u in psutil.users()])
	print(f"{'当前用户数量:':<15s}{users_count}")
	print(f"{'n当前用户名:':<15s}{users_list}")

	boot_time_seconds = time.strptime(boot_time, "%Y-%m-%d %H:%M:%S")
	boot_time_seconds = int(time.mktime(boot_time_seconds))
	boot_hours = str(round((int(time.time()) - boot_time_seconds) / (60 * 60), 1)) + "小时"

	print(f"{'系统启动时间:':<15s}{boot_time}")
	print(f"{'系统当前时间:':<15s}{now_time}")
	print(f"{'系统已经运行:':<15s}{boot_hours}")
	ip = myaddr[myaddr.rfind(".")+1:]

	os_info['os_ip'] = ip
	os_info['os_name'] = os_name
	os_info['os_pcname'] = pc_name
	os_info['os_processor'] = processor
	os_info['os_processor_bit'] = processor_bit
	os_info['os_boot_hours'] = boot_hours
	os_info['os_users_count'] = users_count

	server_info["os_info"] = os_info

	print('-----------------------------cpu信息-------------------------------------')
	cpu_info = {}
	cpu_cores = psutil.cpu_count(logical=False)
	cpu_logic_cores = psutil.cpu_count(logical=True)
	cpu_used_percent = str(psutil.cpu_percent(interval = 1,percpu=False)) + '%'
	# cpu_used_average = 0
	# for i in psutil.cpu_percent(interval = 1,percpu=True):
	# 	cpu_used_average += i
	# cpu_used_average = cpu_used_average/len(psutil.cpu_percent(interval = 1,percpu=True))
	# print(cpu_used_average)
	print(f"{'cpu使用率:':<15s}{cpu_used_percent}")
	print(f"{'物理cpu数量:':<15s}{cpu_cores}")
	print(f"{'逻辑cpu数量:':<15s}{cpu_logic_cores}")

	cpu_info['cpu_used_percent'] = cpu_used_percent
	cpu_info['cpu_cores'] = cpu_cores
	cpu_info['cpu_logic_cores'] = cpu_logic_cores


	server_info["cpu_info"] = cpu_info

	print('-----------------------------内存信息-------------------------------------')

	memory_info = {}
	memory = psutil.virtual_memory()
	mem_total = str(round(memory.total / (1024.0 * 1024.0 * 1024.0), 2)) + "Gb"
	mem_free = str(round(memory.free / (1024.0 * 1024.0 * 1024.0), 2)) + "Gb"
	mem_available = str(round(memory.available / (1024.0 * 1024.0 * 1024.0), 2)) + "Gb"
	mem_used_percent = str(memory.percent) + "%"
	mem_used = str(round(memory.used / (1024.0 * 1024.0 * 1024.0), 2)) + "Gb"
	try:
		buffers = str(round(memory.buffers / (1024.0 * 1024.0 * 1024.0), 2)) + "Gb"
		cached = str(round(memory.cached / (1024.0 * 1024.0 * 1024.0), 2)) + "Gb"
	except:
		buffers = cached = ""
	print(f"{'内存使用率:':<15s}{mem_used_percent}")
	print(f"{'总内存:':<15s}{mem_total}")
	print(f"{'已使用内存:':<15s}{mem_used}")
	print(f"{'剩余内存:':<15s}{mem_free}")
	print(f"{'available内存:':<15s}{mem_available}")

	print(f"{'cached使用的内存:':<15s}{cached}")
	print(f"{'buffers使用的内存:':<15s}{buffers}")

	memory_info['mem_used_percent'] = mem_used_percent
	memory_info['mem_total'] = mem_total
	memory_info['mem_used'] = mem_used
	memory_info['mem_free'] = mem_free
	memory_info['mem_cached'] = cached
	memory_info['mem_buffers'] = buffers


	server_info["memory_info"] = memory_info

	print('-----------------------------磁盘信息---------------------------------------')

	# disk_msg = psutil.disk_usage("")
	# disk_total = str(int(disk_msg.total / (1024.0 * 1024.0 * 1024.0))) + "G"
	# disk_used = str(int(disk_msg.used / (1024.0 * 1024.0 * 1024.0))) + "G"
	# disk_free = str(int(disk_msg.free / (1024.0 * 1024.0 * 1024.0))) + "G"
	# disk_percent = float(disk_msg.percent)
	# print(f"磁盘总容量:{disk_total},已用容量:{disk_used},空闲容量:{disk_free},使用率:{disk_percent}%")
	# print("系统磁盘信息:" + str(io))
	disk_info = {}
	disk_partitons = psutil.disk_partitions()

	for disk in disk_partitons:
		print(disk)
		try:
			o = psutil.disk_usage(disk.mountpoint)
			path = disk.device
			total = str(int(o.total / (1024.0 * 1024.0 * 1024.0))) + "G"
			used = str(int(o.used / (1024.0 * 1024.0 * 1024.0))) + "G"
			free = str(int(o.free / (1024.0 * 1024.0 * 1024.0))) + "G"
			percent = o.percent
			print(f"磁盘路径:{path},总容量:{total},已用容量{used},空闲容量:{free},使用率:{percent}%")

			if disk.mountpoint == "/":
				disk_info["total"] = total
				disk_info["used"] = used
				disk_info["free"] = free
				disk_info["percent"] = percent


		except:
			print("获取异常", disk)
	read_count_speed, write_count_speed, read_bytes_speed, write_bytes_speed = get_disk_speed(3)
	print("硬盘实时IO")
	print(f"读取次数:{read_count_speed} 写入次数:{write_count_speed}")
	print(f"读取速度:{read_bytes_speed} 写入速度:{write_bytes_speed}")
	disk_info['disk_read_count_speed'] = read_count_speed
	disk_info['disk_write_count_speed'] = write_count_speed
	disk_info['disk_read_bytes_speed'] = read_bytes_speed
	disk_info['disk_write_bytes_speed'] = write_bytes_speed

	server_info["disk_info"] = disk_info

	print('-----------------------------网络信息-------------------------------------')

	net_info = {}
	sent_speed, recv_speed = get_net_speed(1)
	print(f"网络实时IO\n上传速度:{sent_speed}\n下载速度:{recv_speed}")
	net = psutil.net_io_counters()
	sent_bytes = net.bytes_recv / 1024 / 1024
	recv_bytes = net.bytes_sent / 1024 / 1024

	sent_bytes = str(round(sent_bytes, 2)) + "MB" if sent_bytes < 1024 else str(round(sent_bytes / 1024, 2)) + "GB"
	recv_bytes = str(round(recv_bytes, 2)) + "MB" if recv_bytes < 1024 else str(round(recv_bytes / 1024, 2)) + "GB"

	print(f"网卡总接收流量{recv_bytes}\n总发送流量{sent_bytes}")

	net_info['net_sent_speed'] = sent_speed
	net_info['net_recv_speed'] = recv_speed

	net_info['net_recv_bytes'] = recv_bytes
	net_info['net_sent_bytes'] = sent_bytes


	server_info["net_info"] = net_info

	print('-----------------------------进程信息-------------------------------------')
	# list all processes on the system


	processes_info = {}
	processes_info['process_running_num'] = 0
	processes_info['process_sleeping_num'] = 0
	processes_info['process_stopped_num'] = 0

	for process_type in monitor_process_types:
		processes_info[process_type] = 0

	processes_info["process_sleeping_mem_percent"] = 0
	processes_info["process_stopped_mem_percent"] = 0
	processes_info["process_running_mem_percent"] = 0

	processes_info['process_num'] = 0

	processes_info['process_memory_used_top10'] = []
	process_list = []

	for pnum in psutil.pids():

		try:
			p = psutil.Process(pnum)

			#print("====================================")
			process = {}
			process['name'] = p.name()
			process['cmdline'] = p.cmdline()
			process['exe'] = p.exe()
			process['status'] = p.status()
			process['create_time'] = str(datetime.datetime.fromtimestamp(p.create_time()))[:19]
			process['terminal'] = p.terminal()
			#process['cpu_times'] = p.cpu_times()
			#process['cpu_affinity'] = p.cpu_affinity()
			#process['memory_info'] = p.memory_info()
			process['memory_percent'] = p.memory_percent()
			process['open_files'] = p.open_files()
			#process['connections'] = p.connections()

			process['io_counters'] = p.io_counters()
			process['num_threads'] = p.num_threads()
			cal_process_msg(processes_info,process)

			process_list.append(process)
			#print(process)

		# print(f"进程名: {p.name()}  进程状态: {p.status()}  命令: {p.cmdline()}  进程号: {p.pid}  路径1: {p.exe()}  路径2: {p.cwd()}  内存占比: {round(p.memory_percent(),2)}%")
		except:
			pass
	processes_info["process_sleeping_mem_percent"] = str(processes_info["process_sleeping_mem_percent"])[:5]  + "%"
	processes_info["process_stopped_mem_percent"] = str(processes_info["process_stopped_mem_percent"])[:5]  + "%"
	processes_info["process_running_mem_percent"] = str(processes_info["process_running_mem_percent"] )[:5]  + "%"

	process_list = sorted(process_list, key=lambda x: -x['memory_percent'])  # sort by memory usage, descending
	print(process_list[:10])
	for i in process_list[:10]:
		top_10_info = " ".join(i.get("cmdline")[:2]) + " " + str(i.get("memory_percent"))[:5] + "%"  # avoid IndexError for short cmdlines
		processes_info['process_memory_used_top10'].append(top_10_info)

	print(processes_info)

	server_info["processes_info"] = processes_info

	server_info_json = json.dumps(server_info,ensure_ascii = False,indent=4)
	print(server_info_json)
	pool = redis.ConnectionPool(host='ip', port=6379, decode_responses=True,
	                            password='password',
	                            db=2)  # host is the Redis server; both the Redis server and client must be running; the default port is 6379

	r = redis.Redis(connection_pool=pool)
	r.hset("server_info",ip,server_info_json)
if __name__ == "__main__":
	main()
	print(sys.argv[0], os.getpid())
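
The script stores each host's JSON report in the Redis hash server_info; a consumer can read it back — a small sketch that reuses the same placeholder host/password as the script above:

# Read back the reports written by the script above (placeholder credentials).
import json
import redis

r = redis.Redis(host='ip', port=6379, password='password', db=2, decode_responses=True)
for host_ip, payload in r.hgetall("server_info").items():
    info = json.loads(payload)
    print(host_ip,
          info["cpu_info"]["cpu_used_percent"],
          info["memory_info"]["mem_used_percent"])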

jvm_top

Monitors the load of each JVM; simply run the jvmtop.sh command.

Reference documentation

Iterative computation

elasecutor

Configure the cluster

Create a user

useradd jaken
passwd jaken

Grant sudo privileges

[root@hadoop100 ~]# vim /etc/sudoers

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL
jaken   ALL=(ALL)     NOPASSWD:ALL

Virtual machine configuration

Network configuration

# internal IP   external IP
10.0.4.2        43.143.103.171
10.0.12.17      43.139.163.74
172.16.0.15     129.204.194.101
10.0.20.12      119.29.244.191
10.0.12.13      114.132.62.39
192.168.195.184 jk102
192.168.195.163 jk103
192.168.195.225 jk104
192.168.195.68 feizi102
192.168.195.49 feizi103
192.168.195.125 feizi104

Change the hostname

This step is also very important: Hadoop relies on the hostname.

sudo vim /etc/hostname

Set IP aliases

sudo vim /etc/hosts

For the local machine use the internal IP; for all other machines use the external IP!

On Windows

C:\Windows\System32\drivers\etc

Copy the hosts file to the desktop, add the corresponding IP aliases, then paste it back.

Environment variables

Configure one host first and sync to the others later.

/etc/profile

sudo vim /etc/profile   # append the following at the end
export JAVA_HOME=/opt/module/jdk1.8.0_371
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export SPARK_HOME=/opt/module/spark3.1
export PYSPARK_PYTHON=/opt/module/anaconda3/envs/pyspark/bin/python3.8
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export ZK_HOME=/opt/module/zookeeper
export SCALA_HOME=/opt/module/scala/scala-2.12.15
export MAVEN_HOME=/opt/module/maven/apache-maven-3.8.6
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZK_HOME/bin:$SCALA_HOME/bin:$MAVEN_HOME/bin:$PATH

~/.bashrc

sudo vim ~/.bashrc
# add the following
export JAVA_HOME=/opt/module/jdk1.8.0_371
export PYSPARK_PYTHON=/opt/module/anaconda3/envs/pyspark/bin/python3.8

my_env.sh

sudo vim /etc/profile.d/my_env.sh
# add the following
#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_371
export PATH=$PATH:$JAVA_HOME/bin
#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
# needed only when running as root -- using the root user is strongly discouraged
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root



source /etc/profile

Scripts

Copy the script files into ~/bin.

Make sure that path appears in the output below:

[atguigu@hadoop102 ~]$ echo $PATH
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/atguigu/.local/bin:/home/atguigu/bin:/opt/module/jdk1.8.0_212/bin

Paste the scripts from the cluster_conf folder:

D:\OneDrive - stu.csust.edu.cn\16cluster_conf\脚本

Add execute permission

chmod +x jpsall jvmtop myhadoop.sh myspark.sh restartcluster.sh shutdowncluster.sh xsync zkstatus

Port configuration

Passwordless SSH login

By default the current user's home directory contains a .ssh directory.

You can check with ll -al.

Generate the key pair and copy it to the other hosts

cd ~
rm -rf .ssh
ssh-keygen -t rsa
ssh-copy-id jk102
ssh-copy-id jk103
ssh-copy-id jk104
ssh-copy-id feizi102
ssh-copy-id feizi103
ssh-copy-id feizi104
05200570.
Matmat0000

Then press Enter three times; two files are generated: id_rsa (private key) and id_rsa.pub (public key).

Every machine must generate its own key pair and copy it to the others!

Install JDK 1.8

New machines generally do not come with a JDK.

# remove any bundled JDK
rpm -qa | grep -i java | xargs -n1 rpm -e --nodeps

D:\OneDrive - stu.csust.edu.cn\16cluster_conf

Extract to /opt/module/

tar -zxvf jdk-8u371-linux-x64.tar.gz -C /opt/module/
java -version

Quick install of everything

D:\OneDrive - stu.csust.edu.cn\16cluster_conf

Go into /opt/software:

tar -zxvf scala.tar.gz -C /opt/module/
tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module/
tar -zxvf spark3.1.tar.gz -C /opt/module/

Because the environment variables were already configured earlier, Scala can be used directly.

For Hadoop, first delete the data and logs directories:

cd /opt/module/hadoop-3.1.3/
rm -rf data/ logs
vim core-site.xml
# change the HDFS user name to jaken

Install Hadoop 3.1.3

Hadoop download: https://archive.apache.org/dist/hadoop/common/hadoop-3.1.3/

Distribute the hadoop-3.1.3.tar.gz archive to the nodes.

tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module/
# check that the installation succeeded
hadoop version

Configure Hadoop

Change the replication factor

https://blog.csdn.net/Kevinwen0228/article/details/124239219

vim hdfs-site.xml
<property>
	<name>dfs.replication</name>
    <value>3</value>
</property>

The following configuration files need to be modified, then distributed:

xsync /opt/module/hadoop-3.1.3/etc/

Start the cluster

First startup

If the cluster is being started for the first time, the NameNode must be formatted on the hadoop102 node. (Note: formatting the NameNode generates a new cluster id; if the NameNode and DataNode cluster ids do not match, the cluster cannot find its previous data. If the cluster fails while running and the NameNode has to be reformatted, first stop the namenode and datanode processes, delete the data and logs directories on every machine, and only then format.)

The first startup requires formatting the NameNode, i.e. on lab001:

hdfs namenode -format
myhadoop.sh start

Start HDFS on lab001

sbin/start-dfs.sh

Start YARN on lab002

sbin/start-yarn.sh

Web UIs

HDFS

http://lab001:9870

YARN

http://lab002:8088

Start the YARN history server

Start the history server:

mapred --daemon start historyserver

Web UI

http://lab001:19888/jobhistory

Install Maven

apache-maven-3.8.6.tar.gz

Create a maven directory under /opt/module, enter it, and create a maven-repo directory; then copy the archive above there and extract it. For the configuration items, see Hi

tar -zxvf apache-maven-3.8.6.tar.gz -C /opt/module/maven

Make sure the environment variables are configured correctly.

Install Python

mkdir /opt/software
# put Anaconda3-2023.03-Linux-x86_64.sh into the directory above

Anaconda3-2023.03-Linux-x86_64.sh

# run the installer
sh Anaconda3-2023.03-Linux-x86_64.sh

Choose a custom install directory.

Switch to a mirror (recommended)

As the jaken user:

vim ~/.condarc   # new file

# file contents:
channels:
  - defaults
show_channel_urls: true
default_channels:
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
custom_channels:
  conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  msys2: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  bioconda: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  menpo: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  simpleitk: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud

Remember to distribute this file to the other nodes.

Create and activate the pyspark environment

conda create -n pyspark python=3.8

PYSPARK_PYTHON and JAVA_HOME also need to be configured in ~/.bashrc:

sudo vim ~/.bashrc

export JAVA_HOME=/opt/module/jdk1.8.0_371
export PYSPARK_PYTHON=/opt/module/anaconda3/envs/pyspark/bin/python3.8
source ~/.bashrc

Install Spark

Spark history server

Note: first create the log storage directory in HDFS; only then can the history server be started.

If creating the directory fails, change the login user name.

spark3.1.tar.gz

Place it under /opt/module and extract it.

Configure the following files.

Enable dynamic resource allocation

Reference blog: spark任务动态资源分配 (CSDN, YF_raaiiid). The second approach is used here: enabling the external shuffle service.

Modify yarn-site.xml

Content to modify in the existing file:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>

Content to add to the file:

<property>
    <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
    <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
<property>
    <name>spark.shuffle.service.port</name>
    <value>7337</value>
</property>

Copy $SPARK_HOME/yarn/spark-<version>-yarn-shuffle.jar to the ${HADOOP_HOME}/share/hadoop/yarn/lib/ directory on every NodeManager, then restart every node whose configuration was changed.

Configure $SPARK_HOME/conf/spark-defaults.conf and add the following parameters:

# enable the external shuffle service
spark.shuffle.service.enabled true
# default shuffle service port; must match the value in yarn-site.xml
spark.shuffle.service.port 7337
# enable dynamic resource allocation
spark.dynamicAllocation.enabled true
# minimum number of executors per application
spark.dynamicAllocation.minExecutors 2
# maximum number of executors per application
spark.dynamicAllocation.maxExecutors 10
# request executors once tasks have been pending for schedulerBacklogTimeout seconds
spark.dynamicAllocation.schedulerBacklogTimeout 1s
# keep requesting executors while tasks remain queued for sustainedSchedulerBacklogTimeout seconds
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout 5s
# release an executor after it has been idle for more than 60s
spark.dynamicAllocation.executorIdleTimeout 60s
# initial number of executors when dynamic allocation is enabled; if --num-executors (or spark.executor.instances) is set and is larger, that value is used instead, i.e. max(initialExecutors, --num-executors)
spark.dynamicAllocation.initialExecutors 2
# with dynamic allocation enabled, an executor holding cached data blocks is released after it has been idle longer than this
spark.dynamicAllocation.cachedExecutorIdleTimeout 60s

# example HiBench TeraSort submission (uses the HiBench-generated properties file)
/opt/module/spark3.1/bin/spark-submit  --properties-file /opt/module/hibench/HiBench-master/HiBench-master/report/terasort/spark/conf/sparkbench/spark.conf --class com.intel.hibench.sparkbench.micro.ScalaTeraSort --master yarn --num-executors 3 --executor-cores 2 --executor-memory 6g /opt/module/hibench/HiBench-master/HiBench-master/sparkbench/assembly/target/sparkbench-assembly-8.0-SNAPSHOT-dist.jar hdfs://hadoop102:8020/hibench_test/HiBench/Terasort/Input hdfs://hadoop102:8020/hibench_test/HiBench/Terasort/Output
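
The same dynamic-allocation settings can also be supplied per application instead of globally; a minimal PySpark sketch that mirrors the values above (it assumes the external shuffle service configured in yarn-site.xml is already running on every NodeManager):

# Per-application dynamic allocation; values mirror the spark-defaults.conf above.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dynamic-allocation-demo")
    .config("spark.shuffle.service.enabled", "true")
    .config("spark.shuffle.service.port", "7337")
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "2")
    .config("spark.dynamicAllocation.maxExecutors", "10")
    .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
    .getOrCreate()
)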

Job submission commands

/opt/module/spark3.1/bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --executor-memory 6G --executor-cores 2 /opt/module/spark3.1/examples/jars/spark-examples_2.12-3.1.3.jar 1000

/opt/module/spark3.1/bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://hadoop102:7077 --deploy-mode cluster --executor-memory 6G --executor-cores 2 /opt/module/spark3.1/examples/jars/spark-examples_2.12-3.1.3.jar 1000

/opt/module/spark3.1/bin/spark-submit --master yarn --deploy-mode cluster --executor-memory 8G --executor-cores 2  /opt/module/spark3.1/examples/src/main/python/pi.py 1000

hibench

/opt/module/spark3.1/bin/spark-submit  --class com.intel.hibench.sparkbench.micro.ScalaTeraSort --master yarn  --executor-memory 6G --executor-cores 2 /opt/module/hibench/HiBench-master/HiBench-master/sparkbench/assembly/target/sparkbench-assembly-8.0-SNAPSHOT-dist.jar hdfs://hadoop102:8020/hibench_test/HiBench/Terasort/Input hdfs://hadoop102:8020/hibench_test/HiBench/Terasort/Output

Results

Data and jar preparation phase

Hadoop MapReduce run

Virtual LAN

zerotier

https://my.zerotier.com/network

If you use Xshell, remember to install ZeroTier on Windows as well.

https://www.zerotier.com/download/

192.168.195.184 jk102
192.168.195.163 jk103
192.168.195.225 jk104
192.168.195.68 feizi102
192.168.195.49 feizi103
192.168.195.125 feizi104

Join the network

sudo zerotier-cli join 856127940c63df82

iperf bandwidth test

Install

sudo yum install iperf

Usage

iperf -s
# e.g. iperf -c 192.168.195.184
iperf -c <server IP address>

Problems encountered

File transfers were too slow, caused by limited network bandwidth.

systemctl stop NetworkManager                      # stop NetworkManager temporarily
systemctl disable NetworkManager                   # disable NetworkManager permanently
systemctl start network.service                    # start the network service

Console output

23/04/26 14:28:52 INFO DataStreamer: Slow ReadProcessor read fields for block BP-2104837750-10.0.4.2-1682347630921:blk_1073741886_1062 took 42664ms (threshold=30000ms); ack: seqno: 1206 reply: SUCCESS reply: SUCCESS reply: SUCCESS downstreamAckTimeNanos: 42663295164 flag: 0 flag: 0 flag: 0, targets: [DatanodeInfoWithStorage[10.0.4.2:9866,DS-57566ce7-f785-4cb8-b191-7ba233c7a17a,DISK], DatanodeInfoWithStorage[129.204.194.101:9866,DS-ca81dc10-4c88-4713-830e-07d582cee8cf,DISK], DatanodeInfoWithStorage[43.139.163.74:9866,DS-23a6defa-ae70-4ad6-88db-5703dc31bb5c,DISK]]
2023-04-25 08:57:48,081 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(43.139.163.74:9866, datanodeUuid=f01ec0ce-cbe0-4e8c-bb96
-8beab9adf74d, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-ea7287bd-3b77-4206-910b-6f3ffb7e51a0;nsid=515203780;c=168234763092
1) Starting thread to transfer BP-2104837750-10.0.4.2-1682347630921:blk_1073741840_1016 to 10.0.4.2:9866
2023-04-25 08:57:48,082 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(43.139.163.74:9866, datanodeUuid=f01ec0ce-cbe0-4e8c-bb96
-8beab9adf74d, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-ea7287bd-3b77-4206-910b-6f3ffb7e51a0;nsid=515203780;c=168234763092
1) Starting thread to transfer BP-2104837750-10.0.4.2-1682347630921:blk_1073741843_1019 to 10.0.4.2:9866
2023-04-25 08:58:48,141 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(43.139.163.74:9866, datanodeUuid=f01ec0ce-cbe0-4e8c-bb96
-8beab9adf74d, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-ea7287bd-3b77-4206-910b-6f3ffb7e51a0;nsid=515203780;c=168234763092
1):Failed to transfer BP-2104837750-10.0.4.2-1682347630921:blk_1073741840_1016 to 10.0.4.2:9866 got
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel
[connection-pending remote=/10.0.4.2:9866]
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2529)
	at java.lang.Thread.run(Thread.java:750)

Modify hdfs-site.xml and add the following:

<!-- whether datanode communication uses hostnames; default is false, change to true -->
    <property>
        <name>dfs.client.use.datanode.hostname</name>
        <value>true</value>
        <description>Whether datanodes should use datanode hostnames when
                   connecting to other datanodes for data transfer.
        </description>
    </property>

At first I thought this was an internal/external IP configuration issue, but it was not; it was a network bandwidth problem on lab003.

Open all ports to the machines inside the cluster.

After spark-submit, none of the cluster services show up in jps, yet they are running normally.

SSHD virus

https://blog.csdn.net/liujinghu/article/details/125288926

Inspect the process

ll /proc/{pid}

List cron jobs

crontab -l

Remove cron jobs

crontab -r

Check the cron service status and stop it

/bin/systemctl status  crond.service
service crond stop

Delete the directories

sudo rm -rf /var/tmp/*
sudo rm -rf /tmp

Restrict campus-network logins

su
# log in to the server a few times in quick succession
# check the login IPs; those are the campus-network server IPs
tail -n 50 secure
# watch in real time
tail -f /var/log/secure

Set restrictions in the security group

Without the startup scripts

# start ===================================
lab001 "/opt/module/hadoop-3.1.3/sbin/start-dfs.sh"
lab001 "/opt/module/hadoop-3.1.3/bin/mapred --daemon start historyserver"
lab001 "/opt/module/spark3.1/sbin/start-history-server.sh"
lab002 "/opt/module/hadoop-3.1.3/sbin/start-yarn.sh"
# stop ===================================
lab001 "/opt/module/spark3.1/sbin/stop-history-server.sh"
lab001 "/opt/module/hadoop-3.1.3/bin/mapred --daemon stop historyserver"
lab001 "/opt/module/hadoop-3.1.3/sbin/stop-dfs.sh"
lab002 "/opt/module/hadoop-3.1.3/sbin/stop-yarn.sh"

Enabling the firewall (do not enable it; with it on, the nodes cannot connect)

The security group already acts as the server's virtual firewall.

systemctl stop firewalld.service        # stop the firewall service
systemctl disable firewalld.service     # disable the firewall at boot

systemctl restart firewalld.service      # restart the firewall to apply changes
systemctl enable firewalld.service       # enable the firewall at boot


firewall-cmd --state		# check the firewall status
not running				    # returned value: not running

43.139.163.74:22
