Install dependencies
This section doesn't go into much detail; if you already have a working environment, feel free to use it directly.
Java
Download the Java installation package. It requires an Oracle login, so please download it yourself.
cd /mnt
tar zxvf jdk-8u202-linux-x64.tar.gz
Add the environment variables below to /etc/bashrc and run source /etc/bashrc. They also include the environment variables for Hadoop and Hive.
export JAVA_HOME=/mnt/jdk1.8.0_202
export HADOOP_HOME=/mnt/hadoop-3.3.2
export HIVE_HOME=/mnt/hive-3.1.2
export PATH=$JAVA_HOME/bin:$HIVE_HOME/bin:$HADOOP_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
Hadoop
Download Hadoop 3.3.2
wget https://archive.apache.org/dist/hadoop/core/hadoop-3.3.2/hadoop-3.3.2.tar.gz
tar zxvf hadoop-3.3.2.tar.gz
Set up passwordless SSH to the local machine
ssh-keygen -t rsa
cd ~/.ssh/
cat id_rsa.pub >> authorized_keys
Edit the configuration files
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.proxyuser.work.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.work.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/mnt/hadoop-3.3.2/tmp</value>
  </property>
</configuration>
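The `hadoop.proxyuser.work.*` entries above allow the OS user `work` to impersonate others. Since everything in this guide runs as root, you may also need matching entries for `root` — this is an assumption based on the root-user setup here, not part of the original configuration:

```xml
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
```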
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/mnt/hadoop-3.3.2/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/mnt/hadoop-3.3.2/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Format the NameNode
/mnt/hadoop-3.3.2/bin/hdfs namenode -format
Start HDFS
/mnt/hadoop-3.3.2/sbin/start-dfs.sh
MySQL
Install MySQL 8 via yum
yum install -y ca-certificates
wget https://dev.mysql.com/get/mysql80-community-release-el7-2.noarch.rpm
yum -y install mysql80-community-release-el7-2.noarch.rpm
yum -y install mysql-community-server --nogpgcheck
# start mysql
systemctl start mysqld
Change the password
# find the initial mysql password
grep "password" /var/log/mysqld.log
# log in to mysql, then change the root password
ALTER USER 'root'@'localhost' IDENTIFIED BY 'AAAaaa111~';
# relax the mysql password policy and minimum length
set global validate_password.policy=0;
set global validate_password.length=4;
Create the database Hive needs
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
DELETE FROM mysql.user WHERE user='';
flush privileges;
CREATE DATABASE hive charset=utf8;
Hive
Download Hive 3.1.2
wget https://downloads.apache.org/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz
tar zxvf apache-hive-3.1.2-bin.tar.gz
mv apache-hive-3.1.2-bin hive-3.1.2
Edit the configuration files
hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.dml.events</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/mnt/hive-3.1.2/scratchdir</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/mnt/hive-3.1.2/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false&amp;allowPublicKeyRetrieval=true&amp;serverTimezone=UTC</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>AAAaaa111~</value>
  </property>
  <property>
    <name>hive.metastore.event.db.notification.api.auth</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.server2.active.passive.ha.enable</name>
    <value>true</value>
  </property>
</configuration>
Initialize the schema
/mnt/hive-3.1.2/bin/schematool -dbType mysql -initSchema
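Hive 3.1.2 bundles an older Guava than Hadoop 3.3.2, so schematool often fails here with `java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument`. A common workaround is to replace Hive's bundled Guava jar with the one Hadoop ships; the helper below is a sketch, and the exact jar versions and paths are assumptions based on the layout in this guide:

```shell
# Replace Hive's bundled Guava with the newer one shipped by Hadoop.
# Usage: swap_guava <hive_lib_dir> <hadoop_common_lib_dir>
swap_guava() {
  local hive_lib="$1" hadoop_lib="$2"
  rm -f "$hive_lib"/guava-*.jar                  # drop Hive's old Guava (e.g. guava-19.0.jar)
  cp "$hadoop_lib"/guava-*-jre.jar "$hive_lib"/  # copy Hadoop's Guava (e.g. guava-27.0-jre.jar)
}
# With the paths used in this guide:
# swap_guava /mnt/hive-3.1.2/lib /mnt/hadoop-3.3.2/share/hadoop/common/lib
```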
Start the metastore
/mnt/hive-3.1.2/bin/hive --service metastore
Install the core components
Kudu
Download and install Kudu 1.15.0
wget https://github.com/MartinWeindel/kudu-rpm/releases/download/v1.15.0-1/kudu-1.15.0-1.x86_64.rpm
yum -y localinstall kudu-1.15.0-1.x86_64.rpm
Install the NTP service (Kudu requires synchronized clocks)
yum install -y ntp
systemctl start ntpd
systemctl enable ntpd
Edit the configuration files (optional)
master.gflagfile
--log_dir=/mnt/kudu
--fs_wal_dir=/mnt/kudu/master
--fs_data_dirs=/mnt/kudu/master
tserver.gflagfile
--tserver_master_addrs=127.0.0.1:7051
--log_dir=/mnt/kudu
--fs_wal_dir=/mnt/kudu/tserver
--fs_data_dirs=/mnt/kudu/tserver
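Before starting, make sure the directories referenced by the gflag files exist — the `--log_dir` in particular must exist up front. A small helper, assuming the `/mnt/kudu` layout used above:

```shell
# Create the log/WAL/data directories referenced by the gflagfiles.
# Usage: make_kudu_dirs <base_dir>
make_kudu_dirs() {
  mkdir -p "$1"/master "$1"/tserver  # also creates "$1" itself, used as --log_dir
}
# For the layout in this guide:
# make_kudu_dirs /mnt/kudu
```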
Start
kudu-master --flagfile /etc/kudu/conf/master.gflagfile &
kudu-tserver --flagfile /etc/kudu/conf/tserver.gflagfile &
Impala
Impala 4.1.2 is built from source. When compiling, be sure to add export USE_APACHE_HIVE=true so the resulting build is compatible with Hive 3.1.2; otherwise creating a database fails with:
ERROR: ImpalaRuntimeException: Error making 'createDatabase' RPC to Hive Metastore:
CAUSED BY: TApplicationException: Invalid method name: 'get_database_req'
After the build finishes, install it from an RPM package you built yourself; you can refer to impala-rpm and adapt it as needed.
Edit the configuration files
Create symlinks to hive-site.xml and core-site.xml under /etc/impala/conf/.
cd /etc/impala/conf/
ln -s /mnt/hive-3.1.2/conf/hive-site.xml hive-site.xml
ln -s /mnt/hadoop-3.3.2/etc/hadoop/core-site.xml core-site.xml
impala-conf.xml
<configuration>
  <property>
    <name>catalog_service_enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>catalog_topic_mode</name>
    <value>minimal</value>
  </property>
  <property>
    <name>kudu_master_hosts</name>
    <value>localhost:7051</value>
  </property>
  <property>
    <name>default_storage_engine</name>
    <value>kudu</value>
  </property>
</configuration>
Start (statestored first, then catalogd, then impalad)
statestored &
catalogd &
impalad &
Verify
[root@bogon ~] impala-shell
Starting Impala Shell with no authentication using Python 2.7.5
Opened TCP connection to localhost.localdomain:21050
Connected to localhost.localdomain:21050
Server version: impalad version 4.1.2-RELEASE RELEASE (build 1d7b63102ebc8974e8133c964917ea8052148088)
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v4.1.2-RELEASE (1d7b631) built on Thu Jul 6 05:44:12 UTC 2023)
To see live updates on a query's progress, run 'set LIVE_SUMMARY=1;'.
***********************************************************************************
[localhost.localdomain:21050] default> CREATE TABLE test
> (
> id BIGINT,
> name STRING,
> PRIMARY KEY(id)
> )
> PARTITION BY HASH PARTITIONS 16
> STORED AS KUDU
> TBLPROPERTIES (
> 'kudu.master_addresses' = 'localhost:7051',
> 'kudu.num_tablet_replicas' = '1'
> );
+-------------------------+
| summary |
+-------------------------+
| Table has been created. |
+-------------------------+
Fetched 1 row(s) in 9.89s
[localhost.localdomain:21050] default> insert into test values (1, 'xiedeyantu');
Query: insert into test values (1, 'xiedeyantu')
Query submitted at: 2023-07-07 03:50:41 (Coordinator: http://bogon:25000)
Query progress can be monitored at: http://bogon:25000/query_plan?query_id=b94595ef56094a6e:05654dec00000000
Modified 1 row(s), 0 row error(s) in 0.22s
[localhost.localdomain:21050] default> select * from test;
Query: select * from test
Query submitted at: 2023-07-07 03:50:44 (Coordinator: http://bogon:25000)
Query progress can be monitored at: http://bogon:25000/query_plan?query_id=a74db79af051b646:81c486ed00000000
+----+------------+
| id | name |
+----+------------+
| 1 | xiedeyantu |
+----+------------+
Fetched 1 row(s) in 0.15s
Take a look at Kudu through its web UI at http://127.0.0.1:8051. For convenience you can also browse it with w3m: w3m http://127.0.0.1:8051.
Take a look at the Impala web UIs; the ports are:

Component | Web port
---|---
statestored | 25010
catalogd | 25020
impalad | 25000
Open: http://127.0.0.1:25020/catalog
Open: http://127.0.0.1:25000/backends
Open: http://127.0.0.1:25010/metrics
At this point, the entire installation and verification is complete.