[Environment Setup] Building a Container for a Specific Kylin Version with a Dockerfile


Some Kylin versions have been retired upstream and are no longer available as images on Docker Hub, so you have to build them yourself. To make this quicker to reproduce later, I wrote a lightweight Dockerfile.

Preparation

The packages used in this build come from the Huawei Cloud mirror, which hosts binary packages for every Kylin version:

 https://repo.huaweicloud.com:8443/artifactory/apache-local/kylin/

Taking version 3.1.1 as an example, the Installation Guide | Apache Kylin on the official site lists the dependencies to prepare:

Check the available Hive versions on the official Apache archive page:

 https://archive.apache.org/dist/hive/

Likewise, the HBase and Hadoop downloads are at:

 https://archive.apache.org/dist/hbase/
 https://archive.apache.org/dist/hadoop/common/

Note: the versions must be aligned exactly as the official guide specifies.

Since the build needs repeated debugging, here are the basic commands I used:

 # Build the image
 docker build -t kylin:3.1.1 .
 # Run the container
 sudo docker run -d --name kylin -p 7070:7070 kylin:3.1.1
 # Enter the container
 docker exec -it kylin /bin/bash
 # View the logs
 docker logs kylin
 # Stop the container
 sudo docker stop kylin
 # Remove the container
 sudo docker rm kylin
 # List images
 docker images
 # Remove an image by ID
 docker rmi ID
 # Remove all images
 docker rmi $(docker images -q) --force
 # Clear the build cache
 docker system prune -a

Pitfalls

Problem: wrong Java path

Solution:

 # Use openjdk:8-jdk-alpine as the base image
 FROM openjdk:8-jdk-alpine
 # Set the working directory
 WORKDIR /root
 # Default to sh (Alpine has no bash)
 CMD ["/bin/sh"]

Build and enter the container:

 docker build -t openjdk-alpine .
 sudo docker run -it openjdk-alpine

Find the real Java path inside the container, then fix the Dockerfile accordingly.
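A quick way to find the real JDK path inside the Alpine container is to follow the symlink chain behind the `java` binary; a minimal sketch (`resolve_real` is just a hypothetical helper name):

```shell
# resolve_real: print the fully resolved path of a command,
# following every symlink in the chain (Alpine images typically
# install java as a symlink into /usr/lib/jvm/...)
resolve_real() {
  readlink -f "$(command -v "$1")"
}

# inside the openjdk:8-jdk-alpine container you would run:
#   resolve_real java
# demo here on a binary that exists on any POSIX system:
resolve_real sh
```

The directory the resolved `java` binary lives under (minus the trailing `bin/java`) is what `JAVA_HOME` should point to in the Dockerfile.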

Problem: Failed to create hdfs:///kylin/spark-history

Solution: edit conf/kylin.properties under the Kylin home:

 # Use local filesystem paths instead of HDFS
 kylin.engine.spark-conf.spark.eventLog.dir=file:///tmp/kylin/spark-history
 kylin.engine.spark-conf.spark.history.fs.logDirectory=file:///tmp/kylin/spark-history
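Since these two lines only need to be appended, the edit can be scripted; a minimal sketch, using a scratch file in place of the real kylin.properties:

```shell
# append the Spark history overrides to a (scratch copy of) kylin.properties
props=$(mktemp)
cat >> "$props" <<'EOF'
kylin.engine.spark-conf.spark.eventLog.dir=file:///tmp/kylin/spark-history
kylin.engine.spark-conf.spark.history.fs.logDirectory=file:///tmp/kylin/spark-history
EOF
# verify both keys landed (counts matching lines)
grep -c 'spark-history' "$props"   # prints 2
rm -f "$props"
```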

Repackage the tarball:

 tar -czvf apache-kylin-3.1.1-bin-hbase1x.tar.gz apache-kylin-3.1.1-bin-hbase1x

Don't forget to create the corresponding directory:

mkdir -p /tmp/kylin/spark-history && \
chmod -R 777 /tmp/kylin/spark-history

Problem: startup error Will not attempt to authenticate using SASL (unknown error)

 2024-11-08 10:35:03,089 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:975 : Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
 2024-11-08 10:35:03,091 WARN  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:1102 : Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
 java.net.ConnectException: Connection refused
     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)

Solution: HBase had not started successfully; HBase must be launched from the container's CMD.

Set the command executed at container startup: start HBase first, then Hadoop, then Kylin, and finally keep the container alive with a blocking command.
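The ordering matters because Kylin's scripts probe HBase and Hadoop on startup. One way to sketch the startup chain with an explicit readiness wait (here `hbase_is_up` is a hypothetical probe you would supply, e.g. a ZooKeeper port check):

```shell
# wait_for CMD...: poll a command until it succeeds, up to 30 tries, 2s apart
wait_for() {
  for _ in $(seq 1 30); do
    "$@" >/dev/null 2>&1 && return 0
    sleep 2
  done
  return 1
}

# in the container CMD you could chain:
#   start-hbase.sh && wait_for hbase_is_up && start-all.sh && kylin.sh start && tail -f /dev/null
# quick demo with a trivially true predicate:
wait_for true && echo "ready"
```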

Problem: Hive complains that hive-site.xml does not exist in /opt/hive/conf/

Solution: copy the default template into the same directory.

# Fix the missing-file error
cd /opt/hive/conf/
cp hive-default.xml.template hive-site.xml

Problem: ERROR: Unknown error. Please check full log.

 2024-11-08 10:41:12,357 INFO  [close-hbase-conn] hbase.HBaseConnection:137 : Closing HBase connections...
 2024-11-08 10:41:12,357 INFO  [close-hbase-conn] client.ConnectionManager$HConnectionImplementation:1676 : Closing zookeeper sessionid=0x0
 2024-11-08 10:41:12,457 INFO  [close-hbase-conn] zookeeper.ZooKeeper:684 : Session: 0x0 closed
 2024-11-08 10:41:12,457 INFO  [main-EventThread] zookeeper.ClientCnxn:512 : EventThread shut down
 ERROR: Unknown error. Please check full log.

Solution: replace the placeholders in hive-site.xml with local paths; otherwise Hive still won't start:

vi hive-site.xml

#Change the following values:
#hive.querylog.location ==> /opt/hive/iotmp
#hive.exec.local.scratchdir ==> /opt/hive/iotmp
#hive.downloaded.resources.dir ==> /opt/hive/iotmp

If that feels tedious, you can do all of the above in one shot with awk:

#Strip the placeholders
awk '{
    gsub(/\${system:java.io.tmpdir}\/\${system:user.name}/, "/opt/hive/iotmp");
    gsub(/\${system:java.io.tmpdir}\/\${hive.session.id}_resources/, "/opt/hive/iotmp");
    print
}' hive-site.xml > hive-site.xml.new && mv hive-site.xml.new hive-site.xml
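To sanity-check the patterns before touching the real file, you can pipe a sample line through the same gsub call (the braces are escaped here; POSIX awk treats a non-interval `{` literally either way):

```shell
# dry-run the substitution on a sample value from hive-site.xml
sample='<value>${system:java.io.tmpdir}/${system:user.name}</value>'
echo "$sample" | awk '{
    gsub(/\$\{system:java\.io\.tmpdir\}\/\$\{system:user\.name\}/, "/opt/hive/iotmp");
    print
}'
# prints <value>/opt/hive/iotmp</value>
```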

Finally, don't forget to create the corresponding temporary directory:

cd ..
mkdir iotmp

Problem: ZooKeeper fails to start:

 ZooKeeper JMX enabled by default
 Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
 Starting zookeeper ... FAILED TO START

Solution: version mismatch; double-check that the versions are aligned.

Dockerfile

If it still doesn't work, you can use my Dockerfile directly:

# Use Ubuntu 16.04 as the base image
FROM ubuntu:16.04

# Update package lists and install required tools
RUN apt-get update && apt-get install -y \
    wget \
    curl \
    tar \
    vim \
    procps \
    findutils \
    openjdk-8-jdk \
    && apt-get clean

# Set environment variables
ENV KYLIN_HOME /opt/kylin
ENV HADOOP_HOME /opt/hadoop
ENV HIVE_HOME /opt/hive
ENV HBASE_HOME /opt/hbase
ENV ZOOKEEPER_HOME /opt/zookeeper
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV HADOOP_CONF_DIR $HADOOP_HOME/etc/hadoop
ENV HADOOP_COMMON_LIB_NATIVE_DIR $HADOOP_HOME/lib/native
ENV HADOOP_OPTS "-Duser.country=US -Duser.language=en"
ENV PATH $KYLIN_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH

# Run as the root user
USER root

# Download and install Hadoop
RUN wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.0/hadoop-2.7.0.tar.gz -P /opt/ && \
    tar -zxvf /opt/hadoop-2.7.0.tar.gz -C /opt/ && \
    mv /opt/hadoop-2.7.0 /opt/hadoop

# Download and install Hive
RUN wget https://archive.apache.org/dist/hive/hive-1.2.0/apache-hive-1.2.0-bin.tar.gz -P /opt/ && \
    tar -zxvf /opt/apache-hive-1.2.0-bin.tar.gz -C /opt/ && \
    mv /opt/apache-hive-1.2.0-bin /opt/hive

# Download and extract HBase
RUN wget https://archive.apache.org/dist/hbase/1.1.2/hbase-1.1.2-bin.tar.gz -P /opt/ && \
    tar -zxvf /opt/hbase-1.1.2-bin.tar.gz -C /opt/ && \
    mv /opt/hbase-1.1.2 /opt/hbase

# Download and extract the Kylin package
RUN wget https://repo.huaweicloud.com:8443/artifactory/apache-local/kylin/apache-kylin-3.1.1/apache-kylin-3.1.1-bin-hbase1x.tar.gz -P /opt/ && \
    mkdir -p $KYLIN_HOME && \
    tar -zxvf /opt/apache-kylin-3.1.1-bin-hbase1x.tar.gz -C $KYLIN_HOME --strip-components=1

# Create the Spark history directory and open its permissions
RUN mkdir -p /tmp/kylin/spark-history && \
    chmod -R 777 /tmp/kylin/spark-history

# Create the Kylin working directory and open its permissions
RUN mkdir -p /kylin && chmod -R 777 /kylin

# If you have custom config files, copy them into the container
# COPY conf/ $KYLIN_HOME/conf/

# Patch Kylin's config; otherwise it fails with: Failed to create hdfs:///kylin/spark-history
RUN echo "kylin.engine.spark-conf.spark.eventLog.dir=file:///tmp/kylin/spark-history" >> $KYLIN_HOME/conf/kylin.properties && \
    echo "kylin.engine.spark-conf.spark.history.fs.logDirectory=file:///tmp/kylin/spark-history" >> $KYLIN_HOME/conf/kylin.properties

# Copy the default template; otherwise Hive complains hive-site.xml is missing from /opt/hive/conf/
RUN cp /opt/hive/conf/hive-default.xml.template /opt/hive/conf/hive-site.xml

# Replace the placeholders in hive-site.xml with local paths; otherwise Hive still won't start
RUN awk '{gsub(/\${system:java.io.tmpdir}\/\${system:user.name}/, "/opt/hive/iotmp"); gsub(/\${system:java.io.tmpdir}\/\${hive.session.id}_resources/, "/opt/hive/iotmp"); print}' /opt/hive/conf/hive-site.xml > /opt/hive/conf/hive-site.xml.new && \
    mv /opt/hive/conf/hive-site.xml.new /opt/hive/conf/hive-site.xml && \
    mkdir -p /opt/hive/iotmp

# Container startup command: start HBase first, then Hadoop, then Kylin, then hold the container open
CMD ["sh", "-c", "/opt/hbase/bin/start-hbase.sh && /opt/hadoop/sbin/start-all.sh && $KYLIN_HOME/bin/kylin.sh start && tail -f /dev/null"]
#CMD ["sh", "-c", "sleep 65535"]
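An optional addition, not part of the original file: since curl is already installed by the apt-get step, a HEALTHCHECK could make `docker ps` report whether the Kylin web UI on 7070 actually answers. A sketch, with a generous start period because the stack takes a while to come up (requires Docker 17.05+ for --start-period):

```dockerfile
# optional: report the container as healthy only once the Kylin UI responds
HEALTHCHECK --interval=30s --timeout=10s --start-period=300s --retries=5 \
    CMD curl -fs http://localhost:7070/kylin/ || exit 1
```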

Docker Logs

The complete Docker logs of a final successful run, for reference only:

$ docker logs kylin
starting master, logging to /opt/hbase/logs/hbase--master-a4709eb6289e.out
OpenJDK 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
24/11/08 11:03:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: /opt/hadoop/sbin/slaves.sh: line 60: ssh: command not found
localhost: /opt/hadoop/sbin/slaves.sh: line 60: ssh: command not found
Starting secondary namenodes [0.0.0.0]
0.0.0.0: /opt/hadoop/sbin/slaves.sh: line 60: ssh: command not found
24/11/08 11:03:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
chown: missing operand after '/opt/hadoop/logs'
Try 'chown --help' for more information.
starting resourcemanager, logging to /opt/hadoop/logs/yarn--resourcemanager-a4709eb6289e.out
localhost: /opt/hadoop/sbin/slaves.sh: line 60: ssh: command not found
Retrieving hadoop conf dir...
...................................................[PASS]
KYLIN_HOME is set to /opt/kylin
Checking HBase
...................................................[PASS]
Checking hive
...................................................[PASS]
Checking hadoop shell
...................................................[PASS]
Checking hdfs working dir
24/11/08 11:03:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
...................................................[PASS]
Retrieving Spark dependency...
24/11/08 11:03:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
24/11/08 11:03:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Optional dependency spark not found, if you need this; set SPARK_HOME, or run bin/download-spark.sh
...................................................[PASS]
Retrieving Flink dependency...
Optional dependency flink not found, if you need this; set FLINK_HOME, or run bin/download-flink.sh
...................................................[PASS]
Retrieving kafka dependency...
Couldn't find kafka home. If you want to enable streaming processing, Please set KAFKA_HOME to the path which contains kafka dependencies.
...................................................[PASS]
/opt/kylin/bin/check-port-availability.sh: line 30: netstat: command not found

Checking environment finished successfully. To check again, run 'bin/check-env.sh' manually.
Retrieving hive dependency...

Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-1.2.0.jar!/hive-log4j.properties
export hiveWarehouseDir=/user/hive/warehouse
Retrieving hbase dependency...
Retrieving hadoop conf dir...
Retrieving kafka dependency...
Couldn't find kafka home. If you want to enable streaming processing, Please set KAFKA_HOME to the path which contains kafka dependencies.
Retrieving Spark dependency...
24/11/08 11:04:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
24/11/08 11:04:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Optional dependency spark not found, if you need this; set SPARK_HOME, or run bin/download-spark.sh
Retrieving Flink dependency...
Optional dependency flink not found, if you need this; set FLINK_HOME, or run bin/download-flink.sh
Start to check whether we need to migrate acl tables
Using cached dependency...
skip spark_dependency
skip flink_dependency
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/kylin/tool/kylin-tool-3.1.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2024-11-08 11:04:15,227 INFO  [main] common.KylinConfig:118 : Loading kylin-defaults.properties from file:/opt/kylin/tool/kylin-tool-3.1.1.jar!/kylin-defaults.properties
2024-11-08 11:04:15,251 INFO  [main] common.KylinConfig:352 : Use KYLIN_HOME=/opt/kylin
2024-11-08 11:04:15,254 INFO  [main] common.KylinConfig:153 : Initialized a new KylinConfig from getInstanceFromEnv : 280884709
2024-11-08 11:04:15,318 INFO  [main] persistence.ResourceStore:90 : Using metadata url kylin_metadata@hbase for resource store
2024-11-08 11:04:15,786 WARN  [main] util.NativeCodeLoader:62 : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2024-11-08 11:04:16,029 INFO  [main] hbase.HBaseConnection:267 : connection is null or closed, creating a new one
2024-11-08 11:04:16,124 INFO  [main] zookeeper.RecoverableZooKeeper:120 : Process identifier=hconnection-0x4d5650ae connecting to ZooKeeper ensemble=localhost:2181
2024-11-08 11:04:16,130 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2024-11-08 11:04:16,130 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:host.name=a4709eb6289e
2024-11-08 11:04:16,130 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:java.version=1.8.0_292
2024-11-08 11:04:16,131 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:java.vendor=Private Build
2024-11-08 11:04:16,131 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
2024-11-08 11:04:16,131 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:java.class.path=/opt/kylin/tool/kylin-tool-3.1.1.jar:/opt/kylin/conf:/opt/kylin/lib/kylin-jdbc-3.1.1.jar:/opt/kylin/lib/kylin-coprocessor-3.1.1.jar:/opt/kylin/lib/kylin-job-3.1.1.jar:/opt/kylin/lib/kylin-datasource-sdk-3.1.1.jar:/opt/kylin/ext/*:/opt/kylin/bin/../tomcat/bin/bootstrap.jar:/opt/kylin/bin/../tomcat/bin/tomcat-juli.jar:/opt/kylin/bin/../tomcat/lib/jasper-el.jar:/opt/kylin/bin/../tomcat/lib/catalina-ant.jar:/opt/kylin/bin/../tomcat/lib/tomcat-i18n-zh-CN.jar:/opt/kylin/bin/../tomcat/lib/tomcat-api.jar:/opt/kylin/bin/../tomcat/lib/tomcat-i18n-ko.jar:/opt/kylin/bin/../tomcat/lib/tomcat-i18n-ru.jar:/opt/kylin/bin/../tomcat/lib/tomcat-i18n-de.jar:/opt/kylin/bin/../tomcat/lib/catalina.jar:/opt/kylin/bin/../tomcat/lib/tomcat-util.jar:/opt/kylin/bin/../tomcat/lib/tomcat-dbcp.jar:/opt/kylin/bin/../tomcat/lib/ecj-4.4.2.jar:/opt/kylin/bin/../tomcat/lib/jsp-api.jar:/opt/kylin/bin/../tomcat/lib/tomcat-coyote.jar:/opt/kylin/bin/../tomcat/lib/tomcat7-websocket.jar:/opt/kylin/bin/../tomcat/lib/websocket-api.jar:/opt/kylin/bin/../tomcat/lib/catalina-ha.jar:/opt/kylin/bin/../tomcat/lib/annotations-api.jar:/opt/kylin/bin/../tomcat/lib/tomcat-i18n-ja.jar:/opt/kylin/bin/../tomcat/lib/kylin-tomcat-ext-3.1.1.jar:/opt/kylin/bin/../tomcat/lib/catalina-tribes.jar:/opt/kylin/bin/../tomcat/lib/servlet-api.jar:/opt/kylin/bin/../tomcat/lib/tomcat-i18n-fr.jar:/opt/kylin/bin/../tomcat/lib/tomcat-jdbc.jar:/opt/kylin/bin/../tomcat/lib/el-api.jar:/opt/kylin/bin/../tomcat/lib/jasper.jar:/opt/kylin/bin/../tomcat/lib/tomcat-i18n-es.jar:/opt/kylin/conf:/opt/kylin/lib/kylin-jdbc-3.1.1.jar:/opt/kylin/lib/kylin-coprocessor-3.1.1.jar:/opt/kylin/lib/kylin-job-3.1.1.jar:/opt/kylin/lib/kylin-datasource-sdk-3.1.1.jar:/opt/kylin/ext/*::/opt/hbase/conf:/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar:/opt/hbase:/opt/hbase/lib/activation-1.1.jar:/opt/hbase/lib/aopalliance-1.0.jar:/opt/hbase/lib/apacheds-i18n-2.0.0-M15.jar:/
opt/hbase/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hbase/lib/api-asn1-api-1.0.0-M20.jar:/opt/hbase/lib/api-util-1.0.0-M20.jar:/opt/hbase/lib/asm-3.1.jar:/opt/hbase/lib/avro-1.7.4.jar:/opt/hbase/lib/commons-beanutils-1.7.0.jar:/opt/hbase/lib/commons-beanutils-core-1.8.0.jar:/opt/hbase/lib/commons-cli-1.2.jar:/opt/hbase/lib/commons-codec-1.9.jar:/opt/hbase/lib/commons-collections-3.2.1.jar:/opt/hbase/lib/commons-compress-1.4.1.jar:/opt/hbase/lib/commons-configuration-1.6.jar:/opt/hbase/lib/commons-daemon-1.0.13.jar:/opt/hbase/lib/commons-digester-1.8.jar:/opt/hbase/lib/commons-el-1.0.jar:/opt/hbase/lib/commons-httpclient-3.1.jar:/opt/hbase/lib/commons-io-2.4.jar:/opt/hbase/lib/commons-lang-2.6.jar:/opt/hbase/lib/commons-logging-1.2.jar:/opt/hbase/lib/commons-math-2.2.jar:/opt/hbase/lib/commons-math3-3.1.1.jar:/opt/hbase/lib/commons-net-3.1.jar:/opt/hbase/lib/disruptor-3.3.0.jar:/opt/hbase/lib/findbugs-annotations-1.3.9-1.jar:/opt/hbase/lib/guava-12.0.1.jar:/opt/hbase/lib/guice-3.0.jar:/opt/hbase/lib/guice-servlet-3.0.jar:/opt/hbase/lib/hadoop-annotations-2.5.1.jar:/opt/hbase/lib/hadoop-auth-2.5.1.jar:/opt/hbase/lib/hadoop-client-2.5.1.jar:/opt/hbase/lib/hadoop-common-2.5.1.jar:/opt/hbase/lib/hadoop-hdfs-2.5.1.jar:/opt/hbase/lib/hadoop-mapreduce-client-app-2.5.1.jar:/opt/hbase/lib/hadoop-mapreduce-client-common-2.5.1.jar:/opt/hbase/lib/hadoop-mapreduce-client-core-2.5.1.jar:/opt/hbase/lib/hadoop-mapreduce-client-jobclient-2.5.1.jar:/opt/hbase/lib/hadoop-mapreduce-client-shuffle-2.5.1.jar:/opt/hbase/lib/hadoop-yarn-api-2.5.1.jar:/opt/hbase/lib/hadoop-yarn-client-2.5.1.jar:/opt/hbase/lib/hadoop-yarn-common-2.5.1.jar:/opt/hbase/lib/hadoop-yarn-server-common-2.5.1.jar:/opt/hbase/lib/hbase-annotations-1.1.2-tests.jar:/opt/hbase/lib/hbase-annotations-1.1.2.jar:/opt/hbase/lib/hbase-client-1.1.2.jar:/opt/hbase/lib/hbase-common-1.1.2-tests.jar:/opt/hbase/lib/hbase-common-1.1.2.jar:/opt/hbase/lib/hbase-examples-1.1.2.jar:/opt/hbase/lib/hbase-hadoop-compat-1.1.2.jar:/opt/hb
ase/lib/hbase-hadoop2-compat-1.1.2.jar:/opt/hbase/lib/hbase-it-1.1.2-tests.jar:/opt/hbase/lib/hbase-it-1.1.2.jar:/opt/hbase/lib/hbase-prefix-tree-1.1.2.jar:/opt/hbase/lib/hbase-procedure-1.1.2.jar:/opt/hbase/lib/hbase-protocol-1.1.2.jar:/opt/hbase/lib/hbase-resource-bundle-1.1.2.jar:/opt/hbase/lib/hbase-rest-1.1.2.jar:/opt/hbase/lib/hbase-server-1.1.2-tests.jar:/opt/hbase/lib/hbase-server-1.1.2.jar:/opt/hbase/lib/hbase-shell-1.1.2.jar:/opt/hbase/lib/hbase-thrift-1.1.2.jar:/opt/hbase/lib/htrace-core-3.1.0-incubating.jar:/opt/hbase/lib/httpclient-4.2.5.jar:/opt/hbase/lib/httpcore-4.1.3.jar:/opt/hbase/lib/jackson-core-asl-1.9.13.jar:/opt/hbase/lib/jackson-jaxrs-1.9.13.jar:/opt/hbase/lib/jackson-mapper-asl-1.9.13.jar:/opt/hbase/lib/jackson-xc-1.9.13.jar:/opt/hbase/lib/jamon-runtime-2.3.1.jar:/opt/hbase/lib/jasper-compiler-5.5.23.jar:/opt/hbase/lib/jasper-runtime-5.5.23.jar:/opt/hbase/lib/java-xmlbuilder-0.4.jar:/opt/hbase/lib/javax.inject-1.jar:/opt/hbase/lib/jaxb-api-2.2.2.jar:/opt/hbase/lib/jaxb-impl-2.2.3-1.jar:/opt/hbase/lib/jcodings-1.0.8.jar:/opt/hbase/lib/jersey-client-1.9.jar:/opt/hbase/lib/jersey-core-1.9.jar:/opt/hbase/lib/jersey-guice-1.9.jar:/opt/hbase/lib/jersey-json-1.9.jar:/opt/hbase/lib/jersey-server-1.9.jar:/opt/hbase/lib/jets3t-0.9.0.jar:/opt/hbase/lib/jettison-1.3.3.jar:/opt/hbase/lib/jetty-6.1.26.jar:/opt/hbase/lib/jetty-sslengine-6.1.26.jar:/opt/hbase/lib/jetty-util-6.1.26.jar:/opt/hbase/lib/joni-2.1.2.jar:/opt/hbase/lib/jruby-complete-1.6.8.jar:/opt/hbase/lib/jsch-0.1.42.jar:/opt/hbase/lib/jsp-2.1-6.1.14.jar:/opt/hbase/lib/jsp-api-2.1-6.1.14.jar:/opt/hbase/lib/jsr305-1.3.9.jar:/opt/hbase/lib/junit-4.11.jar:/opt/hbase/lib/leveldbjni-all-1.8.jar:/opt/hbase/lib/libthrift-0.9.0.jar:/opt/hbase/lib/log4j-1.2.17.jar:/opt/hbase/lib/metrics-core-2.2.0.jar:/opt/hbase/lib/netty-3.2.4.Final.jar:/opt/hbase/lib/netty-all-4.0.23.Final.jar:/opt/hbase/lib/paranamer-2.3.jar:/opt/hbase/lib/protobuf-java-2.5.0.jar:/opt/hbase/lib/servlet-api-2.5-6.1.14.jar:/opt/hbase/l
ib/servlet-api-2.5.jar:/opt/hbase/lib/slf4j-api-1.7.7.jar:/opt/hbase/lib/slf4j-log4j12-1.7.5.jar:/opt/hbase/lib/snappy-java-1.0.4.1.jar:/opt/hbase/lib/spymemcached-2.11.6.jar:/opt/hbase/lib/xmlenc-0.52.jar:/opt/hbase/lib/xz-1.0.jar:/opt/hbase/lib/zookeeper-3.4.6.jar:/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/opt/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/opt/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-annotations-2.7.0.jar:/opt/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop/share/hadoop/common/lib/hadoop-auth-2.7.0.jar:/opt/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/opt/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/had
oop/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/opt/hadoop/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop/share/hadoop/common/lib/junit-4.11.jar:/opt/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/opt/hadoop/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/opt/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/opt/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/common/hadoop-comm
on-2.7.0.jar:/opt/hadoop/share/hadoop/common/hadoop-common-2.7.0-tests.jar:/opt/hadoop/share/hadoop/common/hadoop-nfs-2.7.0.jar:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/opt/hadoop/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/opt/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.0-tests.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.0.jar:/opt/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/opt/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/opt/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop/share
/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/opt/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/opt/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.7.0.jar:/opt/hadoop/shar
e/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.0.jar:/opt/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/opt/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/opt/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/opt/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/opt/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/opt/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop/share/hadoop/map
reduce/lib/hamcrest-core-1.3.jar:/opt/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.0-tests.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.0.jar:/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.0.jar:/opt/hadoop/contrib/capacity-scheduler/*.jar::/opt/hive/conf:/opt/hive/lib/jcommander-1.32.jar:/opt/hive/lib/zookeeper-3.4.6.jar:/opt/hive/lib/geronimo-jta_1.1_spec-1.1.1.jar:/opt/hive/lib/hive-shims-scheduler-1.2.0.jar:/opt/hive/lib/super-csv-2.2.0.jar:/opt/hive/lib/curator-client-2.6.0.jar:/opt/hive/lib/commons-compiler-2.7.6.jar:/opt/hive/lib/commons-io-2.4.jar:/opt/hive/lib/commons-pool-1.5.4.jar:/opt/hive/lib/ant-1.9.1.jar:/opt/hive/lib/antlr-runtime-3.4.jar:/opt/hive/lib/jetty-all-server-7.6.0.v20120127.jar:/opt/hive/lib/hive-accumulo-handler-1.2.0.jar:/opt/hive/lib/hive-hwi-1.2.0.jar:/opt/hive/lib/bonecp-0.8.0.RELEASE.jar:/opt/hive/lib/hive-contrib-1.2.0.jar:/opt/hive/lib/geronimo-jaspic_1.0_spec-1.0.jar:/opt/hive/lib/hamcrest-core-1.1.jar:/opt/hive/lib/datanucleus-core-3.2.10.jar:/opt/hive/lib/hive-common-1.2.0.jar:/opt/hive/lib/plexus-utils-1.5.6.jar:/opt/hive/lib/libthrift-0.9.2.jar:/opt/hive/lib/groovy-all-2.1.6.jar:/opt/hive/lib/hive-jdbc-1.2.0.jar:/opt/hive/lib/hive-shims-0.20S-1.2.0.jar:/opt/hive/lib/maven-scm-api-1.4.jar:/opt/hive/lib/janino-2.7.6.jar:/opt/hive/lib/opencsv-2.3.jar:/opt/hive/lib/hive-ant-1.2.0.jar:/opt/hive/lib/avro-1.7.5.jar:/opt/hive/lib/commons-digester-1.8.jar:/opt/hive/lib
 ...(Client environment:java.class.path omitted: a long classpath listing every jar under /opt/hive/lib, repeated twice)...
2024-11-08 11:04:16,131 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2024-11-08 11:04:16,132 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:java.io.tmpdir=/tmp
2024-11-08 11:04:16,132 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:java.compiler=<NA>
2024-11-08 11:04:16,133 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:os.name=Linux
2024-11-08 11:04:16,133 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:os.arch=amd64
2024-11-08 11:04:16,134 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:os.version=5.4.0-54-generic
2024-11-08 11:04:16,134 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:user.name=root
2024-11-08 11:04:16,134 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:user.home=/root
2024-11-08 11:04:16,134 INFO  [main] zookeeper.ZooKeeper:100 : Client environment:user.dir=/
2024-11-08 11:04:16,135 INFO  [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x4d5650ae0x0, quorum=localhost:2181, baseZNode=/hbase
2024-11-08 11:04:16,153 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:975 : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2024-11-08 11:04:16,158 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:852 : Socket connection established to localhost/127.0.0.1:2181, initiating session
2024-11-08 11:04:16,165 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:1235 : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1930b7196420007, negotiated timeout = 40000
2024-11-08 11:04:16,646 INFO  [main] common.KylinConfigBase:238 : Kylin Config was updated with kylin.server.cluster-name : kylin_metadata
2024-11-08 11:04:16,744 INFO  [main] util.ZKUtil:165 : zookeeper connection string: localhost:2181 with namespace /kylin/kylin_metadata
2024-11-08 11:04:16,804 INFO  [main] imps.CuratorFrameworkImpl:235 : Starting
2024-11-08 11:04:16,806 INFO  [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=localhost:2181/kylin/kylin_metadata sessionTimeout=120000 watcher=org.apache.curator.ConnectionState@7f0d96f2
2024-11-08 11:04:16,808 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:975 : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2024-11-08 11:04:16,809 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:852 : Socket connection established to localhost/127.0.0.1:2181, initiating session
2024-11-08 11:04:16,812 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:1235 : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1930b7196420008, negotiated timeout = 40000
2024-11-08 11:04:16,817 INFO  [main] util.ZKUtil:169 : new zookeeper Client start: localhost:2181
2024-11-08 11:04:16,820 INFO  [main-EventThread] state.ConnectionStateManager:228 : State change: CONNECTED
2024-11-08 11:04:16,828 INFO  [main] imps.CuratorFrameworkImpl:235 : Starting
2024-11-08 11:04:16,829 INFO  [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=localhost:2181 sessionTimeout=120000 watcher=org.apache.curator.ConnectionState@3fc9504b
2024-11-08 11:04:16,830 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:975 : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2024-11-08 11:04:16,846 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:852 : Socket connection established to localhost/127.0.0.1:2181, initiating session
2024-11-08 11:04:16,850 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:1235 : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1930b7196420009, negotiated timeout = 40000
2024-11-08 11:04:16,850 INFO  [main-EventThread] state.ConnectionStateManager:228 : State change: CONNECTED
2024-11-08 11:04:16,864 INFO  [Curator-Framework-0] imps.CuratorFrameworkImpl:821 : backgroundOperationsLoop exiting
2024-11-08 11:04:16,869 INFO  [main] zookeeper.ZooKeeper:684 : Session: 0x1930b7196420009 closed
2024-11-08 11:04:16,870 INFO  [main-EventThread] zookeeper.ClientCnxn:512 : EventThread shut down
2024-11-08 11:04:16,883 INFO  [main] zookeeper.ZookeeperDistributedLock:114 : 7294@a4709eb6289e acquired lock at /create_htable/kylin_metadata/lock
2024-11-08 11:04:18,300 INFO  [main] client.HBaseAdmin:669 : Created kylin_metadata
2024-11-08 11:04:18,305 INFO  [main] zookeeper.ZookeeperDistributedLock:289 : 7294@a4709eb6289e released lock at /create_htable/kylin_metadata/lock
2024-11-08 11:04:18,436 INFO  [main] hbase.HBaseConnection:267 : connection is null or closed, creating a new one
2024-11-08 11:04:18,437 INFO  [main] zookeeper.RecoverableZooKeeper:120 : Process identifier=hconnection-0x3e10dc6 connecting to ZooKeeper ensemble=localhost:2181
2024-11-08 11:04:18,437 INFO  [main] zookeeper.ZooKeeper:438 : Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x3e10dc60x0, quorum=localhost:2181, baseZNode=/hbase
2024-11-08 11:04:18,442 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:975 : Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2024-11-08 11:04:18,443 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:852 : Socket connection established to localhost/127.0.0.1:2181, initiating session
2024-11-08 11:04:18,445 INFO  [main-SendThread(localhost:2181)] zookeeper.ClientCnxn:1235 : Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1930b719642000a, negotiated timeout = 40000
2024-11-08 11:04:18,461 INFO  [Thread-5] util.ZKUtil:93 : Going to remove 1 cached curator clients
2024-11-08 11:04:18,463 INFO  [Thread-5] util.ZKUtil:78 : CuratorFramework for zkString localhost:2181 is removed due to EXPLICIT
2024-11-08 11:04:18,463 INFO  [Curator-Framework-0] imps.CuratorFrameworkImpl:821 : backgroundOperationsLoop exiting
2024-11-08 11:04:18,464 INFO  [close-hbase-conn] hbase.HBaseConnection:137 : Closing HBase connections...
2024-11-08 11:04:18,465 INFO  [close-hbase-conn] client.ConnectionManager$HConnectionImplementation:1676 : Closing zookeeper sessionid=0x1930b719642000a
2024-11-08 11:04:18,466 INFO  [Thread-5] zookeeper.ZooKeeper:684 : Session: 0x1930b7196420008 closed
2024-11-08 11:04:18,466 INFO  [main-EventThread] zookeeper.ClientCnxn:512 : EventThread shut down
2024-11-08 11:04:18,467 INFO  [main-EventThread] zookeeper.ClientCnxn:512 : EventThread shut down
2024-11-08 11:04:18,468 INFO  [close-hbase-conn] zookeeper.ZooKeeper:684 : Session: 0x1930b719642000a closed
2024-11-08 11:04:18,568 INFO  [close-hbase-conn] client.ConnectionManager$HConnectionImplementation:2068 : Closing master protocol: MasterService
2024-11-08 11:04:18,571 INFO  [close-hbase-conn] client.ConnectionManager$HConnectionImplementation:1676 : Closing zookeeper sessionid=0x1930b7196420007
2024-11-08 11:04:18,573 INFO  [close-hbase-conn] zookeeper.ZooKeeper:684 : Session: 0x1930b7196420007 closed
2024-11-08 11:04:18,573 INFO  [main-EventThread] zookeeper.ClientCnxn:512 : EventThread shut down

A new Kylin instance is started by . To stop it, run 'kylin.sh stop'
Check the log at /opt/kylin/logs/kylin.log
Web UI is at http://a4709eb6289e:7070/kylin

Reusing the Template

Version 3.1.1 above was just an example; other versions follow the same pattern. For instance, here is a Dockerfile for version 2.3.1 that runs correctly after the same round of debugging:

# Use Ubuntu 16.04 as the base image
FROM ubuntu:16.04

# Update package lists and install the required tools and packages
RUN apt-get update && apt-get install -y \
    wget \
    curl \
    tar \
    vim \
    procps \
    findutils \
    openjdk-8-jdk \
    && apt-get clean

# Set environment variables
ENV KYLIN_HOME /opt/kylin
ENV HADOOP_HOME /opt/hadoop
ENV HIVE_HOME /opt/hive
ENV HBASE_HOME /opt/hbase
ENV ZOOKEEPER_HOME /opt/zookeeper
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV HADOOP_CONF_DIR $HADOOP_HOME/etc/hadoop
ENV HADOOP_COMMON_LIB_NATIVE_DIR $HADOOP_HOME/lib/native
ENV HADOOP_OPTS "-Duser.country=US -Duser.language=en"
ENV PATH $KYLIN_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH

# Run as the root user
USER root

# Download and install Hadoop
RUN wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.0/hadoop-2.7.0.tar.gz -P /opt/ && \
    tar -zxvf /opt/hadoop-2.7.0.tar.gz -C /opt/ && \
    mv /opt/hadoop-2.7.0 /opt/hadoop

# Download and install Hive
RUN wget https://archive.apache.org/dist/hive/hive-1.2.0/apache-hive-1.2.0-bin.tar.gz -P /opt/ && \
    tar -zxvf /opt/apache-hive-1.2.0-bin.tar.gz -C /opt/ && \
    mv /opt/apache-hive-1.2.0-bin /opt/hive

# Download and unpack HBase
RUN wget https://archive.apache.org/dist/hbase/1.1.2/hbase-1.1.2-bin.tar.gz -P /opt/ && \
    tar -zxvf /opt/hbase-1.1.2-bin.tar.gz -C /opt/ && \
    mv /opt/hbase-1.1.2 /opt/hbase

# Download and unpack the Kylin binary package
RUN wget https://repo.huaweicloud.com:8443/artifactory/apache-local/kylin/apache-kylin-2.3.1/apache-kylin-2.3.1-hbase1x-bin.tar.gz -P /opt/ && \
    mkdir -p $KYLIN_HOME && \
    tar -zxvf /opt/apache-kylin-2.3.1-hbase1x-bin.tar.gz -C $KYLIN_HOME --strip-components=1

# Create the Spark history directory and open up its permissions
RUN mkdir -p /tmp/kylin/spark-history && \
    chmod -R 777 /tmp/kylin/spark-history

# Create the /kylin directory and open up its permissions
RUN mkdir -p /kylin && chmod -R 777 /kylin

# If you have custom configuration files, copy them into the container
# COPY conf/ $KYLIN_HOME/conf/

# Patch Kylin's configuration; otherwise startup fails with "Failed to create hdfs:///kylin/spark-history"
RUN echo "kylin.engine.spark-conf.spark.eventLog.dir=file:///tmp/kylin/spark-history" >> $KYLIN_HOME/conf/kylin.properties && \
    echo "kylin.engine.spark-conf.spark.history.fs.logDirectory=file:///tmp/kylin/spark-history" >> $KYLIN_HOME/conf/kylin.properties

# Copy the default template into place; otherwise Hive reports that hive-site.xml is missing from /opt/hive/conf/
RUN cp /opt/hive/conf/hive-default.xml.template /opt/hive/conf/hive-site.xml

# Replace the ${system:...} placeholders in hive-site.xml with a local path; otherwise Hive still fails to start
RUN awk '{gsub(/\${system:java.io.tmpdir}\/\${system:user.name}/, "/opt/hive/iotmp"); gsub(/\${system:java.io.tmpdir}\/\${hive.session.id}_resources/, "/opt/hive/iotmp"); print}' /opt/hive/conf/hive-site.xml > /opt/hive/conf/hive-site.xml.new && \
    mv /opt/hive/conf/hive-site.xml.new /opt/hive/conf/hive-site.xml && \
    mkdir -p /opt/hive/iotmp

# Container start command: bring up HBase, then Hadoop, then Kylin, and keep the container alive with tail -f /dev/null
CMD ["sh", "-c", "/opt/hbase/bin/start-hbase.sh && /opt/hadoop/sbin/start-all.sh && $KYLIN_HOME/bin/kylin.sh start && tail -f /dev/null"]
#CMD ["sh", "-c", "sleep 65535"]
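The awk substitution step above can be rehearsed outside the container against a throwaway file. This sketch uses invented paths for the demo; the regexes below escape the braces explicitly, which matches the same placeholders but is more portable across awk implementations:

```shell
# Stand-alone rehearsal of the hive-site.xml placeholder substitution.
# The two <value> lines mimic the entries found in hive-default.xml.template.
tmp=$(mktemp -d)
cat > "$tmp/hive-site.xml" <<'EOF'
<value>${system:java.io.tmpdir}/${system:user.name}</value>
<value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>
EOF

# Same rewrite as the Dockerfile step: map both placeholder paths to /opt/hive/iotmp.
awk '{gsub(/\$\{system:java\.io\.tmpdir\}\/\$\{system:user\.name\}/, "/opt/hive/iotmp");
      gsub(/\$\{system:java\.io\.tmpdir\}\/\$\{hive\.session\.id\}_resources/, "/opt/hive/iotmp");
      print}' "$tmp/hive-site.xml" > "$tmp/hive-site.xml.new"

patched=$(cat "$tmp/hive-site.xml.new")
echo "$patched"   # both <value> lines now read /opt/hive/iotmp
```

Running this before a full image rebuild is a quick way to confirm the pattern actually matches, since a silent non-match would leave the placeholders in place and Hive would still fail at startup.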

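A note on the --strip-components=1 flag used when unpacking Kylin: it drops the versioned top-level directory inside the archive so the contents land directly in $KYLIN_HOME instead of $KYLIN_HOME/apache-kylin-2.3.1-hbase1x-bin/. A minimal sketch with a throwaway archive (names here are invented for the demo; assumes GNU or BSD tar):

```shell
# Build a small tarball whose contents sit under a versioned top-level dir,
# mimicking the layout of apache-kylin-2.3.1-hbase1x-bin.tar.gz.
work=$(mktemp -d)
mkdir -p "$work/apache-kylin-demo-bin/bin"
echo 'echo kylin' > "$work/apache-kylin-demo-bin/bin/kylin.sh"
tar -czf "$work/kylin-demo.tar.gz" -C "$work" apache-kylin-demo-bin

# Extract into a fresh directory, stripping the top-level component,
# exactly as the Dockerfile does into $KYLIN_HOME.
demo_home="$work/kylin"
mkdir -p "$demo_home"
tar -zxf "$work/kylin-demo.tar.gz" -C "$demo_home" --strip-components=1

ls "$demo_home/bin/kylin.sh"   # lands at <home>/bin, not <home>/apache-kylin-demo-bin/bin
```

This is why the Dockerfile can set KYLIN_HOME to /opt/kylin up front and have bin/kylin.sh resolve there without an extra mv step.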