Scenarios for adding a node
1. Cluster expansion
2. A damaged node was removed from the cluster and later needs to be restored
Environment and requirements
In a 3-node RAC, node 3's local disk failed and its operating system was destroyed, so the OS had to be reinstalled. The damaged node 3 was removed from the cluster and must now be added back.
Database version: 11.2.0.4 RAC, 3 nodes
Virtualized OS: CentOS 7.9
Requirement: rejoin the removed, previously damaged third node racdb03 to the cluster
Host           | NIC   | Address type      | IP address      | Name               |
racdb01        | ens32 | public IP         | 192.168.40.165  | racdb01            |
               | ens34 | private IP        | 192.168.183.165 | racdb01_privatevip |
               | ens32 | virtual IP        | 192.168.40.16   | racdb01_vitureip   |
racdb02        | ens32 | public IP         | 192.168.40.175  | racdb02            |
               | ens34 | private IP        | 192.168.183.175 | racdb02_privatevip |
               | ens32 | virtual IP        | 192.168.40.17   | racdb02_vitureip   |
racdb03        | ens32 | public IP         | 192.168.40.185  | racdb03            |
               | ens34 | private IP        | 192.168.183.185 | racdb03_privatevip |
               | ens32 | virtual IP        | 192.168.40.18   | racdb03_vitureip   |
cluster01-scan | --    | SCAN VIP (public) | 192.168.40.200  | racdbscan01        |
Preparation before adding the node
Linux server configuration on node 3
Configure the server parameters before adding the node; these steps are performed on node 3 only.
Reinstall the operating system
The OS version and the memory, CPU, and disk configuration should match the two surviving nodes, or exceed them.
Reconfigure the server environment
Time zone
Set the OS time zone to the customer's standard; in China this is usually UTC+8, "Asia/Shanghai".
Set the OS time zone before installing GRID; otherwise GRID records the wrong OS time zone, and the database and listener time zones end up incorrect as a result.
--The new node must match the two existing nodes
[root@racdb03 ~]# timedatectl status
Local time: Fri 2024-05-24 16:05:46 CST
Universal time: Fri 2024-05-24 08:05:46 UTC
RTC time: Fri 2024-05-24 08:05:46
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
Change the OS time zone:
timedatectl set-timezone "Asia/Shanghai"
Host name
Make sure the new host's name matches the node-3 entry in /etc/hosts on the two surviving nodes.
--Check /etc/hosts on nodes 1 and 2
[root@racdb01 install]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.40.165 racdb01
192.168.40.175 racdb02
192.168.40.185 racdb03
192.168.183.165 racdb01_privatevip
192.168.183.175 racdb02_privatevip
192.168.183.185 racdb03_privatevip
192.168.40.16 racdb01_vitureip
192.168.40.17 racdb02_vitureip
192.168.40.18 racdb03_vitureip
192.168.40.200 racdbscan01
--Set node 3's host name
hostnamectl set-hostname racdb03
exec bash
hosts file
Make sure the new host's /etc/hosts matches the other two nodes.
cat >> /etc/hosts << EOF
192.168.40.165 racdb01
192.168.40.175 racdb02
192.168.40.185 racdb03
192.168.183.165 racdb01_privatevip
192.168.183.175 racdb02_privatevip
192.168.183.185 racdb03_privatevip
192.168.40.16 racdb01_vitureip
192.168.40.17 racdb02_vitureip
192.168.40.18 racdb03_vitureip
192.168.40.200 racdbscan01 ## Note: the cluster name must not exceed 15 characters and host names must not contain uppercase letters.
EOF
Locale settings
echo "export LANG=en_US" >> ~/.bash_profile
source ~/.bash_profile
Create users, groups, and directories
Keep the user IDs, group IDs, and directory layout identical to the other nodes.
--Create the groups and users
/usr/sbin/groupadd -g 50001 oinstall
/usr/sbin/groupadd -g 50002 dba
/usr/sbin/groupadd -g 50003 oper
/usr/sbin/groupadd -g 50004 asmadmin
/usr/sbin/groupadd -g 50005 asmoper
/usr/sbin/groupadd -g 50006 asmdba
/usr/sbin/useradd -u 60001 -g oinstall -G dba,asmdba,oper oracle
/usr/sbin/useradd -u 60002 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
echo grid | passwd --stdin grid
echo oracle | passwd --stdin oracle
--Create the directories
mkdir -p /oracle/app/grid
mkdir -p /oracle/app/11.2.0/grid
chown -R grid:oinstall /oracle
mkdir -p /oracle/app/oraInventory
chown -R grid:oinstall /oracle/app/oraInventory
mkdir -p /oracle/app/oracle
chown -R oracle:oinstall /oracle/app/oracle
chmod -R 775 /oracle
Configure yum and install packages
#Configure a local yum repository
mount /dev/cdrom /mnt
cd /etc/yum.repos.d
mkdir bk
mv *.repo bk/
cat > /etc/yum.repos.d/Centos7.repo << "EOF"
[local]
name=Centos7
baseurl=file:///mnt
gpgcheck=0
enabled=1
EOF
cat /etc/yum.repos.d/Centos7.repo
#Install the required packages
yum -y install autoconf
yum -y install automake
yum -y install binutils
yum -y install binutils-devel
yum -y install bison
yum -y install cpp
yum -y install dos2unix
yum -y install gcc
yum -y install gcc-c++
yum -y install lrzsz
yum -y install python-devel
yum -y install compat-db*
yum -y install compat-gcc-34
yum -y install compat-gcc-34-c++
yum -y install compat-libcap1
yum -y install compat-libstdc++-33
yum -y install compat-libstdc++-33.i686
yum -y install glibc-*
yum -y install glibc-*.i686
yum -y install libXpm-*.i686
yum -y install libXp.so.6
yum -y install libXt.so.6
yum -y install libXtst.so.6
yum -y install libXext
yum -y install libXext.i686
yum -y install libXtst
yum -y install libXtst.i686
yum -y install libX11
yum -y install libX11.i686
yum -y install libXau
yum -y install libXau.i686
yum -y install libxcb
yum -y install libxcb.i686
yum -y install libXi
yum -y install libXi.i686
yum -y install libXtst
yum -y install libstdc++-docs
yum -y install libgcc_s.so.1
yum -y install libstdc++.i686
yum -y install libstdc++-devel
yum -y install libstdc++-devel.i686
yum -y install libaio
yum -y install libaio.i686
yum -y install libaio-devel
yum -y install libaio-devel.i686
yum -y install libXp
yum -y install libaio-devel
yum -y install numactl
yum -y install numactl-devel
yum -y install make
yum -y install sysstat
yum -y install unixODBC
yum -y install unixODBC-devel
yum -y install elfutils-libelf-devel-0.97
yum -y install elfutils-libelf-devel
yum -y install redhat-lsb-core
yum -y install unzip
yum -y install *vnc*
yum -y install perl-Env
# Install the Linux graphical desktop
yum groupinstall -y "X Window System"
yum groupinstall -y "GNOME Desktop" "Graphical Administration Tools"
#Verify the installed packages (append the package names to query after the format string)
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils gcc gcc-c++ glibc libaio libaio-devel libstdc++ make sysstat
Install the extra dependency RPMs
rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm
If it reports a conflict with ksh, remove ksh first and then install the pdksh package:
rpm -evh ksh-20120801-139.el7.x86_64
rpm -ivh pdksh-5.2.14-37.el5.x86_64.rpm
System parameter changes
Resource limits (/etc/security/limits.conf)
vi /etc/security/limits.conf
#ORACLE SETTING
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 65536
grid hard nofile 65536
grid soft stack 32768
grid hard stack 32768
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 65536
oracle hard nofile 65536
oracle soft stack 32768
oracle hard stack 32768
oracle hard memlock 2000000
oracle soft memlock 2000000
ulimit -a
# nproc: limit on the number of processes a user may create
# nofile: file descriptors, i.e. how many files a single process may hold open at once
# memlock: locked memory, the maximum memory the oracle user may pin, in KB
This host has 4 GB of physical RAM (roughly 1 GB for the OS, 1 GB for grid, leaving about 2 GB for oracle); memlock must stay below physical memory.
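The memlock sizing above can be sketched as a quick calculation. This is a sketch only; the 4 GB total and the 1 GB OS/grid reservations are assumptions taken from this environment.

```shell
# memlock sizing sketch for this 4 GB host (values in KB, as limits.conf expects)
total_mb=4096            # physical RAM (assumption: this environment)
os_mb=1024               # reserved for the OS
grid_mb=1024             # reserved for grid
oracle_memlock_kb=$(( (total_mb - os_mb - grid_mb) * 1024 ))
echo "$oracle_memlock_kb"   # 2097152 KB; the limits.conf above rounds this down to 2000000
```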
Set the nproc override
echo "* - nproc 16384" > /etc/security/limits.d/90-nproc.conf
Make PAM enforce the per-user limits at login
echo "session required pam_limits.so" >> /etc/pam.d/login
cat /etc/pam.d/login
Kernel parameters
vi /etc/sysctl.conf
#ORACLE SETTING
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.panic_on_oops = 1
vm.nr_hugepages = 868
kernel.shmmax = 1610612736
kernel.shmall = 393216
kernel.shmmni = 4096
sysctl -p
Parameter notes:
--kernel.panic_on_oops = 1
Panic instead of continuing when the kernel hits an oops.
--vm.nr_hugepages = 868
Huge pages; effectively mandatory once physical memory exceeds 8 GB.
Rule of thumb: sga_max_size/2MB + (100~500) = 1536MB/2MB + 100 = 868
The huge page pool must be larger than sga_max_size.
--kernel.shmmax = 1610612736
Maximum size of a single shared memory segment; it must be able to hold the entire SGA, so shmmax > SGA.
SGA + PGA < 80% of physical memory
SGA_max < 80% of that 80%
PGA_max < 20% of that 80%
--kernel.shmall = 393216
Total shared memory, in pages = kernel.shmmax / PAGESIZE
getconf PAGESIZE  --page size, 4096 here
--kernel.shmmni = 4096
Number of shared memory segments; each instance uses one segment
--Physical memory (KB)
os_memory_total=$(awk '/MemTotal/{print $2}' /proc/meminfo)
--Page size, used to convert memory totals into pages
pagesize=$(getconf PAGE_SIZE)
min_free_kbytes=$(( os_memory_total / 250 ))
shmall=$(( (os_memory_total - 1) * 1024 / pagesize ))
shmmax=$(( os_memory_total * 1024 - 1 ))
# If shmall is below 2097152, raise it to 2097152
(( shmall < 2097152 )) && shmall=2097152
# If shmmax is below 4294967295, raise it to 4294967295
(( shmmax < 4294967295 )) && shmmax=4294967295
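The vm.nr_hugepages rule of thumb above (sga_max_size / huge page size + 100~500) can be sketched the same way. The 1536 MB SGA and the 2 MB huge page size are assumptions taken from this environment.

```shell
# huge page sizing sketch
sga_mb=1536              # sga_max_size (assumption: this environment)
hugepage_kb=2048         # Hugepagesize from /proc/meminfo; 2 MB pages on x86_64
pages=$(( sga_mb * 1024 / hugepage_kb + 100 ))
echo "vm.nr_hugepages = $pages"   # 768 + 100 = 868, matching the sysctl.conf above
```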
Disable transparent huge pages
cat /proc/meminfo
cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
vi /etc/rc.d/rc.local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
chmod +x /etc/rc.d/rc.local
Disable NUMA
numactl --hardware
vim /etc/default/grub
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet numa=off"
grub2-mkconfig -o /boot/grub2/grub.cfg
#vi /boot/grub/grub.conf
#kernel /boot/vmlinuz-2.6.18-128.1.16.0.1.el5 root=LABEL=DBSYS ro bootarea=dbsys rhgb quiet console=ttyS0,115200n8 console=tty1 crashkernel=128M@16M numa=off
Boot to the text console by default
systemctl set-default multi-user.target
Size /dev/shm
[root@racdb03 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 93G 1.9G 91G 2% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
#/dev/shm defaults to half of physical memory; we set it a little larger
echo "tmpfs /dev/shm tmpfs defaults,size=3072m 0 0" >>/etc/fstab
mount -o remount /dev/shm
[root@racdb03 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 93G 2.3G 91G 3% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 3.0G 0 3.0G 0% /dev/shm
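The size=3072m used above is roughly 75% of this 4 GB host's RAM (against the 50% tmpfs default); as a sketch, with the 4 GB MemTotal an assumption from this environment:

```shell
# /dev/shm sizing sketch: ~75% of RAM instead of the 50% tmpfs default
mem_kb=4194304                         # MemTotal for a 4 GB host (assumption)
shm_mb=$(( mem_kb * 3 / 4 / 1024 ))    # -> 3072
echo "tmpfs /dev/shm tmpfs defaults,size=${shm_mb}m 0 0"
```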
Check or configure swap space
If swap >= 2 GB, skip this step.
If swap = 0, do the following:
# Create a 2 GB file at /swapfile (bs=1M avoids dd allocating a single huge buffer)
dd if=/dev/zero of=/swapfile bs=1M count=2048
# Restrict the file permissions to 0600
chmod 600 /swapfile
# Format the file as swap
mkswap /swapfile
# Enable the swap file
swapon /swapfile
# Add the swap file to /etc/fstab so it is activated on reboot
echo "/swapfile none swap sw 0 0" >>/etc/fstab
swapon -a   # note: mount -a does not activate swap entries
--Check memory; swap is now present
[root@racdb03 tmp]# free -g
total used free shared buff/cache available
Mem: 3 1 1 0 0 1
Swap: 3 0 3
Disable SELinux and the firewall
#1. Disable SELinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
#2. Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
Disable NTP
--Stop the NTP service
systemctl stop ntpd
systemctl disable ntpd
--Move the configuration file aside
mv /etc/ntp.conf /etc/ntp.conf_bak_20240521
--The three hosts' clocks must match; if they already do, skip this step
date -s 'Sat Aug 26 23:18:15 CST 2023'
Disable DNS
This test environment does not use DNS, so moving resolv.conf aside is enough (or simply ignore the related check failure later).
mv /etc/resolv.conf /etc/resolv.conf_bak
Configure the grid/oracle user environment variables
grid user
su - grid
#On node 3:
cat >> ~/.bash_profile << "EOF"
PS1="[`whoami`@`hostname`:"'$PWD]$'
export PS1
umask 022
#alias sqlplus="rlwrap sqlplus"
export TMP=/tmp
export LANG=en_US
export TMPDIR=$TMP
ORACLE_SID=+ASM3; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_BASE=/oracle/app/grid; export ORACLE_BASE
ORACLE_HOME=/oracle/app/11.2.0/grid; export ORACLE_HOME
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
oracle user
su - oracle
#On node 3:
cat >> ~/.bash_profile << "EOF"
PS1="[`whoami`@`hostname`:"'$PWD]$'
#alias sqlplus="rlwrap sqlplus"
#alias rman="rlwrap rman"
export PS1
export TMP=/tmp
export LANG=en_US
export TMPDIR=$TMP
export ORACLE_UNQNAME=rac_db
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=racdb3; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;export NLS_LANG
PATH=.:$PATH:$HOME/bin:$ORACLE_BASE/product/11.2.0/db_1/bin:$ORACLE_HOME/bin; export PATH
THREADS_FLAG=native; export THREADS_FLAG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
Configure SSH user equivalence
Perform the following on the new node. racdb01 and racdb02 already have SSH equivalence between themselves, so the merged authorized_keys file must be copied back to both of them, for the grid user and the oracle user alike.
If this is skipped, the pre-add verification run later from a surviving node fails with PRVF-4007.
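The ssh-keygen call in the steps below prompts for a passphrase; for scripted runs it can be made non-interactive. A sketch, using a scratch directory in place of /home/grid/.ssh:

```shell
# non-interactive key generation sketch; $dir stands in for /home/grid/.ssh
dir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$dir/id_rsa" -q      # empty passphrase, no prompts
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"  # append this node's public key
chmod 700 "$dir"; chmod 600 "$dir/authorized_keys"
```

The scp/ssh distribution steps that follow are unchanged; only the key generation is scripted here.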
SSH equivalence for the grid user
su - grid
mkdir /home/grid/.ssh
cd /home/grid/.ssh
ssh-keygen -t rsa
scp grid@racdb01:/home/grid/.ssh/authorized_keys /home/grid/.ssh/
cat id_rsa.pub >> authorized_keys
scp authorized_keys grid@racdb01:/home/grid/.ssh/authorized_keys
scp authorized_keys grid@racdb02:/home/grid/.ssh/authorized_keys
ssh racdb01
ssh racdb02
SSH equivalence for the oracle user
su - oracle
mkdir /home/oracle/.ssh
cd /home/oracle/.ssh
ssh-keygen -t rsa
scp oracle@racdb01:/home/oracle/.ssh/authorized_keys /home/oracle/.ssh/
cat id_rsa.pub >> authorized_keys
scp authorized_keys oracle@racdb01:/home/oracle/.ssh/authorized_keys
scp authorized_keys oracle@racdb02:/home/oracle/.ssh/authorized_keys
ssh racdb01
ssh racdb02
Run the pre-add verification (on any surviving node)
In this document the verification is run on node 1, racdb01.
Run as the grid user:
[grid@racdb01:/home/grid]$cluvfy stage -pre nodeadd -n racdb03 -fixup
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "racdb01"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Checking shared resources...
Checking CRS home location...
"/oracle/app/11.2.0/grid" is not shared
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "ens32"
Node connectivity passed for interface "ens32"
TCP connectivity check passed for subnet "192.168.40.0"
Check: Node connectivity for interface "ens34"
Node connectivity passed for interface "ens34"
TCP connectivity check passed for subnet "192.168.183.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.40.0".
Subnet mask consistency check passed for subnet "192.168.183.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.40.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.40.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.183.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.183.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "racdb03:/oracle/app/11.2.0/grid,racdb03:/tmp"
Free disk space check passed for "racdb01:/oracle/app/11.2.0/grid,racdb01:/tmp"
Check for multiple users with UID value 60002 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "pdksh"
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Checking OCR integrity...
OCR integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racdb01,racdb03
File "/etc/resolv.conf" is not consistent across nodes
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Pre-check for node addition was unsuccessful on all the nodes.
Notes:
1. Some of the failures above can be ignored: the /etc/resolv.conf errors and the DNS server timeout are harmless here because DNS is not in use.
2. If kernel parameter checks fail, the output asks you to run /tmp/CVU_11.2.0.4.0_grid/runfixup.sh as root on racdb03 to fix the failed items automatically, as shown below:
Fixup information has been generated for following node(s):
host03
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.4.0_grid/runfixup.sh'
3. If SSH equivalence is misconfigured, the check reports the following error:
PRVF-4007. At that point, configure SSH equivalence: perform the following on racdb03. racdb01 and racdb02 already have equivalence between themselves, so finish by copying authorized_keys back to both of them, for the grid user and the oracle user alike.
--SSH equivalence for the grid user
su - grid
mkdir /home/grid/.ssh
cd /home/grid/.ssh
ssh-keygen -t rsa
scp grid@racdb01:/home/grid/.ssh/authorized_keys /home/grid/.ssh/
cat id_rsa.pub >> authorized_keys
scp authorized_keys grid@racdb01:/home/grid/.ssh/authorized_keys
scp authorized_keys grid@racdb02:/home/grid/.ssh/authorized_keys
ssh racdb01
ssh racdb02
--SSH equivalence for the oracle user
su - oracle
mkdir /home/oracle/.ssh
cd /home/oracle/.ssh
ssh-keygen -t rsa
scp oracle@racdb01:/home/oracle/.ssh/authorized_keys /home/oracle/.ssh/
cat id_rsa.pub >> authorized_keys
scp authorized_keys oracle@racdb01:/home/oracle/.ssh/authorized_keys
scp authorized_keys oracle@racdb02:/home/oracle/.ssh/authorized_keys
ssh racdb01
ssh racdb02
Adding the node
Add the node at the Grid Infrastructure layer
Export the IGNORE_PREADDNODE_CHECKS variable (on any surviving node)
In this document the variable is exported on node 1, racdb01.
Run as the grid user. All nodes must already have grid SSH equivalence in place, or the add fails partway through.
Export IGNORE_PREADDNODE_CHECKS first, otherwise the addNode.sh that follows refuses to run:
[grid@racdb01:/home/grid]$export IGNORE_PREADDNODE_CHECKS=Y
Run addNode.sh on a surviving node to copy the software
In this document addNode.sh is run on node 1, racdb01:
[grid@racdb01:/home/grid]$$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={racdb03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racdb03_vitureip}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4005 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes RACDB02,racdb03 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /oracle/app/11.2.0/grid
New Nodes
Space Requirements
New Nodes
racdb03
/: Required 5.29GB : Available 22.29GB
Installed Products
Product Names
Oracle Grid Infrastructure 11g 11.2.0.4.0
Java Development Kit 1.5.0.51.10
Installer SDK Component 11.2.0.4.0
Oracle One-Off Patch Installer 11.2.0.3.4
Oracle Universal Installer 11.2.0.4.0
Oracle RAC Required Support Files-HAS 11.2.0.4.0
Oracle USM Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.5
Oracle DBCA Deconfiguration 11.2.0.4.0
Oracle RAC Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Server) 11.2.0.4.0
Installation Plugin Files 11.2.0.4.0
Universal Storage Manager Files 11.2.0.4.0
Oracle Text Required Support Files 11.2.0.4.0
Automatic Storage Management Assistant 11.2.0.4.0
Oracle Database 11g Multimedia Files 11.2.0.4.0
Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
Oracle Core Required Support Files 11.2.0.4.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Client) 11.2.0.4.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.4.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.4.0
Oracle JDBC/OCI Instant Client 11.2.0.4.0
Oracle Multimedia Client Option 11.2.0.4.0
LDAP Required Support Files 11.2.0.4.0
Character Set Migration Utility 11.2.0.4.0
Perl Interpreter 5.10.0.0.2
PL/SQL Embedded Gateway 11.2.0.4.0
OLAP SQL Scripts 11.2.0.4.0
Database SQL Scripts 11.2.0.4.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.4.0
SQL*Plus Files for Instant Client 11.2.0.4.0
Oracle Net Required Support Files 11.2.0.4.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.4.0
RDBMS Required Support Files Runtime 11.2.0.4.0
XML Parser for Java 11.2.0.4.0
Oracle Security Developer Tools 11.2.0.4.0
Oracle Wallet Manager 11.2.0.4.0
Enterprise Manager plugin Common Files 11.2.0.4.0
Platform Required Support Files 11.2.0.4.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.4.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.5
Deinstallation Tool 11.2.0.4.0
Oracle Java Client 11.2.0.4.0
Cluster Verification Utility Files 11.2.0.4.0
Oracle Notification Service (eONS) 11.2.0.4.0
Oracle LDAP administration 11.2.0.4.0
Cluster Verification Utility Common Files 11.2.0.4.0
Oracle Clusterware RDBMS Files 11.2.0.4.0
Oracle Locale Builder 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Buildtools Common Files 11.2.0.4.0
HAS Common Files 11.2.0.4.0
SQL*Plus Required Support Files 11.2.0.4.0
XDK Required Support Files 11.2.0.4.0
Agent Required Support Files 10.2.0.4.5
Parser Generator Required Support Files 11.2.0.4.0
Precompiler Required Support Files 11.2.0.4.0
Installation Common Files 11.2.0.4.0
Required Support Files 11.2.0.4.0
Oracle JDBC/THIN Interfaces 11.2.0.4.0
Oracle Multimedia Locator 11.2.0.4.0
Oracle Multimedia 11.2.0.4.0
Assistant Common Files 11.2.0.4.0
Oracle Net 11.2.0.4.0
PL/SQL 11.2.0.4.0
HAS Files for DB 11.2.0.4.0
Oracle Recovery Manager 11.2.0.4.0
Oracle Database Utilities 11.2.0.4.0
Oracle Notification Service 11.2.0.3.0
SQL*Plus 11.2.0.4.0
Oracle Netca Client 11.2.0.4.0
Oracle Advanced Security 11.2.0.4.0
Oracle JVM 11.2.0.4.0
Oracle Internet Directory Client 11.2.0.4.0
Oracle Net Listener 11.2.0.4.0
Cluster Ready Services Files 11.2.0.4.0
Oracle Database 11g 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Saturday, May 25, 2024 10:59:04 AM CST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Saturday, May 25, 2024 10:59:07 AM CST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Saturday, May 25, 2024 11:01:19 AM CST)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/oracle/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'racdb03'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/oracle/app/oraInventory/orainstRoot.sh #On nodes racdb03
/oracle/app/11.2.0/grid/root.sh #On nodes racdb03
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /oracle/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
Run the scripts (on racdb03)
--Run /oracle/app/oraInventory/orainstRoot.sh
[root@racdb03 ~]# /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.
--Run /oracle/app/11.2.0/grid/root.sh
[root@racdb03 ~]# /oracle/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node racdb01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Configuration succeeded. From any node, verify that the new node joined GI and that the background resources are healthy.
The output now contains the racdb03 entries below, confirming that GI was pushed to and installed on the new node:
ora.racdb03.vip
1 ONLINE ONLINE racdb03
[grid@racdb01:/home/grid]$crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE racdb01
ONLINE ONLINE racdb02
ONLINE ONLINE racdb03
ora.LISTENER.lsnr
ONLINE ONLINE racdb01
ONLINE ONLINE racdb02
ONLINE ONLINE racdb03
ora.OCR.dg
ONLINE ONLINE racdb01
ONLINE ONLINE racdb02
ONLINE ONLINE racdb03
ora.asm
ONLINE ONLINE racdb01 Started
ONLINE ONLINE racdb02 Started
ONLINE ONLINE racdb03 Started
ora.gsd
OFFLINE OFFLINE racdb01
OFFLINE OFFLINE racdb02
OFFLINE OFFLINE racdb03
ora.net1.network
ONLINE ONLINE racdb01
ONLINE ONLINE racdb02
ONLINE ONLINE racdb03
ora.ons
ONLINE ONLINE racdb01
ONLINE ONLINE racdb02
ONLINE ONLINE racdb03
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racdb02
ora.cvu
1 ONLINE ONLINE racdb01
ora.oc4j
1 ONLINE ONLINE racdb01
ora.racdb.db
1 ONLINE ONLINE racdb01 Open
2 ONLINE ONLINE racdb02 Open
ora.racdb01.vip
1 ONLINE ONLINE racdb01
ora.racdb02.vip
1 ONLINE ONLINE racdb02
ora.racdb03.vip
1 ONLINE ONLINE racdb03
ora.scan1.vip
1 ONLINE ONLINE racdb02
Troubleshooting
root.sh fails
--root.sh error output
[root@racdb03 ~]# /oracle/app/11.2.0/grid/root.sh
......
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2024-05-24 13:22:36.710:
[mdnsd(2141)]CRS-5602:mDNS service stopping by request.
2024-05-24 13:22:36.715:
[ctssd(2314)]CRS-2405:The Cluster Time Synchronization Service on host racdb03 is shutdown by user
2024-05-24 13:22:47.024:
[cssd(2222)]CRS-1603:CSSD on node racdb03 shutdown by user.
2024-05-24 13:22:47.138:
[ohasd(1656)]CRS-2767:Resource state recovery not attempted for 'ora.cssdmonitor' as its target state is OFFLINE
2024-05-24 13:22:47.139:
[ohasd(1656)]CRS-2769:Unable to failover resource 'ora.cssdmonitor'.
2024-05-24 13:22:49.396:
[gpnpd(2152)]CRS-2329:GPNPD on node racdb03 shutdown.
[client(16231)]CRS-10001:24-May-24 14:39 ACFS-9459: ADVM/ACFS is not supported on this OS version: 'centos-release-7-9.2009.0.el7.centos.x86_64
'
[client(16233)]CRS-10001:24-May-24 14:39 ACFS-9201: Not Supported
[client(16989)]CRS-10001:24-May-24 15:00 ACFS-9459: ADVM/ACFS is not supported on this OS version: 'centos-release-7-9.2009.0.el7.centos.x86_64
'
[client(16991)]CRS-10001:24-May-24 15:00 ACFS-9201: Not Supported
2024-05-25 11:09:29.764:
[client(7962)]CRS-2101:The OLR was formatted using version 3.
/oracle/app/11.2.0/grid/perl/bin/perl -I/oracle/app/11.2.0/grid/perl/lib -I/oracle/app/11.2.0/grid/crs/install /oracle/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
--Cause
CentOS 7 uses systemd rather than init.d to run and restart processes, while root.sh starts the ohasd process the traditional init.d way.
--Fix
On CentOS 7, register ohasd as a systemd service before running root.sh.
#As root, create the service unit file
cat > /usr/lib/systemd/system/ohas.service << "EOF"
[Unit]
Description=Oracle High Availability Services
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.ohasd run
Type=simple
Restart=always
[Install]
WantedBy=multi-user.target
EOF
chmod 644 /usr/lib/systemd/system/ohas.service
systemctl daemon-reload
systemctl enable ohas.service
systemctl start ohas.service
systemctl status ohas.service
#Check the ohas service status
[root@racdb01 ~]# systemctl status ohas.service
* ohas.service - Oracle High Availability Services
Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2023-08-27 18:48:57 CST; 6s ago
Main PID: 91992 (init.ohasd)
Tasks: 1
CGroup: /system.slice/ohas.service
`-91992 /bin/sh /etc/init.d/init.ohasd run
Aug 27 18:48:57 racdb01 systemd[1]: Started Oracle High Availability Services.
[root@racdb01 ~]#
#Re-run the root script
[root@racdb03 ~]# /oracle/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node racdb01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Add the node at the RAC (database) layer (on any surviving node)
Export the IGNORE_PREADDNODE_CHECKS variable
In this document the variable is exported on node 1, racdb01.
Run as the oracle user. All nodes must already have oracle SSH equivalence in place, or the add fails partway through.
Export IGNORE_PREADDNODE_CHECKS first, otherwise the addNode.sh that follows refuses to run:
[grid@racdb01:/home/grid]$su - oracle
Password:
Last login: Sat May 25 10:33:20 CST 2024 from 192.168.40.185 on pts/2
[oracle@racdb01:/home/oracle]$export IGNORE_PREADDNODE_CHECKS=Y
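Before running addNode.sh, the oracle-user SSH equivalence mentioned above can be verified quickly. A sketch using this document's hostnames; each command must return without prompting for a password:

```shell
# Run as the oracle user on racdb01; hostnames follow this document's environment.
for node in racdb01 racdb02 racdb03; do
  ssh "$node" "hostname; date"   # must complete with no password prompt
done
```

If any node prompts for a password, fix the SSH equivalence before continuing, or addNode.sh will fail partway through the copy.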
Run the add-node operation on a surviving node to copy the software
In this document, the add-node operation is run on node 1 (racdb01), which copies the software to the new node.
[oracle@racdb01:/home/oracle]$$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={racdb03}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3943 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes racdb02,racdb03 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /oracle/app/oracle/product/11.2.0/db_1
New Nodes
Space Requirements
New Nodes
racdb03
/: Required 4.35GB : Available 21.54GB
Installed Products
Product Names
Oracle Database 11g 11.2.0.4.0
Java Development Kit 1.5.0.51.10
Installer SDK Component 11.2.0.4.0
Oracle One-Off Patch Installer 11.2.0.3.4
Oracle Universal Installer 11.2.0.4.0
Oracle USM Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Oracle DBCA Deconfiguration 11.2.0.4.0
Oracle RAC Deconfiguration 11.2.0.4.0
Oracle Database Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Client 10.3.2.1.0
Oracle Configuration Manager 10.3.8.1.0
Oracle ODBC Driverfor Instant Client 11.2.0.4.0
LDAP Required Support Files 11.2.0.4.0
SSL Required Support Files for InstantClient 11.2.0.4.0
Bali Share 1.1.18.0.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
Oracle Real Application Testing 11.2.0.4.0
Oracle Database Vault J2EE Application 11.2.0.4.0
Oracle Label Security 11.2.0.4.0
Oracle Data Mining RDBMS Files 11.2.0.4.0
Oracle OLAP RDBMS Files 11.2.0.4.0
Oracle OLAP API 11.2.0.4.0
Platform Required Support Files 11.2.0.4.0
Oracle Database Vault option 11.2.0.4.0
Oracle RAC Required Support Files-HAS 11.2.0.4.0
SQL*Plus Required Support Files 11.2.0.4.0
Oracle Display Fonts 9.0.2.0.0
Oracle Ice Browser 5.2.3.6.0
Oracle JDBC Server Support Package 11.2.0.4.0
Oracle SQL Developer 11.2.0.4.0
Oracle Application Express 11.2.0.4.0
XDK Required Support Files 11.2.0.4.0
RDBMS Required Support Files for Instant Client 11.2.0.4.0
SQLJ Runtime 11.2.0.4.0
Database Workspace Manager 11.2.0.4.0
RDBMS Required Support Files Runtime 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Exadata Storage Server 11.2.0.1.0
Provisioning Advisor Framework 10.2.0.4.3
Enterprise Manager Database Plugin -- Repository Support 11.2.0.4.0
Enterprise Manager Repository Core Files 10.2.0.4.5
Enterprise Manager Database Plugin -- Agent Support 11.2.0.4.0
Enterprise Manager Grid Control Core Files 10.2.0.4.5
Enterprise Manager Common Core Files 10.2.0.4.5
Enterprise Manager Agent Core Files 10.2.0.4.5
RDBMS Required Support Files 11.2.0.4.0
regexp 2.1.9.0.0
Agent Required Support Files 10.2.0.4.5
Oracle 11g Warehouse Builder Required Files 11.2.0.4.0
Oracle Notification Service (eONS) 11.2.0.4.0
Oracle Text Required Support Files 11.2.0.4.0
Parser Generator Required Support Files 11.2.0.4.0
Oracle Database 11g Multimedia Files 11.2.0.4.0
Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
Oracle Multimedia Annotator 11.2.0.4.0
Oracle JDBC/OCI Instant Client 11.2.0.4.0
Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
Precompiler Required Support Files 11.2.0.4.0
Oracle Core Required Support Files 11.2.0.4.0
Sample Schema Data 11.2.0.4.0
Oracle Starter Database 11.2.0.4.0
Oracle Message Gateway Common Files 11.2.0.4.0
Oracle XML Query 11.2.0.4.0
XML Parser for Oracle JVM 11.2.0.4.0
Oracle Help For Java 4.2.9.0.0
Installation Plugin Files 11.2.0.4.0
Enterprise Manager Common Files 10.2.0.4.5
Expat libraries 2.0.1.0.1
Deinstallation Tool 11.2.0.4.0
Oracle Quality of Service Management (Client) 11.2.0.4.0
Perl Modules 5.10.0.0.1
JAccelerator (COMPANION) 11.2.0.4.0
Oracle Containers for Java 11.2.0.4.0
Perl Interpreter 5.10.0.0.2
Oracle Net Required Support Files 11.2.0.4.0
Secure Socket Layer 11.2.0.4.0
Oracle Universal Connection Pool 11.2.0.4.0
Oracle JDBC/THIN Interfaces 11.2.0.4.0
Oracle Multimedia Client Option 11.2.0.4.0
Oracle Java Client 11.2.0.4.0
Character Set Migration Utility 11.2.0.4.0
Oracle Code Editor 1.2.1.0.0I
PL/SQL Embedded Gateway 11.2.0.4.0
OLAP SQL Scripts 11.2.0.4.0
Database SQL Scripts 11.2.0.4.0
Oracle Locale Builder 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
SQL*Plus Files for Instant Client 11.2.0.4.0
Required Support Files 11.2.0.4.0
Oracle Database User Interface 2.2.13.0.0
Oracle ODBC Driver 11.2.0.4.0
Oracle Notification Service 11.2.0.3.0
XML Parser for Java 11.2.0.4.0
Oracle Security Developer Tools 11.2.0.4.0
Oracle Wallet Manager 11.2.0.4.0
Cluster Verification Utility Common Files 11.2.0.4.0
Oracle Clusterware RDBMS Files 11.2.0.4.0
Oracle UIX 2.2.24.6.0
Enterprise Manager plugin Common Files 11.2.0.4.0
HAS Common Files 11.2.0.4.0
Precompiler Common Files 11.2.0.4.0
Installation Common Files 11.2.0.4.0
Oracle Help for the Web 2.0.14.0.0
Oracle LDAP administration 11.2.0.4.0
Buildtools Common Files 11.2.0.4.0
Assistant Common Files 11.2.0.4.0
Oracle Recovery Manager 11.2.0.4.0
PL/SQL 11.2.0.4.0
Generic Connectivity Common Files 11.2.0.4.0
Oracle Database Gateway for ODBC 11.2.0.4.0
Oracle Programmer 11.2.0.4.0
Oracle Database Utilities 11.2.0.4.0
Enterprise Manager Agent 10.2.0.4.5
SQL*Plus 11.2.0.4.0
Oracle Netca Client 11.2.0.4.0
Oracle Multimedia Locator 11.2.0.4.0
Oracle Call Interface (OCI) 11.2.0.4.0
Oracle Multimedia 11.2.0.4.0
Oracle Net 11.2.0.4.0
Oracle XML Development Kit 11.2.0.4.0
Oracle Internet Directory Client 11.2.0.4.0
Database Configuration and Upgrade Assistants 11.2.0.4.0
Oracle JVM 11.2.0.4.0
Oracle Advanced Security 11.2.0.4.0
Oracle Net Listener 11.2.0.4.0
Oracle Enterprise Manager Console DB 11.2.0.4.0
HAS Files for DB 11.2.0.4.0
Oracle Text 11.2.0.4.0
Oracle Net Services 11.2.0.4.0
Oracle Database 11g 11.2.0.4.0
Oracle OLAP 11.2.0.4.0
Oracle Spatial 11.2.0.4.0
Oracle Partitioning 11.2.0.4.0
Enterprise Edition Options 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Saturday, May 25, 2024 12:51:52 PM CST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Saturday, May 25, 2024 12:51:57 PM CST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Saturday, May 25, 2024 12:55:48 PM CST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/oracle/app/oracle/product/11.2.0/db_1/root.sh #On nodes racdb03
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /oracle/app/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
Run the root script (on racdb03)
[root@racdb03 ~]# /oracle/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/app/oracle/product/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
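Optionally, the node addition can be validated with the Cluster Verification Utility before adding the instance. A sketch, run as the grid user on an existing node (cluvfy ships in the Grid home's bin directory):

```shell
# Post-check for the node addition; -verbose prints per-check detail.
cluvfy stage -post nodeadd -n racdb03 -verbose
```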
Adding the Instance (on any surviving node)
Add the database instance from any surviving node.
If a graphical environment is available, run dbca interactively to add the instance; otherwise add it silently:
su - oracle
dbca -silent -addInstance -nodeList racdb03 -gdbName racdb -instanceName racdb3 -sysDBAUserName sys -sysDBAPassword oracle
The detailed output is as follows:
[oracle@racdb01:/home/oracle]$dbca -silent -addInstance -nodeList racdb03 -gdbName racdb -instanceName racdb3 -sysDBAUserName sys -sysDBAPassword oracle
Adding instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
66% complete
Completing instance management.
76% complete
100% complete
Look at the log file "/oracle/app/oracle/cfgtoollogs/dbca/racdb/racdb.log" for further details.
Checking Status
-- Check the cluster status; the GI and database resources for racdb03 have been added
[grid@racdb01:/home/grid]$crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE racdb01
ONLINE ONLINE racdb02
ONLINE ONLINE racdb03
ora.LISTENER.lsnr
ONLINE ONLINE racdb01
ONLINE ONLINE racdb02
ONLINE ONLINE racdb03
ora.OCR.dg
ONLINE ONLINE racdb01
ONLINE ONLINE racdb02
ONLINE ONLINE racdb03
ora.asm
ONLINE ONLINE racdb01 Started
ONLINE ONLINE racdb02 Started
ONLINE ONLINE racdb03 Started
ora.gsd
OFFLINE OFFLINE racdb01
OFFLINE OFFLINE racdb02
OFFLINE OFFLINE racdb03
ora.net1.network
ONLINE ONLINE racdb01
ONLINE ONLINE racdb02
ONLINE ONLINE racdb03
ora.ons
ONLINE ONLINE racdb01
ONLINE ONLINE racdb02
ONLINE ONLINE racdb03
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racdb02
ora.cvu
1 ONLINE ONLINE racdb01
ora.oc4j
1 ONLINE ONLINE racdb01
ora.racdb.db
1 ONLINE ONLINE racdb01 Open
2 ONLINE ONLINE racdb02 Open
3 ONLINE ONLINE racdb03 Open
ora.racdb01.vip
1 ONLINE ONLINE racdb01
ora.racdb02.vip
1 ONLINE ONLINE racdb02
ora.racdb03.vip
1 ONLINE ONLINE racdb03
ora.scan1.vip
1 ONLINE ONLINE racdb02
-- Check the database status; the racdb3 instance now appears
[grid@racdb01:/home/grid]$srvctl status database -d racdb
Instance racdb1 is running on node racdb01
Instance racdb2 is running on node racdb02
Instance racdb3 is running on node racdb03
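As a final cross-check from inside the database, gv$instance should now list all three instances. A sketch, run as the oracle user on any node with the database environment set:

```shell
# Connect as SYSDBA and list all running instances across the cluster.
sqlplus -S / as sysdba <<'EOF'
set linesize 120
col host_name for a20
select inst_id, instance_name, host_name, status from gv$instance order by inst_id;
EOF
```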
This completes the entire node addition process.
References:
oracle 11gR2 RAC node deletion and addition (CSDN blog)
Adding and Deleting Cluster Nodes (Oracle Clusterware Administration and Deployment Guide)
Oracle 11g R2 RAC node addition in detail, part 1 (CSDN blog)