Removing and Adding a RAC Node

2018-12-11 09:15 | Original | ORACLE
Author: Marvinn

A customer's host had gone out of warranty, so the machine was replaced by removing its node from the RAC and adding a new one. The whole procedure followed the official documentation and went smoothly.

1. Removing the Node from the Production RAC

Official documentation:

How to Add Node/Instance or Remove Node/Instance in 10gR2, 11gR1, 11gR2 and 12c Oracle Clusterware and RAC (Doc ID 1332451.1)

1.1 Back Up the OCR

Check the existing automatic backups (as the grid user):
grid@rac1:/home/grid>$ORACLE_HOME/bin/ocrconfig -showbackup

rac1     2018/12/10 11:34:57     /g01/app/grid/12.2.0/cdata/rac-cluster/backup00.ocr

rac1     2018/12/10 07:34:57     /g01/app/grid/12.2.0/cdata/rac-cluster/backup01.ocr

rac1     2018/12/10 03:34:56     /g01/app/grid/12.2.0/cdata/rac-cluster/backup02.ocr

rac1     2018/12/08 23:34:53     /g01/app/grid/12.2.0/cdata/rac-cluster/day.ocr

rac1     2018/12/03 02:50:24     /g01/app/grid/12.2.0/cdata/rac-cluster/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available

Take a manual backup (as the root user):
[root@rac1 bin]# /g01/app/grid/12.2.0/bin/ocrconfig -manualbackup 

rac1     2018/12/10 14:27:15     /g01/app/grid/12.2.0/cdata/rac-cluster/backup_20181210_142715.ocr

Dump the OCR contents to a file:
[root@rac1 bin]# /g01/app/grid/12.2.0/bin/ocrdump /tmp/ocrdump_ocr.bak

[root@rac1 bin]# ls -l /tmp/ocrdump_ocr.bak 
-rw------- 1 root root 208527 Dec 10 14:28 /tmp/ocrdump_ocr.bak
[root@rac1 bin]# 

1.2 Remove the Instance

a. For a planned node removal, run the following on the node that is being removed.

    As the oracle user (not necessary if the node is already damaged/unreachable):
    $ srvctl stop instance -d db_unique_name -n node_name
  or
  sqlplus / as sysdba
    shutdown immediate

  Then:
  $ srvctl relocate server -n node_name -g Free    -- this command does not always succeed; if it fails, the error below can be ignored
  PRCR-1114 : Failed to relocate servers rac2 into server pool Free
    CRS-0217: Could not relocate resource 'rac2'.

  On the retained node, disable the redo thread of the instance being removed:
  SYS@orcl1>alter database disable thread 2;
    Database altered.

b. Delete the instance from a retained node.

If a graphical environment is available, run dbca interactively to delete the instance; otherwise run it silently:
$ dbca -silent -deleteInstance -gdbName gdb_name -instanceName instance_name -nodeList node_name -sysDBAUserName sysdba -sysDBAPassword password

Deleting instance
20% complete
21% complete
22% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/oracle/cfgtoollogs/dbca/orcl.log" for further details.
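
For reference, a concrete invocation for this environment might look roughly like the line below. This is a sketch only: it assumes the instance being removed on rac2 is named orcl2, and the SYS password is a placeholder.

$ dbca -silent -deleteInstance -gdbName orcl -instanceName orcl2 -nodeList rac2 -sysDBAUserName sys -sysDBAPassword oracle_4U    -- orcl2 and the password are assumptions; substitute your own values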

If the instance deletion fails with an error like the following:
Connection to the database cannot be established because the listener could be down. Please make sure that the service is registered with a listener and the listener is up.

Connection to the database failed. Please make sure that service "rac1-vip:1521:yunqu1" is registered with the listener, user name "sys" has SYSDBA privilege, password is correct and then try again.

The root cause is that the database is not registered with the listener; register it manually using the virtual IP.
Test:
oracle@rac1:/u01/oracle/11.2.0/network/admin>sqlplus sys/yunq111@YUNQU as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Dec 10 14:53:19 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-12520: TNS:listener could not find available handler for requested type of
server


oracle@rac1:/u01/oracle/11.2.0/network/admin>lsnrctl status
(partial output omitted)
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "orcl" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
The command completed successfully

Fix it as follows:
SYS@yunqu1>alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=172.41.176.103)(PORT=1521))';

System altered.

SYS@yunqu1>show parameter listener;

NAME                 TYPE     VALUE
-------------------- -------- ---------------------------------------------------------
listener_networks    string
local_listener       string   (ADDRESS=(PROTOCOL=TCP)(HOST=172.41.176.103)(PORT=1521))
remote_listener      string   rac-scan:1521

SYS@yunqu1>alter system register;

System altered.

Re-run the instance deletion; this time it succeeds.

c. Confirm the instance has been removed.


$ su - oracle
$ srvctl config database -d db_name

Database unique name: orcl
Database name: orcl
Oracle home: /u01/oracle/11.2.0
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1                -- only one database instance remains
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed

1.3 Remove the Node at the RAC Level

a. On the retained node, stop the listener of the node being removed.

# su - oracle
$ srvctl status listener -l listener_name                          -- listener_name: the listener name (default: LISTENER)
$ srvctl disable listener -l listener_name -n name_of_node_to_delete
$ srvctl stop listener -l listener_name -n name_of_node_to_delete
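
For this environment the calls would look something like the following, assuming the default listener name LISTENER (as shown in the deinstall summary later) and rac2 as the node being removed:

$ srvctl disable listener -l LISTENER -n rac2
$ srvctl stop listener -l LISTENER -n rac2
$ srvctl status listener -n rac2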

b. Remove the Oracle database home.

1. For a planned node removal, run the following on the node being removed
   (not needed if the node is damaged):
su - oracle
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac2}" -local    -- CLUSTER_NODES: the node being removed (rac2 in this environment)

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /g01/oraInventory
'UpdateNodeList' was successful.


2. Detach the Oracle home from the central inventory (run on the node being removed). Note that the grid home has not been removed yet; it is handled later.
$ ./runInstaller -detachHome ORACLE_HOME=$ORACLE_HOME
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /g01/oraInventory
'DetachHome' was successful.

3. Deinstall the Oracle home (run on the node being removed):

$ $ORACLE_HOME/deinstall/deinstall -local
(partial output omitted)
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
The deinstall tool cannot determine the home type needed to deconfigure the selected home.  Please select the type of Oracle home you are trying to deinstall.
Single Instance database - Enter 1
Real Application Cluster database - Enter 2
Grid Infrastructure for a cluster - Enter 3
Grid Infrastructure for a stand-alone server - Enter 4
Client Oracle Home - Enter 5
Transparent Gateways Oracle Home - Enter 6

Invalid home type.
Single Instance database - Enter 1
Real Application Cluster database - Enter 2
Grid Infrastructure for a cluster - Enter 3
Grid Infrastructure for a stand-alone server - Enter 4
Client Oracle Home - Enter 5
Transparent Gateways Oracle Home - Enter 6
2            -- 2 = Real Application Cluster database
The product version number of the specified home cannot be determined. Is the product version at least 11.2.0.1.0 (y - yes, n - no)? [n]
y            -- enter y

Specify a comma-separated list of nodes on which to perform the deinstallation task:rac2        -- rac2: the node being removed

(partial output omitted)
Do you want to continue (y - yes, n - no)? [n]: y                -- enter y
A log of this session will be written to: '/g01/oraInventory/logs/deinstall_deconfig2018-12-10_03-22-31-PM.out'
Any error messages from this session will be written to: '/g01/oraInventory/logs/deinstall_deconfig2018-12-10_03-22-31-PM.err'
...................

######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully deleted directory '/u01/oracle/11.2.0' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############



4. Update the node list on the retained node(s) (run on a retained node). If several nodes remain, list them all, e.g. "CLUSTER_NODES={node1-11gr2,node3-11gr2,node4-11gr2}".
su - oracle
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1}"    -- CLUSTER_NODES: the retained node(s) (rac1 in this environment)
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2032 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /g01/oraInventory
'UpdateNodeList' was successful.

c. Remove the node at the Grid Infrastructure level (remove the grid home).

a. Confirm that the node being removed is Unpinned.

su - grid
$ olsnodes -s -t

If it is pinned, unpin it:
$ crsctl unpin css -n name_of_node_to_delete


b. Disable the Clusterware applications and daemons on the node being removed.
Run on the node being removed:
su - root
# cd $GRID_HOME/crs/install
# ./rootcrs.pl -deconfig -force
This failed with the following error:
Can't locate Env.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . .) at crsconfig_lib.pm line 710.
BEGIN failed--compilation aborted at crsconfig_lib.pm line 710.
Compilation failed in require at ./rootcrs.pl line 305.
BEGIN failed--compilation aborted at ./rootcrs.pl line 305.

The Perl script cannot find the Env.pm module in @INC.
Fix: copy Env.pm from the grid home into a directory on Perl's search path:
[root@rac2 install]# find / -name Env.pm -print
/g01/app/grid/12.2.0/perl/lib/5.10.0/Env.pm
/g01/app/grid/12.2.0/tfa/rac2/tfa_home/ext/darda/da/rda/RDA/Object/Env.pm
[root@rac2 install]# cp -p /g01/app/grid/12.2.0/perl/lib/5.10.0/Env.pm /usr/share/perl5/vendor_perl/

Run it again and it completes:
# ./rootcrs.pl -deconfig -force

Using configuration parameter file: ./crsconfig_params
Network exists: 1/172.41.176.0/255.255.255.128/ens160, type static
VIP exists: /rac1-vip/172.41.176.103/172.41.176.0/255.255.255.128/ens160, hosting node rac1
VIP exists: /rac2-vip/172.41.176.104/172.41.176.0/255.255.255.128/ens160, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.OCRVOTE.dg' on 'rac2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.OCRVOTE.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 install]# 


c. Delete the node from the cluster (run on the retained node as root):
su - root
[root@rac1 bin]# pwd
/g01/app/grid/12.2.0/bin
[root@rac1 bin]# ./crsctl delete node -n rac2       -- -n: the name of the node being removed (rac2)
CRS-4661: Node rac2 successfully deleted.
[root@rac1 bin]# 


d. Update the node list on the node being removed (run on that node).
(The MOS example for multiple retained nodes is "CLUSTER_NODES={node1-11gr2,node3-11gr2,node4-11gr2}".)
-- CLUSTER_NODES={rac2}: the node being removed; $ORACLE_HOME here refers to the GRID_HOME path

su - grid       
$ cd $ORACLE_HOME/oui/bin       
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac2}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /g01/oraInventory
'UpdateNodeList' was successful.

e. Deinstall GI on the node being removed (delete the grid home).
su - grid
$ cd $ORACLE_HOME/deinstall
$ ./deinstall -local

(partial output omitted; accept the defaults by pressing Enter)

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac2".

/tmp/deinstall2018-12-10_03-43-37PM/perl/bin/perl -I/tmp/deinstall2018-12-10_03-43-37PM/perl/lib -I/tmp/deinstall2018-12-10_03-43-37PM/crs/install /tmp/deinstall2018-12-10_03-43-37PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2018-12-10_03-43-37PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------
Run the script above manually as root, then press Enter to continue...

#######################################################################################################################

[root@rac2 ~]# /tmp/deinstall2018-12-10_03-43-37PM/perl/bin/perl -I/tmp/deinstall2018-12-10_03-43-37PM/perl/lib -I/tmp/deinstall2018-12-10_03-43-37PM/crs/install /tmp/deinstall2018-12-10_03-43-37PM/crs/install/rootcrs.pl -force  -deconfig -paramfile /tmp/deinstall2018-12-10_03-43-37PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Using configuration parameter file: /tmp/deinstall2018-12-10_03-43-37PM/response/deinstall_Ora11g_gridinfrahome1.rsp

****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
#########################################################################################################################
Final output:


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/g01/app/grid/12.2.0' from the central inventory on the local node.
Successfully deleted directory '/g01/app/grid/12.2.0' on the local node.
Successfully deleted directory '/g01/grid' on the local node.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

Run rm -rf /opt/ORCLfmap to finish this step:
[root@rac2 ~]# rm -rf /opt/ORCLfmap
[root@rac2 ~]# 



f. Update the node list on the retained node(s).
-- CLUSTER_NODES={rac1}: the retained node; if several nodes remain, list them all, e.g. "CLUSTER_NODES={node1-11gr2,node3-11gr2,node4-11gr2}"

su - grid       
$ cd $ORACLE_HOME/oui/bin       
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1}" CRS=TRUE -silent              
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2032 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /g01/oraInventory
'UpdateNodeList' was successful.

g. On the retained node, verify that the node has been removed.
su - grid
$ cluvfy stage -post nodedel -n rac2                  -- -n: the node that was removed

Performing post-checks for node removal 
Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed
Node removal check passed
Post-check for node removal was successful. 

h. Check the cluster status on the retained node; the resources of the removed node rac2 are gone.
grid@rac1:/g01/app/grid/12.2.0/oui/bin>crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
ora.OCRVOTE.dg
               ONLINE  ONLINE       rac1                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cvu
      1        ONLINE  ONLINE       rac1                                         
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.orcl.db
      2        ONLINE  ONLINE       rac1                     Open                
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.yunqu.db
      1        ONLINE  ONLINE       rac1                     Open   

2. Configure the New Node's Environment


Configure the new node the same way as the original node. Before doing so, edit /etc/hosts on the retained node so that its entries stay consistent with the new node:

172.41.176.101 rac1
172.41.176.109 rac2

10.10.10.2 rac1-priv
10.10.10.4 rac2-priv

172.41.176.103 rac1-vip
172.41.176.110 rac2-vip

172.41.176.122 rac-scan

2.1 Configure the hosts File

On the new node, update the entries for the replaced node rac2: public IP, private IP, and virtual IP.

172.41.176.101 rac1
172.41.176.109 rac2                

10.10.10.2  rac1-priv
10.10.10.4  rac2-priv

172.41.176.103 rac1-vip
172.41.176.110 rac2-vip

172.41.176.122 rac-scan

2.2 Configure the Network

BOOTPROTO=none
ONBOOT=yes
IPADDR=10.10.10.4
NETMASK=255.255.255.128

(rest of the file omitted)
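
The snippet above configures the private interconnect (10.10.10.4); the public interface is set up the same way. A minimal sketch, assuming the interface name ens160 (as seen in the VIP output earlier) and rac2's new public IP from /etc/hosts; the file name and values are assumptions, so match them to the retained node:

# Hypothetical /etc/sysconfig/network-scripts/ifcfg-ens160 (public interface)
DEVICE=ens160
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.41.176.109
NETMASK=255.255.255.128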

2.3 Firewall
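
The firewall is stopped and disabled on the new node. A sketch, assuming a systemd-based OS running firewalld (consistent with the systemctl usage later in this article); adjust if iptables is used instead:

# systemctl stop firewalld
# systemctl disable firewalld
# systemctl status firewalld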

2.4 Disable NetworkManager

service NetworkManager stop
chkconfig NetworkManager off
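
On a systemd-based OS (as the systemctl commands and the ens160 interface naming above suggest), the equivalent would be:

# systemctl stop NetworkManager
# systemctl disable NetworkManager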

2.5 Configure Users

Set up the oracle and grid OS users to match the existing node: groups, permissions, passwords, and directories. A sketch is shown below.
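
A minimal sketch of the user setup, assuming the common oinstall/dba/asmadmin groups; the group names, UIDs, and GIDs shown here are placeholders and must match what "id oracle" and "id grid" report on the retained node rac1:

# Hypothetical values -- replace UIDs/GIDs and group names with those from rac1
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54329 asmadmin
useradd -u 54322 -g oinstall -G dba,asmadmin grid
useradd -u 54321 -g oinstall -G dba,asmadmin oracle
passwd grid
passwd oracle
# Recreate the software directories with the same ownership as on rac1
mkdir -p /g01/app/grid/12.2.0 /u01/oracle/11.2.0
chown -R grid:oinstall /g01
chown -R oracle:oinstall /u01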

2.6 Configure SSH User Equivalence

Use the sshUserSetup.sh script bundled with 11g (it ships with 11g only, but it can be copied to a 10g environment and used there):

./sshUserSetup.sh -user <username> -hosts "host1 host2 host3 ..." -advanced -noPromptPassphrase


su - grid
$ cd /g01/grid/sshsetup
grid@rac1:/g01/grid/sshsetup>./sshUserSetup.sh -user grid -hosts "rac2" -advanced -noPromptPassphrase

su - oracle    -- if the script does not exist under the oracle user, copy it over from the grid installation, make it executable, and run it.
Since the script was not present for the oracle user here, it was simply copied over.

$ /u01/sshUserSetup.sh -user oracle -hosts "rac2" -advanced -noPromptPassphrase

2.7 Configure Shared Storage

Copy the shared-storage (udev) configuration file from the existing environment to the new node:

[root@rac1 rules.d]# scp 99-my-asmdevices.rules 172.41.176.109:/etc/udev/rules.d/
The authenticity of host '172.41.176.109 (172.41.176.109)' can't be established.
ECDSA key fingerprint is 96:75:51:dd:da:e3:5e:63:97:11:83:04:7e:92:ca:30.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.41.176.109' (ECDSA) to the list of known hosts.
root@172.41.176.109's password: 
99-my-asmdevices.rules                                                                                                                                  100% 1339     1.3KB/s   00:00    
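
For reference, a rules file of this kind typically maps each LUN's scsi_id to a stable /dev/asm-* name and sets ownership for the grid stack. A hypothetical single entry is shown below; the RESULT value, owner, and group are assumptions and must match the actual 99-my-asmdevices.rules copied from rac1:

# one line per disk in 99-my-asmdevices.rules (hypothetical example)
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="36000c29xxxxxxxxxxxxxxxxxxxxxxxxx", SYMLINK+="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"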

Reload the udev rules (as root):
# /usr/sbin/udevadm control --reload-rules
systemctl status systemd-udevd.service
systemctl enable systemd-udevd.service
systemctl restart systemd-udev-trigger.service
systemctl enable systemd-udev-trigger.service

Verify that the rules have taken effect:
[root@rac2 dev]# ls -l /dev/asm*
lrwxrwxrwx. 1 root root 3 Dec 10 16:13 /dev/asm-diskb -> sdb
lrwxrwxrwx. 1 root root 3 Dec 10 16:13 /dev/asm-diskc -> sdc
lrwxrwxrwx. 1 root root 3 Dec 10 16:13 /dev/asm-diskd -> sdd
lrwxrwxrwx. 1 root root 3 Dec 10 16:13 /dev/asm-diske -> sde
lrwxrwxrwx. 1 root root 3 Dec 10 16:13 /dev/asm-diskf -> sdf
lrwxrwxrwx. 1 root root 3 Dec 10 16:13 /dev/asm-diskg -> sdg
[root@rac2 dev]# 

The output above shows that the rules are in effect.

3. Add the Node

Note: after deleting a node, if the newly added node uses the same hostname as the deleted one, you may hit an error like:

SEVERE: CLUSTER_NEW_NODES could not be obtained from the command line or the response file ...

This usually means the old node's configuration information was not completely cleaned up; switching to a different node name works around it.

3.1 Verify the Environments Match

 a. Make sure the user and group IDs are identical on all nodes:
 id oracle
 id grid

 b. Check the environment (run as both the grid and oracle users; cluvfy is located under $ORACLE_HOME/bin, and -n rac2 names the node being added):

 cluvfy stage -pre nodeadd -n rac2 -fixup -verbose

 cluvfy stage -post hwos -n rac2

 cluvfy comp peer -refnode rac1 -n rac2 -orainv oinstall -osdba oinstall -verbose    -- -refnode: an existing reference node; -n: the new node

3.2 Add the New Node at the Grid Infrastructure Level

a. Run the add-node script to copy the software to the new node.
su - grid       
$ cd $ORACLE_HOME/oui/bin       
$ export IGNORE_PREADDNODE_CHECKS=Y       
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac2-priv}"           

(partial output omitted)
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system. 
To register the new inventory please run the script at '/g01/oraInventory/orainstRoot.sh' with root privileges on nodes 'rac2'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/g01/oraInventory/orainstRoot.sh #On nodes rac2
/g01/app/grid/12.2.0/root.sh #On nodes rac2
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /g01/app/grid/12.2.0 was successful.
Please check '/tmp/silentInstall.log' for more details.

b. On the newly added node, run the scripts from the prompt above:
su - root      
# /g01/oraInventory/orainstRoot.sh     ---On nodes rac2
# /g01/app/grid/12.2.0/root.sh             ---On nodes rac2

3.3 Add the Instance on the New Node (Database Level)

If a graphical environment is available, run dbca interactively to add the instance; otherwise run it silently (from the existing retained node):
su - oracle       
$ dbca -silent -addInstance -nodeList node_name -gdbName gdb_name -instanceName instance_name -sysDBAUserName sys -sysDBAPassword password
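
A concrete invocation for this environment might look like the line below. This is a sketch only: it assumes the new instance is again named orcl2, and the SYS password is a placeholder.

$ dbca -silent -addInstance -nodeList rac2 -gdbName orcl -instanceName orcl2 -sysDBAUserName sys -sysDBAPassword oracle_4U    -- orcl2 and the password are assumptions; substitute your own values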

Check that the new instance is running:
select * from gv$instance;
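
For a clearer view, a query over the standard GV$INSTANCE columns shows which host each instance runs on, e.g.:

select inst_id, instance_name, host_name, status from gv$instance order by inst_id;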

Copyright notice: this is an original post by the author and may not be reproduced without permission.
