Scenario
OS: OEL 5.6
Oracle: 10.2.0.1
Disk binding: raw devices + asmlib
Storage: ASM
Scenario: the local disk of node 2 is damaged and both the OS and the Oracle software are lost. The lab uses a VM snapshot to roll node 2 back to its pre-configuration state.
Hostnames: oel1, oel2
Restoring the base environment
- Restore /etc/hosts
vi /etc/hosts
192.168.7.221 oel1
192.168.7.222 oel2
192.168.7.223 oel1-vip
192.168.7.224 oel2-vip
192.168.1.1 oel1-priv
192.168.1.2 oel2-priv
- Restore the NIC IPs
cd /etc/sysconfig/network-scripts
vi ifcfg-eth0
vi ifcfg-eth1
- Restore the hostname
vi /etc/sysconfig/network
- Restore sysctl.conf
vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 4194304
kernel.shmmax = 17179869184
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
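After editing /etc/sysctl.conf, load the values with `sysctl -p`. The drift check below is a minimal sketch of my own (the helper name `check_sysctl` is not from the original); the parameter names mirror the file above:

```shell
#!/bin/bash
# Read "key expected-value" pairs from stdin and compare each against
# the live kernel setting; print a MISMATCH line for every difference.
check_sysctl() {
    local key expected actual rc=0
    while read -r key expected; do
        # Normalize tabs/newlines (e.g. kernel.sem) to single spaces.
        actual=$(sysctl -n "$key" 2>/dev/null | tr -s '[:space:]' ' ' | sed 's/ $//')
        if [ "$actual" != "$expected" ]; then
            echo "MISMATCH $key: want [$expected] got [$actual]"
            rc=1
        fi
    done
    return $rc
}

# Usage (after `sysctl -p`):
# check_sysctl <<'EOF'
# fs.file-max 6815744
# kernel.shmmni 4096
# EOF
```

Running it on both nodes catches a node 2 that was rebuilt from a stale template.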
- Restore limits
vi /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
- Restore the user
Get the user information from node 1:
[oracle@oel1 dbs]$ id oracle
uid=1100(oracle) gid=1000(oinstall) groups=1000(oinstall)
Recreate the user on node 2 with the same numeric ids:
groupadd -g 1000 oinstall
useradd -u 1100 -g oinstall oracle
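Matching the numeric uid/gid matters because ownership on shared storage is recorded by number, not by name. A small sketch (the helper name `ids_match` is mine) that compares the `id` output of the two nodes:

```shell
#!/bin/bash
# Compare the uid=/gid= fields of two `id` output strings; succeeds
# only when both numeric ids (and the primary group) are identical.
ids_match() {
    local a b
    a=$(echo "$1" | grep -o '^uid=[0-9]*([^)]*) gid=[0-9]*')
    b=$(echo "$2" | grep -o '^uid=[0-9]*([^)]*) gid=[0-9]*')
    [ -n "$a" ] && [ "$a" = "$b" ]
}

# Usage (once ssh equivalence is back):
# ids_match "$(ssh oel1 id oracle)" "$(ssh oel2 id oracle)" && echo OK
```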
- Restore the directories
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01
chmod -R 775 /u01
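Each instance also needs its admin (dump) tree under /u01/app/oracle/admin. The sketch below assumes the usual 10g default subdirectories (adump, bdump, cdump, udump, pfile) and a helper name of my own; confirm the real list with `ls` on node 1 before running:

```shell
#!/bin/bash
# Recreate the per-instance admin tree on the rebuilt node.  The
# subdirectory names are the typical 10g defaults, which is an
# assumption; take the authoritative list from the surviving node.
rebuild_admin_dirs() {
    local base=$1          # e.g. /u01/app/oracle/admin/orcl
    local d
    for d in adump bdump cdump udump pfile; do
        mkdir -p "$base/$d" || return 1
    done
}

# Usage (as root, then fix ownership):
# rebuild_admin_dirs /u01/app/oracle/admin/orcl
# chown -R oracle:oinstall /u01/app/oracle/admin
```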
- Rebuild the log directories
Recreate every directory under /u01/app/oracle/admin
- Rebuild SSH user equivalence
Generate the keys (run on both nodes):
su - oracle
mkdir .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -t rsa
(press Enter at every prompt)
ssh-keygen -t dsa
(press Enter at every prompt)
Merge and distribute:
oel1:
cat *.pub > authorized_keys
scp authorized_keys oel2:/home/oracle/.ssh/keys
oel2:
cat *.pub > authorized_keys
cat keys >> authorized_keys
scp authorized_keys oel1:/home/oracle/.ssh/
Test:
The first connection to each host prompts for confirmation and a password while known_hosts is populated, so repeat the tests until all four run without prompting.
ssh oel1 date
ssh oel2 date
ssh oel1-priv date
ssh oel2-priv date
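The four manual tests can be looped. The sketch below is my own (the function name `check_equiv` is not from the original); `-o BatchMode=yes` makes a broken equivalence fail immediately instead of hanging at a password prompt, so use it only after the interactive first round:

```shell
#!/bin/bash
# Run `date` over ssh against each hostname passed in; stop and
# report the first host that cannot be reached without a password.
check_equiv() {
    local h
    for h in "$@"; do
        ssh -o BatchMode=yes "$h" date || { echo "FAILED: $h"; return 1; }
    done
}

# Usage, on each node:
# check_equiv oel1 oel2 oel1-priv oel2-priv
```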
Installing the dependency packages
yum install binutils-2 -y
yum install compat-libcap1-1 -y
yum install compat-libstdc++-3 -y
yum install gcc-4 -y
yum install gcc-c++-4 -y
yum install glibc-2 -y
yum install glibc-devel-2 -y
yum install libgcc-4 -y
yum install libstdc++-4 -y
yum install libstdc++-devel-4 -y
yum install libaio-0 -y
yum install libaio-devel-0 -y
yum install make-3 -y
yum install sysstat-9 -y
yum install unixODBC -y
yum install elfutils -y
Restoring asmlib and the disks
Install the three oracleasm packages:
oracleasm-support
oracleasm
oracleasmlib
Configure asmlib:
/etc/init.d/oracleasm configure
oracleasm scandisks
oracleasm listdisks
[root@oel2 ~]# oracleasm listdisks
DATA
Configure the raw devices
vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N", OWNER="oracle", GROUP="oinstall", MODE="660"
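The two rule lines differ only in the partition and raw-device number, and curly quotes pasted from a browser are a classic way to break them. A small generator (the helper name `raw_rule` is mine) keeps the quoting consistent:

```shell
#!/bin/bash
# Emit one 60-raw.rules line binding a partition (e.g. sdb1) to
# /dev/raw/rawN, owned by oracle:oinstall with mode 660.
raw_rule() {
    local part=$1 n=$2
    printf 'ACTION=="add", KERNEL=="%s", RUN+="/bin/raw /dev/raw/raw%s %%N", OWNER="oracle", GROUP="oinstall", MODE="660"\n' "$part" "$n"
}

# Usage:
# { raw_rule sdb1 1; raw_rule sdc1 2; } >> /etc/udev/rules.d/60-raw.rules
# start_udev        # reload rules on OEL 5, then verify with `raw -qa`
```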
- Synchronize time
Enable time-stream on node 1 and sync node 2 with rdate:
oel1:
chkconfig time-stream on
oel2:
rdate -s oel1
Removing the cluster resources
Remove the upper-layer resources
The resources have ordering dependencies: remove the upper layer first, then the lower layer. Run as root:
crs_stop ora.oel2.vip -f
srvctl remove instance -d orcl -i orcl2
crs_unregister ora.oel2.LISTENER_OEL2.lsnr
srvctl remove asm -n oel2 -f
crs_unregister ora.oel2.gsd
crs_unregister ora.oel2.ons
crs_unregister ora.oel2.vip
Clean the cluster registrations
(CLUSTER_NODES = the surviving nodes)
Clean the DB home registration:
cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=oel1"
Clean the CRS home registration:
cd $CRS_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES=oel1"
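updateNodeList rewrites the node list recorded in the central inventory, and you can confirm the result by reading inventory.xml (its location is given by /etc/oraInst.loc; the path in the usage comment below is an assumption, as is the helper name). A minimal sketch:

```shell
#!/bin/bash
# Print the NODE NAME entries recorded in an inventory.xml; after
# updateNodeList only the surviving node (oel1) should remain for
# the cleaned homes.
inventory_nodes() {
    grep -o 'NODE NAME="[^"]*"' "$1" | sed 's/NODE NAME="\(.*\)"/\1/'
}

# Usage (path is typical, confirm via /etc/oraInst.loc):
# inventory_nodes /u01/app/oracle/oraInventory/ContentsXML/inventory.xml
```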
Redeploying the software
1) Clusterware
cd $CRS_HOME/oui/bin
./addNode.sh
Add the node in the GUI. If the earlier cleanup was complete, oel2 will not be listed; to re-add the original node, oel2 must first have been removed completely.
Run the scripts as prompted.
The root.sh output on node 2 is shown below:
[root@oel2 ~]# /home/oracle/oracle/product/10.2.0/crs/root.sh
WARNING: directory '/home/oracle/oracle/product/10.2.0' is not owned by root
WARNING: directory '/home/oracle/oracle/product' is not owned by root
WARNING: directory '/home/oracle/oracle' is not owned by root
WARNING: directory '/home/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS = /dev/raw/raw1
OCR backup directory '/home/oracle/oracle/product/10.2.0/crs/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/home/oracle/oracle/product/10.2.0' is not owned by root
WARNING: directory '/home/oracle/oracle/product' is not owned by root
WARNING: directory '/home/oracle/oracle' is not owned by root
WARNING: directory '/home/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname oel1 for node 1.
assigning default hostname oel2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: oel1 oel1-priv oel1
node 2: oel2 oel2-priv oel2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
oel1
oel2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
IP address "oel1-vip" has already been used. Enter an unused IP address.
2) Database software
cd $ORACLE_HOME/oui/bin
./addNode.sh
Add the node software through the GUI.
3) ASM instance
Copy init+ASM1.ora to init+ASM2.ora
srvctl add asm -n oel2 -i 2 -o $ORACLE_HOME -p /u01/app/oracle/product/10.2.0/db_1/dbs/init+ASM2.ora
4) DB instance
Copy the pfile to node 2
srvctl add instance -d orcl -i orcl2 -n oel2
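With the ASM and database instances registered, srvctl should now report both on the rebuilt node. The wrapper below is only a convenience of my own (the function name `verify_node2` is not from the original); it groups the usual read-only checks:

```shell
#!/bin/bash
# Show what the OCR now records for the rebuilt node: the database
# configuration (instance-to-node mapping) and the node applications
# (VIP/GSD/ONS/listener) on oel2.
verify_node2() {
    srvctl config database -d orcl || return 1
    srvctl status nodeapps -n oel2 || return 1
}

# Usage (as oracle, with ORACLE_HOME set):
# verify_node2
```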
5) Listener
cd $ORACLE_HOME/network/admin
vi listener.ora
LISTENER_OEL2 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = oel2-vip)(PORT = 1521)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.7.222)(PORT = 1521)(IP = FIRST))
)
)
SID_LIST_LISTENER_OEL2 =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
(PROGRAM = extproc)
)
)
crs_register ora.oel2.LISTENER_OEL2.lsnr
Start the resources and the rebuild is complete.