Oracle 12cR2 RAC: Converting SCAN Resolution from the Traditional hosts File to GNS — Full Walkthrough
I recently wanted to try the Oracle 12cR2 Hub+Leaf feature, but my original 12c RAC setup resolved the SCAN through the hosts file, which rules out Leaf nodes. It does not stop the cluster from being a Flex Cluster, though; that was covered in an earlier article and will not be repeated here: http://www.cndba.cn/Marvinn/article/2760
So the plan is: first convert the RAC setup from traditional hosts-file SCAN resolution to GNS SCAN resolution, then add the Leaf node.
While configuring DNS and DHCP, I already included the IP, VIP, and private IP that the Leaf node will need, so it can be added directly later. (If you don't need a Leaf node, simply ignore those IPs.)
Adding a Hub node will get its own article later; at a glance the procedure looks much the same.
Overall flow:
Back up the OCR >> configure DNS >> configure DHCP >> change the RAC cluster configuration
Time spent:
I couldn't find any reference article on this SCAN-resolution conversion online, so I spent a weekend working it out; this write-up is fresh off the press.
Original RAC environment:
Database version: Oracle 12cR2
OS: CentOS 7
| Role | Hostname / Public IP | Private IP | VIP |
|---|---|---|---|
| Hub node 1 | rac12chub1 172.16.10.116 | rac12chub1-priv 192.168.122.222 | rac12chub1-vip 172.16.10.134 |
| Hub node 2 | rac12chub2 172.16.10.142 | rac12chub2-priv 192.168.122.238 | rac12chub2-vip 172.16.10.135 |
| SCAN IP | racnode-scanip 172.16.10.150 | | |
Final environment:
Database version: Oracle 12cR2
OS: CentOS 7
| Role | Hostname / Public IP | Private IP |
|---|---|---|
| Hub node 1 | rac12chub1 rac12chub1.marvin.cn 172.16.10.116 | rac12chub1-priv rac12chub1-priv.marvin.cn 192.168.122.222 |
| Hub node 2 | rac12chub2 rac12chub2.marvin.cn 172.16.10.142 | rac12chub2-priv rac12chub2-priv.marvin.cn 192.168.122.238 |
| DNS/GNS | same server: server, IP 172.16.10.138 |
|---|---|
| GNS VIP | 172.16.10.139 |
| VIP/SCAN IP | obtained dynamically via DHCP, 172.16.10.100-172.16.10.136 |
1. Back up the OCR (as root)
[root@rac12chub1 bin]# /g01/app/grid/12.2.0/bin/ocrconfig -manualbackup
rac12chub1 2018/05/04 15:46:13 +MGMT:/rac12ch-cluster/OCRBACKUP/backup_20180504_154613.ocr.262.975253575 0
Check the backups:
[root@rac12chub1 bin]# /g01/app/grid/12.2.0/bin/ocrconfig -showbackup
rac12chub1 2018/05/04 14:19:29 +MGMT:/rac12ch-cluster/OCRBACKUP/backup00.ocr.264.975248365 0
rac12chub1 2018/05/04 10:19:24 +MGMT:/rac12ch-cluster/OCRBACKUP/backup01.ocr.257.975233959 0
rac12chub1 2018/05/04 06:19:17 +MGMT:/rac12ch-cluster/OCRBACKUP/backup02.ocr.263.975219553 0
rac12chub1 2018/05/03 06:18:45 +MGMT:/rac12ch-cluster/OCRBACKUP/day.ocr.261.975133125 0
rac12chub1 2018/04/26 22:42:34 +MGMT:/rac12ch-cluster/OCRBACKUP/week.ocr.259.974500955 0
rac12chub1 2018/05/04 15:46:13 +MGMT:/rac12ch-cluster/OCRBACKUP/backup_20180504_154613.ocr.262.975253575 0
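If you later need the backup path programmatically (for example, to feed a restore), the fourth field of each `-showbackup` line is the backup file. A minimal sketch, using a here-doc in place of the real command output so it runs anywhere:

```shell
# Extract the backup file path (4th field) from a `-showbackup` line.
# The here-doc stands in for real `ocrconfig -showbackup` output here.
awk '{print $4}' <<'EOF'
rac12chub1 2018/05/04 15:46:13 +MGMT:/rac12ch-cluster/OCRBACKUP/backup_20180504_154613.ocr.262.975253575 0
EOF
```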
Note: the DNS server and the DHCP server can share one machine, but they should not sit on a database server; in other words, bring up a separate new server for DNS and DHCP.
2. Install and configure DNS (on the new server)
2.1. Install DNS
[root@server ~]# yum install bind-libs bind bind-utils
[root@server ~]# rpm -qa | grep "^bind"
2.2. Configure DNS
Edit the named.conf file.
After installation, BIND's main configuration file is /etc/named.conf; the zone-type definitions are in /etc/named.rfc1912.zones; the zone files themselves live under /var/named/.
[root@server ~]# cat /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
options {
// listen-on port 53 { 127.0.0.1; }; // listens on 127.0.0.1 by default; comment this out
// listen-on-v6 port 53 { ::1; }; // comment this out too
directory "/var/named"; // directory holding the zone files
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
// allow-query { localhost; }; // clients allowed to query; defaults to localhost only, so comment it out
allow-transfer {"none";}; // needs to be added manually
recursion yes; // whether recursive lookups are enabled
dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;
/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
};
logging {
channel default_debug{
file "data/named.run";
severity dynamic;
};
};
//the original file line here must be commented out and replaced with the one below
zone "." IN {
type hint;
// file "named.ca";
file "/dev/null";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
2.3. Configure the forward and reverse zones
Edit the zone file /etc/named.rfc1912.zones and add the forward and reverse zone definitions.
[root@server ~]# vi /etc/named.rfc1912.zones
--Configure the forward zone
zone "marvin.cn" IN { // the zone name is up to you
type master;
file "marvin.cn.zone"; // the zone file name is up to you
allow-update{ none; };
};
--Configure the reverse zones
zone "10.16.172.in-addr.arpa" IN { // reverse zone: the IP octets reversed plus the in-addr.arpa suffix, same below
type master;
file "10.16.172.local"; // zone file name is up to you, same below
allow-update{ none; };
};
zone "122.168.192.in-addr.arpa" IN { // this reverse zone resolves the private IPs
type master;
file "122.168.192.local";
allow-update{ none; };
};
Note that a reverse lookup reads the IP address in the opposite direction,
so the network octets must be written in reverse order: the reverse zone for the 172.16.10.* network is "10.16.172.in-addr.arpa".
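The octet reversal can be scripted; a small sketch for /24 networks (the `reverse_zone` helper is hypothetical, not a BIND tool):

```shell
# Build the in-addr.arpa zone name for a /24 network by reversing
# the first three octets (hypothetical helper; /24 networks only).
reverse_zone() {
  IFS=. read -r o1 o2 o3 _ <<< "$1"
  echo "${o3}.${o2}.${o1}.in-addr.arpa"
}
reverse_zone 172.16.10.0     # -> 10.16.172.in-addr.arpa
reverse_zone 192.168.122.0   # -> 122.168.192.in-addr.arpa
```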
2.4. Create the forward zone file
The zone directory specified earlier in named.conf is /var/named, so create the forward zone file in that directory.
The file name is the file name declared in the zone definition.
[root@server ~]# touch /var/named/marvin.cn.zone
[root@server ~]# chgrp named /var/named/marvin.cn.zone
[root@server ~]# vi /var/named/marvin.cn.zone
--Add the following content:
$TTL 3D
@ IN SOA dnsserver.marvin.cn. root.marvin.cn. (
42 ; serial (d.adams)
3H ; refresh
15M ; retry
1W ; expiry
1D) ; minimum
IN NS dnsserver.marvin.cn.
dnsserver IN A 172.16.10.138
rac12chub1 IN A 172.16.10.116
rac12chub2 IN A 172.16.10.142
rac12cleaf1 IN A 172.16.10.146
rac12chub1-priv IN A 192.168.122.222
rac12chub2-priv IN A 192.168.122.238
rac12cleaf1-priv IN A 192.168.122.243 ; note: rac12cleaf1 is not in the original environment; it is added here only because a Leaf node will be added after this conversion (omit if not needed)
$ORIGIN marvin.cn.
@ IN NS gnsserver.marvin.cn.
gnsserver.marvin.cn. IN A 172.16.10.139
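A side note on the serial field: the zone above simply uses 42, and any number that increases on every edit works, but the common YYYYMMDDnn convention makes it obvious when a zone was last touched. A sketch:

```shell
# Generate a date-based zone serial (YYYYMMDDnn convention);
# bump the trailing counter for multiple edits on the same day.
date +%Y%m%d01
```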
2.5. Create the reverse zone files
Create the reverse zone files under /var/named,
named as declared in the zone definitions: 10.16.172.local and 122.168.192.local.
[root@server ~]# touch /var/named/10.16.172.local
[root@server ~]# chgrp named /var/named/10.16.172.local
[root@server ~]# vi /var/named/10.16.172.local
--Add the following content:
$TTL 3D
@ IN SOA dnsserver.marvin.cn. root.marvin.cn. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400) ; Minimum
IN NS dnsserver.marvin.cn.
138 IN PTR dnsserver.marvin.cn.
139 IN PTR gnsserver.marvin.cn.
116 IN PTR rac12chub1.marvin.cn.
142 IN PTR rac12chub2.marvin.cn.
146 IN PTR rac12cleaf1.marvin.cn. ; note: rac12cleaf1 is not in the original environment; added only for the later Leaf-node test (omit if not needed)
[root@server ~]# touch /var/named/122.168.192.local
[root@server ~]# chgrp named /var/named/122.168.192.local
[root@server ~]# vi /var/named/122.168.192.local
--Add the following content:
$TTL 3D
@ IN SOA dnsserver.marvin.cn. root.marvin.cn. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400) ; Minimum
IN NS dnsserver.marvin.cn.
138 IN PTR dnsserver.marvin.cn.
139 IN PTR gnsserver.marvin.cn.
222 IN PTR rac12chub1-priv.marvin.cn.
238 IN PTR rac12chub2-priv.marvin.cn.
243 IN PTR rac12cleaf1-priv.marvin.cn. ; note: rac12cleaf1 is not in the original environment; added only for the later Leaf-node test (omit if not needed)
2.6. Edit the hosts file (all RAC nodes): drop the VIPs and SCAN, which GNS will now assign dynamically
In the traditional setup the public IPs, private IPs, and VIPs are all pre-assigned. With GNS resolving the SCAN, only the private and public IPs stay fixed; the VIPs and SCAN IPs are obtained by GNS dynamically from DHCP.
Original hosts file:
# Public Network - (eth0)
172.16.10.116 rac12chub1
172.16.10.142 rac12chub2
# Private Interconnect - (eth1)
192.168.122.222 rac12chub1-priv
192.168.122.238 rac12chub2-priv
# VIP Network - (eth0)
172.16.10.134 rac12chub1-vip
172.16.10.135 rac12chub2-vip
172.16.10.150 racnode-scanip
Change it to the following, adding the domain name everywhere:
(Note: rac12cleaf1 is not in the original environment; it is added only because a Leaf node will be added later. Omit it if you don't need one.)
# Public Network - (eth0)
172.16.10.116 rac12chub1.marvin.cn rac12chub1
172.16.10.142 rac12chub2.marvin.cn rac12chub2
172.16.10.146 rac12cleaf1.marvin.cn rac12cleaf1
# Private Interconnect - (eth1)
192.168.122.222 rac12chub1-priv.marvin.cn rac12chub1-priv
192.168.122.238 rac12chub2-priv.marvin.cn rac12chub2-priv
192.168.122.243 rac12cleaf1-priv.marvin.cn rac12cleaf1-priv
2.7. On every client machine (all RAC nodes)
Edit the eth0 network script and set DNS1 to the DNS server's IP:
DNS1=172.16.10.138 // DNS server IP
Stop the NetworkManager service:
#systemctl stop NetworkManager
#systemctl disable NetworkManager
Edit the resolv.conf file:
# vi /etc/resolv.conf
Add the following:
# Generated by NetworkManager
nameserver 172.16.10.138 // DNS server
nameserver 172.16.10.139 // GNS VIP
options rotate
options timeout: 2
options attempts: 5
Restart the network:
#service network restart
2.8. Restart DNS
[root@server ~]# systemctl restart named.service
[root@server ~]# systemctl enable named.service
[root@server ~]# systemctl status named.service
named.service - Berkeley Internet Name Domain (DNS)
Loaded: loaded (/usr/lib/systemd/system/named.service; enabled)
Active: active (running) since Sat 2018-05-05 11:52:12 CST; 6s ago
Process: 3649 ExecStop=/bin/sh -c /usr/sbin/rndc stop > /dev/null 2>&1 || /bin/kill -TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 3662 ExecStart=/usr/sbin/named -u named -c ${NAMEDCONF} $OPTIONS (code=exited, status=0/SUCCESS)
Process: 3658 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z "$NAMEDCONF"; else echo "Checking of zone files is disabled"; fi (code=exited, status=0/SUCCESS)
Main PID: 3664 (named)
CGroup: /system.slice/named.service
└─3664 /usr/sbin/named -u named -c /etc/named.conf
May 05 11:52:12 server named[3664]: zone marvin.cn/IN: loaded serial 42
May 05 11:52:12 server named[3664]: zone localhost/IN: loaded serial 0
May 05 11:52:12 server named[3664]: zone 122.168.192.in-addr.arpa/IN: loaded serial 1997022700
May 05 11:52:12 server named[3664]: zone localhost.localdomain/IN: loaded serial 0
May 05 11:52:12 server named[3664]: all zones loaded
May 05 11:52:12 server named[3664]: running
May 05 11:52:12 server named[3664]: managed-keys-zone: Failed to create fetch for DNSKEY update
May 05 11:52:12 server named[3664]: managed-keys-zone: Failed to create fetch for DNSKEY update
May 05 11:52:12 server named[3664]: zone marvin.cn/IN: sending notifies (serial 42)
May 05 11:52:12 server systemd[1]: Started Berkeley Internet Name Domain (DNS).
Note: if named fails to start, it is almost always a configuration-file problem (the output above is a healthy start; on failure the status output hints at the cause).
Troubleshooting (the two checks below can also be run right after editing the files, so you don't have to wait for a failed start to find problems):
1. Check the named-related files under /etc/ (no output means no problems):
[root@server ~]# named-checkconf
2. Check the zone database files (healthy output looks like this):
[root@server ~]# named-checkzone marvin.cn /var/named/marvin.cn.zone
zone marvin.cn/IN: loaded serial 42
OK
[root@server ~]# named-checkzone 10.16.172.local /var/named/10.16.172.local
zone 10.16.172.local/IN: loaded serial 1997022700
OK
[root@server ~]# named-checkzone 122.168.192.local /var/named/122.168.192.local
zone 122.168.192.local/IN: loaded serial 1997022700
OK
2.9. Verify DNS. Resolution works on the DNS server; every client machine (i.e. every RAC node) should be tested with the same commands,
but the tool package must be installed on them first:
yum -y install bind bind-utils # dig, host and nslookup come from the bind-utils package
[root@server ~]# nslookup rac12chub1.marvin.cn
Server: 172.16.10.138
Address: 172.16.10.138#53
Name: rac12chub1.marvin.cn
Address: 172.16.10.116
[root@server ~]# nslookup rac12chub2.marvin.cn
Server: 172.16.10.138
Address: 172.16.10.138#53
Name: rac12chub2.marvin.cn
Address: 172.16.10.142
[root@server ~]# nslookup rac12cleaf1.marvin.cn
Server: 172.16.10.138
Address: 172.16.10.138#53
Name: rac12cleaf1.marvin.cn
Address: 172.16.10.146
[root@server ~]# nslookup rac12cleaf1-priv.marvin.cn
Server: 172.16.10.138
Address: 172.16.10.138#53
Name: rac12cleaf1-priv.marvin.cn
Address: 192.168.122.243
[root@server ~]# nslookup rac12chub1-priv.marvin.cn
Server: 172.16.10.138
Address: 172.16.10.138#53
Name: rac12chub1-priv.marvin.cn
Address: 192.168.122.222
[root@server ~]# nslookup rac12chub2-priv.marvin.cn
Server: 172.16.10.138
Address: 172.16.10.138#53
Name: rac12chub2-priv.marvin.cn
Address: 192.168.122.238
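The per-host checks above can be wrapped in one quick loop; a sketch (run it on any node whose resolv.conf points at the DNS server):

```shell
# Check every cluster hostname against the DNS server in one pass.
# Each line reports OK or FAILED; the loop never stops on error.
for host in rac12chub1 rac12chub2 rac12chub1-priv rac12chub2-priv; do
  if nslookup "${host}.marvin.cn" > /dev/null 2>&1; then
    echo "${host}: OK"
  else
    echo "${host}: FAILED"
  fi
done
```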
You can also test with dig, e.g. dig rac12chub1-priv.marvin.cn @172.16.10.138 (syntax: dig <name> @<DNS server>).
Installing the tool package on the RAC nodes initially failed: with DNS now pointing at our own server, the nodes could no longer resolve internet mirrors.
[root@rac12cleaf1 network-scripts]# yum -y install bind bind-utils
Loaded plugins: fastestmirror
http://centos.uhost.hk/7.4.1708/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: centos.uhost.hk; Unknown error"
Trying other mirror.
http://mirror.sunnyvision.com/centos/7.4.1708/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: mirror.sunnyvision.com; Unknown error"
Trying other mirror.
http://repo.virtualhosting.hk/centos/7.4.1708/os/x86_64/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: repo.virtualhosting.hk; Unknown error"
Trying other mirror.
Fix:
Just add the gateway as an extra nameserver in /etc/resolv.conf:
nameserver 172.16.10.254
The resulting resolv.conf is shown below; to keep internet access working, it is worth adding this on every RAC node:
[root@rac12chub2 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 172.16.10.138
nameserver 172.16.10.139
nameserver 172.16.10.254
options rotate
options timeout: 2
options attempts: 5
Restart the network:
# service network restart
At this point, all tests pass.
3. Install and configure DHCP (same server as DNS)
3.1. Install and start DHCP
# yum install dhcp
# rpm -qa | grep "^dhcp"
# systemctl start dhcpd.service
# systemctl enable dhcpd.service
3.2. Edit dhcpd.conf as follows
# cat /etc/dhcp/dhcpd.conf
ddns-update-style interim;
ignore client-updates;
subnet 172.16.10.0 netmask 255.255.255.0 {
# --- default gateway
option routers 172.16.10.254;
option subnet-mask 255.255.255.0;
option nis-domain "marvin.cn";
option domain-name "marvin.cn";
option domain-name-servers 172.16.10.138;
option time-offset -18000; # Eastern Standard Time
option ip-forwarding off;
range dynamic-bootp 172.16.10.130 172.16.10.136;
default-lease-time 21600;
max-lease-time 43200;
}
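Before restarting dhcpd it is worth sanity-checking that both ends of the dynamic-bootp range actually fall inside the declared subnet. A sketch for a /24 (pure string comparison on the first three octets; `in_subnet` is a hypothetical helper, not a dhcpd tool):

```shell
# Check that an address shares the /24 network prefix of the subnet
# declaration (hypothetical helper; /24 networks only).
in_subnet() {
  if [ "${1%.*}" = "${2%.*}" ]; then
    echo "$1: in subnet"
  else
    echo "$1: OUTSIDE subnet"
  fi
}
in_subnet 172.16.10.130 172.16.10.0
in_subnet 172.16.10.136 172.16.10.0
```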
3.3. Restart DHCP
[root@server ~]# systemctl restart dhcpd.service
[root@server ~]# systemctl enable dhcpd.service
[root@server ~]# systemctl status dhcpd.service
dhcpd.service - DHCPv4 Server Daemon
Loaded: loaded (/usr/lib/systemd/system/dhcpd.service; enabled)
Active: active (running) since Sat 2018-05-05 12:43:08 CST; 23s ago
Docs: man:dhcpd(8)
man:dhcpd.conf(5)
Main PID: 3736 (dhcpd)
Status: "Dispatching packets..."
CGroup: /system.slice/dhcpd.service
└─3736 /usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid
May 05 12:43:08 server dhcpd[3736]: No subnet declaration for eth1 (no IPv4 addresses).
May 05 12:43:08 server dhcpd[3736]: ** Ignoring requests on eth1. If this is not what
May 05 12:43:08 server dhcpd[3736]: you want, please write a subnet declaration
May 05 12:43:08 server dhcpd[3736]: in your dhcpd.conf file for the network segment
May 05 12:43:08 server dhcpd[3736]: to which interface eth1 is attached. **
May 05 12:43:08 server dhcpd[3736]:
May 05 12:43:08 server dhcpd[3736]: Listening on LPF/eth0/52:54:00:5f:20:18/172.16.10.0/24
May 05 12:43:08 server dhcpd[3736]: Sending on LPF/eth0/52:54:00:5f:20:18/172.16.10.0/24
May 05 12:43:08 server dhcpd[3736]: Sending on Socket/fallback/fallback-net
May 05 12:43:08 server systemd[1]: Started DHCPv4 Server Daemon.
At this point the DHCP service is up.
Next, on to the database itself.
4. Changing the cluster configuration (GNS-resolved SCAN)
Because srvctl is used, everything below can be run from a single RAC node.
4.1. Stop all related resources
oracle@rac12chub1:/home/oracle> srvctl stop database -d marvin -- Stop all the databases
oracle@rac12chub1:/home/oracle> srvctl stop listener -n rac12chub1 -- stop the node 1 listener
oracle@rac12chub1:/home/oracle> srvctl stop listener -n rac12chub2 -- stop the node 2 listener
oracle@rac12chub1:/home/oracle> srvctl stop scan_listener -- stop the SCAN listener
oracle@rac12chub1:/home/oracle> srvctl stop scan -- stop the SCAN
oracle@rac12chub1:/home/oracle> srvctl stop vip -n rac12chub1 -- Stop the VIP
oracle@rac12chub1:/home/oracle> srvctl stop vip -n rac12chub2
oracle@rac12chub1:/home/oracle> srvctl stop nodeapps -n rac12chub1 -f
PRCR-1014 : Failed to stop resource ora.net1.network
PRCR-1065 : Failed to stop resource ora.net1.network
CRS-2670: Unable to start/relocate 'ora.net1.network' because 'ora.qosmserver' has a stop-time 'hard' dependency on it
CRS-0245: User doesn't have enough privilege to perform the operation
Switch to the grid user and stop it:
grid@rac12chub1:/home/grid>srvctl stop nodeapps -n rac12chub1 -f
PRCC-1017 : ons was already stopped on rac12chub1
PRCR-1005 : Resource ora.ons is already stopped
grid@rac12chub1:/home/grid> srvctl stop nodeapps -n rac12chub2 -f
grid@rac12chub1:/home/grid>
4.2. Remove all resources that depend on the hosts-file SCAN IP
Check the current SCAN configuration:
[root@rac12chub1 bin]# ./srvctl config scan
SCAN name: racnode-scanip, Network: 1
Subnet IPv4: 172.16.10.0/255.255.255.0/eth0, static (hosts-file resolution; the SCAN is static)
Subnet IPv6:
SCAN 1 IPv4 VIP: 172.16.10.150
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
[root@rac12chub1 bin]# su - grid
Last login: Sat May 5 14:01:19 CST 2018
****************************************************************
***You login as oracle,Please ask somebody to double check!******
****************************************************************
Remove the listener:
grid@rac12chub1:/home/grid>srvctl remove listener -a
Remove the SCAN listener:
grid@rac12chub1:/home/grid>srvctl remove scan_listener
Remove scan listener? (y/[n]) y
Remove the SCAN:
grid@rac12chub1:/home/grid>srvctl remove scan
Remove the scan? (y/[n]) y
PRCS-1024 : Failed to remove Single Client Access Name Virtual Internet Protocol(VIP) resources racnode-scanip
PRCN-2018 : Current user grid is not a privileged user
grid@rac12chub1:/home/grid>exit
logout
[root@rac12chub1 bin]# ./srvctl remove scan
Remove the scan? (y/[n]) y
Remove the nodeapps (the two command forms below are equivalent):
[root@rac12chub1 bin]# ./srvctl remove nodeapps -n rac12chub1 -f    # or: ./srvctl remove vip -i rac12chub1-vip -f
PRKO-2431 : Warning: the -node option is obsolescent; 'srvctl remove vip' is preferred for removing per-node VIP resource
[root@rac12chub1 bin]# ./srvctl remove nodeapps -n rac12chub2 -f    # or: ./srvctl remove vip -i rac12chub2-vip -f
PRKO-2431 : Warning: the -node option is obsolescent; 'srvctl remove vip' is preferred for removing per-node VIP resource
Verify that the LISTENER, VIP, SCAN listener and SCAN resources have been removed from the OCR:
[root@rac12chub1 bin]# su - grid
grid@rac12chub1:/home/grid>crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM.lsnr ora....er.type ONLINE ONLINE rac12chub1
ora.DATA.dg ora....up.type ONLINE ONLINE rac12chub1
ora....AF.lsnr ora....er.type OFFLINE OFFLINE
ora.MGMT.dg ora....up.type ONLINE ONLINE rac12chub1
ora.MGMTLSNR ora....nr.type OFFLINE OFFLINE
ora.OCR.dg ora....up.type ONLINE ONLINE rac12chub1
ora.asm ora.asm.type ONLINE ONLINE rac12chub1
ora.cvu ora.cvu.type OFFLINE OFFLINE
ora.marvin.db ora....se.type OFFLINE OFFLINE
ora.qosmserver ora....er.type OFFLINE OFFLINE
Removed.
4.3. Add the resources that depend on the GNS service
Check first:
[root@rac12chub1 bin]# ./srvctl config scan
PRCS-1102 : Could not find any Single Client Access Name (SCAN) Virtual Internet Protocol (VIP) resources using filter TYPE=ora.scan_vip.type on network 1
[root@rac12chub1 bin]# srvctl config nodeapps
bash: srvctl: command not found...
[root@rac12chub1 bin]# ./srvctl config nodeapps
PRKO-2439 : VIP does not exist.
PRKO-2331 : ONS daemon does not exist.
Add the nodeapps, i.e. the network/VIP resource. Command format: srvctl add nodeapps -subnet <subnet>/<netmask>/<interface> (matching the DHCP configuration).
Note: if you used the form srvctl add nodeapps -n lc1n1 -A lc1n1-vip/255.255.255.0/eth0 instead,
the VIPs would be static rather than obtained automatically from DHCP.
[root@rac12chub1 bin]# ./srvctl add nodeapps -n rac12chub1 -subnet 172.16.10.0/255.255.255.0/eth0
PRKO-2389 : Invalid command line options. Neither -address nor -node can be specified with -subnet.
[root@rac12chub1 bin]# ./srvctl add nodeapps -subnet 172.16.10.0/255.255.255.0/eth0
[root@rac12chub1 bin]# ./srvctl config nodeapps
Network 1 exists
Subnet IPv4: 172.16.10.0/255.255.255.0/eth0, dhcp (GNS resolution; the SCAN is now dynamic via dhcp — compare with the static output in step 4.2)
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:
4.4. Prepare GNS
grid@rac12chub1:/home/grid>srvctl status gns
PRKF-1117 : GNS server is not configured in this cluster.
Add the GNS resource (as root); the -domain parameter is the domain name marvin.cn:
[root@rac12chub1 bin]# pwd
/g01/app/grid/12.2.0/bin
[root@rac12chub1 bin]# ./srvctl add gns -vip 172.16.10.139 -domain marvin.cn
Start GNS (as root).
The GNS service runs on only one node at a time:
[root@rac12chub1 bin]# ./srvctl start gns
[root@rac12chub1 bin]# ./srvctl status gns
GNS is running on node rac12chub1.
GNS is enabled on node rac12chub1.
[root@rac12chub1 bin]# ./srvctl status gns -n rac12chub2
GNS is not running on node rac12chub2.
GNS is enabled on node rac12chub2.
Check the GNS resources:
grid@rac12chub1:/home/grid>crs_stat -t
ora.gns ora.gns.type ONLINE ONLINE rac12chub1
ora.gns.vip ora....ip.type ONLINE ONLINE rac12chub1
4.5. Add the SCAN, specifying a SCAN name
Note that the name must include the DNS subdomain, here marvin.cn; the rac12c-scanip prefix is up to you.
[root@rac12chub1 bin]# ./srvctl add scan -scanname rac12c-scanip.marvin.cn
Check the SCAN configuration; since the addresses come from DHCP, IPs will be assigned automatically when the SCAN is started:
[root@rac12chub1 bin]# ./srvctl config scan
SCAN name: rac12c-scanip.rac12ch-cluster.marvin.cn, Network: 1
Subnet IPv4: 172.16.10.0/255.255.255.0/eth0, dhcp
Subnet IPv6:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
4.6. Add the SCAN listener
[root@rac12chub1 bin]# ./srvctl add scan_listener -p 1521
[root@rac12chub1 bin]# ./srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
4.7. Start the nodeapps
Make sure the GNS service is running first:
grid@rac12chub1:/home/grid>crs_stat -t
ora.gns ora.gns.type ONLINE ONLINE rac12chub1
ora.gns.vip ora....ip.type ONLINE ONLINE rac12chub1
grid@rac12chub1:/home/grid>srvctl start nodeapps -v
Successfully started node applications.
grid@rac12chub1:/home/grid>srvctl status nodeapps
VIP 172.16.10.132 is enabled
VIP 172.16.10.132 is running on node: rac12chub1
VIP 172.16.10.114 is enabled
VIP 172.16.10.114 is running on node: rac12chub2
Network is enabled
Network is running on node: rac12chub1
Network is running on node: rac12chub2
ONS is enabled
ONS daemon is running on node: rac12chub1
ONS daemon is running on node: rac12chub2
The output shows that the rac12chub1 VIP was taken from the DHCP range,
but oddly the rac12chub2 VIP (172.16.10.114) falls outside the configured DHCP range 172.16.10.130-172.16.10.136,
even though nslookup against the GNS VIP resolves it fine. That is suspicious given the configured range of 130-136.
My guess: some addresses in that range were already taken by other machines, leaving too few to hand out. In production, confirm IP usage with the network team before choosing the range, and make the range generous if you plan to add nodes.
(Ideally, also restrict at the network/router level which clients may obtain these leases, so other machines cannot grab them.)
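The arithmetic backs up that suspicion: the configured range is tiny compared with what the cluster needs, so even a couple of stray DHCP clients can exhaust it. A sketch:

```shell
# Size of the original dynamic-bootp range vs. the cluster's needs:
# 2 node VIPs + 3 SCAN VIPs leaves only 2 spare leases out of 7.
start=130; end=136
echo "range size: $(( end - start + 1 ))"
echo "cluster needs: $(( 2 + 3 ))"
```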
[root@rac12chub1 bin]# nslookup rac12chub1-vip.marvin.cn 172.16.10.139
Server: 172.16.10.139
Address: 172.16.10.139#53
Name: rac12chub1-vip.marvin.cn
Address: 172.16.10.132
[root@rac12chub1 bin]# nslookup rac12chub2-vip.marvin.cn 172.16.10.139
Server: 172.16.10.139
Address: 172.16.10.139#53
Name: rac12chub2-vip.marvin.cn
Address: 172.16.10.114
Workaround:
1. Widen the range in dhcpd.conf on the DHCP server:
[root@server ~]# vi /etc/dhcp/dhcpd.conf
# change the range parameter; everything else stays unchanged
range dynamic-bootp 172.16.10.100 172.16.10.136;
2. Restart the DHCP service:
[root@server ~]# systemctl restart dhcpd.service
[root@server ~]# systemctl enable dhcpd.service
4.8. Check the VIP status
[root@rac12chub1 bin]# ./srvctl config vip -n rac12chub2
VIP exists: network number 1, hosting node rac12chub2
VIP Name: rac12chub2
VIP IPv4 Address: -/rac12chub2-vip/172.16.10.114
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
[root@rac12chub1 bin]# ./srvctl config vip -n rac12chub1
VIP exists: network number 1, hosting node rac12chub1
VIP Name: rac12chub1
VIP IPv4 Address: -/rac12chub1-vip/172.16.10.132
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
4.9. Start the SCAN and the SCAN listener
[root@rac12chub1 bin]# ./srvctl start scan
[root@rac12chub1 bin]# ./srvctl config scan
SCAN name: rac12c-scanip.rac12ch-cluster.marvin.cn, Network: 1
Subnet IPv4: 172.16.10.0/255.255.255.0/eth0, dhcp
Subnet IPv6:
SCAN 1 IPv4 VIP: -/scan1-vip/172.16.10.130
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: -/scan2-vip/172.16.10.117
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 3 IPv4 VIP: -/scan3-vip/172.16.10.136
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
[root@rac12chub1 bin]# ./srvctl start scan_listener
4.10. Add the LISTENER resource and start the node listeners
[root@rac12chub1 bin]# ./srvctl add listener -l LISTENER -p "TCP:1521"
[root@rac12chub1 bin]# ./srvctl start listener
PRCR-1079 : Failed to start resource ora.LISTENER.lsnr
CRS-5016: Process "/g01/app/grid/12.2.0/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/g01/grid/diag/crs/rac12chub2/crs/trace/crsd_oraagent_root.trc"
CRS-5016: Process "/g01/app/grid/12.2.0/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/g01/grid/diag/crs/rac12chub2/crs/trace/crsd_oraagent_root.trc"
CRS-2674: Start of 'ora.LISTENER.lsnr' on 'rac12chub2' failed
CRS-5016: Process "/g01/app/grid/12.2.0/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/g01/grid/diag/crs/rac12chub1/crs/trace/crsd_oraagent_root.trc"
CRS-5016: Process "/g01/app/grid/12.2.0/bin/lsnrctl" spawned by agent "ORAAGENT" for action "start" failed: details at "(:CLSN00010:)" in "/g01/grid/diag/crs/rac12chub1/crs/trace/crsd_oraagent_root.trc"
CRS-2674: Start of 'ora.LISTENER.lsnr' on 'rac12chub1' failed
Starting the listeners failed; the error points at a trace file:
tail -200f /g01/grid/diag/crs/rac12chub1/crs/trace/crsd_oraagent_root.trc
tail: cannot open ‘/g01/grid/diag/crs/rac12chub1/crs/trace/crsd_oraagent_root.trc’ for reading: No such file or directory
tail: no files remaining
The file does exist on node rac12chub1 after all, and the listener configuration shows its owner is root; grid presumably lacks permission to manage it, because I added the listener as the root user:
[root@rac12chub1 bin]# ll /g01/grid/diag/crs/rac12chub1/crs/trace/crsd_oraagent_root.trc
-rw-rw---- 1 root oinstall 113768 May 5 17:10 /g01/grid/diag/crs/rac12chub1/crs/trace/crsd_oraagent_root.trc
[root@rac12chub1 bin]# ./srvctl config listener
Name: LISTENER
Type: Database Listener
Network: 1, Owner: root
Home: <CRS home>
End points: TCP:1521
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
Fix:
1. Remove the listener and the SCAN listener again:
[root@rac12chub1 bin]# ./srvctl remove listener -a
[root@rac12chub1 bin]# ./srvctl remove listener
PRCR-1001 : Resource ora.LISTENER.lsnr does not exist
[root@rac12chub1 bin]# ./srvctl stop scan_listener
[root@rac12chub1 bin]# ./srvctl remove scan_listener
Remove scan listener? (y/[n]) y
2. Re-add the SCAN listener and the listener, then start them. Note that anything touching the listener must be done as the grid user; otherwise the resource owner is wrong and startup fails:
[root@rac12chub1 bin]# su - grid
grid@rac12chub1:/home/grid>srvctl add scan_listener -p 1521
grid@rac12chub1:/home/grid>srvctl start scan
PRCC-1014 : scan1 was already running
PRCR-1004 : Resource ora.scan1.vip is already running
PRCR-1079 : Failed to start resource ora.scan1.vip
CRS-5702: Resource 'ora.scan1.vip' is already running on 'rac12chub2'
PRCC-1014 : scan2 was already running
PRCR-1004 : Resource ora.scan2.vip is already running
PRCR-1079 : Failed to start resource ora.scan2.vip
CRS-5702: Resource 'ora.scan2.vip' is already running on 'rac12chub1'
PRCC-1014 : scan3 was already running
PRCR-1004 : Resource ora.scan3.vip is already running
PRCR-1079 : Failed to start resource ora.scan3.vip
CRS-5702: Resource 'ora.scan3.vip' is already running on 'rac12chub1'
grid@rac12chub1:/home/grid>srvctl stop scan
grid@rac12chub1:/home/grid>srvctl start scan
grid@rac12chub1:/home/grid>srvctl start scan_listener
grid@rac12chub1:/home/grid> srvctl add listener -l LISTENER -p "TCP:1521"
grid@rac12chub1:/home/grid>srvctl start listener
Startup is now clean.
4.11. Start the database
All resources have been rebuilt and started; finally, start the database and check the listener status and services:
grid@rac12chub1:/home/grid>srvctl start database -d marvin
grid@rac12chub1:/home/grid>srvctl status database -d marvin
Instance marvin1 is running on node rac12chub1
Instance marvin2 is running on node rac12chub2
grid@rac12chub1:/home/grid>crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac12chub1 STABLE
ONLINE ONLINE rac12chub2 STABLE
ora.DATA.dg
ONLINE ONLINE rac12chub1 STABLE
ONLINE ONLINE rac12chub2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac12chub1 STABLE
ONLINE ONLINE rac12chub2 STABLE
ora.MGMT.dg
ONLINE ONLINE rac12chub1 STABLE
ONLINE ONLINE rac12chub2 STABLE
ora.OCR.dg
ONLINE ONLINE rac12chub1 STABLE
ONLINE ONLINE rac12chub2 STABLE
ora.net1.network
ONLINE ONLINE rac12chub1 STABLE
ONLINE ONLINE rac12chub2 STABLE
ora.ons
ONLINE ONLINE rac12chub1 STABLE
ONLINE ONLINE rac12chub2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac12chub2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac12chub1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac12chub1 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.asm
1 ONLINE ONLINE rac12chub1 Started,STABLE
2 ONLINE ONLINE rac12chub2 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 OFFLINE OFFLINE STABLE
ora.gns
1 ONLINE ONLINE rac12chub1 STABLE
ora.gns.vip
1 ONLINE ONLINE rac12chub1 STABLE
ora.marvin.db
1 ONLINE ONLINE rac12chub1 Open,HOME=/u01/oracl
e/12.2.0,STABLE
2 ONLINE ONLINE rac12chub2 Open,HOME=/u01/oracl
e/12.2.0,STABLE
ora.qosmserver
1 OFFLINE OFFLINE STABLE
ora.rac12chub1.vip
1 ONLINE ONLINE rac12chub1 STABLE
ora.rac12chub2.vip
1 ONLINE ONLINE rac12chub2 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac12chub2 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac12chub1 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac12chub1 STABLE
--------------------------------------------------------------------------------
[root@rac12chub1 bin]# su - oracle
Last login: Sun May 6 15:02:02 CST 2018
****************************************************************
***You login as oracle,Please ask somebody to double check!******
****************************************************************
oracle@rac12chub1:/home/oracle>sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Sun May 6 15:39:00 2018
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL>
Everything is working.
With that, the RAC 12cR2 SCAN has been successfully converted from traditional hosts-file resolution to GNS resolution, and the cluster configuration has changed accordingly.
Copyright notice: this is an original post; please do not repost without the author's permission.



