2-NODE RAC INSTALLATION
HARDWARE REQUIREMENTS:
2 PCS WITH 2 GB RAM EACH AND
3.0 GHZ PROCESSORS
2 LAN CARDS,
2 CABLES TO CONNECT THE PCS TO EACH OTHER,
2 SWITCHES
SOFTWARE REQUIREMENTS
RHEL 5.2 OPERATING SYSTEM
VMWARE 7.0
OPENFILER 2.0
ORACLE CLUSTERWARE 11G
ORACLE DATABASE 11G
ASMLIB RPMS
OCFS2 RPMS
CLUSTERWARE INSTALLATION
1. CONFIGURATION OF SAN
2. CONFIGURATION OF NODES FOR CLUSTERWARE INSTALLATION
3. CONFIGURATION OF OCFS2 ON NODES
4. CONFIGURATION OF ASMLIB ON NODES
5. INSTALL CLUSTERWARE
6. DATABASE SOFTWARE INSTALLATION
7. LISTENER CONFIGURATION
8. ASM CONFIGURATION
9. DATABASE CONFIGURATION
1. CONFIGURATION OF SAN USING OPENFILER:
A) INSTALL THE OPENFILER SOFTWARE AND CONFIGURE THE REQUIRED PARTITIONS. LEAVE ONE SCSI HARD DISK FREE FOR CREATION OF THE SAN.
After the installation completes, log in as the root user.
Change the password for the openfiler user:
#passwd openfiler
On this machine the password was set to redhat. The console banner shows that you can access the Openfiler web interface at
https://192.168.0.29:446
But a few things need to be done before that.
Configuring the web interface:
To access Openfiler through the web interface you need a web server configured. Apache comes preconfigured with Openfiler, but you have to start the service.
[root@openfiler ~]# /etc/init.d/httpd start
Starting httpd: [ OK ]
[root@openfiler ~]# chkconfig httpd on
You also need to start the openfiler service.
[root@openfiler ~]# /etc/init.d/openfiler start
Starting openfiler:
[root@openfiler ~]# chkconfig openfiler on
Then go to https://10.10.10.1:446/ in a web browser; the Openfiler administration login screen will appear.
Go to the Services tab and enable the iSCSI target service.
Go to the System tab and add both nodes' IP information under Network Access Configuration.
Go to the Volumes tab, create a physical volume on the free disk, and create a volume group using that physical volume.
Then create volumes in the volume group created above.
Then add one new iSCSI target.
Then go to the LUN Mapping tab under the iSCSI Targets tab and map all of the volumes created above to LUNs.
Then go to the Network ACL tab and change the access mode to Allow for both nodes.
Then edit /etc/initiators.deny on the Openfiler system and comment out the line starting with
iqn.2006-01.com.openfiler:storage.lap
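If you prefer to do this from the shell, here is a one-line sketch (assuming the file path and IQN shown above; the -i.bak flag keeps a backup of the original file):
# sed -i.bak 's/^iqn.2006-01.com.openfiler:storage.lap/#&/' /etc/initiators.deny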
2. CONFIGURATION OF NODES FOR CLUSTERWARE INSTALLATION
1. INSTALL THE RHEL 5.2 OPERATING SYSTEM ON BOTH NODES WITH THE SAME PARTITIONS.
GIVE THE NODES THE NAMES RACNODE1 AND RACNODE2.
2. ASSIGN IP ADDRESSES FOR EACH NODE (public and private go on the two NICs; the virtual IPs are only reserved here and are brought up later by Clusterware)
PUBLIC: 10.15.20.1, 10.15.20.2
PRIVATE: 10.20.25.1, 10.20.25.2
VIRTUAL: 10.25.30.1, 10.25.30.2
3. INSTALL THE REQUIRED RPMS ON BOTH NODES AS GIVEN IN THE CLUSTERWARE DOCUMENTATION
binutils-2.17.50.0.6-2.el5
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
glibc-2.5-12
glibc-common-2.5-12
glibc-devel-2.5-12
gcc-4.1.1-52
gcc-c++-4.1.1-52
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.1-52
libstdc++-4.1.1
libstdc++-devel-4.1.1
make-3.81-1.1
iscsi-initiator-utils
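To verify that all of these are present on each node, you can query them in one shot (a quick check using the package names listed above):
# rpm -q binutils elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel gcc gcc-c++ libaio libaio-devel libgcc libstdc++ libstdc++-devel make iscsi-initiator-utils
Any package reported as "not installed" must be installed from the RHEL media before continuing.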
4. SET THE KERNEL PARAMETERS ON BOTH NODES AS GIVEN IN THE CLUSTERWARE DOCUMENTATION
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
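Append these lines to /etc/sysctl.conf on both nodes, then load them without a reboot:
# sysctl -p
sysctl -p re-reads /etc/sysctl.conf and prints each setting as it applies it, so you can check the output against the list above.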
5. CREATE THE OINSTALL AND DBA GROUPS
groupadd oinstall
groupadd dba
6. CREATE THE ORACLE USER WITH THE SAME USER ID AND GROUP IDS ON BOTH NODES
useradd -g oinstall -G dba oracle
passwd oracle    (password set to oracle)
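Verify that the IDs really match before going further (user equivalence and the installer both depend on it):
# id oracle
Run this on racnode1 and racnode2 and compare the uid and gid values.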
7. Create the directory structures
mkdir -p /opt/app/oracle/product/11.1.0/dbhome
mkdir -p /opt/app/oracle/product/11.1.0/crs_home
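The oracle user needs ownership of this tree for the installations that follow (the base path here is the one created above; adjust it if yours differs):
# chown -R oracle:oinstall /opt/app
# chmod -R 775 /opt/app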
8. Include the public, private, and virtual IPs in /etc/hosts on both nodes
#eth0 - PUBLIC
10.15.20.1 racnode1
10.15.20.2 racnode2
#eth1 - PRIVATE
10.20.25.1 racnode1-priv
10.20.25.2 racnode2-priv
#VIPs
10.25.30.1 racnode1-vip
10.25.30.2 racnode2-vip
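A quick sanity check of the hosts entries (a sketch; the VIPs will resolve but will not answer pings until Clusterware brings them up):
$ for h in racnode1 racnode2 racnode1-priv racnode2-priv; do ping -c 1 $h; done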
9. Set shell limits for the oracle user in /etc/profile
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
10. Set limits in /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
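For these limits to take effect at login, the pam_limits module must be enabled; per the 11g installation guide, add this line to /etc/pam.d/login on both nodes:
session required pam_limits.so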
11. Set the hangcheck timer in /etc/rc.d/rc.local so it loads at every boot
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
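To load the module immediately and confirm it is present (module name as shipped with RHEL 5):
# modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
# lsmod | grep hangcheck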
12. Set the same date and time on both nodes
date -s "mm/dd/yyyy hh:mm:ss"    (for example: date -s "06/15/2009 14:30:00")
13. Establish user equivalence on both nodes
We create both RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm) key pairs for the oracle user.
su - oracle
$ echo ~
/home/oracle
$ mkdir ~/.ssh
$ chmod 755 ~/.ssh
$ cd .ssh
$ /usr/bin/ssh-keygen -t rsa
$ /usr/bin/ssh-keygen -t dsa
$ ls
id_dsa id_dsa.pub id_rsa id_rsa.pub
---- Combine the two public keys into a single file on each node
--- On node1:
$ cat id_rsa.pub >> racnode1
$ cat id_dsa.pub >> racnode1
--- On node2:
$ cat id_rsa.pub >> racnode2
$ cat id_dsa.pub >> racnode2
--- Now copy each file to the other node, so that both files exist on both nodes
$ scp racnode1 10.15.20.2:/home/oracle/.ssh/    (run on node1)
$ scp racnode2 10.15.20.1:/home/oracle/.ssh/    (run on node2)
$ ls
racnode1 racnode2
$ cat racnode1 racnode2 > authorized_keys
--- Copy this authorized_keys file to the other node as well
$ chmod 644 authorized_keys
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
--- You will get a message that the identities have been added
--- After that, test the equivalence to all addresses from both nodes:
$ ssh 10.15.20.1 date
$ ssh racnode1 date
$ ssh racnode1-priv date
$ ssh racnode1-vip date    (the VIP answers only after Clusterware is installed)
$ ssh 10.20.25.1 date
14. Stop unneeded services and start required services
--- Services to stop:
canna, cups, cups-config-daemon, hpoj, sendmail, smartd
--- Services to start:
iscsi, nfs, netplugd, nscd, ntalk, ntpd, rlogin, rsh, telnet, vncserver, vsftpd
Save the changes (for example in system-config-services) and restart both nodes.
15. Set the Openfiler discovery address, username, and password in /etc/iscsid/iscsi.conf
DiscoveryAddress=10.10.10.1
Username=openfiler
Password=redhat
Authentication method= chap
Login_name=openfiler
Password=redhat
16. Log in to the Openfiler target
Restart the iSCSI service (service iscsi restart); you will get a success message. Then discover and log in to the target:
#iscsiadm -m discovery -t sendtargets -p 10.10.10.1
iqn.2006-01.com.openfiler:storage.lap
#iscsiadm -m node -T iqn.2006-01.com.openfiler:storage.lap -p 10.10.10.1 -l
17. Check for the presence of the SAN devices on both nodes
# fdisk -l
---- Create partitions for RAC (a minimal fdisk sketch follows this list):
3 partitions for voting disks
2 partitions for OCR disks
1 partition for ASM data
1 partition for ASM FRA
1 partition for OCFS2 data
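A minimal partitioning sketch for one device, assuming /dev/sdb is the first SAN device reported by fdisk -l (repeat for each device in the plan above):
# fdisk /dev/sdb
#   n -> new partition, p -> primary, 1 -> partition number, accept the default cylinder values
#   w -> write the partition table and exit
# partprobe    (re-read the partition tables; run fdisk -l on the second node to confirm it sees the same partitions)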
3. CONFIGURATION OF THE OCFS2 FILE SYSTEM
1. Install the required RPMs. They must match your kernel version; otherwise you will get a stack error.
ocfs2-2.6.18-53.el5-1.2.7-1.el5.i686.rpm
ocfs2-2.6.18-53.el5PAE-1.2.7-1.el5.i686.rpm
ocfs2-2.6.18-53.el5xen-1.2.7-1.el5.i686.rpm
ocfs2-tools-1.2.7-1.el5.i386.rpm
ocfs2console-1.2.7-1.el5.i386.rpm
2) Run ocfs2console in a terminal; the console window will appear.
You can close the initial dialog, as you can enable the o2cb service later.
Click Cluster > Configure Nodes. Add the node names for each node one by one.
Make sure to add exactly the same node name as returned by the `hostname` command.
Propagate the configuration to all other nodes (Cluster > Propagate Configuration).
[root@node1-pub rpms]# /etc/init.d/o2cb enable
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: Failed
Cluster ocfs2 created
Node node1-nas added
Node node2-nas added
o2cb_ctl: Configuration error discovered while populating cluster ocfs2. None of its nodes were considered local. A node is considered local when its node name in the configuration matches this machine's host name.
Stopping O2CB cluster ocfs2: OK
[root@node1-pub rpms]#
If you get the above error, do the following:
Stop the o2cb service, open /etc/ocfs2/cluster.conf, and update the hostname value to the one returned by the `hostname` command.
Do not update the IP. Start the service and load it again, and the error should go away.
[oracle@node2-pub ~]$ cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.0.11
        number = 0
        name = node1-pub.hingu.net
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.22
        number = 1
        name = node2-pub.hingu.net
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
--================
[root@node2-pub rpms]# /etc/init.d/o2cb load
Loading module "configfs": OK
Creating directory '/config': OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
[root@node2-pub rpms]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Offline
[root@node2-pub rpms]#
Configure o2cb to start up at boot time:
[root@node2-pub rpms]# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [7]: 61
Specify network idle timeout in ms (>=5000) [10000]:
Specify network keepalive delay in ms (>=1000) [5000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: OK
[root@node2-pub rpms]#
[root@node2-pub rpms]# chkconfig --add ocfs2
[root@node2-pub rpms]# chkconfig --add o2cb
[root@node2-pub rpms]# mkdir -p /u02/ocfs2 -- ocfs2 mountpoint
Steps to enable/disable OCFS2:
- /etc/init.d/o2cb enable
- /etc/init.d/o2cb online ocfs2
- /etc/init.d/o2cb offline ocfs2
- /etc/init.d/o2cb unload
- /etc/init.d/o2cb configure
Create the OCFS2 file systems on the master node only. The labels must match the ones used in the mount commands and fstab entries below; the device names are from this setup (the third device is an assumption), so substitute the ones from your own fdisk -l output.
# mkfs.ocfs2 -b 4k -C 32k -N 4 -L ocfsvote /dev/sde
# mkfs.ocfs2 -b 4k -C 32k -N 4 -L ocfsocr /dev/sdf
# mkfs.ocfs2 -b 4k -C 32k -N 4 -L ocfsdata /dev/sdg
Create the mount points and mount the OCFS2 file systems on both nodes:
# mkdir -p /u01 /u02 /u03
# mount -t ocfs2 -o datavolume,nointr -L ocfsvote /u01
# mount -t ocfs2 -o datavolume,nointr -L ocfsocr /u02
# mount -t ocfs2 -o datavolume,nointr -L ocfsdata /u03
Include these entries in /etc/fstab on both nodes:
LABEL=ocfsvote /u01 ocfs2 _netdev,datavolume,nointr 0 0
LABEL=ocfsocr /u02 ocfs2 _netdev,datavolume,nointr 0 0
LABEL=ocfsdata /u03 ocfs2 _netdev,datavolume,nointr 0 0
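With the fstab entries in place you can verify that everything mounts cleanly (mount -a mounts anything listed in fstab that is not already mounted):
# mount -a
# df -h /u01 /u02 /u03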
Check the permissions of /u01, /u02, /u03:
# ls -ld /u01
# ls -ld /u02
# ls -ld /u03
# chown oracle:oinstall /u01
# chown oracle:oinstall /u02
# chown oracle:oinstall /u03
# mounted.ocfs2 -d    (lists the OCFS2 devices)
# mounted.ocfs2 -f    (lists the nodes that have each device mounted)
To check the version of OCFS2:
# cat /proc/ocfs2/version
Creating dummy files for the voting disks
su - oracle
$ mkdir /u01/vote1
$ touch /u01/vote1/vote1.dbf
$ mkdir /u01/vote2
$ touch /u01/vote2/vote2.dbf
$ mkdir /u01/vote3
$ touch /u01/vote3/vote3.dbf
Creating dummy files for the OCR
$ mkdir /u02/ocr1
$ touch /u02/ocr1/ocr1.dbf
$ mkdir /u02/ocr2
$ touch /u02/ocr2/ocr2.dbf
4. CREATING AND CONFIGURING ASM PARTITIONS USING ASMLIB
Install the following RPMs, which must match your kernel version (the oracleasmlib package for your release is typically needed as well):
oracleasm-2.6.18-53.el5-2.0.4-1.el5.i686.rpm
oracleasm-2.6.18-53.el5PAE-2.0.4-1.el5.i686.rpm
oracleasm-2.6.18-53.el5xen-2.0.4-1.el5.i686.rpm
oracleasm-support-2.0.4-1.el5.i386.rpm
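After installing, confirm that the kernel module and support packages landed (a quick check):
# rpm -qa | grep oracleasm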
Configure ASM on both nodes:
[root@node1-pub ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
[root@node1-pub ~]#
Create the ASM disk devices that will be used in the ASM diskgroups (stamping the devices as ASM disks), from one node only:
[root@node1-pub ~]# /etc/init.d/oracleasm createdisk DSK1 /dev/sdb1
Marking disk "/dev/sdb1" as an ASM disk: [ OK ]
[root@node1-pub ~]# /etc/init.d/oracleasm createdisk DSK2 /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk: [ OK ]
[root@node1-pub ~]# /etc/init.d/oracleasm createdisk DSK3 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk: [ OK ]
[root@node1-pub ~]# /etc/init.d/oracleasm createdisk DSK4 /dev/sde1
Marking disk "/dev/sde1" as an ASM disk: [ OK ]
[root@node1-pub ~]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
[root@node1-pub ~]# /etc/init.d/oracleasm listdisks
DSK1
DSK2
DSK3
DSK4
[root@node1-pub ~]#
[root@node1-pub ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded: [ OK ]
Checking if /dev/oracleasm is mounted: [ OK ]
[root@node1-pub ~]#
On the other node, you only need to run the commands below for these disks to show up there.
[root@node2-pub ~]# /etc/init.d/oracleasm scandisks
[root@node2-pub ~]# /etc/init.d/oracleasm listdisks
DSK1
DSK2
DSK3
DSK4
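You can also verify an individual mapping with the querydisk option of the same init script:
[root@node2-pub ~]# /etc/init.d/oracleasm querydisk DSK1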
5. CLUSTERWARE INSTALLATION
In the installer screen for the OCR and voting disks, you must specify the full paths of the files, not just directories. Enter the names of the dummy files created earlier (/u02/ocr1/ocr1.dbf and /u02/ocr2/ocr2.dbf for the OCR; /u01/vote1/vote1.dbf, /u01/vote2/vote2.dbf, and /u01/vote3/vote3.dbf for the voting disks). If you specify only a directory name, the installer raises an error.
6. DATABASE SOFTWARE INSTALLATION
7. LISTENER CONFIGURATION
8. ASM CONFIGURATION
9. DATABASE CONFIGURATION