
Tuesday, November 6, 2012

Oracle RAC 11.2.0.3 on Oracle Solaris 11 11.11 using Oracle VM VirtualBox

In this post I cover how to install and configure Oracle 11gR2 RAC on Solaris 11 using Oracle VM VirtualBox.

Installing and configuring the system involves the following main steps:

  1. OS installation: Solaris 11
  2. System and kernel parameter configuration
  3. Oracle Grid Infrastructure installation
  4. Oracle Database software installation
  5. Oracle Database creation

Step 1: OS installation

The two servers in our system, Sol1 and Sol2, must meet the following requirements:
  • 4GB RAM
  • At least 30GB HDD
  • NIC - bridged public interface
  • NIC - bridged private interface
  • At least 2 shared disks
Let's create the first node (Sol1).
In VirtualBox, choose to create a new VM; the following screen appears. Enter the name Sol1, select the operating system type, and click Next.

Set the VM's memory size.

Create a new HDD for the VM.

Select the Dynamically expanding storage option for the HDD, so the VM consumes only as much space as it needs.

Set the HDD's name and size.

Verify the VM's details and click Finish to create the VM.

Run the following commands on the machine where Oracle VM VirtualBox is installed to add shared disks to the VM we created.
Let's create ten disks of 10GB each.
VBoxManage createhd --filename "c:\Users\user\VirtualBox VMs\asm1.vdi" --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename "c:\Users\user\VirtualBox VMs\asm2.vdi" --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename "c:\Users\user\VirtualBox VMs\asm3.vdi" --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename "c:\Users\user\VirtualBox VMs\asm4.vdi" --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename "c:\Users\user\VirtualBox VMs\asm5.vdi" --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename "c:\Users\user\VirtualBox VMs\asm6.vdi" --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename "c:\Users\user\VirtualBox VMs\asm7.vdi" --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename "c:\Users\user\VirtualBox VMs\asm8.vdi" --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename "c:\Users\user\VirtualBox VMs\asm9.vdi" --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename "c:\Users\user\VirtualBox VMs\asm10.vdi" --size 10240 --format VDI --variant Fixed
Attach the ten disks to ports 2 through 11 of the VM Sol1.
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium "c:\Users\user\VirtualBox VMs\asm1.vdi" --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium "c:\Users\user\VirtualBox VMs\asm2.vdi" --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 4 --device 0 --type hdd --medium "c:\Users\user\VirtualBox VMs\asm3.vdi" --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 5 --device 0 --type hdd --medium "c:\Users\user\VirtualBox VMs\asm4.vdi" --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 6 --device 0 --type hdd --medium "c:\Users\user\VirtualBox VMs\asm5.vdi" --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 7 --device 0 --type hdd --medium "c:\Users\user\VirtualBox VMs\asm6.vdi" --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 8 --device 0 --type hdd --medium "c:\Users\user\VirtualBox VMs\asm7.vdi" --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 9 --device 0 --type hdd --medium "c:\Users\user\VirtualBox VMs\asm8.vdi" --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 10 --device 0 --type hdd --medium "c:\Users\user\VirtualBox VMs\asm9.vdi" --mtype shareable
VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port 11 --device 0 --type hdd --medium "c:\Users\user\VirtualBox VMs\asm10.vdi" --mtype shareable
Change the type of the ten disks to shareable.
VBoxManage modifyhd "c:\Users\user\VirtualBox VMs\asm1.vdi" --type shareable
VBoxManage modifyhd "c:\Users\user\VirtualBox VMs\asm2.vdi" --type shareable
VBoxManage modifyhd "c:\Users\user\VirtualBox VMs\asm3.vdi" --type shareable
VBoxManage modifyhd "c:\Users\user\VirtualBox VMs\asm4.vdi" --type shareable
VBoxManage modifyhd "c:\Users\user\VirtualBox VMs\asm5.vdi" --type shareable
VBoxManage modifyhd "c:\Users\user\VirtualBox VMs\asm6.vdi" --type shareable
VBoxManage modifyhd "c:\Users\user\VirtualBox VMs\asm7.vdi" --type shareable
VBoxManage modifyhd "c:\Users\user\VirtualBox VMs\asm8.vdi" --type shareable
VBoxManage modifyhd "c:\Users\user\VirtualBox VMs\asm9.vdi" --type shareable
VBoxManage modifyhd "c:\Users\user\VirtualBox VMs\asm10.vdi" --type shareable
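The thirty nearly identical commands above differ only in the disk number and SATA port, so they collapse into a loop. A minimal sketch, assuming a Unix-like host shell and a BASE path of your choosing (on a Windows host, cmd's for /L loop achieves the same):
# Sketch: create, attach, and mark shareable all ten ASM disks in one loop.
# BASE stands in for the "c:\Users\user\VirtualBox VMs" directory used above.
BASE="$HOME/VirtualBox VMs"
i=1
while [ $i -le 10 ]; do
    VBoxManage createhd --filename "$BASE/asm$i.vdi" --size 10240 --format VDI --variant Fixed
    port=`expr $i + 1`
    VBoxManage storageattach Sol1 --storagectl "SATA Controller" --port $port \
        --device 0 --type hdd --medium "$BASE/asm$i.vdi" --mtype shareable
    VBoxManage modifyhd "$BASE/asm$i.vdi" --type shareable
    i=`expr $i + 1`
done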
Attach the Solaris 11 ISO to the VM's SATA (CD/DVD) port. When you start the VM, the following screen appears.

Select the keyboard layout.

Select the language.

The Live CD desktop appears as follows; run Install Oracle Solaris.

Click Next.

Select the HDD.

Select the time zone.

Enter a username, a password, and the server's hostname.

Verify the data and click Install.

Wait for the installation to finish.


After the installation completes, unmount the Solaris 11 ISO.

Log in with the username and password you created.

Select GNOME from the session options.
Install the VirtualBox Guest Additions in the VM; this enables clipboard sharing with the VM.

Installing the Guest Additions prompts for a password. The first time you enter it, the password is marked expired and you are required to set a new one.

Then restart (reboot) the VM.

With a Shared Folder configured, we no longer need to copy the required files to every VM one by one; we can use them directly from one place.

VirtualBox can mount a folder of the machine it runs on into the VM. Check all of the Shared Folder options.

Configure the Shared Folder so it is mounted automatically when the VM boots.

root@sol1# mkdir /Software
root@sol1# mkdir /OracleVMServer
root@sol1# mount -F vboxfs software /Software
root@sol1# mount -F vboxfs OracleVMServer /OracleVMServer
root@sol1# cat /etc/vfstab
...
software - /Software vboxfs - yes -
OracleVMServer - /OracleVMServer vboxfs - yes -
...

Step 2: System and kernel parameter configuration


Set the swap size. Swap should be sized according to the following rule:

If RAM < 4GB, swap is 2 x RAM.
If 4GB < RAM < 16GB, swap is 1.5 x RAM.
If RAM > 16GB, swap is 16GB.

root@sol1# swap -d /dev/zvol/dsk/rpool/swap
root@sol1# zfs set volsize=7G rpool/swap
root@sol1# swap -a /dev/zvol/dsk/rpool/swap
root@sol1# swap -s
root@sol1# swap -l
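As a sanity check before resizing, the rule above can be computed from the installed RAM. A minimal sketch (integer arithmetic, so 1.5x rounds down):
#!/bin/sh
# Sketch: print the recommended swap size for this host, following the rule above.
# prtconf reports "Memory size: NNNN Megabytes", so convert to GB first.
ram=`prtconf | awk '/^Memory size/ { print int($3 / 1024) }'`
if [ "$ram" -lt 4 ]; then
    swap=`expr $ram \* 2`
elif [ "$ram" -le 16 ]; then
    swap=`expr $ram \* 3 / 2`
else
    swap=16
fi
echo "RAM ${ram}GB -> recommended swap ${swap}GB"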
First, format the shared disks.
root@sol1# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
   0. c1t0d0 <DEFAULT cyl 20883 alt 2 hd 255 sec 63>
   1. c1t1d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
   2. c1t2d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
   3. c1t3d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
   4. c1t4d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>

Specify disk (enter its number): 4
selecting c1t4d0
[disk formatted]
FORMAT MENU:
   disk       - select a disk
   type       - select (define) a disk type
   partition  - select (define) a partition table
   current    - describe the current disk
   format     - format and analyze the disk
   fdisk      - run the fdisk program
   repair     - repair a defective sector
   label      - write label to the disk
   analyze    - surface analysis
   defect     - defect list management
   backup     - search for backup labels
   verify     - read and display labels
   save       - save new disk/partition definitions
   inquiry    - show vendor, product and revision
   volname    - set 8-character volume name
   !<cmd>     - execute <cmd>, then return
   quit
format> fdisk
No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition
Type "y" to accept the default partition, otherwise type "n" to edit the partition table.
y
format> partition
PARTITION MENU:
   0      - change '0' partition
   1      - change '1' partition
   2      - change '2' partition
   3      - change '3' partition
   4      - change '4' partition
   5      - change '5' partition
   6      - change '6' partition
   7      - change '7' partition
   select - select a predefined table
   modify - modify a predefined partition table
   name   - name the current table
   print  - display the current table
   label  - write partition map and label to the disk
   !<cmd> - execute <cmd>, then return
   quit
partition> 0
Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 9.95gb
partition> print
Current partition table (unnamed):
Total disk cylinders available: 1302 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       1 - 1299        9.95GB    (1299/0/0) 20868435
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 1301        9.97GB    (1302/0/0) 20916630
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 - 0           7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

partition> label
Ready to label disk, continue? y
partition> quit
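Every shared disk needs the same layout, so rather than repeating the format dialog for each one, the label just written can be copied to the remaining disks. A sketch, assuming the c1tXd0 names from the format listing above (adjust the list to your own output):
# Sketch: replicate the VTOC of the freshly labeled disk (c1t4d0) onto the
# other shared disks.
for d in c1t1d0 c1t2d0 c1t3d0; do
    prtvtoc /dev/rdsk/c1t4d0s2 | fmthard -s - /dev/rdsk/${d}s2
done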

Add users and groups, and set permissions on the formatted shared disks.
Groups:
root@sol1# groupadd -g 1000 oinstall
root@sol1# groupadd -g 1020 asmadmin
root@sol1# groupadd -g 1021 asmdba
root@sol1# groupadd -g 1022 asmoper
root@sol1# groupadd -g 1031 dba
root@sol1# groupadd -g 1032 oper
User:
root@sol1# useradd -u 1100 -g oinstall -G asmoper,asmadmin,asmdba,dba -d /export/home/grid -m grid
root@sol1# useradd -u 1101 -g oinstall -G oper,dba,asmdba -d /export/home/oracle -m oracle
Set the passwords:
root@sol1# passwd grid
root@sol1# passwd oracle
Convert root from a role to a normal user so that other users can run "su" to root:
root@sol1# rolemod -K type=normal root
Set permissions on the formatted shared disks:
root@sol1# chown grid:asmadmin /dev/rdsk/c3t2d0s0
root@sol1# chmod 660 /dev/rdsk/c3t2d0s0
root@sol1# chown grid:asmadmin /dev/rdsk/c3t3d0s0
root@sol1# chmod 660 /dev/rdsk/c3t3d0s0
root@sol1# chown grid:asmadmin /dev/rdsk/c3t4d0s0
root@sol1# chmod 660 /dev/rdsk/c3t4d0s0
root@sol1# chown grid:asmadmin /dev/rdsk/c3t5d0s0
root@sol1# chmod 660 /dev/rdsk/c3t5d0s0
root@sol1# chown grid:asmadmin /dev/rdsk/c3t6d0s0
root@sol1# chmod 660 /dev/rdsk/c3t6d0s0
root@sol1# chown grid:asmadmin /dev/rdsk/c3t7d0s0
root@sol1# chmod 660 /dev/rdsk/c3t7d0s0
root@sol1# chown grid:asmadmin /dev/rdsk/c3t8d0s0
root@sol1# chmod 660 /dev/rdsk/c3t8d0s0
root@sol1# chown grid:asmadmin /dev/rdsk/c3t9d0s0
root@sol1# chmod 660 /dev/rdsk/c3t9d0s0
root@sol1# chown grid:asmadmin /dev/rdsk/c3t10d0s0
root@sol1# chmod 660 /dev/rdsk/c3t10d0s0
root@sol1# chown grid:asmadmin /dev/rdsk/c3t11d0s0
root@sol1# chmod 660 /dev/rdsk/c3t11d0s0
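Since the pairs above differ only in the disk number, the same can be done with a loop, which is less prone to copy-paste slips; a sketch using this setup's device names:
# Sketch: set owner and mode on all ten shared-disk slices in one pass.
for n in 2 3 4 5 6 7 8 9 10 11; do
    chown grid:asmadmin /dev/rdsk/c3t${n}d0s0
    chmod 660 /dev/rdsk/c3t${n}d0s0
done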
Create the directories for ORACLE_BASE, ORACLE_HOME, the Oracle GI, and the Oracle RDBMS:
root@sol1# mkdir -p /u01/app/grid
root@sol1# mkdir -p /u01/app/oracle
root@sol1# mkdir -p /u01/app/11.2.0/grid
root@sol1# mkdir -p /u01/app/oracle/product/11.2.0/db_1
root@sol1# mkdir -p /u01/app/oracle/Middleware
Set permissions on the created directories:
root@sol1# chown -R grid:oinstall /u01
root@sol1# chown -R oracle:oinstall /u01/app/oracle
root@sol1# chown oracle:oinstall /u01/app/oracle
root@sol1# chown grid:oinstall /u01/app/11.2.0/grid
root@sol1# chmod -R 775 /u01
root@sol1# chmod -R 775 /u01/app/11.2.0/grid
root@sol1# chmod -R 775 /u01/app/oracle
Create a project for each user:
root@sol1# projadd -U grid -K "project.max-shm-memory=(priv,6g,deny)" user.grid
root@sol1# projmod -sK "project.max-sem-nsems=(priv,512,deny)" user.grid
root@sol1# projmod -sK "project.max-sem-ids=(priv,128,deny)" user.grid
root@sol1# projmod -sK "project.max-shm-ids=(priv,128,deny)" user.grid
root@sol1# projmod -sK "project.max-shm-memory=(priv,6g,deny)" user.grid
root@sol1# projadd -U oracle -K "project.max-shm-memory=(priv,6g,deny)" user.oracle
root@sol1# projmod -sK "project.max-sem-nsems=(priv,512,deny)" user.oracle
root@sol1# projmod -sK "project.max-sem-ids=(priv,128,deny)" user.oracle
root@sol1# projmod -sK "project.max-shm-ids=(priv,128,deny)" user.oracle
root@sol1# projmod -sK "project.max-shm-memory=(priv,6g,deny)" user.oracle
root@sol1# /usr/sbin/projmod -sK "process.max-file-descriptor=(priv,65536,deny)" user.grid
root@sol1# /usr/sbin/projmod -sK "process.max-file-descriptor=(priv,65536,deny)" user.oracle
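To confirm the resource controls took effect, the projects can be inspected right away; for example:
root@sol1# projects -l user.grid
root@sol1# projects -l user.oracle
root@sol1# su - grid -c "id -p"     # shows the project the grid user lands in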
Set the TCP and UDP kernel parameters:
root@sol1# /usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
root@sol1# /usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
root@sol1# /usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
root@sol1# /usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500
These settings are lost on reboot, so write a script that runs at every boot:
root@sol1# cat /etc/init.d/ndd   
#!/bin/sh
ndd -set /dev/tcp tcp_smallest_anon_port 9000
ndd -set /dev/tcp tcp_largest_anon_port 65500
ndd -set /dev/udp udp_smallest_anon_port 9000
ndd -set /dev/udp udp_largest_anon_port 65500 
Register the script with init:
root@sol1# chmod 744 /etc/init.d/ndd
root@sol1# chown root:sys /etc/init.d/ndd
root@sol1# ln /etc/init.d/ndd /etc/rc2.d/S70ndd 
Configure SSH:
root@sol1# cat /etc/ssh/sshd_config  
...
LoginGraceTime 0
...
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour,aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc,blowfish-cbc
...
After changing the SSH configuration, restart the ssh service:
root@sol1# svcadm restart ssh
Edit the sudoers file:
root@sol1# cat /etc/sudoers
...
Defaults visiblepw
Enable core file generation:
root@sol1# mkdir /var/cores
root@sol1# coreadm -g /var/cores/%f.%n.%p.%t.core -e global -e global-setid -e log -d process -d proc-setid
Configure NTP. The clocks of the two cluster nodes must match:
root@sol1# cp /etc/inet/ntp.server /etc/inet/ntp.conf
root@sol1# cat /etc/inet/ntp.conf
...
server NTP_server_IP
Restart NTP:
root@sol1# /usr/sbin/svcadm restart ntp
Update the user profiles. Grid user:
root@sol1# cat /export/home/grid/.profile
...
umask 022
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
ORACLE_SID=+ASM1
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:/usr/local/bin:/usr/sbin:/usr/bin:/usr/openwin/bin:/usr/ucb:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH
TEMP=/tmp
TMPDIR=/tmp
export TEMP TMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
if [ -t 0 ]; then
 stty intr ^C
fi
Oracle user:
root@sol1# cat /export/home/oracle/.profile
umask 022
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
ORACLE_SID=D11G1
ORACLE_UNQNAME=D11G
TZ=Asia/Ulaanbaatar
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:/usr/local/bin:/usr/sbin:/usr/bin:/usr/openwin/bin:/usr/ucb:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_UNQNAME ORACLE_SID TZ LD_LIBRARY_PATH PATH
TEMP=/tmp
TMPDIR=/tmp
export TEMP TMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
if [ -t 0 ]; then
 stty intr ^C
fi
Set the system file parameters (changes to /etc/system take effect after a reboot):
root@sol1# cat /etc/system
...
set rlim_fd_max = 65536
set rlim_fd_cur = 65536
Install the following OS packages:
root@sol1# pkg install SUNWarc SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWtoo SUNWi1cs SUNWi15cs SUNWcsl
Using the Solaris Package Updater, install the following packages as well: SUNWxwplr, SUNWxwplt, motif.
Configure the network:
root@sol1# netadm enable -p ncp DefaultFixed
root@sol1# ipadm create-ip net1
root@sol1# ipadm create-ip net2
root@sol1# ipadm create-addr -T static -a local=172.16.250.50/24 net1/addr
root@sol1# ipadm create-addr -T static -a local=10.10.2.21/24 net2/addr
root@sol1# route -p add default 172.16.250.1
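Before continuing, it is worth confirming the addresses and the default route; for example:
root@sol1# ipadm show-addr     # every address should be listed as "ok"
root@sol1# netstat -rn         # the default route should point at 172.16.250.1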
Register the hosts' names and IP addresses on the DNS server. If there is no DNS server, use the /etc/hosts file:
root@sol1# cat /etc/hosts
#
# Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Internet host table
#
::1 sol1 localhost 
127.0.0.1 localhost

# Public IP
172.16.250.50  sol1  sol1.electronics.int
172.16.250.51  sol2  sol2.electronics.int
  
# Vip IP
172.16.250.52  sol1-vip  sol1-vip.electronics.int
172.16.250.53  sol2-vip  sol2-vip.electronics.int

# SCAN IP
# 172.16.250.54 scan-sol  scan-sol.electronics.int 

# Private IP
10.10.2.21   sol1-priv  sol1-priv.electronics.int
10.10.2.22   sol2-priv  sol2-priv.electronics.int
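A quick loop confirms that every cluster name resolves on each node:
# Each name below should resolve to the address registered above.
root@sol1# for h in sol1 sol2 sol1-vip sol2-vip sol1-priv sol2-priv; do getent hosts $h; done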
Apply the same configuration to the second VM, Sol2 (adjusting host-specific values such as the hostname, IP addresses, and ORACLE_SID).

Step 3: Oracle Grid Infrastructure



Run the script that sets up passwordless SSH between the two cluster nodes. First, log in as the grid user.
grid@sol1# cd /software/grid/sshsetup
grid@sol1# ./sshUserSetup.sh -user grid -hosts "sol1 sol2" -noPromptPassphrase -advanced -exverify
grid@sol1# ./sshUserSetup.sh -user oracle -hosts "sol1 sol2" -noPromptPassphrase -advanced -exverify
grid@sol1# ssh-add
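Afterwards, passwordless SSH can be verified for both users; each command should print the remote date without prompting for a password (repeat from sol2 towards sol1):
root@sol1# su - grid -c "ssh sol2 date"
root@sol1# su - oracle -c "ssh sol2 date"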
Check whether the system is ready for the Grid Infrastructure installation:
grid@sol1# ./runcluvfy.sh stage -pre crsinst -n sol1,sol2 -verbose
grid@sol1# ./runcluvfy.sh stage -post hwos -n sol1,sol2 -verbose
To run the ./runInstaller script without any errors, run the following command first:
grid@sol1# export AWT_TOOLKIT=XToolkit
If everything is fine, run ./runInstaller.
grid@sol1# ./runInstaller

Skip Software updates

Install and Configure Oracle Grid Infrastructure for a Cluster

Typical Installation

Set up SSH connectivity and designate which node interfaces are public and private.

In the screen below, the network interfaces we need are net1 and net2.

Oracle Automatic Storage Management

Creating the first ASM disk group, DATA.

Specify the inventory location.

The prerequisite checks run here. Ignore the errors raised at this step: our shared disks are virtual, and the Oracle prerequisite checks only validate iSCSI devices.

Double-check the data and click Install.

Run the following two scripts to finish the installation.



root@sol1:/u01/app/11.2.0/grid# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'sol1'
CRS-2676: Start of 'ora.mdnsd' on 'sol1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'sol1'
CRS-2676: Start of 'ora.gpnpd' on 'sol1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'sol1'
CRS-2672: Attempting to start 'ora.gipcd' on 'sol1'
CRS-2676: Start of 'ora.gipcd' on 'sol1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'sol1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'sol1'
CRS-2672: Attempting to start 'ora.diskmon' on 'sol1'
CRS-2676: Start of 'ora.diskmon' on 'sol1' succeeded
CRS-2676: Start of 'ora.cssd' on 'sol1' succeeded
ASM created and started successfully.
Disk Group DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 054f135ad3644fbabff3f058ef28ff75.
Successful addition of voting disk 834ba35a0f6c4fe0bf04700b9edc2bb2.
Successful addition of voting disk 3a99cb045bd94f5ebf038d4b4584228c.
Successful addition of voting disk 133e3427001c4f0dbfa957587866693a.
Successful addition of voting disk 90b3a7e49b004ff3bf58d1aa6eb91bbb.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name                 Disk group
--  -----    -----------------                ---------                 ----------
 1. ONLINE   054f135ad3644fbabff3f058ef28ff75 (/dev/rdsk/c4t1d0s0)      [DATA]
 2. ONLINE   834ba35a0f6c4fe0bf04700b9edc2bb2 (/dev/rdsk/c4t2d0s0)      [DATA]
 3. ONLINE   3a99cb045bd94f5ebf038d4b4584228c (/dev/rdsk/c4t3d0s0)      [DATA]
 4. ONLINE   133e3427001c4f0dbfa957587866693a (/dev/rdsk/c4t4d0s0)      [DATA]
 5. ONLINE   90b3a7e49b004ff3bf58d1aa6eb91bbb (/dev/rdsk/c4t5d0s0)      [DATA]
Located 5 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'sol1'
CRS-2676: Start of 'ora.asm' on 'sol1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'sol1'
CRS-2676: Start of 'ora.DATA.dg' on 'sol1' succeeded
Configure Oracle Grid Infrastructure for a Cluster … succeeded
root@sol2:/u01/app/11.2.0/grid# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Creating /usr/local/bin directory...
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node sol1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster … succeeded


Verify that the Grid Infrastructure installed completely with the following commands.
grid@sol1:/u01/app/11.2.0/grid/log/sol2$ crsctl check cluster -all
**************************************************************
sol1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
sol2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
grid@sol2:/u01/app/11.2.0/grid/log/sol2$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.LISTENER.lsnr
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.asm
               ONLINE  ONLINE       sol1                     Started
               ONLINE  ONLINE       sol2                     Started
ora.gsd
               OFFLINE OFFLINE      sol1
               OFFLINE OFFLINE      sol2
ora.net1.network
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.ons
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       sol1
ora.cvu
      1        ONLINE  ONLINE       sol1
ora.oc4j
      1        ONLINE  ONLINE       sol1
ora.scan1.vip
      1        ONLINE  ONLINE       sol1
ora.sol1.vip
      1        ONLINE  ONLINE       sol1
ora.sol2.vip
      1        ONLINE  ONLINE       sol2
grid@sol1:/u01/app/11.2.0/grid/log/sol2$ crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       sol2                     Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       sol2
ora.crf
      1        ONLINE  ONLINE       sol2
ora.crsd
      1        ONLINE  ONLINE       sol2
ora.cssd
      1        ONLINE  ONLINE       sol2
ora.cssdmonitor
      1        ONLINE  ONLINE       sol2
ora.ctssd
      1        ONLINE  ONLINE       sol2                     ACTIVE:0
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       sol2
ora.gipcd
      1        ONLINE  ONLINE       sol2
ora.gpnpd
      1        ONLINE  ONLINE       sol2
ora.mdnsd
      1        ONLINE  ONLINE       sol2
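A few further quick checks, run as the grid user, round out the verification (all standard 11.2 tools):
grid@sol1:~$ olsnodes -n                 # cluster nodes with their node numbers
grid@sol1:~$ srvctl status asm           # ASM should be running on both nodes
grid@sol1:~$ crsctl query css votedisk   # lists the five voting disks again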


Step 4: Oracle Database Software Installation



oracle@sol1# export AWT_TOOLKIT=XToolkit
oracle@sol1# ./runInstaller

Skip the e-mail notification.

Skip software updates.

Install database software only.

Oracle Real Application Clusters database installation.

Select Languages.

Select EE.

Specify the Oracle Base and the software location.

Select the OS groups.

The prerequisites are checked automatically. The SCAN warning on the screen below appears because we did not register the SCAN IP on the DNS server.

Run the following script.





Step 5: Oracle Database Creation


Before creating the database, we use ASMCA to create the DATADG disk group, which will be used for the FRA (Fast Recovery Area).
grid@sol1# asmca



Create the Oracle database using dbca.
oracle@sol1# dbca

Oracle Real Application Cluster (RAC) database.

Create a Database.

General Purpose or Transaction Processing.

Admin managed, D11G.

Configure Enterprise Manager.

Enter the passwords.

Select the disk group.


Select the FRA disk group.

Sample Schemas

Enter the parameters.

Configure the control files, datafiles, and redo log files here.

Create Database, Generate Database Creation Scripts.

Double-check the data.

After the database is created successfully, the EM URL is shown on the screen as follows.



Run the following command to check the status of the cluster.
oracle@sol1:/u01/app/11.2.0/grid/bin$ ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.DATADG.dg
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.LISTENER.lsnr
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.asm
               ONLINE  ONLINE       sol1                     Started
               ONLINE  ONLINE       sol2                     Started
ora.gsd
               OFFLINE OFFLINE      sol1
               OFFLINE OFFLINE      sol2
ora.net1.network
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
ora.ons
               ONLINE  ONLINE       sol1
               ONLINE  ONLINE       sol2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       sol2
ora.cvu
      1        ONLINE  ONLINE       sol2
ora.d11g.db
      1        ONLINE  ONLINE       sol1                     Open
      2        ONLINE  ONLINE       sol2                     Open
ora.oc4j
      1        ONLINE  ONLINE       sol2
ora.scan1.vip
      1        ONLINE  ONLINE       sol2
ora.sol1.vip
      1        ONLINE  ONLINE       sol1
ora.sol2.vip
      1        ONLINE  ONLINE       sol2
oracle@sol1:~$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Mon Feb 6 18:24:05 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> select * from v$active_instances;

INST_NUMBER INST_NAME
----------- ------------------------------------------------------------
          1 sol1:D11G1
          2 sol2:D11G2
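The same cluster-wide status is available from the command line via srvctl; the database name below is the D11G created in this walkthrough:
oracle@sol1:~$ srvctl status database -d D11G   # instance status on each node
oracle@sol1:~$ srvctl config database -d D11G   # stored RAC database configuration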

The EM view of our two nodes looks as follows.



I hope this post on installing an Oracle Real Application Clusters database was of at least some help. Good luck! :)
