
Wednesday, November 7, 2012

Installing the Solaris Cluster Software to Create a Two-Node Cluster with a Highly Available Oracle 11g R2 Database on Solaris 10 SPARC


In this post I show how to install and configure an Active-Passive high availability setup, using Sun Cluster (Oracle Solaris Cluster) together with Oracle Database 11gR2 on Solaris 10 SPARC. The work is done in the following order.
  • Install the operating system
  • Install the Sun Cluster 3.3 software
  • Create the cluster
  • Add a node to the cluster
  • Add a quorum disk
  • Configure the kernel
  • Configure the cluster resources
  • Configure the storage
  • Install the Oracle Database software
  • Create the listener
  • Create the database
  • Configure the cluster for the database

INSTALLING THE OPERATING SYSTEM

This setup uses the 64-bit Solaris 10 SPARC operating system. The hostnames of the two systems are set to node1 and node2.

Hostnames: node1, node2




After the operating system is installed, configure the IP addresses. To do this, create or edit the following four files: /etc/inet/hosts, /etc/defaultrouter, /etc/hostname.adaptername, and /etc/netmasks.
/etc/inet/hosts

node1# cat /etc/inet/hosts
172.16.250.1    node1         loghost

node2# cat /etc/inet/hosts
172.16.250.2    node2         loghost

/etc/hostname.adaptername
node1# cat /etc/hostname.adaptername
node1

node2# cat /etc/hostname.adaptername
node2         

/etc/netmasks
node1# cat /etc/netmasks
…

255.255.255.0
192.168.222.0    255.255.255.0
…

node2# cat /etc/netmasks
…
255.255.255.0
192.168.222.0    255.255.255.0
…

/etc/defaultrouter
node1# cat /etc/defaultrouter
…
172.16.250.1
…

node2# cat /etc/defaultrouter
…
172.16.250.1
…
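
If you want to double-check that the settings are in effect after a reboot, the standard Solaris commands below give a quick sanity check (a sketch; the actual interface name depends on your adaptername):

node1# ifconfig -a        # interface plumbed from /etc/hostname.<adapter>, address resolved via /etc/inet/hosts
node1# netstat -rn        # the default route should match /etc/defaultrouter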

To be able to connect to the newly built systems over TELNET, create a new user.
node1# useradd -d /export/home/admin -m admin
node1# passwd admin

node2# useradd -d /export/home/admin -m admin
node2# passwd admin

INSTALLING ORACLE SOLARIS CLUSTER 3.3

Install the Oracle Solaris Cluster 3.3 (Sun Cluster 3.3) software on both servers.
node1# ./installer

Step 1:

Step 2:

Step 3:

Step 4:

Step 5:

Step 6:

Step 7:

Update Oracle Solaris Cluster 3.3 on both servers with the latest Core Patch so that you do not run into already-fixed bugs.
#smpatch add -i 145333-15
add patch 126106-39
Transition old-style patching.
Patch 126106-39 has been successfully installed.
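
To confirm that the core patch is actually present on each node, you can list the installed patches (shown here with the patch ID passed to smpatch above; adjust to whatever revision you applied):

node1# showrev -p | grep 145333
node2# showrev -p | grep 145333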

Our cluster has two nodes, node1 and node2. The hosts files on the nodes are configured as follows.
node1# cat /etc/inet/hosts
172.16.250.1    node1         loghost
172.16.250.2    node2

node2# cat /etc/inet/hosts
172.16.250.1    node1         
172.16.250.2    node2         loghost

CREATING THE CLUSTER AND ADDING THE FIRST NODE

The cluster is configured with the Sun Cluster scinstall command.
node1# scinstall

Running the command above walks through the following dialog.
*** Main Menu ***
    Please select from one of the following (*) options:
      * 1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Manage a dual-partition upgrade
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit
    Option:  1

  *** New Cluster and Cluster Node Menu ***
    Please select from any one of the following options:
        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu
    Option:  2

  *** Establish Just the First Node of a New Cluster ***
    This option is used to establish a new cluster using this machine as
    the first node in that cluster.
    Before you select this option, the Oracle Solaris Cluster framework
    software must already be installed. Use the Oracle Solaris Cluster
    installation media or the IPS packaging system to install Oracle
    Solaris Cluster software.
    Press Control-d at any time to return to the Main Menu.

    Do you want to continue (yes/no) [yes]?  yes
  >>> Typical or Custom Mode <<<
    This tool supports two modes of operation, Typical mode and Custom.
    For most clusters, you can use Typical mode. However, you might need
    to select the Custom mode option if not all of the Typical defaults
    can be applied to your cluster.
    For more information about the differences between Typical and Custom
    modes, select the Help option from the menu.
    Please select from one of the following options:
        1) Typical
        2) Custom
        ?) Help
        q) Return to the Main Menu

    Option [1]:  1

  >>> Cluster Name <<<
    Each cluster has a name assigned to it. The name can be made up of any
    characters other than whitespace. Each cluster name should be unique
    within the namespace of your enterprise.

    What is the name of the cluster you want to establish?  sccluster

  >>> Check <<<
    This step allows you to run cluster check to verify that certain basic
    hardware and software pre-configuration requirements have been met. If
    cluster check detects potential problems with configuring this machine
    as a cluster node, a report of violated checks is prepared and
    available for display on the screen.

    Do you want to run cluster check (yes/no) [yes]?  no 


   >>> Cluster Nodes <<<
    This Oracle Solaris Cluster release supports a total of up to 16
    nodes.
    Please list the names of the other nodes planned for the initial
    cluster configuration. List one node name per line. When finished,
    type Control-D:

    Node name (Control-D to finish):  node2
    Node name (Control-D to finish): 
    This is the complete list of nodes:
        node1
        node2
    Is it correct (yes/no) [yes]?  yes

  >>> Cluster Transport Adapters and Cables <<<                                                                   
    Transport adapters are the adapters that attach to the private cluster
    interconnect.
    Select the first cluster transport adapter:
        1) net1
        2) net2
        3) Other
    Option:  1
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?  yes
    Searching for any unexpected network traffic on "net1" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.
    Select the second cluster transport adapter:
        1) net2
        2) Other

    Option:  2
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?  yes
    Searching for any unexpected network traffic on "net2" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.
    Plumbing network address 172.16.0.0 on adapter net1 >> NOT DUPLICATE ... done
    Plumbing network address 172.16.0.0 on adapter net2 >> NOT DUPLICATE ... done

  >>> Quorum Configuration <<<
    Every two-node cluster requires at least one quorum device. By
    default, scinstall selects and configures a shared disk quorum device
    for you.
    This screen allows you to disable the automatic selection and
    configuration of a quorum device.
    You have chosen to turn on the global fencing. If your shared storage
    devices do not support SCSI, such as Serial Advanced Technology
    Attachment (SATA) disks, or if your shared disks do not support
    SCSI-2, you must disable this feature.
    If you disable automatic quorum device selection now, or if you intend
    to use a quorum device that is not a shared disk, you must instead use
    clsetup(1M) to manually configure quorum once both nodes have joined
    the cluster for the first time.

    Do you want to disable automatic quorum device selection (yes/no) [no]?  yes  

  >>> Automatic Reboot <<<
    Once scinstall has successfully initialized the Oracle Solaris Cluster
    software for this machine, the machine must be rebooted. After the
    reboot, this machine will be established as the first node in the new
    cluster.

    Do you want scinstall to reboot for you (yes/no) [yes]?  yes

  >>> Confirmation <<<
    Your responses indicate the following options to scinstall:
      scinstall -i \
           -C sccluster \
           -F \
           -T node=node1,node=node2,authtype=sys \
           -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 \
           -A trtype=dlpi,name=net1 -A trtype=dlpi,name=net2 \
           -B type=switch,name=switch1 -B type=switch,name=switch2 \
           -m endpoint=:net1,endpoint=switch1 \
           -m endpoint=:net2,endpoint=switch2
    Are these the options you want to use (yes/no) [yes]?  yes
    Do you want to continue with this configuration step (yes/no) [yes]?  yes
Checking device to use for global devices file system ... done
Initializing cluster name to "sccluster" ... done
Initializing authentication options ... done
Initializing configuration for adapter "net1" ... done
Initializing configuration for adapter "net2" ... done
Initializing configuration for switch "switch1" ... done
Initializing configuration for switch "switch2" ... done
Initializing configuration for cable ... done
Initializing configuration for cable ... done
Initializing private network address options ... done
Setting the node ID for "node1" ... done (id=1)
Checking for global devices global file system ... done
Updating vfstab ... done
Verifying that NTP is configured ... done
Initializing NTP configuration ... done
Updating nsswitch.conf ... done
Adding cluster node entries to /etc/inet/hosts ... done
Configuring IP multipathing groups ...done
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.
Log file - /var/cluster/logs/install/scinstall.log.5988

Rebooting ...

updating /platform/sun4u/boot_archive
Connection closed by foreign host.
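
Once node1 comes back up it should boot as a cluster member. A quick way to verify that (a sketch using the standard cluster status commands) is:

node1# /usr/cluster/bin/clnode status
node1# /usr/cluster/bin/scstat -n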

ADDING THE SECOND NODE

In the previous section we created and configured the cluster with a single node. Now we add the second node from the other server, again using the scinstall command.
node2# scinstall


   *** Main Menu ***
    Please select from one of the following (*) options:
      * 1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Manage a dual-partition upgrade
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node
      * ?) Help with menu options
      * q) Quit

    Option:  1

  *** New Cluster and Cluster Node Menu ***
    Please select from any one of the following options:
        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu

    Option:  3

  *** Add a Node to an Existing Cluster ***
    This option is used to add this machine as a node in an already
    established cluster. If this is a new cluster, there may only be a
    single node which has established itself in the new cluster.
    Before you select this option, the Oracle Solaris Cluster framework
    software must already be installed. Use the Oracle Solaris Cluster
    installation media or the IPS packaging system to install Oracle
    Solaris Cluster software.
    Press Control-d at any time to return to the Main Menu.
    Do you want to continue (yes/no) [yes]?  yes
  >>> Typical or Custom Mode <<<
    This tool supports two modes of operation, Typical mode and Custom.
    For most clusters, you can use Typical mode. However, you might need
    to select the Custom mode option if not all of the Typical defaults
    can be applied to your cluster.
    For more information about the differences between Typical and Custom
    modes, select the Help option from the menu.
    Please select from one of the following options:

        1) Typical
        2) Custom

        ?) Help
        q) Return to the Main Menu

    Option [1]:  1

  >>> Sponsoring Node <<<
    For any machine to join a cluster, it must identify a node in that
    cluster willing to "sponsor" its membership in the cluster. When
    configuring a new cluster, this "sponsor" node is typically the first
    node used to build the new cluster. However, if the cluster is already
    established, the "sponsoring" node can be any node in that cluster.
    Already established clusters can keep a list of hosts which are able
    to configure themselves as new cluster members. This machine should be
    in the join list of any cluster which it tries to join. If the list
    does not include this machine, you may need to add it by using
    claccess(1CL) or other tools.
    And, if the target cluster uses DES to authenticate new machines
    attempting to configure themselves as new cluster members, the
    necessary encryption keys must be configured before any attempt to
    join.

    What is the name of the sponsoring node?  node1

  >>> Cluster Name <<<
    Each cluster has a name assigned to it. When adding a node to the
    cluster, you must identify the name of the cluster you are attempting
    to join. A sanity check is performed to verify that the "sponsoring"
    node is a member of that cluster.
    What is the name of the cluster you want to join?  sccluster
    Attempting to contact "node1" ... done
    Cluster name "sccluster" is correct.
   
Press Enter to continue: 

  >>> Check <<<
    This step allows you to run cluster check to verify that certain basic
    hardware and software pre-configuration requirements have been met. If
    cluster check detects potential problems with configuring this machine
    as a cluster node, a report of violated checks is prepared and
    available for display on the screen.

    Do you want to run cluster check (yes/no) [yes]?  yes

  >>> Autodiscovery of Cluster Transport <<<
    If you are using Ethernet or Infiniband adapters as the cluster
    transport adapters, autodiscovery is the best method for configuring
    the cluster transport.
    Do you want to use autodiscovery (yes/no) [yes]? yes
    Probing .........
    The following connections were discovered:
        node1:net1  switch1  node2:net1
        node1:net2  switch2  node2:net2

    Is it okay to configure these connections (yes/no) [yes]?  yes

  >>> Automatic Reboot <<<
    Once scinstall has successfully initialized the Oracle Solaris Cluster
    software for this machine, the machine must be rebooted. The reboot
    will cause this machine to join the cluster for the first time.
    Do you want scinstall to reboot for you (yes/no) [yes]?  yes
>>> Confirmation <<<
    Your responses indicate the following options to scinstall:
      scinstall -i \
           -C sccluster \
           -N node1 \
           -A trtype=dlpi,name=net1 -A trtype=dlpi,name=net2 \
           -m endpoint=:net1,endpoint=switch1 \
           -m endpoint=:net2,endpoint=switch2
    Are these the options you want to use (yes/no) [yes]?  yes
    Do you want to continue with this configuration step (yes/no) [yes]?  yes
Checking device to use for global devices file system ... done
Adding node " node2 " to the cluster configuration ... done
Adding adapter "net1" to the cluster configuration ... done
Adding adapter "net2" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Copying the config from " node1 " ... done
Copying the postconfig file from " node1 " if it exists ... done
No postconfig file found on " node1 ", continuing
Setting the node ID for " node2 " ... done (id=2)
Verifying the major number for the "did" driver with " node1 " ... done
Checking for global devices global file system ... done
Updating vfstab ... done
Updating nsswitch.conf ... done
Adding cluster node entries to /etc/inet/hosts ... done
Configuring IP multipathing groups ...done
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Oracle Solaris Cluster.
Please do not re-enable network routing.
Updating file ("ntp.conf.cluster") on node node1... done
Updating file ("hosts") on node node1... done
Log file - /var/cluster/logs/install/scinstall.log.5771
Rebooting ...
updating /platform/sun4u/boot_archive
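
After node2 reboots, both nodes should show as online cluster members and the private interconnect paths should be up; for example:

node2# /usr/cluster/bin/clnode status
node2# /usr/cluster/bin/clinterconnect status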


ADDING THE QUORUM DISK

During the cluster configuration we chose to select the quorum disk later. Now that the second node has been added, we add the quorum disk. The following command shows the cluster's disk mappings.
node1# /usr/cluster/bin/scdidadm -L
1        node1:/dev/rdsk/c0t3d0          /dev/did/rdsk/d1
2        node2:/dev/rdsk/c0t3d0          /dev/did/rdsk/d2
3        node1:/dev/rdsk/c1t3d0          /dev/did/rdsk/d3
4        node1:/dev/rdsk/c2t3d0          /dev/did/rdsk/d4
5        node1:/dev/rdsk/c3t3d0          /dev/did/rdsk/d5
6        node2:/dev/rdsk/c2t3d0          /dev/did/rdsk/d4
7        node2:/dev/rdsk/c3t3d0          /dev/did/rdsk/d5

The cluster's quorum disk is added with the scsetup command.
node1# pwd
/
node1# cd /usr/cluster/bin/
node1:/usr/cluster/bin# ./scsetup

  >>> Initial Cluster Setup <<<
    This program has detected that the cluster "installmode" attribute is
    still enabled. As such, certain initial cluster setup steps will be
    performed at this time. This includes adding any necessary quorum
    devices, then resetting both the quorum vote counts and the
    "installmode" property.
    Please do not proceed if any additional nodes have yet to join the
    cluster.
    Is it okay to continue (yes/no) [yes]?  yes
    Do you want to add any quorum devices (yes/no) [yes]?  yes

    Following are supported Quorum Devices types in Oracle Solaris
    Cluster. Please refer to Oracle Solaris Cluster documentation for
    detailed information on these supported quorum device topologies.
    What is the type of device you want to use?
        1) Directly attached shared disk
        2) Network Attached Storage (NAS) from Network Appliance
        3) Quorum Server
        q) Return to the quorum menu

    Option:  1

  >>> Add a SCSI Quorum Disk <<<
    A SCSI quorum device is considered to be any Oracle Solaris Cluster
    supported attached storage which connected to two or more nodes of the
    cluster. Dual-ported SCSI-2 disks may be used as quorum devices in
    two-node clusters. However, clusters with more than two nodes require
    that SCSI-3 PGR disks be used for all disks with more than two
    node-to-disk paths.

    You can use a disk containing user data or one that is a member of a
    device group as a quorum device.

    For more information on supported quorum device topologies, see the
    Oracle Solaris Cluster documentation.

    Is it okay to continue (yes/no) [yes]?  yes
    Which global device do you want to use (d<N>)?  d5
    Is it okay to proceed with the update (yes/no) [yes]?  yes
scconf -a -q globaldev=d5
    Command completed successfully.
Press Enter to continue: 

    Do you want to add another quorum device (yes/no) [yes]?  no

    Once the "installmode" property has been reset, this program will skip
    "Initial Cluster Setup" each time it is run again in the future.
    However, quorum devices can always be added to the cluster using the
    regular menu options. Resetting this property fully activates quorum
    settings and is necessary for the normal and safe operation of the
    cluster.
    Is it okay to reset "installmode" (yes/no) [yes]?  yes
scconf -c -q reset
scconf -a -T node=.
    Cluster initialization is complete.
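
Once initial setup finishes, you can confirm that the quorum disk was added and that the vote counts look sane, for example:

node1# /usr/cluster/bin/clquorum list
node1# /usr/cluster/bin/clquorum status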

CONFIGURING KERNEL AND SYSTEM PARAMETERS

Step 1: On both cluster nodes, create the /oracle file system (dataset) and set its ownership.
#zfs create -o mountpoint=/oracle rpool/oracle
#chown oracle:oinstall /oracle

Step 2: Set the swap size on both cluster nodes to 17 GB.
#su -
#swap -d /dev/zvol/dsk/rpool/swap
#zfs set volsize=17G rpool/swap
#swap -a /dev/zvol/dsk/rpool/swap
#swap -s
#swap -l

Step 3: Add the user groups.
#groupadd -g 1000 oinstall
#groupadd -g 1020 asmadmin
#groupadd -g 1021 asmdba
#groupadd -g 1022 asmoper
#groupadd -g 1031 dba
#groupadd -g 1032 oper

Step 4: Add the two users, oracle and grid.
#useradd -u 1100 -g oinstall -G asmoper,asmadmin,asmdba,dba -d /export/home/grid -m grid
#useradd -u 1101 -g oinstall -G oper,dba,asmdba -d /export/home/oracle -m oracle

#passwd grid
#passwd oracle
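
A quick check that the group memberships came out as intended:

#id -a grid
#id -a oracle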

Step 5: Allow all users to use the su command (change root from a role back to a normal user).
#usermod -K type=normal root

Step 6: Create the directories in which the Oracle binaries (the Oracle files) will be installed.
#mkdir -p /oracle/app/grid
#mkdir -p /oracle/app/oracle
#mkdir -p /oracle/app/grid/product/11.2.0/grid
#mkdir -p /oracle/app/oracle/product/11.2.0/dbhome_1
#mkdir -p /oracle/app/oracle/Middleware
#mkdir -p /oracle/app/oracle/agent_home

Step 7: Set ownership and permissions on the directories you created.
#chown -R oracle:oinstall /oracle
#chown -R grid:oinstall /oracle/app/grid
#chown oracle:oinstall /oracle/app/oracle
#chown grid:oinstall /oracle/app/grid

#chmod -R 775 /oracle
#chmod -R 775 /oracle/app/grid
#chmod -R 775 /oracle/app/oracle

Step 8: Create the resource control projects.
#projadd -U grid -K "project.max-shm-memory=(priv,54GB,deny)" user.grid
#projmod -sK "project.max-sem-nsems=(priv,800,deny)" user.grid
#projmod -sK "project.max-sem-ids=(priv,128,deny)" user.grid
#projmod -sK "project.max-shm-ids=(priv,128,deny)" user.grid

#projadd -U oracle -K "project.max-shm-memory=(priv,54GB,deny)" user.oracle
#projmod -sK "project.max-sem-nsems=(priv,800,deny)" user.oracle
#projmod -sK "project.max-sem-ids=(priv,128,deny)" user.oracle
#projmod -sK "project.max-shm-ids=(priv,128,deny)" user.oracle

#/usr/sbin/projmod -sK "process.max-file-descriptor=(priv,65536,deny)" user.grid
#/usr/sbin/projmod -sK "process.max-file-descriptor=(priv,65536,deny)" user.oracle
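
To verify the resource controls that ended up in /etc/project, you can list the projects and query one of the controls, for example:

#projects -l user.oracle
#prctl -n project.max-shm-memory -i project user.oracle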

Step 9: Restrict the anonymous port ranges.
#/usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
#/usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
#/usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
#/usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500

Step 10: Create a script that sets the port ranges automatically when the system boots.
#cat /etc/init.d/ndd
#!/bin/sh
ndd -set /dev/tcp tcp_smallest_anon_port 9000
ndd -set /dev/tcp tcp_largest_anon_port 65500
ndd -set /dev/udp udp_smallest_anon_port 9000
ndd -set /dev/udp udp_largest_anon_port 65500

Step 11: Add the script to the init (rc) scripts.
#chmod 744 /etc/init.d/ndd
#chown root:sys /etc/init.d/ndd

#ln /etc/init.d/ndd /etc/rc2.d/S70ndd
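
After running the script once by hand (or after the next boot) the values can be read back with ndd:

#/etc/init.d/ndd
#ndd /dev/tcp tcp_smallest_anon_port
#ndd /dev/tcp tcp_largest_anon_port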

Step 12: Set the SSH LoginGraceTime to 0.
#cat /etc/ssh/sshd_config
…
LoginGraceTime 0
…

Step 13: Restart SSH to pick up the change.
#svcadm restart ssh

Step 14: Run the following commands to configure core file handling.
#mkdir /var/cores
#coreadm -g /var/cores/%f.%n.%p.%t.core -e global -e global-setid -e log -d process -d proc-setid

Step 15: Configure the NTP client.
On the cluster nodes:
#cat /etc/inet/ntp.conf.cluster
…
server ntp_server_ip
…

Restart NTP.
#/usr/sbin/svcadm disable ntp
#/usr/sbin/svcadm enable ntp
#/usr/sbin/svcadm refresh ntp
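
You can check that the node is actually synchronizing with the NTP server using:

#ntpq -p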

Step 16: Configure the user profiles.
Admin user:
#cat .profile
…
PATH=$PATH:/usr/cluster/bin
MANPATH=/usr/share/man:/usr/cluster/man
export PATH MANPATH
…

Grid user:
#cat /export/home/grid/.profile
…
umask 022
ORACLE_BASE=/oracle/app/grid
ORACLE_HOME=/oracle/app/grid/product/11.2.0/grid
ORACLE_SID=+ASM1
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:/usr/local/bin:/usr/sbin:/usr/bin:/usr/openwin/bin:/usr/ucb:$ORACLE_HOME/bin
export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH
TEMP=/tmp
TEMPDIR=/tmp
export TEMP TEMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
if [ -t 0 ]; then
         stty intr ^C
fi
…

Oracle user:
#cat /export/home/oracle/.profile
…
umask 022
ORACLE_BASE=/oracle/app/oracle
ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1
ORACLE_SID=scluster
ORACLE_UNQNAME=scluster
ORACLE_HOSTNAME=scluster
TZ=Asia/Ulaanbaatar
AGENT_HOME=/oracle/app/oracle/Middleware/agent/core/12.1.0.1.0
OMS_HOME=/oracle/app/oracle/Middleware/oms
LD_LIBRARY_PATH=$ORACLE_HOME/lib
PATH=$PATH:/usr/local/bin:/usr/sbin:/usr/bin:/usr/openwin/bin:/usr/ucb:$ORACLE_HOME/bin
export ORACLE_UNQNAME ORACLE_HOSTNAME TZ AGENT_HOME OMS_HOME ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH
TEMP=/tmp
TEMPDIR=/tmp
export TEMP TEMPDIR
ulimit -t unlimited
ulimit -f unlimited
ulimit -d unlimited
ulimit -s unlimited
ulimit -v unlimited
if [ -t 0 ]; then
         stty intr ^C
fi
…

Step 17: System parameters
#cat /etc/system
…
set rlim_fd_max=65536
set rlim_fd_cur=65536
…
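
The /etc/system settings take effect only after a reboot; once the node is back up, a new login shell should report the raised file descriptor limit:

$ ulimit -n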

Step 18: Package update
Mount the ISO image and install the packages from it.
#lofiadm -a /export/home/oracle/Desktop/Solaris10/V27764-01.iso /dev/lofi/1

Create a directory to mount onto, then mount the image.
#mkdir /mnt
#mount -F hsfs -o ro /dev/lofi/1 /mnt

Change into the mounted directory and install the packages.
#cd /mnt/Solaris_sparc/Product

#pkgadd -d . SUNWarc
#pkgadd -d . SUNWhea
#pkgadd -d . SUNWlibm
#pkgadd -d . SUNWsprot
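
Verify that the packages are now present on both nodes:

#pkginfo SUNWarc SUNWhea SUNWlibm SUNWsprot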

CONFIGURING THE SHARED STORAGE

Configure the shared storage as follows.
Create the zpool.
#zpool create disk_pool /dev/did/dsk/d1s6

Divide the zpool into datasets with their own mountpoints.
#zfs create -o mountpoint=/database/data disk_pool/data

#zfs create -o mountpoint=/database/fra disk_pool/fra

Set size quotas on the datasets.
#zfs set quota=100gb disk_pool/data

#zfs set quota=100gb disk_pool/fra

Finally, set the permissions on the mountpoints.
#chown -R oracle:oinstall /database
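
To confirm the layout, on the node that currently has disk_pool imported you can run, for example:

#zpool list disk_pool
#zfs list -r disk_pool
#df -h /database/data /database/fra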


CONFIGURING THE CLUSTER RESOURCES

Create the cluster resource group.
#clresourcegroup create scluster-rg

Add the logical hostname to the /etc/inet/hosts file on every server.
#cat /etc/inet/hosts
…
172.16.250.3    scluster-lh
…

Register the cluster resource types.
#clresourcetype register SUNW.HAStoragePlus
#clresourcetype register SUNW.oracle_server
#clresourcetype register SUNW.oracle_listener

Create the logical hostname resource.
#clreslogicalhostname create -h scluster -g scluster-rg scluster-rs

Create the cluster resource for the shared storage.
#clresource create -t SUNW.HAStoragePlus -p zpools=disk_pool -g scluster-rg scluster-hasp-rs

Bring the cluster resource group online and check its status.
#clresourcegroup online -eM scluster-rg
#clresourcegroup status scluster-rg
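
When the group is online, the logical hostname should answer on the network and the pool should be imported on the active node; a quick check (sketch):

#ping scluster-lh
#zpool list
#df -h /database/data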


INSTALLING AND CONFIGURING THE ORACLE DATABASE SOFTWARE

#./runInstaller

 
Step 1: Enter your email address and My Oracle Support password.

Step 2: Skip software updates

Step 3: Install database software only

Step 4: Single instance database installation

Step 5: Select languages

Step 6: Enterprise Edition

Step 7: Oracle base: /oracle/app/oracle. Software location: /oracle/app/oracle/product/11.2.0/dbhome_1

Step 8: Operating System Groups: dba, oper

Step 9: Perform Prerequisite Checks

Step 10: Summary

Step 11: Run Scripts


#/oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
 
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

#/oracle/app/oracle/product/11.2.0/dbhome_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.


Step 12: Finish



CREATING THE ORACLE LISTENER

Create the Oracle listener with the netca command.
#netca

Step 1: Listener configuration

Step 2: Add

Step 3: Listener name: LISTENER

Step 4: TCP

Step 5: Use the standard port number of 1521

Step 6: No

Step 7: Next


#cat listener.ora
LISTENER_scluster =
         (DESCRIPTION_LIST =
                 (DESCRIPTION =
                          (ADDRESS = (PROTOCOL = TCP) (HOST = node1) (PORT = 1521))
                          (ADDRESS = (PROTOCOL = IPC) (KEY = EXTPROC1521))
                 )
         )

#cat sqlnet.ora
NAMES.DIRECTORY_PATH = (TNSNAMES)


CREATING THE ORACLE DATABASE

#dbca

Step 1: Create Database

Step 2: General Purpose

Step 3: Global Database Name: scluster, SID: scluster

Step 4: Uncheck "Configure the Database with Enterprise Manager"

Step 5: Set the passwords

Step 6: File System

Step 7: Use Common Location for All Database Files

Step 8: Specify the FRA (Fast Recovery Area)

Step 9: Uncheck Sample Schemas

Step 10: Enter the sizing values according to your own requirements

Step 11: Configure the datafiles, redo logs, and control files

Step 12: Check Create Database, Save as a Database Template, and Generate Database Creation Scripts

Step 13: OK

Step 14: Database Configuration Assistant

Step 15: Exit



CLUSTER CONFIGURATION FOR THE DATABASE

Add the logical hostname to the TNSNAMES.ORA file.
#cat tnsnames.ora

scluster =
         (DESCRIPTION =
                 (ADDRESS = (PROTOCOL = TCP) (HOST = scluster-lh) (PORT = 1521))
                 (CONNECT_DATA =
                          (SERVER = DEDICATED)
                          (SERVICE_NAME = scluster)
                 )
         )
 

LISTENER_scluster =
         (DESCRIPTION =
                 (ADDRESS = (PROTOCOL = TCP) (HOST = scluster-lh) (PORT = 1521))
         )        
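
To make sure the alias resolves through the logical hostname, a quick connectivity test from either node (assuming the listener is running on the active node) is:

$ tnsping scluster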

Copy the newly created database's files over to node 2.
node2#cd $ORACLE_HOME/dbs
node2#scp -q oracle@node1:`pwd`/spfilescluster.ora .
node2#scp -q oracle@node1:`pwd`/orapwscluster .

node2#cd $ORACLE_HOME/network/admin
node2#scp -q oracle@node1:`pwd`/tnsnames.ora .
node2#scp -q oracle@node1:`pwd`/orapwscluster .
node2#scp -q oracle@node1:`pwd`/spfilescluster.ora .

node2#cd /oracle/app/oracle/diag/rdbms
node2#scp -q -r oracle@node1:/oracle/app/oracle/diag/rdbms/scluster .

node2#cd /oracle/app/oracle
node2#scp -q -r oracle@node1:`pwd`/admin .

node2#cat /var/opt/oracle/oratab
scluster:/oracle/app/oracle/product/11.2.0/dbhome_1:N

Create the database user that will be used for monitoring.
node1#sqlplus '/ as sysdba'
create user scuser identified by mnk88;
alter user scuser default tablespace system quota 1m on system;
grant select on v_$sysstat to scuser;
grant select on v_$archive_dest to scuser;
grant select on v_$database to scuser;
grant create session to scuser;
grant create table to scuser;
grant select on v_$database to scuser;
alter system set local_listener=LISTENER_scluster;
alter system set remote_os_authent=true scope=spfile;
create user ops$scuser identified externally default tablespace system quota 1m on system;
grant connect, resource to ops$scuser;
grant select on v_$sysstat to ops$scuser;
grant select on v_$archive_dest to ops$scuser;
grant create session to ops$scuser;
grant create table to ops$scuser;
quit
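
It is worth checking by hand that the monitoring account can connect the same way the fault monitor will later (the connect string configured below is scuser/mnk88):

node1$ sqlplus scuser/mnk88@scluster
SQL> select open_mode from v$database;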

Reconfigure the database's Enterprise Manager.
#emca -config dbcontrol db -repos recreate

After Enterprise Manager has been configured for the database, listener.ora looks as follows.
#cat listener.ora
LISTENER_scluster =
         (DESCRIPTION_LIST =
                 (DESCRIPTION = 
                          (ADDRESS = (PROTOCOL = TCP) (HOST = node1) (PORT = 1521))
                          (ADDRESS = (PROTOCOL = TCP) (HOST = scluster-lh) (PORT = 1521))
                          (ADDRESS = (PROTOCOL = IPC) (KEY = EXTPROC1521))
                 )
         )

Create the cluster resource for the Oracle listener.
#clresource create -t SUNW.oracle_listener -g scluster-rg \
-p ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1 \
-p LISTENER_NAME=LISTENER \
-p resource_dependencies=scluster-hasp-rs scluster-lsnr-rs

Create the cluster resource for the Oracle server.
#clresource create -t SUNW.oracle_server -g scluster-rg \
-p ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1 \
-p ORACLE_SID=scluster \
-p Alert_log_file=/oracle/app/oracle/diag/rdbms/scluster/scluster/trace/alert_scluster.log \
-p resource_dependencies=scluster-hasp-rs -p connect_string=scuser/mnk88 scluster-svr-rs

Set the user.oracle project as the default project for the Oracle server cluster resource.
#clrs set -p Resource_project_name=user.oracle scluster-svr-rs

Write a script on each server that starts Oracle EM automatically, and create a cluster resource for it.
#cat /usr/local/bin/em_gds.ksh
#!/bin/ksh
/bin/su oracle -c "ORACLE_SID=scluster export ORACLE_SID; \
ORACLE_HOME=/oracle/app/oracle/product/11.2.0/dbhome_1 \
export ORACLE_HOME; PATH=\$PATH:\$ORACLE_HOME/bin: \
export PATH; ORACLE_HOSTNAME=scluster \
export ORACLE_HOSTNAME; emctl $1 dbconsole"

#chmod +x /usr/local/bin/em_gds.ksh

#clresourcetype register SUNW.gds

#clresource create -d -t SUNW.gds -g scluster-rg -p resource_dependencies=scluster-svr-rs \
-p Network_aware=false -p Start_command="/usr/local/bin/em_gds.ksh start" \
-p Stop_command="/usr/local/bin/em_gds.ksh stop" scluster-dbcon-rs

Switch the cluster resource group over to node2.
#clresourcegroup switch -n node2 scluster-rg

Check the status of the cluster resources.
#clresource status
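
With the group now on node2, a quick sanity check that the instance and listener actually started there:

node2# ps -ef | grep ora_pmon
node2# ps -ef | grep tnslsnr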

Reconfigure the database's Enterprise Manager.
#emca -config dbcontrol db -repos recreate

Copy the remaining database configuration from fep1prod to fep2prod.
fep1prod$ cd /oracle/app/oracle/product/11.2.0/dbhome_1/fepprod_fepprod/sysman/config
fep1prod$ scp -q oracle@fep2prod:`pwd`/emkey.ora .

Enable the cluster resource that starts the database automatically, then switch the cluster resource group back.
fep2prod$ clresource enable oracle-dbcon-rs
fep2prod$ clresourcegroup switch -n phys-grass1 oracle-rg


Well, I hope this has been of at least some help to you. Good luck! :)
