Oracle® Database 2 Day + Real Application Clusters Guide
11g Release 2 (11.2)
E17264-13
2 Preparing Your Cluster
About Shared Storage
Note:
If you choose not to use Oracle ASM for storing your Oracle Clusterware files, then both the voting disks and the OCR must reside on a cluster file system that you configure before you install Oracle Clusterware in the Grid home.
Note: if you do not use Oracle ASM to store the OCR and voting disks, they must be placed on a cluster file system that you configure before installing Oracle Clusterware in the Grid home.
About Network Hardware Requirements
Note:
You must use a switch for the interconnect. Oracle recommends that you use a dedicated network switch. Token-rings or crossover cables are not supported for the interconnect.
Loopback devices are not supported.
Note: Oracle requires a switch for the interconnect and recommends a dedicated network switch. Token rings and crossover cables are not supported for the interconnect, and neither are loopback devices.
The host name of each node must conform to the RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed.
The host name must conform to RFC 952: alphanumeric characters are allowed, but underscores are not. A quick test:
[root@rh64 ~]# hostname rh_64
[root@rh64 ~]# hostname
rh_64
[root@rh64 ~]# hostname rh64
[root@rh64 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rh64
GATEWAY=192.168.56.1
[root@rh64 ~]# hostname rh_64
[root@rh64 ~]# su - oracle
[oracle@rh_64 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Sun Apr 19 15:46:47 2015
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected.
SQL> startup
ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00130: invalid listener address '(ADDRESS=(PROTOCOL=TCP)(HOST=rh_64)(PORT=1521))'
SQL> quit
Disconnected
[root@rh64 ~]# hostname rh64
[root@rh64 ~]# hostname
rh64
[root@rh64 ~]# su - oracle
[oracle@rh64 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Sun Apr 19 15:52:24 2015
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 839282688 bytes
Fixed Size 2257880 bytes
Variable Size 507513896 bytes
Database Buffers 327155712 bytes
Redo Buffers 2355200 bytes
Database mounted.
Database opened.
SQL>
About IP Address Requirements
When performing an advanced installation of the Oracle Grid Infrastructure for a cluster software, you can choose to use Grid Naming Service (GNS) and Dynamic Host Configuration Protocol (DHCP) for virtual IPs (VIPs). Grid Naming Service is a new feature in Oracle Database 11g release 2 that uses multicast Domain Name Server (mDNS) to enable the cluster to assign host names and IP addresses dynamically as nodes are added and removed from the cluster, without requiring additional network address configuration in the domain name server (DNS). For more information about GNS, refer to Oracle Grid Infrastructure Installation Guide for your platform.
When installing 11g RAC you can choose GNS plus DHCP instead of registering every address in DNS.
During installation of the Oracle Grid Infrastructure for a cluster, a listener is created for each of the SCAN addresses. Clients that access the Oracle RAC database should use the SCAN or SCAN address, not the VIP name or address. If an application uses a
SCAN to connect to the cluster database, then the network configuration files on the client computer do not have to be modified when nodes are added to or removed from the cluster. The SCAN and its associated IP addresses provide a stable name for clients
to use for connections, independent of the nodes that form the cluster. Clients can connect to the cluster database using the easy connect naming method and the SCAN.
Note: when a SCAN is configured, clients can connect to the database with the easy connect (EZCONNECT) naming method and the SCAN, as shown below.
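For example, with easy connect naming a client needs only a single connect string (a sketch; the SCAN name docrac-scan.example.com, the port, and the service name orcl are placeholder assumptions):
sqlplus system@//docrac-scan.example.com:1521/orcl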
Configuring the Network
If you configured the IP addresses in a DNS server, then, as the root user, change the hosts search order in
/etc/nsswitch.conf on all nodes as shown here:
Old:
hosts: files nis dns
New:
hosts: dns files nis
After modifying the nsswitch.conf file, restart the nscd daemon on each node using the following command:
# /sbin/service nscd restart
After you have completed the installation process, configure clients to use the SCAN to access the cluster. Using the previous example, the clients would use docrac-scan to connect to the cluster.
Note: if you use DNS, be sure to change the search order as shown above. The default order looks up files first and DNS last; if you leave it unchanged, name resolution (and therefore database connections) can take longer, because DNS is only consulted last.
About Performing Platform-Specific Configuration Tasks
You may be required to perform special configuration steps that are specific to the operating system on which you are installing Oracle RAC, or for the components used with your cluster. The following list provides examples of operating system-specific installation tasks:
Configure the use of Huge Pages on SUSE Linux, Red Hat Enterprise Linux, or Oracle Linux.
Set shell limits for the oracle user on Red Hat Linux or Oracle Linux systems to increase the number of files and processes available to Oracle Clusterware and Oracle RAC.
Create X library symbolic links on HP-UX.
Configure network tuning parameters on AIX Based Systems.
Note: when installing RAC there are additional platform-specific parameters to configure (a Linux sketch follows this list):
Linux: configure HugePages and shell limits for the oracle user
HP-UX: create X library symbolic links
AIX: tune additional network parameters
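To illustrate the Linux items, here is a minimal sketch. The HugePages count is a placeholder assumption (size vm.nr_hugepages to your SGA; with 2 MB pages, 600 pages covers roughly a 1.2 GB SGA), and the shell limit values are the ones commonly documented for the oracle user; confirm both in the installation guide for your release:
# grep Hugepagesize /proc/meminfo
Hugepagesize:     2048 kB
# echo "vm.nr_hugepages = 600" >> /etc/sysctl.conf
# sysctl -p
Then raise the oracle user's limits in /etc/security/limits.conf:
oracle  soft  nproc   2047
oracle  hard  nproc   16384
oracle  soft  nofile  1024
oracle  hard  nofile  65536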
Configuring Shared Storage
Each node in a cluster requires external shared disks for storing the Oracle Clusterware (Oracle Cluster Registry and voting disk) files, and Oracle Database files. The supported types of shared storage depend upon the platform you are using, for example:
Oracle Automatic Storage Management (strongly recommended)
A supported cluster file system, such as OCFS2 for Linux, OCFS for Microsoft Windows, or General Parallel File System (GPFS) on IBM platforms
Network file system (NFS), which is not supported on Linux on POWER or IBM zSeries Based Linux
(Upgrades only) Shared disk partitions consisting of block devices or raw devices. Block devices are disk partitions that are not mounted using the Linux file system. Oracle Clusterware and Oracle RAC write to these partitions directly.
Note: block devices and raw devices are supported for upgrades only.
Note:
You cannot use OUI to install Oracle Clusterware files on block or raw devices. You cannot put Oracle Clusterware binaries and files on Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
Configuring Files on an NAS Device for Use with Oracle ASM
To use an NFS file system, it must be on a certified NAS device. If you have a certified network attached storage (NAS) device, then you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Oracle ASM disk group.
To ensure high availability of Oracle Clusterware files on Oracle ASM, you must have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity
to ensure that there is sufficient space to create Oracle Clusterware files.
Use the following guidelines when identifying appropriate disk devices:
All of the devices in an Oracle Automatic Storage Management disk group should be the same size and have the same performance characteristics.
A disk group should not contain multiple partitions on a single physical disk device.
Using logical volumes as a device in an Oracle Automatic Storage Management disk group is not supported with Oracle RAC.
The user account with which you perform the installation (oracle) must have write permissions to create the files in the path that you specify.
All devices in an Oracle ASM disk group must be the same size and have the same performance characteristics.
A disk group must not contain multiple partitions from a single physical disk.
Logical volumes cannot be used as devices in an Oracle ASM disk group with Oracle RAC.
The oracle user must have write permission on the installation path.
To configure NAS device files for creating disk groups:
Add the disks to the NAS device and configure access to them. Make sure each cluster node has been granted access to all the disks that are used by Oracle Grid Infrastructure for a cluster software and Oracle Database software.
Refer to your NAS device documentation for more information about completing this step.
On a cluster node, log in as the root user (or use sudo for the following steps).
Configure access to the disks on the NAS devices. The process for completing this step can vary depending on the type of disks and the type of NAS service.
One example of the configuration process is shown here. The first step is to create a mount point directory on the local system:
# mkdir -p /mnt/oracleasm
To ensure that the NFS file system is mounted when the system restarts, add an entry for the file system in the mount file /etc/fstab.
For more information about editing the mount file for the operating system, refer to the Linux man pages. For more information about recommended mount options, refer to Oracle Grid Infrastructure Installation Guide for your platform.
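For example, an /etc/fstab entry on Linux might look similar to the following (a sketch only: host and pathname are the placeholders from the mount step below, and the mount options shown are the ones commonly documented for Oracle files over NFS; take the authoritative list from the installation guide for your platform and NFS version):
host:/pathname  /mnt/oracleasm  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600  0  0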
Enter a command similar to the following to mount the NFS file system on the local system, where host is the host name or IP address of the file server, and pathname is the location of the storage within NFS (for example, /public):
# mount <host>:<pathname> /mnt/oracleasm
Choose a name for the disk group to create, for example, nfsdg.
Create a directory for the files on the NFS file system, using the disk group name as the directory name, for example:
# mkdir /mnt/oracleasm/nfsdg
Use commands similar to the following to create the required number of zero-padded files in this directory:
# dd if=/dev/zero of=/mnt/oracleasm/nfsdg/disk1 bs=1024k count=1000
This example creates a 1 GB file named disk1 on the NFS file system. You must create one, two, or three files respectively to create an external, normal, or high redundancy disk group.
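To build a normal redundancy disk group, for instance, you could create three such files with a short loop (file names and sizes are illustrative):
# for i in 1 2 3; do dd if=/dev/zero of=/mnt/oracleasm/nfsdg/disk$i bs=1024k count=1000; done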
Enter the following commands to change the owner, group, and permissions on the directory and files that you created:
# chown -R oracle:dba /mnt/oracleasm
# chmod -R 660 /mnt/oracleasm
When installing Oracle RAC, if you choose to create an Oracle ASM disk group, then you must change the disk discovery path to specify a regular expression that matches the file names you created, for example, /mnt/oracleasm/nfsdg/*.
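The discovery path can also be changed after installation from the Oracle ASM instance by setting the ASM_DISKSTRING initialization parameter, for example (assuming the NFS path used above):
SQL> ALTER SYSTEM SET ASM_DISKSTRING = '/mnt/oracleasm/nfsdg/*';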
Using ASMLib to Mark the Shared Disks as Candidate Disks
Another option for configuring shared disks is to use the ASMLib utility. If you configure a shared disk to be mounted automatically when the server restarts, then, unless you have configured special files for device persistence, a disk that appeared as /dev/sdg
before the system shutdown can appear as /dev/sdh after the system is restarted.
If you use ASMLib to configure the shared disks, then when you restart the node:
The disk device names do not change
The ownership and group membership for these disk devices remains the same
You can copy the disk configuration implemented by Oracle ASM to other nodes in the cluster by running a simple command
Note:
If you followed the instructions in the section "Configuring Files on an NAS Device for Use with Oracle ASM" to configure your shared storage, then you do not have to perform the tasks in this section.
The following sections describe how to install and configure ASMLib, and how to use ASMLib to configure your shared disk devices:
Installing ASMLib
Configuring ASMLib
Using ASMLib to Create Oracle ASM Disks
Installing ASMLib
The ASMLib software is available from the Oracle Technology Network. Select the link for your platform on the ASMLib download page at:
http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html
You should see four to six packages for your Linux platform. The oracleasmlib package provides the actual Oracle ASM library. The oracleasm-support package provides the utilities used to get the Oracle ASM driver up and running. Both of these packages must
be installed.
The remaining packages provide the kernel driver for the Oracle ASM library. Each package provides the driver for a different kernel. You must install the appropriate package for the kernel you run. Use the uname -r command to determine the version of the kernel
on your server. The oracleasm kernel driver package has that version string in its name. For example, if you run Red Hat Enterprise Linux 4 AS, and the kernel you are using is the 2.6.9-55.0.12.ELsmp kernel, then you would choose the oracleasm-2.6.9-55.0.12.ELsmp-2.0.3-1.x86_64.rpm
package.
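For example, to confirm the kernel version before choosing the driver package (the output shown matches the kernel named above):
# uname -r
2.6.9-55.0.12.ELsmp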
Note:
The Oracle ASMLib kernel driver (oracleasm) is included with Oracle Linux 5. No driver package needs to be installed when using this kernel. The oracleasm-support and oracleasmlib packages still need to be installed.
To install the ASMLib software packages:
Download the ASMLib packages to each node in your cluster.
Change to the directory where the package files were downloaded.
As the root user, use the rpm command to install the packages. For example:
# rpm -Uvh oracleasm-support-2.1.3-1.el4.x86_64.rpm
# rpm -Uvh oracleasmlib-2.0.4-1.el4.x86_64.rpm
# rpm -Uvh oracleasm-2.6.9-55.0.12.ELsmp-2.0.3-1.x86_64.rpm
After you have completed these commands, ASMLib is installed on the system.
Repeat steps 2 and 3 on each node in your cluster.
See Also:
"Using ASMLib to Mark the Shared Disks as Candidate Disks"
Configuring ASMLib
Now that the ASMLib software is installed, a few steps have to be taken by the system administrator to make the Oracle ASM driver available. The Oracle ASM driver must be loaded, and the driver file system must be mounted. This is taken care of by the initialization
script, /usr/sbin/oracleasm.
To configure the ASMLib software after installation:
As the root user, run the following command:
# /usr/sbin/oracleasm configure
The script prompts you for the default user and group to own the Oracle ASM driver access point. Specify the Oracle Database software owner (oracle) and the OSDBA group (dba).
The script also prompts you to specify whether you want to start the ASMLib driver when the node is started and whether you want to scan for the presence of any Oracle Automatic Storage Management disks when the node is started. Answer yes for both of these questions. A sample configuration session appears after these steps.
Repeat step 1 on each node in your cluster.
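For reference, a session on one node typically looks similar to the following; the exact prompt wording varies between ASMLib versions, and the answers shown follow the recommendations above:
# /usr/sbin/oracleasm configure
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done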
See Also:
"Using ASMLib to Mark the Shared Disks as Candidate Disks"
Using ASMLib to Create Oracle ASM Disks
Every disk that is used in an Oracle ASM disk group must be accessible on each node. After you make the physical disk available to each node, you can then mark the disk device as an Oracle ASM disk. The /usr/sbin/oracleasm script is used for this task.
If the target disk device supports partitioning, for example, raw devices, then you must first create a single partition that encompasses the entire disk. If the target disk device does not support partitioning, then you do not have to create a partition on
the disk.
To create Oracle ASM disks using ASMLib:
As the root user, use oracleasm to create Oracle ASM disks using the following syntax:
# /usr/sbin/oracleasm createdisk disk_name device_partition_name
In this command, disk_name is the name you choose for the Oracle ASM disk. The name you choose must contain only ASCII capital letters, numbers, or underscores, and the disk name must start with a letter, for example, DISK1 or VOL1, or RAC_FILE1. The name of
the disk partition to mark as an Oracle ASM disk is the device_partition_name. For example:
# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
If you must unmark a disk that was used in a createdisk command, then you can use the following syntax:
# /usr/sbin/oracleasm deletedisk disk_name
Repeat step 1 for each disk that is used by Oracle ASM.
After you have created all the Oracle ASM disks for your cluster, use the listdisks command to verify their availability:
# /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
On all the other nodes in the cluster, use the scandisks command to view the newly created Oracle ASM disks. You do not have to create the Oracle ASM disks on each node, only on one node in the cluster.
# /usr/sbin/oracleasm scandisks
Scanning system for ASM disks [ OK ]
After scanning for Oracle ASM disks, display the available Oracle ASM disks on each node to verify their availability:
# /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
Note:
At this point, you should restart each node on which you are installing the Oracle Grid Infrastructure for a cluster software. After the node has restarted, view the configured shared storage on each node. This helps to ensure that the system configuration
is complete and persists across node shutdowns.
See Also:
"Using ASMLib to Mark the Shared Disks as Candidate Disks"
Configuring Disk Device Persistence
By default, the Linux 2.6 kernel device file naming scheme udev dynamically creates device file names when the server is started, and assigns ownership of them to root. If udev applies default settings, then it changes device file names and owners for voting
disks or Oracle Cluster Registry partitions, corrupting them when the server is restarted. For example, a voting disk on a device named /dev/sdd owned by the user grid may be on a device named /dev/sdf owned by root after restarting the server.
If you use ASMLib, then you do not have to ensure permissions and device path persistency in udev. If you do not use ASMLib, then you must create a custom rules file for the shared disks mounted on each node. When udev is started, it sequentially carries out
rules (configuration directives) defined in rules files. These files are in the path /etc/udev/rules.d/. Rules files are read in lexical order. For example, rules in the file 10-wacom.rules are parsed and carried out before rules in the rules file 90-ib.rules.
Where rules files describe the same devices, on Asianux, Red Hat, and Oracle Linux, the last file read is the one that is applied. On SUSE 2.6 kernels, the first file read is the one that is applied.
To configure a rules file for disk devices, see the chapter on configuring storage in Oracle Grid Infrastructure Installation Guide for your platform.
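As an illustration only, a rules file for one shared disk partition might look similar to the following hypothetical /etc/udev/rules.d/99-oracle-asmdevices.rules. The SCSI identifier is a placeholder, the grid owner and asmadmin group are assumptions, and both the matching keys and the scsi_id options differ between udev releases, so take the exact syntax from the installation guide:
KERNEL=="sd?1", PROGRAM=="/sbin/scsi_id -g -u /dev/%k", RESULT=="360a98000placeholder", OWNER="grid", GROUP="asmadmin", MODE="0660"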
See Also:
"Configuring Shared Storage"
Oracle Grid Infrastructure Installation Guide for your platform