Tuesday, November 3, 2015

How to reset General "Chassis intrusion Assert" on SuperMicro


1. Boot from Hirens.BootCD.15.2.iso => select the Linux Parted Magic boot option
2. Configure an IP address for remote access

# sudo ifconfig eth0 up
# ifconfig eth0 121.30.143.15 netmask 255.255.255.128 broadcast 121.30.143.127
# route add default gw 121.30.143.1
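
Before moving on, it's worth confirming that the gateway responds (a quick check using the gateway address above):
# ping -c 3 121.30.143.1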

3. SSH to the Parted Magic host; the default password for root is partedmagic
# cd /tmp/
# wget ftp://ftp.supermicro.com/utility/SuperDoctor_II/Linux/Release/SD2_2.111.456-140922.tar.gz
# tar -xvf SD2_2.111.456-140922.tar.gz

root@PartedMagic:/tmp/superdoctor# ./quickinstall
./quickinstall: line 14: /usr/bin/clear: Input/output error
Is there an IPMI BMC card installed in this system? (y/n) y

There are two options that you can choose as the IPMI device drivers.
Linux kernel ver.  | Option 1             | Option 2
-------------------+----------------------+----------------------------------
2.4.20 and earlier | YES (Recommended)    | NO
                   | IPMI 1.5/2.0         |
2.4.21 ~ 2.4.25    | YES, IPMI 1.5/2.0    | YES (Recommended), IPMI 1.5 only
2.4.26 and later   | YES, IPMI 1.5/2.0    | YES (Recommended), IPMI 1.5/2.0 *
2.6.6 and earlier  | No                   | YES (Recommended), IPMI 1.5 only
2.6.7 and later    | No                   | YES (Recommended), IPMI 1.5/2.0 *
-------------------+----------------------+----------------------------------
* For detail, please read the file doc/README-IPMI.htm.

*** Option 1 (smbmc driver) ***
The smbmc driver is developed by SUPERMICRO and is available at
ftp://ftp.supermicro.com/CDR-0010_2.01_for_IPMI_Server_Managment/IPMI_Solution/Linux/GPC_Agent/

*** Option 2 (IPMI drivers in kernel) ***
These drivers are available in the Linux kernel since 2.4.21 and 2.6.0.
Please have them compiled either into kernel or as modules.

Current Linux kernel ver.: 3.5.6-pmagic
Which option do you like to choose? (1 or 2) 2


List of IPMI device drivers compiled as modules for this kernel(3.5.6-pmagic):
/lib/modules/3.5.6-pmagic/kernel/drivers/char/ipmi:
total 113
-rw-r--r-- 1 root root  8956 Oct  8  2012 ipmi_devintf.ko
-rw-r--r-- 1 root root 32280 Oct  8  2012 ipmi_msghandler.ko
-rw-r--r-- 1 root root  9700 Oct  8  2012 ipmi_poweroff.ko
-rw-r--r-- 1 root root 43836 Oct  8  2012 ipmi_si.ko
-rw-r--r-- 1 root root 19272 Oct  8  2012 ipmi_watchdog.ko
248
grep: /etc/rc.local: No such file or directory

To provide the SNMP function of SuperDoctor II, do as below manually.
1. Add the following line into the file /etc/snmp/snmpd.conf.
     pass .1.3.6.1.4.1.10876 /usr/sbin/sd_extension
2. Restart the SNMP agent.


root@PartedMagic:/tmp/superdoctor# ./sdt.x86

*****************************************************************************
 SuperDoctor II - Linux version 2.111(140922)
 Copyright(c) 1993-2014 by Super Micro Computer, Inc. http://supermicro.com/
*****************************************************************************
>> Disable monitoring of non-activated item: Fan2 Fan Speed
>> Disable monitoring of non-activated item: Fan4 Fan Speed
>> Disable monitoring of non-activated item: Fan5 Fan Speed
>> Disable monitoring of non-activated item: Fan6 Fan Speed
>> Disable monitoring of non-activated item: FanA Fan Speed

Monitored Item            High Limit  Low Limit     Status
----------------------------------------------------------------------
Fan1 Fan Speed                              300       3515          
Fan3 Fan Speed                              300       3524          
FanB Fan Speed                              300       3534          
CPU1 Vcore Voltage              1.49       0.53       0.91          
CPU2 Vcore Voltage              1.49       0.53       0.82          
VDIMM AB Voltage                1.65       1.20       1.33          
VDIMM CD Voltage                1.65       1.20       1.34          
VDIMM EFGH Voltage              1.65       1.20       1.34          
CPU1 VSA Voltage                1.33       0.53       0.90          
CPU2 VSA Voltage                1.33       0.53       0.88          
VTT Voltage                     1.34       0.91       1.04          
+5VSB Voltage                   5.50       4.48       4.99          
+1.5V Voltage                   1.65       1.34       1.49          
+5V Voltage                     5.50       4.48       4.99          
+12V Voltage                   13.24      10.80      11.97          
+3.3V Voltage                   3.65       2.93       3.31          
+3.3VSB Voltage                 3.65       2.93       3.26          
VBAT Voltage                    3.31       2.69       3.22          
CPU1 Temperature              95/203                43/109          
CPU2 Temperature              95/203                47/116          
System Temperature            85/185                 32/89          
Peripheral Temperature        85/185                40/104          
PCH Temperature               95/203                45/113          
Chassis Intrusion                                      Bad   Warning!
Power Supply Status                                   Good          
--------------------------------------------- Wed Nov  4 13:10:25 2015

root@PartedMagic:/tmp/superdoctor# ./sdt.x86 -r "Chassis Intrusion"

*****************************************************************************
 SuperDoctor II - Linux version 2.111(140922)
 Copyright(c) 1993-2014 by Super Micro Computer, Inc. http://supermicro.com/
*****************************************************************************
Done.

root@PartedMagic:/tmp/superdoctor# ./sdt.x86

*****************************************************************************
 SuperDoctor II - Linux version 2.111(140922)
 Copyright(c) 1993-2014 by Super Micro Computer, Inc. http://supermicro.com/
*****************************************************************************
Monitored Item            High Limit  Low Limit     Status
----------------------------------------------------------------------
Fan1 Fan Speed                              300       3506          
Fan3 Fan Speed                              300       3524          
FanB Fan Speed                              300       3488          
CPU1 Vcore Voltage              1.49       0.53       0.91          
CPU2 Vcore Voltage              1.49       0.53       0.82          
VDIMM AB Voltage                1.65       1.20       1.33          
VDIMM CD Voltage                1.65       1.20       1.34          
VDIMM EFGH Voltage              1.65       1.20       1.34          
CPU1 VSA Voltage                1.33       0.53       0.88          
CPU2 VSA Voltage                1.33       0.53       0.88          
VTT Voltage                     1.34       0.91       1.04          
+5VSB Voltage                   5.50       4.48       4.99          
+1.5V Voltage                   1.65       1.34       1.49          
+5V Voltage                     5.50       4.48       4.99          
+12V Voltage                   13.24      10.80      11.97          
+3.3V Voltage                   3.65       2.93       3.31          
+3.3VSB Voltage                 3.65       2.93       3.26          
VBAT Voltage                    3.31       2.69       3.22          
CPU1 Temperature              95/203                42/107          
CPU2 Temperature              95/203                47/116          
System Temperature            85/185                 32/89          
Peripheral Temperature        85/185                40/104          
PCH Temperature               95/203                45/113          
Chassis Intrusion                                     Good          
Power Supply Status                                   Good          
--------------------------------------------- Wed Nov  4 13:10:40 2015
root@PartedMagic:/tmp/superdoctor#




Monday, May 4, 2015

Fix error "This computer can't connect to the remote computer."

Problem or Question:

RDP not working
Error: "This computer can't connect to the remote computer. The connection was lost due to a network error. Try connecting again. If the problem continues, contact your network administrator or technical support."

Environment:

RDP

Solution or Answer:

1. Check the Event Log on the target machine for this error:
Event Type: Error
Event Source: TermDD
Event ID: 50
Description: The RDP protocol component "DATA ENCRYPTION" detected an error in the protocol stream and has disconnected the client.

2. Delete the RDP certificate values as follows (a scripted equivalent is sketched after the list)
3. Start Registry Editor. [Start => regedit] or [Start => Run => regedit]
4. Locate and then click the following registry subkey:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\TermService\Parameters
5. Under this registry subkey, delete the following values:

  • Certificate
  • X509 Certificate
  • X509 Certificate ID
6. Quit Registry Editor, and then restart the computer.
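
If you prefer to script the cleanup rather than use the Registry Editor, the same values can be removed from an elevated command prompt; a sketch of the equivalent reg delete commands (export the key first as a backup):

reg delete "HKLM\System\CurrentControlSet\Services\TermService\Parameters" /v "Certificate" /f
reg delete "HKLM\System\CurrentControlSet\Services\TermService\Parameters" /v "X509 Certificate" /f
reg delete "HKLM\System\CurrentControlSet\Services\TermService\Parameters" /v "X509 Certificate ID" /f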

Friday, May 1, 2015

VMware ESXi 5.x Shrink LVM without data loss

On the VMware ESXi host

 # ls -ltr
total 91829760
-rw-r--r--    1 root     root           265 Apr  4 13:04 copy73 (2).vmxf
-rw-------    1 root     root           516 Apr 22 09:07 copy73.vmdk
-rw-------    1 root     root     25769803776 Apr 22 11:37 copy73-flat.vmdk
-rw-r--r--    1 root     root            43 Apr 23 17:42 copy73 (2).vmsd
-rw-------    1 root     root     25769803776 Apr 23 18:15 copy73_2-flat.vmdk
-rw-r--r--    1 root     root        167490 Apr 23 18:25 vmware-1.log
-rw-------    1 root     root           492 Apr 23 18:28 copy73_2.vmdk
-rw-------    1 root     root           496 Apr 23 18:28 copy73 (2)_2.vmdk
-rw-------    1 root     root           496 Apr 23 18:28 copy73 (2)_1.vmdk
-rw-------    1 root     root     21474836480 Apr 23 18:28 copy73 (2)_2-flat.vmdk
-rw-------    1 root     root     21474836480 Apr 23 18:28 copy73 (2)_1-flat.vmdk
-rw-------    1 root     root          8684 Apr 23 18:36 copy73 (2).nvram
-rw-r--r--    1 root     root        134657 Apr 23 18:36 vmware.log
-rwxr-xr-x    1 root     root          3142 May  1 10:27 copy73 (2).vmx

1. Clone the disk as a backup before converting
/vmfs/volumes/4a2ea43a-dcf1b3df-3141-00215e74a364/copy73 (2) # vmkfstools -i copy73_2.vmdk copy73_3.vmdk
Destination disk format: VMFS zeroedthick
Cloning disk 'copy73_2.vmdk'...
Clone: 10% done.

2. Edit the virtual machine and add the disk cloned in step 1


3. Reboot and check the disks
[root@TT4-Web00-S ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/VolGroup00/LogVol00
                       14G  6.5G  6.5G  50% /
/dev/sda1              99M   13M   82M  14% /boot
tmpfs                 3.0G     0  3.0G   0% /dev/shm
/dev/sdb1              11G  9.0G  593M  94% /u02
/dev/sdc1              20G  8.0G   11G  43% /u03
[root@TT4-Web00-S ~]#

4. Boot Troubleshooting => Rescue a CentOS system
Use: CentOS-7-x86_64-DVD-1503-01.iso

5. Check volume group name
bash-3.2# lvm vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2
 
bash-3.2# lvm vgchange -a y 
  2 logical volume(s) in volume group "VolGroup00" now active
 
bash-3.2# ls /dev/VolGroup00/
LogVol00  LogVol01

6. You can display information about the logical volumes using lvm lvs
bash-3.2# lvm lvs
  LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  LogVol00 VolGroup00 -wi-ao 14.12G                                     
  LogVol01 VolGroup00 -wi-ao  9.75G                    

 7. Display Logical Volumes

bash-3.2# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                zG2Jef-Jn3f-gqgj-VsCX-OCBd-jcpM-n0spR8
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                14.12 GB
  Current LE             452
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                ceUxVm-GypP-PNRR-WPFP-sUqv-MhPB-CWRgeu
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                9.75 GB
  Current LE             312
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
 

bash-3.2# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "VolGroup00" using metadata type lvm2
bash-3.2# lvscan
  ACTIVE            '/dev/VolGroup00/LogVol00' [14.12 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [9.75 GB] inherit

bash-3.2# vgdisplay
  --- Volume group ---
  VG Name               VolGroup00
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               23.88 GB
  PE Size               32.00 MB
  Total PE              764
  Alloc PE / Size       764 / 23.88 GB
  Free  PE / Size       0 / 0  
  VG UUID               zeIZpI-rZkO-MKFA-NgAC-OuU5-rt7f-jwRHQB

 
8. Resize the root filesystem and logical volume from 14 GB to 10 GB
[root@livecd ~]# e2fsck -f /dev/VolGroup00/LogVol00
e2fsck 2.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/VolGroup00/LogVol00: 92069/4704768 files (1.3% non-contiguous), 1285660/4702208 blocks

[root@livecd ~]# resize2fs /dev/VolGroup00/LogVol00 10G
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/VolGroup00/LogVol00 to 2097152 (4k) blocks.
The filesystem on /dev/VolGroup00/LogVol00 is now 2097152 blocks long.


[root@livecd ~]# lvreduce -L 10G /dev/VolGroup00/LogVol00
  WARNING: Reducing active logical volume to 10.00 GB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce LogVol00? [y/n]: y
  Reducing logical volume LogVol00 to 10.00 GB
  Logical volume LogVol00 successfully resized

bash-3.2# lvdisplay /dev/VolGroup00/LogVol00
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                zG2Jef-Jn3f-gqgj-VsCX-OCBd-jcpM-n0spR8
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                10 GB
  Current LE             320
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
 
[root@livecd ~]# pvscan
    PV /dev/sda2   VG VolGroup00   lvm2 [23.88 GB / 4.12 GB free]
  Total: 1 [23.88 GB] / in use: 1 [23.88 GB] / in no VG: 0 [0   ]
 
[root@livecd ~]# lvscan
  ACTIVE    '/dev/VolGroup00/LogVol00' [10.00 GB] inherit
  ACTIVE    '/dev/VolGroup00/LogVol01' [9.75 GB] inherit
 






9. Remove and recreate the swap volume
[root@livecd ~]# lvremove /dev/VolGroup00/LogVol01
  Can't remove open logical volume "LogVol01"

[root@livecd ~]# swapoff /dev/VolGroup00/LogVol01

[root@livecd ~]# lvremove /dev/VolGroup00/LogVol01
Do you really want to remove active logical volume LogVol01? [y/n]:y
  Logical volume "LogVol01" successfully removed

 
[root@livecd ~]# lvcreate -L 2G -n LogVol01 VolGroup00 
  Logical volume "LogVol01" created

[root@livecd ~]# mkswap /dev/VolGroup00/LogVol01
Setting up swapspace version 1, size = 2147479 kB

[root@livecd ~]# swapon /dev/VolGroup00/LogVol01

[root@livecd ~]# pvscan
  PV /dev/sda2   VG VolGroup00   lvm2 [23.88GB / 11.88 GB free]
  Total: 1 [23.88 GB] / in use: 1 [23.88 GB] / in no VG: 0 [0   ]

 [root@livecd ~]# pvresize --setphysicalvolumesize 12G /dev/sda2
  /dev/sda2: cannot resize to 383 extents as 384 are allocated.
  0 physical volume(s) resized / 1 physical volume(s) not resized
 
 [root@livecd ~]# pvresize --setphysicalvolumesize 12500M /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
 
  [root@livecd ~]# pvscan
  PV /dev/sda2   VG VolGroup00   lvm2 [12.19 GB / 192.00 MB free]
  Total: 1 [12.19 GB] / in use: 1 [12.19 GB] / in no VG: 0 [0   ]
 
  12500M-192M = 12308M
 
  [root@livecd ~]# pvresize --setphysicalvolumesize 12308M /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

 [root@chuo221 ~]# pvscan
  PV /dev/sda2   VG VolGroup00   lvm2 [12.00 GB / 0    free]
  Total: 1 [12.00 GB] / in use: 1 [12.00 GB] / in no VG: 0 [0   ]
 
 
 
 

10. Shrink the VMware virtual disk

Calculate the desired size in 512-byte sectors:
vmdk_shrunken_sectors = (x * 1024 * 1024 * 1024) / 512, where x is the desired size in gigabytes.

For 12 GB: (12 * 1024 * 1024 * 1024) / 512 = 25165824
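
The same arithmetic can be done in any POSIX shell, for example:

# echo $(( 12 * 1024 * 1024 * 1024 / 512 ))
25165824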

Edit the VMDK descriptor file and change the extent size to the value you calculated.

Locate the line that looks like:
# Extent description
RW 25165824 VMFS "foo-flat.vmdk"


/vmfs/volumes/4a2ea43a-dcf1b3df-3141-00215e74a364/copy73 (2) # vi copy73_3.vmdk

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=c0e13e2a
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 25165824 VMFS "copy73_3-flat.vmdk"

# The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.deletable = "true"
ddb.geometry.cylinders = "3133"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "67a55ef19ceef3110ed453dec0e13e2a"
ddb.uuid = "60 00 C2 9c a1 49 6f 68-bb 51 38 e1 15 63 5c 5d"
ddb.virtualHWVersion = "7"



11. Export to a new VMDK

/vmfs/volumes/4a2ea43a-dcf1b3df-3141-00215e74a364/copy73 (2) # vmkfstools -i copy73_3.vmdk copy73_4.vmdk
Destination disk format: VMFS zeroedthick
Cloning disk 'copy73_3.vmdk'...
Clone: 49% done.

Remove the old disk from the VM and add the new disk.
Remove the VM from the inventory, then re-add it to the inventory.


  http://technogrip.blogspot.com/2012/04/shrinking-linux-centos-lvm-disk-on.html
  http://www.vmwarearena.com/2013/03/shrink-virtual-machine-vmdk.html
 


 12. Fixing LVM I/O Errors

[root@TT4-Web00-S ~]# pvscan
  /dev/sda2: read failed after 0 of 4096 at 25662783488: Input/output error
  /dev/sda2: read failed after 0 of 4096 at 25662865408: Input/output error

  PV /dev/sda2   VG VolGroup00   lvm2 [12.97 GB / 992.00 MB free]
  Total: 1 [12.97 GB] / in use: 1 [12.97 GB] / in no VG: 0 [0   ]
 
 
 
  [root@TT4-Web00-S ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 1958.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        3133    25061400   83  Linux

Command (m for help): d
Partition number (1-4): 2

Command (m for help): p

Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (14-1958, default 14): 14
Last cylinder or +size or +sizeM or +sizeK (14-1958, default 1958): 1958

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1958    15623212+  8e  Linux LVM


Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
reboot ...done

Wednesday, April 15, 2015

Fix [INS-40718] Single Client Access Name (SCAN):clus-scan could not be resolved.

Two host machines with the following settings
  • rac1.dbaora.com
  • rac2.dbaora.com

       public          private          vip
rac1   192.168.0.50    192.168.56.60    192.168.0.70
rac2   192.168.0.51    192.168.56.61    192.168.0.71

with the single client access name (SCAN) addresses

public
rac-scan 192.168.0.20
192.168.0.21
192.168.0.22

My Wi-Fi router hands out IP addresses like 192.168.1.X, so it doesn't interfere with the RAC public, private, and SCAN addresses. It's important to have your internet network on a separate subnet.

So my "/etc/hosts" entries look as follows. Note that the SCAN entries are commented out and will be resolved via dnsmasq instead.

127.0.0.1     localhost.localdomain localhost

#public
192.168.0.50   rac1        rac1.dbaora.com
192.168.0.51   rac2        rac2.dbaora.com

#private
192.168.56.60  rac1-priv   rac1-priv.dbaora.com 
192.168.56.61  rac2-priv   rac2-priv.dbaora.com

#virtual
192.168.0.70   rac1-vip    rac1-vip.dbaora.com
192.168.0.71   rac2-vip    rac2-vip.dbaora.com

#scan
#192.168.0.20   rac-scan    rac-scan.dbaora.com
#192.168.0.21   rac-scan    rac-scan.dbaora.com
#192.168.0.22   rac-scan    rac-scan.dbaora.com
 
Install and configure dnsmasq

1. To install dnsmasq, run the following command as root:
yum install dnsmasq
2. Configure dnsmasq
Create a new file "/etc/racdns" containing the SCAN entries:
 
[root@rac1 ~]# cat /etc/racdns
#scan
192.168.0.20   rac-scan    rac-scan.dbaora.com
192.168.0.21   rac-scan    rac-scan.dbaora.com
192.168.0.22   rac-scan    rac-scan.dbaora.com
 
 
Modify the default dnsmasq configuration file "/etc/dnsmasq.conf" so that the addn-hosts parameter points to the file "/etc/racdns".

[root@rac1 ~]# cat /etc/dnsmasq.conf | grep addn-hosts
addn-hosts=/etc/racdns
 
3. Start dnsmasq
service dnsmasq start
chkconfig dnsmasq on
4. The next step is to fix the file "/etc/resolv.conf".
I'm using a third network card as NAT with DHCP, so each time the network card is restarted or the host is rebooted, the file is overwritten with automatically generated settings. The nameserver points to 192.168.1.1, which is required to resolve internet names but is not enough to resolve the SCAN entries via dnsmasq.
[root@rac1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search dbaora.com
nameserver 192.168.1.1
By default dnsmasq listens on IP address 127.0.0.1, so the following settings are required in "/etc/resolv.conf".
[root@rac1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 127.0.0.1
search dbaora.com
nameserver 192.168.1.1
You must protect the file from being automatically overwritten on host reboot, network card restart, etc.
[root@rac1 ~]# chattr +i /etc/resolv.conf
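You can confirm the immutable attribute is set with lsattr (an 'i' should appear in the attribute flags):
[root@rac1 ~]# lsattr /etc/resolv.conf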
Verification
Run nslookup to verify that everything is working.
rac-scan
[root@rac1 ~]# nslookup rac-scan
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    rac-scan.dbaora.com
Address: 192.168.0.22
Name:    rac-scan.dbaora.com
Address: 192.168.0.20
Name:    rac-scan.dbaora.com
Address: 192.168.0.21

[root@rac1 ~]# nslookup rac-scan
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    rac-scan.dbaora.com
Address: 192.168.0.20
Name:    rac-scan.dbaora.com
Address: 192.168.0.21
Name:    rac-scan.dbaora.com
Address: 192.168.0.22

[root@rac1 ~]# nslookup rac-scan
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    rac-scan.dbaora.com
Address: 192.168.0.21
Name:    rac-scan.dbaora.com
Address: 192.168.0.22
Name:    rac-scan.dbaora.com
Address: 192.168.0.20
rac1, rac2, rac1-priv, rac2-priv, rac1-vip, rac2-vip
[root@rac1 ~]# nslookup rac1
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    rac1.dbaora.com
Address: 192.168.0.50

[root@rac1 ~]# nslookup rac2
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    rac2.dbaora.com
Address: 192.168.0.51

[root@rac1 ~]# nslookup rac1-priv
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    rac1-priv.dbaora.com
Address: 192.168.56.60

[root@rac1 ~]# nslookup rac2-priv
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    rac2-priv.dbaora.com
Address: 192.168.56.61

[root@rac1 ~]# nslookup rac1-vip
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    rac1-vip.dbaora.com
Address: 192.168.0.70

[root@rac1 ~]# nslookup rac2-vip
Server:        127.0.0.1
Address:    127.0.0.1#53

Name:    rac2-vip.dbaora.com
Address: 192.168.0.71
Have fun!

How to install Oracle RAC 12C using Oracle Linux 6.4 on VMware 5.x

Installing a Virtualized Oracle 12cR1 RAC Cluster using Oracle Linux 6.4 Virtual Machines on VMware ESXi 5

Last updated 27-Sep-2013

A new release of Oracle means it’s time for a new walkthrough. In this fourth “RAC on ESX” walkthrough, I’ll go over the process of building an Oracle 12c RAC cluster on VMware ESXi 5 from start to finish.  My goal in this walkthrough is to have you up and running with a virtualized Oracle cluster with minimal hassle. Since this guide is step by step, you don't need to be an expert to follow along, but the more experience you have the better.
The following diagram will give you a conceptual idea of the cluster.

 

As I’ve mentioned in previous walkthroughs, this configuration is meant only for testing, and to give you a way to learn RAC without buying the expensive hardware a traditional RAC cluster entails. If you’re building a production RAC cluster, I suggest you read the Grid Infrastructure and RAC installation guides, and the RAC Administration and Deployment Guide.  The following MOS (My Oracle Support) notes will also provide you with guidance (this requires an Oracle support subscription):
  • RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent) (Doc ID 810394.1)
  • RAC: Frequently Asked Questions (Doc ID 220970.1)

The following is required in order to succeed in following this guide:
Hardware Requirements
  • An operational VMware ESXi 5 server.
  • 80GB of available space in your ESX storage location (30GB of storage for each virtual machine, and 20GB of shared storage).
  • 8GB available RAM for virtual machines (4GB per virtual machine).
Software Requirements
  • Oracle Linux 6.4 (x86-64) installation ISO
  • Oracle Grid Infrastructure 12c Release 1 and Oracle Database 12c Release 1 installation media (the zip files unpacked later in this guide)
Network Requirements
We will need 9 IP addresses for the RAC cluster: 2 public communication IP addresses, 2 virtual IPs (VIPs), 2 Interconnect IP addresses, and 3 SCAN (single client access name) addresses. The public communication IP addresses, SCAN addresses, and VIP addresses need to be on the same segment. The private addresses need to be on their own segment. This is how it looks in my network:

Node Hostname        Public IP       Interconnect IP   VIP
node1.example.com    192.168.2.220   10.0.0.1          192.168.2.222
node2.example.com    192.168.2.221   10.0.0.2          192.168.2.223

Lastly our SCAN addresses will be 192.168.2.117, 192.168.2.118, and 192.168.2.119. The SCAN addresses should be configured in DNS rather than in the hosts file, as 3 round-robin A records. If you don't have DNS configured, or are unable to configure DNS, you can place the SCAN addresses in the hosts file. Placing the SCAN addresses in the host file is against best practices. I gave them a name of clus-scan. Make sure these addresses are resolvable from the nodes.

Your networking environment is probably different from mine. Feel free to configure the IPs to be on whatever network segment you use, just make sure that the Public IPs, SCAN addresses, and VIPs are on the same segment. If you do use different addresses, make sure to use them during the OS install and update /etc/hosts appropriately.
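
Once the nodes are built, a quick way to confirm that the SCAN name resolves to all three addresses is nslookup (using the clus-scan name from this guide):
nslookup clus-scan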

Hypervisor Configuration and Virtual Machine Creation
Each RAC node requires two network connections: one for public communication, and the other for the Interconnect. In order to isolate Interconnect traffic, we will create a virtual switch as shown below. The RAC Interconnect would likely work without this step, but I haven't tested that, and Interconnect traffic is supposed to be isolated on its own VLAN or switch in the real world anyway.

Virtual Switch Creation
Log into the ESXi host using the vSphere client, select the host, and click the "Configuration" tab.

Click "Networking" in the "Hardware" box.


Click "Add Networking..." in the upper right corner of the pane.


Select "Virtual Machine" as the connection type and click the "Next" button.


Make sure "Create a vSphere standard switch" is selected, and click the "Next" button.


I used "RAC Interconnect" as my network label. Feel free to use any label you want to, and click the "Next" button.


Click "Finish" to create the virtual switch.


Virtual Machine Creation

Right click on the ESXi host, and click "New Virtual Machine..." to begin the process.


Select the "Custom" configuration option.


Set "node1.example.com" as the virtual machine name.

 

Select the storage location for the virtual machine files.


Select "Virtual Machine Version: 8."

 

Select "Linux" as the guest operating system, and then select "Oracle Linux 4/5/6 (64-bit)" from the version dropdown.

 

I just left this screen set to the defaults, but you can change them if needed.

 

Set the memory size to 4236MB. You'll notice this is slightly more than the required 4GB. I am doing this because the virtual machine reserves a small amount of memory that isn't visible to the guest operating system. Setting this amount of memory allows the guest to have a full 4GB available. The Cluster Verification Utility memory check will fail otherwise.

 

Configure the networking as shown below. Make sure that this setting is consistent across all RAC nodes.

 

I left the controller setting at the default, but you can change it if you have a specific reason to do so.

 

Select "Create a new virtual disk."

 

Set the disk size to 30GB, and select "Thick Provision Eager Zeroed."

 

I left these settings at their defaults.

 

Click the "Finish" button to create the virtual machine. This may take some time to complete.

 

Next, the second cluster node virtual machine will be created. Repeat the same process you used for node1, but use "node2.example.com" as the name of the virtual machine instead.

Oracle Linux Installation
As listed in the prerequisites section, you'll need the Oracle Linux 6.4 installation ISO to follow this guide. You can mount it in the virtual machine by selecting the virtual machine, then mounting the ISO as shown below. The screenshot below has me mounting the ISO from the datastore, but you can also mount it from a local ISO image (which means the installation will run over the network). On my hypervisor that option is grayed out until I turn on the virtual machine.

Select the newly created virtual machine, and click the play button to start it.



Once the virtual machine is started, you can view its console by right clicking on it and then clicking "Open Console."



From the console of the virtual machine, you can mount the ISO.



From my virtual machine, I clicked "Send Ctrl+Alt+del" in order to restart it and boot from the installation ISO.



The virtual machine will boot from the ISO. Press enter to proceed with the default installation option.



I selected "Skip." Feel free to test the installation media if you want to.



The graphical installation will commence.



Select your desired language.



Select your desired keyboard.



Select "Basic Storage Devices."



You may see a warning pop up, in which case you can click "Yes, discard any data."



Type in "node1.example.com" or a different hostname if you prefer. Click "Configure Network."



Select "System eth0," and then click "Edit."



Configure the network settings for the public interface as required. Make sure that "Connect automatically" is checked. I left IPv6 turned off, which is the default. Click "Apply" when you're finished.



Select "System eth1," and then click "Edit."



Configure the network settings for the private interface as required. Make sure that "Connect automatically" is checked. I left IPv6 turned off, which is the default. Click "Apply" when you're finished, and then click the "Close" button when the "Network Connections" screen pops up.



Select your desired time zone.



Enter a password for the root user.



Select "Use All Space." Click "Write changes to disk" when the warning pops up.



Change the installation type from "Basic Server" to "Minimal."



The dependency check will run and then the installation process will begin.



The installation is finished.



The Oracle Linux installation is now complete on node1. Repeat the process on node2, but make sure to use the correct hostname and IP address. The fully-qualified hostname for node2 is node2.example.com. Use the same root password on both nodes.

Pre-installation Tasks

All steps are run on both nodes as the root user, unless specified otherwise.
Disable SELinux
{
echo \# This file controls the state of SELinux on the system.
echo \# SELINUX= can take one of these three values:
echo \# enforcing - SELinux security policy is enforced.
echo \# permissive - SELinux prints warnings instead of enforcing.
echo \# disabled - No SELinux policy is loaded.
echo SELINUX=disabled
echo \# SELINUXTYPE= can take one of these two values:
echo \# targeted - Targeted processes are protected,
echo \# mls - Multi Level Security protection.
echo SELINUXTYPE=targeted

} > /etc/selinux/config
The node must be rebooted in order for the change to take effect (reboot or shutdown -r now). You can verify the change by running getenforce. The output should be "Disabled."
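If you'd rather change the existing file in place than regenerate it, a one-line sed edit achieves the same result (a sketch, assuming the stock /etc/selinux/config layout):
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config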

Install Required OS Packages

yum install compat-libcap1.x86_64 compat-libstdc++-33.x86_64 gcc.x86_64 gcc-c++.x86_64 glibc-devel.x86_64 ksh.x86_64 libstdc++-devel.x86_64 libaio-devel.x86_64 libXmu.x86_64 libXxf86dga.x86_64 libXxf86misc.x86_64 libdmx.x86_64 make.x86_64 nfs-utils.x86_64 sysstat.x86_64 mlocate.x86_64 compat-libstdc++-33.i686 glibc-devel.i686 libstdc++.i686 libstdc++-devel.i686 libaio-devel.i686 glibc.i686 libgcc.i686 libaio-devel.i686 libXext.i686 libXtst.i686 libX11.i686 libXau.i686 libxcb.i686 libXi.i686 xorg-x11-twm.x86_64 xorg-x11-server-utils.x86_64 xorg-x11-utils.x86_64 xorg-x11-xauth.x86_64 oracleasm-support.x86_64 tigervnc-server.x86_64 xterm.x86_64 ntp.x86_64 nscd.x86_64 openssh-clients.x86_64 unzip.x86_64 smartmontools.x86_64 parted.x86_64 wget.x86_64 bind-utils.x86_64 -y

Check that all of the packages installed:
 rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}(%{ARCH})\n' compat-libcap1.x86_64 compat-libstdc++-33.x86_64 gcc.x86_64 gcc-c++.x86_64 glibc-devel.x86_64 ksh.x86_64 libstdc++-devel.x86_64 libaio-devel.x86_64 libXmu.x86_64 libXxf86dga.x86_64 libXxf86misc.x86_64 libdmx.x86_64 make.x86_64 nfs-utils.x86_64 sysstat.x86_64 mlocate.x86_64 compat-libstdc++-33.i686 glibc-devel.i686 libstdc++.i686 libstdc++-devel.i686 libaio-devel.i686 glibc.i686 libgcc.i686 libaio-devel.i686 libXext.i686 libXtst.i686 libX11.i686 libXau.i686 libxcb.i686 libXi.i686 xorg-x11-twm.x86_64 xorg-x11-server-utils.x86_64 xorg-x11-utils.x86_64 xorg-x11-xauth.x86_64 oracleasm-support.x86_64 tigervnc-server.x86_64 xterm.x86_64 ntp.x86_64 nscd.x86_64 openssh-clients.x86_64 unzip.x86_64 smartmontools.x86_64 parted.x86_64 wget.x86_64 bind-utils.x86_64
The ASMlib tools package (oracleasmlib-2.0.4-1.el6) is not available on the public YUM repository, so we will download it and install it manually.

wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.4-1.el6.x86_64.rpm
rpm -iv oracleasmlib-2.0.4-1.el6.x86_64.rpm

Configure System Services
chkconfig iptables off && /etc/init.d/iptables stop && \
chkconfig ip6tables off && /etc/init.d/ip6tables stop && \
chkconfig nscd on && /etc/init.d/nscd start && \
chkconfig ntpd on
Configure the Network Time Protocol Daemon

{
echo OPTIONS=\"-x -u ntp:ntp -p /var/run/ntpd.pid\"
} > /etc/sysconfig/ntpd

I changed the following lines in /etc/ntp.conf:

server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org

To the following:

server nist1-ny.ustiming.org
server nist.time.nosc.us
server nist1-la.ustiming.org
Feel free to use any NTP servers you want, or leave it at the defaults. I changed the NTP servers in my configuration in order to avoid the "PRVF-5408 : NTP Time Server is common only to the following node" errors that occur when the Cluster Verification Utility runs.
Synchronize time on our nodes using ntpdate (use any NTP server you want):
ntpdate nist1-ny.ustiming.org
Start ntpd:
/etc/init.d/ntpd start
If the Cluster Verification Utility later reports a time offset error such as the following, the node clocks have drifted too far apart:
PRVF-5413 : Node "node1" has a time offset of -2355.8 that is beyond permissible limit of 1000.0 from NTP Time Server ".LOCL."

Log in as root and run the following on both nodes, one at a time.

[root@node2 ~]# service ntpd stop
[root@node2 ~]# ntpdate 10.151.110.28
[root@node2 ~]# service ntpd start
[root@node1 ~]# service ntpd stop
[root@node1 ~]# ntpdate 10.151.110.28
[root@node1 ~]# service ntpd start

Now, cross-check the clocks with the command below.
[root@ ~]# ssh node2 date;ssh node1 date
Configure /etc/hosts

{
echo \#Do not remove the following line, or various programs
echo \# that require network functionality will fail.
echo 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
echo ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
echo
echo \#public
echo 192.168.2.220 node1 node1.example.com
echo 192.168.2.221 node2 node2.example.com
echo
echo \#private
echo 10.0.0.1 node1-priv node1-priv.example.com
echo 10.0.0.2 node2-priv node2-priv.example.com
echo
echo \#vip
echo 192.168.2.222 node1-vip node1-vip.example.com
echo 192.168.2.223 node2-vip node2-vip.example.com
} > /etc/hosts
Configure the Shared Memory Filesystem
Open /etc/fstab and change the following line:
tmpfs /dev/shm tmpfs defaults 0 0
To the following:
tmpfs /dev/shm tmpfs rw,exec,size=4G 0 0
Then remount the file system:
mount -o remount /dev/shm
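You can confirm the new size took effect without rebooting:
df -h /dev/shm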
Set Linux Kernel Parameters

{
echo
echo \# BEGIN ORACLE RAC KERNEL PARAMETERS
echo \# kernel.shmall = 1/2 of physical memory in pages
echo \# See MOS note 301830.1
echo kernel.shmall = 2097152
echo \# kernel.shmmax = 1/2 of physical memory in bytes
echo \# See MOS note 567506.1
echo kernel.shmmax = 2148726784
echo kernel.shmmni = 4096
echo kernel.sem = 250 32000 100 128
echo fs.file-max = 6815744
echo fs.aio-max-nr = 1048576
echo net.ipv4.ip_local_port_range = 9000 65500
echo net.core.rmem_default = 262144
echo net.core.rmem_max = 4194304
echo net.core.wmem_default = 262144
echo net.core.wmem_max = 1048576
echo \# END ORACLE RAC KERNEL PARAMETERS
} >> /etc/sysctl.conf
/sbin/sysctl -p
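A quick spot check that the new values are active:
sysctl kernel.shmall kernel.shmmax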
Create and Configure OS Groups, Users, Directories, and Permissions
groupadd -g 1000 oinstall
groupadd -g 1100 asmadmin
groupadd -g 1200 dba
groupadd -g 1201 oper
groupadd -g 1202 backupdba
groupadd -g 1203 dgdba
groupadd -g 1204 kmdba
groupadd -g 1300 asmdba
groupadd -g 1301 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba,backupdba,dgdba,kmdba oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/12.1.0/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
After the oracle and grid users have been created, set passwords for the accounts with "passwd oracle" and "passwd grid".
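A quick way to verify that the users and group memberships were created as intended:
id grid
id oracle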


Set Shell Limits for the Oracle and Grid Users
{
echo
echo "if [ \$USER = "oracle" ] || [ \$USER = "grid" ]"
echo "then"
echo " if [ \$SHELL = "/bin/ksh" ]"
echo " then"
echo " ulimit -p 16384"
echo " ulimit -n 65536"
echo " else"
echo " ulimit -u 16384 -n 65536"
echo " fi"
echo " umask 022"
echo "fi"
} >> /etc/profile
{
echo
echo "oracle soft nproc 2047"
echo "oracle hard nproc 16384"
echo "oracle soft nofile 1024"
echo "oracle hard nofile 65536"
echo "oracle soft stack 10240"
echo "oracle hard stack 10240"
echo "grid soft nproc 2047"
echo "grid hard nproc 16384"
echo "grid soft nofile 1024"
echo "grid hard nofile 65536"
echo "grid soft stack 10240"
echo "grid hard stack 10240"
} >> /etc/security/limits.conf

{
echo
echo \# Default limit for number of user\'s processes to prevent
echo \# accidental fork bombs.
echo \# See rhbz \#432903 for reasoning.
echo \# Update: this limit has been amended for Oracle RAC
echo \# See MOS note 1487773.1
echo
echo \# \* soft nproc 1024
echo \* - nproc 1024
echo root soft nproc unlimited
} > /etc/security/limits.d/90-nproc.conf
{
echo
echo session required pam_limits.so
} >> /etc/pam.d/login
Configure the Bash Profiles for the Oracle and Grid Users
Run the following on node1:
{
echo
echo "export EDITOR=vi"
echo "export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1"
echo "export ORACLE_BASE=/u01/app/oracle"
echo "export PATH=\$PATH:\$ORACLE_HOME/bin:\$ORACLE_HOME/OPatch"
echo "export ORACLE_SID=ORCL1"
echo "export ORACLE_UNQNAME=ORCL"
echo "export LD_LIBRARY_PATH=\$ORACLE_HOME/lib"
echo
} >> /home/oracle/.bash_profile
{
echo
echo "export EDITOR=vi"
echo "export ORACLE_HOME=/u01/app/12.1.0/grid"
echo "export ORACLE_BASE=/u01/app/grid"
echo "export PATH=\$PATH:\$ORACLE_HOME/bin:\$ORACLE_HOME/OPatch"
echo "export ORACLE_SID=+ASM1"
echo "export LD_LIBRARY_PATH=\$ORACLE_HOME/lib"
echo
} >> /home/grid/.bash_profile
Run the following on node2:
{
echo
echo "export EDITOR=vi"
echo "export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1"
echo "export ORACLE_BASE=/u01/app/oracle"
echo "export PATH=\$PATH:\$ORACLE_HOME/bin:\$ORACLE_HOME/OPatch"
echo "export ORACLE_SID=ORCL2"
echo "export ORACLE_UNQNAME=ORCL"
echo "export LD_LIBRARY_PATH=\$ORACLE_HOME/lib"
echo
} >> /home/oracle/.bash_profile
{
echo
echo "export EDITOR=vi"
echo "export ORACLE_HOME=/u01/app/12.1.0/grid"
echo "export ORACLE_BASE=/u01/app/grid"
echo "export PATH=\$PATH:\$ORACLE_HOME/bin:\$ORACLE_HOME/OPatch"
echo "export ORACLE_SID=+ASM2"
echo "export LD_LIBRARY_PATH=\$ORACLE_HOME/lib"
echo
} >> /home/grid/.bash_profile
Configure SSH Equivalency for the Oracle and Grid Users

We will start with the Oracle user. Start this process as the root user. On each node run the following to generate the RSA keys. Use the default key location, and leave the passphrase blank.

su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
The following commands will append the public keys to the authorized_keys file. Run the following on node1:
touch ~/.ssh/authorized_keys
cd ~/.ssh
Now run the following on node1 to authorize the RSA key on node1:
cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
Next run the following on node1 to authorize the RSA key on node2:
ssh node2 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
Our keys have been appended to the authorized_keys file, and now we must copy this file to the second node. Run the following on node1:
scp authorized_keys node2:/home/oracle/.ssh
Run the following on each node:
chmod 600 /home/oracle/.ssh/authorized_keys
Next we will test equivalency on each node. On node2 you will be asked if you wish to continue connecting. Type yes. You should not see this warning again. Your output should look like the following:
[oracle@node1 ~]$ ssh node1 date
Thu Sep 19 08:54:09 CDT 2013
[oracle@node1 ~]$ ssh node2 date
Thu Sep 19 08:54:10 CDT 2013
Now we will configure equivalency for the grid user. On each node run the following to generate the RSA keys. Use the default key location, and leave the passphrase blank.
su - grid
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
The following commands will append the public keys to the authorized_keys file. Run the following on node1:
touch ~/.ssh/authorized_keys
cd ~/.ssh
Now run the following on node1 to authorize the RSA key on node1:
cat /home/grid/.ssh/id_rsa.pub >> authorized_keys
Next run the following on node1 to authorize the RSA key on node2:
ssh node2 cat /home/grid/.ssh/id_rsa.pub >> authorized_keys
Our keys have been appended to the authorized_keys file, and now we must copy this file to the second node. Run the following on node1:
scp authorized_keys node2:/home/grid/.ssh
Run the following on each node:
chmod 600 /home/grid/.ssh/authorized_keys
Next we will test equivalency on each node. On node2 you will be asked if you wish to continue connecting. Type yes. You should not see this warning again. Your output should look like the following:
[grid@node1 .ssh]$ ssh node1 date
Fri Jan 28 01:40:30 CST 2011
[grid@node1 .ssh]$ ssh node2 date
Fri Jan 28 01:40:32 CST 2011
Installation Media Preparation
I transferred the compressed grid installation media to the grid user's home directory. I transferred the media to the node from a Windows system with WinSCP, which can be downloaded for free from http://winscp.net/eng/download.php. When I used WinSCP, I connected to the node using the grid user; this will assure that the grid user will have ownership permissions on the compressed media. I did the same thing for the database installation media, except I connected with the oracle user.
[grid@node1 ~]$ unzip V38501-01_1of2.zip && unzip V38501-01_2of2.zip
[grid@node1 ~]$ ls -l
total 1906444
drwxr-xr-x 7 grid oinstall 4096 Jun 10 07:15 grid
-rw-r--r--. 1 grid oinstall 1750478910 Jun 26 04:36 V38501-01_1of2.zip
-rw-r--r--. 1 grid oinstall 201673595 Jun 26 03:44 V38501-01_2of2.zip
[oracle@node1 ~]$ unzip V38500-01_1of2.zip && unzip V38500-01_2of2.zip
[oracle@node1 ~]$ ls -l
total 2419504
drwxr-xr-x 7 oracle oinstall 4096 Jun 10 07:14 database
-rw-r--r--. 1 oracle oinstall 1361028723 Jun 26 04:29 V38500-01_1of2.zip
-rw-r--r--. 1 oracle oinstall 1116527103 Jun 26 04:23 V38500-01_2of2.zip
You may need to free up some space by removing the compressed media files after you unzip them. I did this by running the following as root:
rm -rf /home/oracle/V38500-01* /home/grid/V38501-01*

Installing CVUQDISK
CVUQDISK is required by the cluster verification utility. Below are the steps I took to install it. I unzipped the grid installation media to /home/grid. If you placed it elsewhere, you'll need to adjust the commands below.
This is how I copied the package to node2:
su - grid
scp ~/grid/rpm/cvuqdisk-1.0.9-1.rpm node2:/home/grid
Install the package on node1, as root:


CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
cd /home/grid/grid/rpm/
rpm -iv cvuqdisk-1.0.9-1.rpm

Install the package on node2, as root:

cd /home/grid
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
rpm -iv cvuqdisk-1.0.9-1.rpm

Configuring Shared Storage
We will be creating three shared disk files, with their purposes and sizes as follows:

Name Size Disk Group Contents
crs.vmdk 5GB +CRS Grid Infrastructure Management Repository, voting files, and the Oracle Cluster Registry (OCR)
data.vmdk 5GB +DATA Database files
fra.vmdk 10GB +FRA Fast Recovery Area

Creating shared disks on the VMware ESXi host

You'll need to install VCLI in order to configure the shared disks. VCLI can be downloaded for free from https://my.vmware.com/web/vmware/downloads.
Once you have VCLI installed, cd to the following directory:

Run the following commands to create each disk file on the ESX host. You'll need to substitute the IP after -server with the IP of your ESX host, and use the correct path for your data store. I placed my shared disks in a folder called 12cR1RAC. If you want to create your own folder you can do so by using the Datastore Browser. This will take a while.

vmkfstools -d eagerzeroedthick -c 5G /vmfs/volumes/datastore2/rac_datastore/crs.vmdk
vmkfstools -d eagerzeroedthick -c 5G /vmfs/volumes/datastore2/rac_datastore/data.vmdk
vmkfstools -d eagerzeroedthick -c 10G /vmfs/volumes/datastore2/rac_datastore/fra.vmdk
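
When the commands finish, the new disk files should be visible in the datastore directory used above:
ls -l /vmfs/volumes/datastore2/rac_datastore/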

Now that we've created our shared disks, we'll be adding them to our virtual machines.
Shut down the virtual machines by executing shutdown -h now on each virtual machine. Once they are powered off, right click node1 and select "Edit Settings..."


Click the "Add...."

Select "Hard Disk."

Select "Use an existing virtual disk."

Click "Browse..." and select the crs.vmdk file you just created.

Under "Virtual Device Node," select "SCSI (1:0)". This will create a new disk controller. Select "Independent" and "Persistent."


Review "Ready to Complete" and click "Finish." Don't click the "OK" button yet.
 

There are two more drives to add. Repeat the drive-adding process, using the next virtual device node each time (1:0 was used for the first disk; use 1:1 and 1:2 for the next two). Select the new SCSI controller and set its bus sharing to "Physical." Now, go ahead and click "OK" to have the changes take effect.


Repeat this process on node2. Make sure that the drives have matching virtual device node IDs on each RAC node. After adding crs.vmdk, I added data.vmdk and finally fra.vmdk. Do this in the same order on both nodes.
Start the nodes. They should now have the drives attached to them. A listing of the devices should be similar to the following.

[root@node1 ~]# fdisk -l|egrep sd[bcd]
Disk /dev/sdb: 5368 MB, 5368709120 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes



[root@node2 ~]# fdisk -l|egrep sd[bcd]
Disk /dev/sdb: 5368 MB, 5368709120 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
Configure Storage Devices
Next, we'll partition the disks we added to the virtual machines. These disks will be used by ASM. Run the following on node1:
fdisk /dev/sdb
Create a new partition with n, then select primary, partition number 1, and use the defaults for the starting and ending cylinder. Type w to write changes.
[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x61f217af.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-522, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-522, default 522):
Using default value 522
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Do the same thing for /dev/sdc, and /dev/sdd. The partitions are now visible on node1.
[root@node1 ~]# ls -l /dev/sd[bcd]1
brw-rw---- 1 root disk 8, 17 Sep 19 17:21 /dev/sdb1
brw-rw---- 1 root disk 8, 33 Sep 19 17:22 /dev/sdc1
brw-rw---- 1 root disk 8, 49 Sep 19 17:22 /dev/sdd1

On node2, run partprobe. You should now be able to see the new partitions on node2.
[root@node2 ~]# partprobe

Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
[root@node2 ~]# ls -l /dev/sd[bcd]1
brw-rw---- 1 root disk 8, 17 Sep 19 17:29 /dev/sdb1
brw-rw---- 1 root disk 8, 33 Sep 19 17:29 /dev/sdc1
brw-rw---- 1 root disk 8, 49 Sep 19 17:29 /dev/sdd1
Configure ASMlib
This step will be done on both nodes. Run /etc/init.d/oracleasm configure, and use the following parameters:
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Now, we will configure our 3 shared disks to use ASMlib. This needs to be done on node1.
# /etc/init.d/oracleasm createdisk CRS01 /dev/sdb1
Marking disk "CRS01" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk DATA01 /dev/sdc1
Marking disk "DATA01" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk FRA01 /dev/sdd1
Marking disk "FRA01" as an ASM disk: [ OK ]
On node2, run the following so that the disks are available on node2 as well:
/etc/init.d/oracleasm scandisks
Once this is complete, oracleasm listdisks should show the newly created ASMlib disks on both nodes:
[root@node1 rpm]# /etc/init.d/oracleasm listdisks
CRS01
DATA01
FRA01
[root@node2 grid]# /etc/init.d/oracleasm listdisks
CRS01
DATA01
FRA01
Pre-installation Cluster Verification
We should now be ready to install Grid Infrastructure. You can use the Cluster Verification Utility to make sure there are no major underlying problems with the node configuration. I ran the following command as user grid on node1.
~/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose > cluvfy_results
The only check that failed was the membership check for the grid user in the dba group. Since this is intentional, you can ignore this. If you'd like to see what my output looked like, you can download it here: cluvfy_results.txt

Fix error
http://kinhnghiem.luyenthianhvan.org/2015/04/fix-ins-40718-single-client-access-name.html

Installing Oracle Grid Infrastructure
Start a VNC session on node1 as the grid user by typing:
vncserver
Then, enter a password of your choice.
Now, connect to the session with a client such as TightVNC, using the display syntax :1. Once connected, start the installer.
cd grid
./runInstaller
Click the "Add" button and fill in the appropriate node name and node VIP name.
The installation runs. Click "Yes" when the configuration scripts popup appears.

I skipped software updates, but feel free to try this if you want to.

Select "Install and Configure Oracle Grid Infrastructure for a Cluster."

Select "Configure a Standard cluster."

Select "Advanced Installation."

Select your desired product language.

As previously mentioned, I configured SCAN records in DNS, so I'm not going to configure GNS.

Fix [INS-40718] Single Client Access Name (SCAN):clus-scan could not be resolved.


 http://kinhnghiem.luyenthianhvan.org/2015/04/fix-ins-40718-single-client-access-name.html




Click the "Add" button to add the additional node, and then click "Next."

For the eth1 interface, select "Private" from the "Use for" dropdown.

Select "Yes."

Select "Use Standard ASM for storage."

This disk group will be used by the clusterware. Configure it as shown below.

This cluster is just for testing purposes, so I used a single password.

Select "Do not use Intelligent Platform Management Interface (IPMI)."

By default the correct operating system groups should be selected.

By default the correct software locations should be filled in.

By default the correct inventory location should be filled in.

You can fill this in if you want to have the installer execute the root scripts for you.

The prerequisite checks will now run. There shouldn't be any failures or errors.

Review the summary screen to make sure everything is correct, and then click "Install."

This will generally take a while, depending on your hardware. If you opted to automate the root script execution, you'll see a popup requesting additional approval before the installer actually runs the scripts.

If you see the following screen, it means there were no issues with the installation. You're ready to install the database software.

Installing Oracle Database 12c
Run the following as the oracle user to start a vncserver session.
vncserver
Connect to the session as we did before with your VNC client. Because there may now be two VNC sessions running, you may need to connect by typing :2, which connects you to the second VNC session.
From the VNC session as the oracle user, run the installer.
cd database
./runInstaller
Uncheck "I wish to receive security updates via My Oracle Support." Click "Yes" when the popup appears.

Select "Skip software updates."

Select "Install database software only."

Select "Oracle Real Application Clusters database installation."

Make sure both nodes are selected.

Select your desired language.

Select "Enterprise Edition" in order to be able to test the full feature set of Oracle.

By default the correct locations should be filled in.

By default the correct OS groups should be selected.

The prerequisite checks will run. There should be no warnings or errors, and the installer should automatically go to the next screen.

Review the summary screen to make sure everything is correct, and then click "Install."

The installation runs.

Run the root script on each node.

You should now see the following screen. Click "Close."

Creating a RAC Database
We're just about done. Now that the software is installed, let's create a RAC database!
We still have two ASM disks that we need to create disk groups with. We'll be doing this from a vnc session as the grid user. Start the ASMCA by running asmca.
[grid@node1 ~]$ vncserver
Click the "Disk Groups" tab and click "Create."

Configure the "DATA", Externel(None) disk group as shown below, and click "OK" to create the disk group.

Configure the "FRA" disk group as shown below,  Externel(None),  and click "OK" to create the disk group.

The disk groups should be listed as shown below.

Feel free to click the "ASM Instances" tab to verify that ASM is running on both nodes. Click "Exit."

We will now use the Database Configuration Assistant to create a RAC database. From another VNC session being run under the oracle user, run dbca. "Create Database" should be selected.


Select "Advanced Mode."

Select "Admin-Managed" as the configuration type.

Type "ORCL" as the "Global Database Name," which will cause the SID prefix to automatically be filled in.

Add node2 to the "Selected" list.

I left this screen at its defaults.

Since this is for testing, I used the same password for the administrative accounts.

Select the "+DATA" disk group as the common location for all database files. I specified 10,000MB as the size of my FRA, and enabled archiving.

I added the sample schemas, and left the other settings at their defaults.

You can leave this at the defaults. The only thing I changed was enabling Automatic Memory Management.

"Create Database" should be checked by default.

The prerequisite checks will run. There should be no warnings or errors, and the installer should automatically go to the next screen.

Review the summary, then click "Finish" to have DBCA create the database.

The creation process runs.

The following popup should eventually appear, indicating that the database was successfully created. Click "Exit."

You should see the following screen. Click "Close."

If you've made it this far, you've successfully completed the installation and created a functional RAC database!
Post-installation Tasks
Clear Temp Files
The various installers used /tmp for their storage location. To free up some space you can run the following to clean this location out. Always double check what you've typed before pressing enter when using rm -rf. Run the following as root on each node.
 rm -rf /tmp/*
 Edit /etc/oratab
Add a new line to the bottom of /etc/oratab on node1 and node2, so that ORCL1 and ORCL2 are the SID values, respectively. An example for node1 follows.
ORCL1:/u01/app/oracle/product/12.1.0/dbhome_1:N:
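On node2 the corresponding line uses the ORCL2 SID with the same Oracle home:
ORCL2:/u01/app/oracle/product/12.1.0/dbhome_1:N: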
Verification
Verify that CRS is running:
[grid@node1 grid]$ crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Check the status of the SCAN:
[grid@node1 grid]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1
Check the status of the ASM instances:
[grid@node1 grid]$ srvctl status asm
ASM is running on node1,node2
Check the status of the database instances:

[grid@node1 ~]$ srvctl status database -d ORCL
Instance ORCL1 is running on node node1
Instance ORCL2 is running on node node2

Check the node apps:

[grid@node1 grid]$ srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2


Check the SCAN config:
[grid@node1 grid]$ srvctl config scan
SCAN name: clus-scan, Network: 1
Subnet IPv4: 192.168.2.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 192.168.2.117
SCAN name: clus-scan, Network: 1
Subnet IPv4: 192.168.2.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 1 IPv4 VIP: 192.168.2.119
SCAN name: clus-scan, Network: 1
Subnet IPv4: 192.168.2.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 2 IPv4 VIP: 192.168.2.118

Check the database config:
[grid@node1 grid]$ srvctl config database -d ORCL
Database unique name: ORCL
Database name: ORCL
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ORCL/spfileORCL.ora
Password file: +DATA/ORCL/orapworcl
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL
Database instances: ORCL1,ORCL2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed

Verify that you can connect to the database:
[oracle@node1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 24 02:03:35 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL>

Verify the status for our instances:

SQL> select instance_name, status, startup_time from gv$instance;

INSTANCE_NAME    STATUS       STARTUP_T
---------------- ------------ ---------
ORCL1            OPEN         24-SEP-13
ORCL2            OPEN         24-SEP-13
And that's it! If you've made it this far, you've finished the install and verified that RAC is up and running. You'll probably want to study the documentation in more detail at this point to get a better understanding of RAC concepts and administration. I truly hope this article has been of use to you!
Miscellaneous Notes

Oracle RDBMS Pre-Install RPM
You may have noticed that I didn't use the oracle-rdbms-server-12cR1-preinstall.x86_64 RPM. I actually did initially, but it left so many things out that I decided not to bother with it and just configure everything myself. The side benefit of this is that the installation process I documented will more closely align with installations on other distributions, such as RHEL, that do not have the pre-install RPM.

The Disk I/O Scheduler
I didn't need to configure Deadline for I/O scheduling because we are using the UEK kernel, which uses Deadline for I/O scheduling by default.
References
Requirements for Installing Oracle Database 12.1 on RHEL6 or OL6 64-bit (x86-64) (Doc ID 1529864.1)
Oracle® Database Installation Guide 12c Release 1 (12.1) for Linux
Oracle® Grid Infrastructure Installation Guide 12c