HP-UX Tips and Tricks

Published June 2016

Boot Sequence: PA-RISC

1] Power on.
2] PDC (Processor Dependent Code) activates and checks the CPU, memory, and the other peripherals connected to it.
3] Checks whether the AUTOBOOT flag is on. If yes, it tries to locate the primary boot path.
4] Loads the Initial System Loader (ISL) from the primary boot path.
5] ISL loads the secondary loader, hpux.
6] hpux loads the kernel /stand/vmunix.
7] The kernel starts the first process, swapper, and then init.
8] The shell script /sbin/pre_init_rc executes.
9] init reads /etc/inittab.
10] /etc/inittab brings the system to the default run level, set by the "initdefault" entry.
11] init executes /sbin/bcheckrc, which does the following:
    1) activates LVM (if applicable) -- it checks /sbin/lvmrc and /etc/lvmrc and activates the volume groups if AUTO_VG_ACTIVATE=1 is set;
    2) runs eisa_config in automatic mode (if applicable);
    3) checks the file systems before mounting (the file-system-specific check scripts reside in /sbin/fs/<fstype>/bcheckrc);
    4) does anything else that must be done before mounting any file systems.
12] init spawns getty processes, e.g.
    cons:123456:respawn:/usr/sbin/getty console console
    and presents the login: prompt.
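The /etc/inittab entries referenced in steps 9-12 follow the standard id:runlevels:action:process format. A minimal illustrative fragment (the field values here are examples for orientation, not copied from a real install):

```text
# default run level is 3
init:3:initdefault:
# run bcheckrc before any file systems are mounted
brc1::bootwait:/sbin/bcheckrc </dev/console >/dev/console 2>&1
# respawn a getty on the console to give the login: prompt
cons:123456:respawn:/usr/sbin/getty console console
```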

Resetting the GSP from the command line:
# stty +resetGSP </dev/GSPdiag1

Finding the boot path from the command line:
# echo "boot_string/S" | adb /stand/vmunix /dev/mem

How to extend a file system in HP-UX (assuming the final size will be 1.5 GB):
# lvextend -L 1500 /dev/vgxx/lvolx
# fsadm -F vxfs -b 1500m /xxx        (xxx = mount point; requires OnlineJFS)
or, if you do not have OnlineJFS:
# umount /xxx
# extendfs -F vxfs /dev/vgxx/rlvolx

How to create a patch depot:
# swcopy -s /soft/patch/PHSS_35546.depot PHSS35546 @ /soft/patch/depot

To display the mode parameters of a SCSI device:
# /usr/sbin/scsictl -a -m ir=1 -m ir /dev/rdsk/c1t15d0

How to create a file system:
First check the available disks and the group-file minor numbers already in use:
# ll /dev/*/group
crw-rw-r-- 1 root sys 64 0x010000 Sep 16 2006 /dev/vg-ignite/group
crw-r----- 1 root sys 64 0x000000 Sep 16 2006 /dev/vg00/group
Note down the next available minor number. In this example we can use 0x020000 (0x for hex, then 02 for the VG, and the trailing 0000 is reserved for logical volumes).
# mkdir /dev/vg-test
# mknod /dev/vg-test/group c 64 0x020000
# vgcreate /dev/vg-test /dev/dsk/c1t15d0     (assuming this is the disk on which you want to create the VG)
# lvcreate -L 20480 -n lv-test /dev/vg-test  (assuming you want a 20 GB LV)
# newfs -F vxfs -o largefiles /dev/vg-test/rlv-test
# mount /dev/vg-test/lv-test /test

To check the contents of an Itanium-based make_tape_recovery tape (you can use -xvf to restore any file too):
# mt -t /dev/rmt/0mn rew
# mt -t /dev/rmt/0mn fsf 22
# tar -tvf /dev/rmt/0mn

To find the list of files in a bundle, first find the bundle name from the depot:
# swlist -l bundle -s /patch/11.00/depot
then:
# swlist -l file -l bundle -s /patch/11.00/depot PB_11_00_march_2003

How to make a software package:
First swcopy the patches to a depot. Assuming the depot is /patch/11.00/depot, create the bundle first:
# make_bundles -B i -n "PB_11_00_march_2003" -t "Patch Database March 2003" \
  -o /patch/11.00/depot/PB_March_2003_11.00.psf -r 1.0 /patch/11.00/depot
then run:
# swpackage -s /patch/11.00/PB_March-2003_11.00.psf -x layout_version=1.0 -x reinstall_files=true \
  -d /patch/11.00/depot

To unregister a CD-ROM depot mounted at /mnt/cd:
# swreg -l depot -u /mnt/cd
To register the same depot (mounted at /mnt/cd on the local host) so that it is available on the network:
# swreg -l depot /mnt/cd
The following example enables direct access from other systems to the HWEnable11i depot on the Support Plus CD, assuming the CD is mounted at /cdrom:
# swreg -l depot /cdrom/HWEnable11i

SCSI tuning in HP-UX:
# scsictl -a /dev/rdsk/c4t6d0                    (see the parameters)
# scsictl -m queue_depth=32 /dev/rdsk/c4t6d0     (change the queue depth from the default 8 to 32)
# scsictl -c get_lun_parms /dev/rdsk/cxtxdx      (same output as scsictl -a)
To set immediate report on and display all mode parameters for a SCSI device:
# scsictl -a -m ir=1 -m ir /dev/rdsk/c4t6d0
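The group-file minor-number convention used in the file-system creation steps above (0x, two hex digits for the VG, four zeros reserved for logical volumes) can be generated with plain shell arithmetic. A small sketch; the VG index 2 and the vg-test name are just the example values from above:

```shell
# Build the minor number for the next volume group's group file.
vg_index=2                                  # next free VG number, from ll /dev/*/group
minor=$(printf '0x%02x0000' "$vg_index")
echo "$minor"                               # prints 0x020000
# On the real system (as root) you would then run:
# mknod /dev/vg-test/group c 64 $minor
```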

To see the scsi_max_qdepth setting:
# kctune scsi_max_qdepth
To change it permanently from the default 8 to 32 (type y when asked for confirmation):
# kctune scsi_max_qdepth=32
How to determine the SCSI queue depth for a device:
# scsictl -m queue_depth /dev/rdsk/cxtxdx
How to change the SCSI queue depth for a device:
# scsictl -m queue_depth=X /dev/rdsk/cxtxdx

How to mirror vg00 using LVM on HP-UX 11.23 (Itanium)
HP Document ID: KBRC00014526
NOTE: There are differences in procedure between 11.22 and 11.23. Refer to KBRC00011156 for B.11.22.

1. From HP-UX, use vgdisplay to identify the disk that is in vg00, and ioscan to find the spare disk.
# vgdisplay -v          --> vg00 is on /dev/dsk/c2t1d0s2 in this example
# ioscan -efunC disk    --> let's assume c3t2d0 for this example

2. Create the system, OS, and service partitions.
# vi /tmp/partitionfile
3
EFI 500MB
HPUX 100%
HPSP 400MB
# idisk -wf /tmp/partitionfile /dev/rdsk/c3t2d0
idisk version: 1.31
********************** WARNING ***********************
If you continue you may destroy all data on this disk.
Do you wish to continue(yes/no)? yes    <-- answer "yes", not "y"

3. Create the device files needed for the new partitions.
# insf -eC disk

4. Verify the partition table.
# idisk /dev/rdsk/c3t2d0

5. Verify that the device files were created properly.
# ioscan -efnC disk     --> c3t2d0 is at 0/1/1/1.2.0

6. Populate the /efi/hpux/ directory in the new EFI system partition.
# mkboot -e -l /dev/rdsk/c3t2d0

7. Change the auto file on the mirror to boot without quorum. NOTE: using "s1".
# echo "boot vmunix -lq" > /tmp/AUTO.lq
# efi_cp -d /dev/rdsk/c3t2d0s1 /tmp/AUTO.lq /EFI/HPUX/AUTO
NOTE: We assume that if we boot from the primary, the mirror is fully functional and we therefore do not need to override quorum. Your site might require that both disks override quorum.

9. Verify the contents of the auto file on the primary and the mirror.
NOTE: using "s1".
# efi_cp -d /dev/rdsk/c2t1d0s1 -u /EFI/HPUX/AUTO /tmp/AUTO.pri
# efi_cp -d /dev/rdsk/c3t2d0s1 -u /EFI/HPUX/AUTO /tmp/AUTO.alt
# cat /tmp/AUTO.pri
# cat /tmp/AUTO.alt

10. Add the new partition to vg00. NOTE: using "s2".
# pvcreate -fB /dev/rdsk/c3t2d0s2
# vgextend vg00 /dev/dsk/c3t2d0s2

11. Mirror all logical volumes in vg00. NOTE: using "s2".
# lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c3t2d0s2
# lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c3t2d0s2
# lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c3t2d0s2
.
.

.
# lvextend -m 1 /dev/vg00/lvol8 /dev/dsk/c3t2d0s2

12. Add the new disk to /stand/bootconf. NOTE: using "s2".
# vi /stand/bootconf
l /dev/dsk/c2t1d0s2
l /dev/dsk/c3t2d0s2

13. Verify that the new disk was added to vg00 and that the LVs are in sync.
# vgdisplay -v vg00

14. Verify that the BDRA was updated properly. Note the hardware paths for step 15.
# lvlnboot -v

15. Add EFI primary and high-availability boot path menu entries.
# setboot -p 0/1/1/0.1.0    <-- set the primary disk
# setboot -h 0/1/1/1.2.0    <-- set the mirror disk
# setboot -b on             <-- set autoboot on

16. Verify that the primary and mirror boot paths are configured properly.
# setboot

17. Test the new mirror by booting off of it.
# shutdown -r -y 0

18. Select "HP-UX HA Alternate Boot" to test the mirror.
EFI Boot Manager ver 1.10 [14.61] Firmware ver 2.21 [4334]
Please select a boot option
    HP-UX Primary Boot: 0/1/1/0.1.0
    HP-UX HA Alternate Boot: 0/1/1/1.2.0
    EFI Shell [Built-in]

20. Verify which disk/kernel you booted from.
# grep "Boot device" /var/adm/syslog/syslog.log
vmunix: Boot device's HP-UX HW path is: 0.1.1.1.2.0

21. Remove the temporary files.
# rm /tmp/partitionfile /tmp/AUTO*
Done. Date 0/31/04

INQ displays devices as ACCESS DENIED (from EMC):
Use the rmsf command to remove the entries. (Cause: migrated from an old Symmetrix to a new Symmetrix.)

Host cannot see more than 8 LUNs per port (from EMC):
For HP-UX hosts with HDS 9960 or HP XP512 arrays, set the host mode to 03 rather than the 08 that the HDS documentation specifies. Host mode 03 enables the host to see more than 8 LUNs per port.

How to test whether PowerPath is load balancing and configured properly for failover (emc87060):
The following procedure can be used to make sure that PowerPath is configured properly for load balancing and failover.
This example was done on an HP-UX machine, but it will work (with modifications for the device names) on any Unix host.
Pick a Symmetrix device and note all of the native paths configured for that device:
# powermt display dev=c24t0d1
Symmetrix ID=000187400662
Logical device ID=0011
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host ----------------  - Stor -  -- I/O Path -  -- Stats ---
### HW Path                  I/O Paths  Interf.   Mode    State  Q-IOs Errors
==============================================================================
 24 0/10/0/0.97.32.19.0.0.1  c24t0d1    FA 13aA   active  alive    0     0

 35 0/12/0/0.97.32.19.0.0.1  c35t0d1    FA 13aA   active  alive    0     0
 37 0/10/0/0.97.29.19.0.0.1  c37t0d1    FA  4bA   active  alive    0     0
 38 0/12/0/0.97.29.19.0.0.1  c38t0d1    FA  4bA   active  alive    0     0

Set the policy to round-robin for that device:
# powermt set policy=rr dev=c24t0d1
# powermt display dev=c24t0d1 | grep policy
state=alive; policy=RoundRobin; priority=0; queued-IOs=0

Start I/O to a single device in the group (in this case, use the block device as the input file and /dev/null as the output file to read from the device):
# dd if=/dev/dsk/c24t0d1 of=/dev/null

Show I/O on all of the paths to that device:
# sar -d 10
HP-UX curly B.11.11 U 9000/800 05/18/04
15:54:21  device    %busy  avque  r+w/s  blks/s  avwait  avserv
15:54:31  c1t2d0     0.50   0.50      3      44    4.30    1.54
          c24t0d1    6.79   0.50    336    2688    5.04    0.24
          c35t0d1    8.08   0.50    336    2688    5.04    0.21
          c37t0d1    6.79   0.50    336    2688    5.03    0.25
          c38t0d1    7.68   0.50    336    2689    5.05    0.21
There is I/O down all four paths to Symmetrix device 0011.
Note: In the above example there is only I/O down the four paths to device 0011 and to the internal disk (c1t2d0). If there is I/O to many devices on the system (almost certain in a production environment), egrep can be used to display only the paths to the Symmetrix device in question:
# sar -d 10 | egrep "c24t0d1|c35t0d1|c37t0d1|c38t0d1"
16:00:56  c24t0d1    8.90   0.50    366    2929    5.06    0.20
          c35t0d1    6.70   0.50    366    2929    5.02    0.20
          c37t0d1    6.70   0.50    366    2929    4.99    0.21

How to interpret HP-UX device numbers from SCSI read and write errors in the syslog (emc88252):
When SCSI read and write errors are logged in the syslog, the device number is written in hex. For example:
Jun 2 21:13:38 pdb01 vmunix: SCSI: Read error -- dev: b 31 0x2b8400, errno: 126, resid: 8192,
Jun 2 21:13:39 pdb01 above message repeats 13 times
Jun 2 21:13:39 pdb01 vmunix: - dev: b 31 0x2b8400, errno: 126, resid: 8192,
Jun 2 21:13:39 pdb01 vmunix: blkno: 2895352, sectno: 5790704, offset: -1330126848, bcount: 8192.
Jun 2 21:13:52 pdb01 vmunix: SCSI: Write error -- dev: b 31 0x038400, errno: 126, resid: 8192,
Jun 2 21:13:52 pdb01 vmunix: blkno: 5454416, sectno: 10908832, offset: 1290354688, bcount: 8192.
The numbers can be broken down as follows:
vmunix: SCSI: Write error -- dev: b 31 0x2b8400
  2b = controller 43
   8 = target 8
   4 = LUN 4
This translates to c43t8d4.
vmunix: SCSI: Read error -- dev: b 31 0x038400
  03 = controller 3
   8 = target 8
   4 = LUN 4
This translates to c3t8d4.
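The hex breakdown above can be automated with shell arithmetic. A small sketch based on the layout just described (controller in bits 16-23, target in bits 12-15, LUN in bits 8-11):

```shell
# Decode a syslog SCSI-error minor number into a cXtYdZ device name.
minor=0x2b8400                      # the "dev: b 31 0x2b8400" value from syslog
ctrl=$(( (minor >> 16) & 0xff ))    # controller: 0x2b = 43
tgt=$((  (minor >> 12) & 0xf  ))    # target: 8
lun=$((  (minor >>  8) & 0xf  ))    # LUN: 4
echo "c${ctrl}t${tgt}d${lun}"       # prints c43t8d4
```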

There is another very easy way to find it (my note): a SCSI write error is most probably either a disk or a tape. In this case the major number is 31, which is always disk in HP-UX. So run ll /dev/dsk and grep for the entry 0x038400, and you will know right away which device it is.
There are 2 paths to the same Symmetrix device (091) that is logging the read and write errors:
/dev/rdsk/c3t8d4  :EMC :SYMMETRIX :5568 :32091000 :8838720
/dev/rdsk/c43t8d4 :EMC :SYMMETRIX :5568 :32091000 :8838720
In most cases when this type of error is seen against Symmetrix devices, the issue is logical corruption, and running fsck on the affected logical volumes or devices will solve the problem.

VxFS-related FAQs
To increase or decrease space in a file system, first extend the logical volume:
# lvextend -L 72 /dev/vg01/lvol1
# fsadm -b new_size mount_point     (here new_size = 72*1024, assuming a 1 KB block size)
How to report on directory fragmentation:
# fsadm -D mount_point
How to reorganize directories to reduce fragmentation and reclaim wasted space:
# fsadm -d mount_point
How to report on extent fragmentation within a file system:
# fsadm -E mount_point
How to reorganize (defragment) a file system's extents to reduce fragmentation and reclaim wasted space:
# fsadm -e mount_point
How to create a snapshot file system. First create a separate logical volume for the purpose:
# lvcreate -L 20 -n snap_back /dev/vg02
# mkdir /backhome
# mount -o snapof=primary_special special mount_point
e.g. # mount -o snapof=/dev/vg02/lvdata /dev/vg02/snap_back /backhome
How to change extent attributes to maximize performance:
# setext -e extent_size -r reservation -f flags file

How to export and import a volume group
On the source server, unmount the file system(s) of the VG you want to export:
umount /test                           (e.g. /test is the mount point)
vgchange -a n vg-test
vgexport -p -m vg-test.map vg-test     (preview and create the map file; omit -p to actually export)
On the new server:
ioscan -fnC disk                       (find the new disk entries)
insf -d sdisk                          (install the disk device files)
ll /dev/*/group                        (see which minor numbers are already in use)
mkdir /dev/vg-test                     (create the directory)
mknod /dev/vg-test/group c 64 0x010000 (create the group file, using the next available minor number)
vgimport -m vg-test.map /dev/vg-test /dev/dsk/cxtxdx   (import the volume group)
vgchange -a y vg-test                  (activate the volume group)
mkdir /test
mount /dev/vg-test/lvxx /test

How to test network bandwidth / test the FTP data transfer rate without actually transferring a file:
ftp targethost
(log in with username/password)
bin
hash
put "|dd if=/dev/zero bs=32k count=1000" /dev/null

How to convert numbers to and from binary format
To convert the decimal value 1000 to binary, enter:
bc
obase=2
1000
1111101000
To convert the binary number 1111101000 to decimal, enter:
bc
ibase=2
1111101000
1000

How to mount an ISO image:
# nohup pfs_mountd &
# nohup pfsd &
# pfs_mount -o xlat=UNIX pathToIso mountPoint
or
# /usr/sbin/pfs_mount -t iso9660 -x unix /images/cd.iso /mnt

Problem: unable to recover an rx2620 server to an rx7620.
Possible (tested) solution: boot from the recovery tape, note down each file system's size, then delete and re-create the file systems from the Ignite menu. I successfully recovered one of my clients' servers using this method; I don't know whether anyone else has tested it.

How to use the linkloop command
Suppose you want to troubleshoot a network problem, you have the MAC address of the remote server running HP-UX, and you want to check connectivity from your current server using lan0. From ioscan -funC lan you got the instance number, which is 0, and the MAC of the remote Ethernet card is 0x00306EF3FDBD. The syntax is:
# linkloop -vi 0 0x00306EF3FDBD

If swinstall, swlist, or SAM takes a very long time to come back
Check the /etc/hosts file and make sure the hostname matches the proper IP address. Then check /etc/resolv.conf for correct entries and see whether you can ping the DNS server. For an instant workaround you may rename /etc/resolv.conf and then run swagentd -r. Once you are done, move the file back and rerun swagentd -r.

How to check any tape library or optical jukebox
# ioscan -funC autoch     (note down the device name and path, e.g. /dev/rac/cxtxdx)
# mc -p /dev/rac/cxtxdx -r IDSM     (shows all the slot and drive information)
# mc -p /dev/rac/cxtxdx -e IDSM     (shows less detailed information)
How to find the tapes in the slots:
# mc -p /dev/rac/cxtxdx -r IDSM | grep -i full
How to move a tape from storage slot 5 to drive 1:
# mc -p /dev/rac/cxtxdx -s S5 -d D1
(-s = source, -d = destination; S = storage slot, D = drive, E = export/import slot, M = media changer, i.e. the robot)
How to display all information for all nPars:
# parstatus
How to display the properties of nPar 0 only:
# parstatus -V -p0
Example of a parcreate command to create a partition named shreya with cell 2 and cell 3 (remember at least one cell must have core I/O attached, i.e. must have an I/O drawer):
# parcreate -P shreya -c 2:base:y:ri -c 3:base:y:ri

Here is the output of the parstatus command without any switches. (I am not explaining every field, as the man page has all the information, but this output will help people who are new to the nPar world.)

root@SDPROD0> parstatus
Warning: No action specified. Default behaviour is display all.

[Complex]
Complex Name                     : GOD
Complex Capacity
Compute Cabinet (8 cell capable) : 1
Active GSP Location              : cabinet 0
Model                            : 9000/800/SD32000
Serial Number                    : USE12345678
Current Product Number           : A5201A
Original Product Number          : A5201A
Complex Profile Revision         : 1.0
The total number of Partitions Present : 2

[Cabinet]
                   Blowers   I/O Fans  Bulk Power  Backplane
                   OK/       OK/       Supplies    Power Boards
Cab                Failed/   Failed/   OK/Failed/  OK/Failed/
Num  Cabinet Type  N Status  N Status  N Status    N Status      GSP
===  ============  ========  ========  ==========  ============  ======
0    SD32000       4/0/N+    5/0/?     6/0/N+      3/0/N+        active

Notes: N+ = There are one or more spare items (fans/power supplies).
       N  = The number of items meets but does not exceed the need.
       N- = There are insufficient items to meet the need.
       ?  = The adequacy of the cooling system/power supplies is unknown.
[Cell]
                          CPU      Memory (GB)                      Core     Use On  Par
Hardware    Actual        OK/      OK/                              Cell     Next    Num
Location    Usage         Deconf/  Deconf      Connected To         Capable  Boot
                          Max
==========  ============  =======  ==========  ===================  =======  ======  ===
cab0,cell0  active core   4/0/4    16.0/ 0.0   cab0,bay1,chassis3   yes      yes     1
cab0,cell1  active base   4/0/4    12.0/ 0.0   cab0,bay0,chassis3   yes      yes     1
cab0,cell2  inactive      4/0/4     8.0/ 0.0   -                    no       -
cab0,cell3  inactive      4/0/4    12.0/ 0.0   -                    no       -
cab0,cell4  active core   4/0/4    12.0/ 0.0   cab0,bay0,chassis1   yes      yes     0
cab0,cell5  active base   4/0/4    12.0/ 0.0   -                    no       yes     0
cab0,cell6  active base   4/0/4    12.0/ 0.0   cab0,bay1,chassis1   yes      yes     0
cab0,cell7  inactive      4/0/4    12.0/ 0.0   -                    no       -

[Chassis]
                                 Core  Connected   Par
Hardware Location   Usage        IO    To          Num
==================  ===========  ====  ==========  ===
cab0,bay0,chassis0  absent       -     -
cab0,bay0,chassis1  active       yes   cab0,cell4  0
cab0,bay0,chassis2  absent       -     -
cab0,bay0,chassis3  active       yes   cab0,cell1  1
cab0,bay1,chassis0  absent       -     -
cab0,bay1,chassis1  active       yes   cab0,cell6  0
cab0,bay1,chassis2  absent       -     -
cab0,bay1,chassis3  inactive     yes   cab0,cell0  1

[Partition]
Par               # of   # of I/O
Num  Status       Cells  Chassis   Core cell   Partition Name (first 30 chars)
===  ===========  =====  ========  ==========  ===============================
0    active       3      2         cab0,cell4  sdprod0
1    active       2      1         cab0,cell0  sdoraprod1

Problem: unable to recover an RP4440 server to another RP4440 server. The OS loads from make_tape_recovery, but the process fails after the OS installation completes.
Possible solution: check the SCSI cards used in both servers. Probably the old RP4440 uses an Ultra160 LVD card and the new RP4440 uses an Ultra320 card. One possible and tested solution is to add an Ultra160 LVD SCSI card to the new server and connect the internal SCSI disks to it. A second tested solution is the same, but using an external DASD such as a DS2300 or DS2320. A third solution is to install the driver for the Ultra320 LVD SCSI card (or whichever card is used in the new server) into the OS of the old server, and then rerun make_tape_recovery on the old server.

Problem: linkloop works fine, but you are still unable to ping the router or any other server on the same network.
Possible solution: this worked for three servers with the same issue, though I have no explanation for it. Run ifconfig lan0 down and ifconfig lan0 unplumb, then run ifconfig lan0 plumb and ifconfig lan0 xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx (replace the x's with your IP address and subnet mask). That linkloop works means the servers can see each other, i.e. they are connected at the link level.

HP Service Guard Cluster
This article describes the installation steps for an MC/ServiceGuard cluster on two HP-UX servers.

Environment:

Server 1:
Hardware: HP Integrity rx4640
OS: HP-UX B.11.31
Servername: boston.vogtnet.com
Stationary IP: 172.16.18.30 (lan0)
Heartbeat IP: 10.10.1.30 (lan1)
Standby: (lan2)
Lock Disk: VG /dev/vglock, PV /dev/disk/disk12

Server 2:
Hardware: HP Integrity rx4640
OS: HP-UX B.11.31
Servername: denver.vogtnet.com
Stationary IP: 172.16.18.31 (lan0)
Heartbeat IP: 10.10.1.31 (lan1)
Standby: (lan2)
Lock Disk: VG /dev/vglock, PV /dev/disk/disk12

Storage:

HP Enterprise Virtual Array EVA8000 SAN

Cluster Installation Steps

1. Configure /etc/hosts
-> on boston.vogtnet.com:

# vi /etc/hosts
----------------------------------------
# boston
172.16.18.30 boston.vogtnet.com boston
10.10.1.30 boston.vogtnet.com boston

127.0.0.1 localhost loopback
# denver
172.16.18.31 denver.vogtnet.com denver
10.10.1.31 denver.vogtnet.com denver
----------------------------------------

-> on denver.vogtnet.com

# vi /etc/hosts
----------------------------------------
# denver
172.16.18.31 denver.vogtnet.com denver
10.10.1.31 denver.vogtnet.com denver
127.0.0.1 localhost loopback
# boston
172.16.18.30 boston.vogtnet.com boston
10.10.1.30 boston.vogtnet.com boston
----------------------------------------

2. Set $SGCONF (on both nodes)
# vi ~/.profile
----------------------------------------
SGCONF=/etc/cmcluster
export SGCONF
----------------------------------------

# echo $SGCONF
/etc/cmcluster

3. Configure ~/.rhosts (for rcp; don't use this in secure environments)
-> on boston.vogtnet.com

# cat ~/.rhosts
denver root

-> on denver.vogtnet.com

# cat ~/.rhosts
boston root

4. Create the $SGCONF/cmclnodelist
(every node in the cluster must be listed in this file)

# vi $SGCONF/cmclnodelist
----------------------------------------
boston root
denver root
----------------------------------------

# rcp cmclnodelist denver:/etc/cmcluster/

5. Configure Heartbeat IP (lan1)
-> on boston.vogtnet.com

# vi /etc/rc.config.d/netconf
----------------------------------------
INTERFACE_NAME[1]="lan1"
IP_ADDRESS[1]="10.10.1.30"
SUBNET_MASK[1]="255.255.255.0"

BROADCAST_ADDRESS[1]=""
INTERFACE_STATE[1]=""
DHCP_ENABLE[1]=0
INTERFACE_MODULES[1]=""
----------------------------------------

-> on denver.vogtnet.com

# vi /etc/rc.config.d/netconf
----------------------------------------
INTERFACE_NAME[1]="lan1"
IP_ADDRESS[1]="10.10.1.31"
SUBNET_MASK[1]="255.255.255.0"
BROADCAST_ADDRESS[1]=""
INTERFACE_STATE[1]=""
DHCP_ENABLE[1]=0
INTERFACE_MODULES[1]=""
----------------------------------------

Restart the network:

# /sbin/init.d/net stop
# /sbin/init.d/net start
# ifconfig lan1
lan1: flags=1843<UP,BROADCAST,RUNNING,MULTICAST,CKO> inet 10.10.1.30 netmask ffffff00 broadcast 10.10.1.255
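Note that ifconfig prints the netmask in hex (ffffff00 above). A quick sketch for converting such a hex mask to dotted-quad form, using the value from the output above (the base#value syntax is bash/ksh arithmetic):

```shell
# Convert a hex netmask, as printed by ifconfig, to dotted-quad notation.
mask=ffffff00
m=$(( 16#$mask ))                   # parse the hex string as an integer (bash/ksh)
echo "$(( (m >> 24) & 255 )).$(( (m >> 16) & 255 )).$(( (m >> 8) & 255 )).$(( m & 255 ))"
# prints 255.255.255.0
```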

6. Disable the Auto Activation of LVM Volume Groups (on both nodes)
# vi /etc/lvmrc

—————————————AUTO_VG_ACTIVATE=0 —————————————-

7. Lock Disk
(The lock disk does not have to be dedicated to the cluster lock; the disk can be used as part of a normal volume group with user data on it. The cluster lock volume group and physical volume names are identified in the cluster configuration file.) However, in this cluster we use a dedicated lock volume group, so we can be sure this VG will never be deleted. As soon as this VG is registered as the lock disk in the cluster configuration, it will automatically be marked as cluster-aware. Create a LUN on the EVA and present it to boston and denver.

On boston.vogtnet.com:

# ioscan -N -fnC disk
disk 12 64000/0xfa00/0x7 esdisk CLAIMED DEVICE HP HSV210 /dev/disk/disk12 /dev/rdisk/disk12

# mkdir /dev/vglock
# mknod /dev/vglock/group c 64 0x010000
# ll /dev/vglock
crw-r--r-- 1 root sys 64 0x010000 Jul 31 14:42 group

# pvcreate -f /dev/rdisk/disk12
Physical volume "/dev/rdisk/disk12" has been successfully created.

// Create the VG using the HP-UX 11.31 agile multipathing device file instead of legacy LVM alternate paths.

# vgcreate /dev/vglock /dev/disk/disk12

Volume group "/dev/vglock" has been successfully created.
Volume Group configuration for /dev/vglock has been saved in /etc/lvmconf/vglock.conf

# strings /etc/lvmtab
/dev/vglock /dev/disk/disk12

# vgexport -v -p -s -m vglock.map /dev/vglock
# rcp vglock.map denver:/
denver.vogtnet.com:

# mkdir /dev/vglock
# mknod /dev/vglock/group c 64 0x010000
# vgimport -v -s -m vglock.map vglock
--> Agile multipathing of HP-UX 11.31 is not used by default after the import (an HP-UX 11.31 bug?): the volume group uses legacy LVM alternate paths. Solution:

# vgchange -a y vglock
// Remove Alternate Paths

# vgreduce vglock /dev/dsk/c16t0d1 /dev/dsk/c14t0d1 /dev/dsk/c18t0d1 /dev/dsk/c12t0d1 /dev/dsk/c8t0d1 /dev/dsk/c10t0d1 /dev/dsk/c6t0d1
// Add agile Path

# vgextend /dev/vglock /dev/disk/disk12
// Remove Primary Path

# vgreduce vglock /dev/dsk/c4t0d1
Device file path "/dev/dsk/c4t0d1" is a primary link. Removing primary link and switching to an alternate link.
Volume group "vglock" has been successfully reduced.
Volume Group configuration for /dev/vglock has been saved in /etc/lvmconf/vglock.conf

# strings /etc/lvmtab
/dev/vglock /dev/disk/disk12

# vgchange -a n vglock
// Backup VG

# vgchange -a r vglock
# vgcfgbackup /dev/vglock
Volume Group configuration for /dev/vglock has been saved in /etc/lvmconf/vglock.conf

# vgchange -a n vglock

8. Create Cluster Config (on boston.vogtnet.com)
# cmquerycl -v -C /etc/cmcluster/cmclconfig.ascii -n boston -n denver
# cd $SGCONF
# cat cmclconfig.ascii | grep -v "^#"
----------------------------------------
CLUSTER_NAME cluster1
FIRST_CLUSTER_LOCK_VG /dev/vglock
NODE_NAME denver
  NETWORK_INTERFACE lan0
    HEARTBEAT_IP 172.16.18.31
  NETWORK_INTERFACE lan2
  NETWORK_INTERFACE lan1
    STATIONARY_IP 10.10.1.31
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c16t0d1
NODE_NAME boston

  NETWORK_INTERFACE lan0
    HEARTBEAT_IP 172.16.18.30
  NETWORK_INTERFACE lan2
  NETWORK_INTERFACE lan1
    STATIONARY_IP 10.10.1.30
  FIRST_CLUSTER_LOCK_PV /dev/disk/disk12
HEARTBEAT_INTERVAL 1000000
NODE_TIMEOUT 2000000
AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000
NETWORK_FAILURE_DETECTION INOUT
MAX_CONFIGURED_PACKAGES 150
VOLUME_GROUP /dev/vglock
----------------------------------------

-> Change this file to:

----------------------------------------
CLUSTER_NAME MCSG_SAP_Cluster
FIRST_CLUSTER_LOCK_VG /dev/vglock
NODE_NAME denver
  NETWORK_INTERFACE lan0
    STATIONARY_IP 172.16.18.31
  NETWORK_INTERFACE lan2
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 10.10.1.31
  FIRST_CLUSTER_LOCK_PV /dev/disk/disk12

NODE_NAME boston
  NETWORK_INTERFACE lan0
    STATIONARY_IP 172.16.18.30
  NETWORK_INTERFACE lan2
  NETWORK_INTERFACE lan1
    HEARTBEAT_IP 10.10.1.30
  FIRST_CLUSTER_LOCK_PV /dev/disk/disk12
HEARTBEAT_INTERVAL 1000000
NODE_TIMEOUT 5000000
AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000
NETWORK_FAILURE_DETECTION INOUT
MAX_CONFIGURED_PACKAGES 15
VOLUME_GROUP /dev/vglock
----------------------------------------

# cmcheckconf -v -C cmclconfig.ascii
Checking cluster file: cmclconfig.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 2 devices on node denver
Found 2 devices on node boston
Analysis of 4 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 2 volume groups on node denver

Found 2 volume groups on node boston
Analysis of 4 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Checking for inconsistencies
Adding node denver to cluster MCSG_SAP_Cluster
Adding node boston to cluster MCSG_SAP_Cluster
cmcheckconf: Verification completed with no errors found.
Use the cmapplyconf command to apply the configuration.
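The timing parameters in the cluster file above are expressed in microseconds. A quick sanity-check sketch using the values from the edited configuration:

```shell
# Convert the microsecond timer values from cmclconfig.ascii to seconds.
HEARTBEAT_INTERVAL=1000000    # 1 second between heartbeats
NODE_TIMEOUT=5000000          # 5 seconds before a node is declared down
echo "heartbeat every $(( HEARTBEAT_INTERVAL / 1000000 ))s, node timeout $(( NODE_TIMEOUT / 1000000 ))s"
# prints: heartbeat every 1s, node timeout 5s
```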

# cmapplyconf -v -C cmclconfig.ascii
Checking cluster file: cmclconfig.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 2 devices on node denver
Found 2 devices on node boston
Analysis of 4 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 2 volume groups on node denver
Found 2 volume groups on node boston
Analysis of 4 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information

Beginning network probing (this may take a while)
Completed network probing
Checking for inconsistencies
Adding node denver to cluster MCSG_SAP_Cluster
Adding node boston to cluster MCSG_SAP_Cluster
Marking/unmarking volume groups for use in the cluster
Completed the cluster creation

// Deactivate the VG (vglock will be activated by the cluster daemon)

# vgchange -a n /dev/vglock

9. Start the Cluster (on boston.vogtnet.com)
# cmruncl -v
cmruncl: Validating network configuration...
cmruncl: Network validation complete
Waiting for cluster to form ..... done
Cluster successfully formed.
Check the syslog files on all nodes in the cluster to verify that no warnings occurred during startup.

# cmviewcl -v
MCSG_SAP_Cluster up

NODE      STATUS    STATE
denver    up        running

Cluster_Lock_LVM:
VOLUME_GROUP    PHYSICAL_VOLUME     STATUS
/dev/vglock     /dev/disk/disk12    up

Network_Parameters:

INTERFACE    STATUS    PATH       NAME
PRIMARY      up        0/2/1/0    lan0
PRIMARY      up        0/2/1/1    lan1
STANDBY      up        0/3/2/0    lan2

NODE      STATUS    STATE
boston    up        running

Cluster_Lock_LVM:
VOLUME_GROUP    PHYSICAL_VOLUME     STATUS
/dev/vglock     /dev/disk/disk12    up

Network_Parameters:
INTERFACE    STATUS    PATH       NAME
PRIMARY      up        0/2/1/0    lan0
PRIMARY      up        0/2/1/1    lan1

STANDBY      up        0/3/2/0    lan2

10. Cluster Startup / Shutdown
// Automatic startup: set AUTOSTART_CMCLD=1 in /etc/rc.config.d/cmcluster
// Manual startup:

# cmruncl -v
// Overview

# cmviewcl -v
// Stop Cluster

# cmhaltcl -v
