Wednesday, October 24, 2012

Steps for detaching a snapshot volume LUN and creating a new volume copy from the snapshot on an HP P2000 MSA SAN (VMware PowerCLI / esxcli / HP CLI)



Step1. powerCLI (Only on the host where the VM is running)
    Shutdown & Power off the VM
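
    A minimal PowerCLI sketch (the server address and the VM name LIS_VM are placeholders; substitute your own):
 # Connect-VIServer -Server <esxi-host>
 # Get-VM -Name LIS_VM | Shutdown-VMGuest -Confirm:$false   (graceful guest shutdown; needs VMware Tools)
 # Get-VM -Name LIS_VM | Stop-VM -Confirm:$false            (hard power off if Tools is unavailable)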

 
Step2. esxCLI (Perform on all hosts)
    a. If the LUN is an RDM, skip to step c. Otherwise, to get a list of all datastores mounted to an ESXi host, run the command:
 # esxcli storage filesystem list
    b. Unmount the datastore by running the command:
 # esxcli storage filesystem unmount [-u <UUID> | -l <label> | -p <path> ]
    c. Detach the device/LUN, run this command:
 # esxcli storage core device set --state=off -d NAA_ID
    d. To list the permanently detached devices (shows all detached LUNs):
 # esxcli storage core device detached list
    e. To verify that the device is offline, run this command:
 # esxcli storage core device list -d NAA_ID
    f. Running the partedUtil getptbl command on the device should now report that the device is not found:
 # partedUtil getptbl /vmfs/devices/disks/naa.?????????
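
    For example, using the datastore label and one of the NAA IDs that appear later in this post (treat both as placeholders for your own values):
 # esxcli storage filesystem unmount -l 10_23_LIS_Daily_C001
 # esxcli storage core device set --state=off -d naa.600c0ff00011d0284ca76c5001000000
 # esxcli storage core device detached list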


Step3. hpCLI
 # show volume-maps
 # unmap volume 10_23_LIS_Daily_C001



Step4. esxCLI (Perform on all hosts)
  Rescan all devices on each ESXi host by running one of the following:
    For rescanning vmhba2 only
 # esxcli storage core adapter rescan -A vmhba2
    For rescanning all adapters
 # esxcli storage core adapter rescan -a 
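
    To double-check the result, list any devices still in the off state (the same one-liner used in the LIST of COMMANDS section below); the unmapped LUN should no longer appear:
 # esxcli storage core device list | grep off -B12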


Step5. hpCLI
 # show volumes
 # delete volumes 10_23_LIS_Daily_C001  (took about 10 seconds for a 2.6 TB volume)



Step6. hpCLI
 # show volumes
 # show snapshots
 # show vdisks



Step7.  hpCLI
    To initiate a volume copy from the snapshot:
 # volumecopy modified-snapshot no source-volume LIS_Daily dest-vdisk vd01 10_24_LIS_CLONE prompt yes
   Success: Command completed successfully. (LIS_Daily) - The volume copy started. (2012-10-24 16:18:53)

    Check volumecopy status
 # show volumecopy-status


Step8.  hpCLI
    Map volume 10_24_LIS_CLONE with read-write access for hosts ESXHS01_vmhba2 and ESXHS02_vmhba2, using ports A1,A2,B1,B2 and LUN 101:
 # map volume access rw ports a1,a2,b1,b2 lun 101 host ESXHS01_vmhba2,ESXHS02_vmhba2 10_24_LIS_CLONE
  
    To apply the mapping to all hosts, omit the host parameter (by default the mapping applies to all hosts):
 # map volume access rw ports a1,a2,b1,b2 lun 101 10_24_LIS_CLONE
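
    To verify the new mapping on the array (show volume-maps accepts an optional volume name), then rescan on the ESXi side to pick up the new LUN:
 # show volume-maps 10_24_LIS_CLONE
 # esxcli storage core adapter rescan -a    (run on each ESXi host)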

Tuesday, October 23, 2012

Give me an object lesson rather than showing me your abusive power

A very memorable line learned from a TV series called ALIAS

Give me an object lesson rather than showing me your abusive power

Monday, October 15, 2012

Steps for solving the Rubik's Cube (home-made recipe)

Bottom two layers
3a: Ti Fi T F T R Ti Ri  (right-side top-center piece to be moved to the front-side right-center)

3b: T R Ti Ri Ti Fi T F  (front-side top-center piece to be moved to the right-side left-center)

Top Center pieces
4 : F R U Ri Ui Fi   

5 : R T Ri T R T2 Ri

  Corners : A - B in place
6 : Ri F Ri B2 R Fi Ri B2 R2 Ui

  Center Top (cycles the three edge positions E, R, and F)
    Clockwise
7a: F2 U Ri L F2 R Li U F2
    Anti-clockwise
7b: F2 Ui Ri L F2 R Li Ui F2

Tuesday, October 2, 2012

Bring up interface ethx : Error: No suitable device found: no device found for connection 'System ethx'


Network devices failing to start after MAC address change in RHEL 6


  Remove the file /etc/udev/rules.d/70-persistent-net.rules and reload the network device's module.

  For example:
 
   # rm /etc/udev/rules.d/70-persistent-net.rules
   # rmmod e1000
   # modprobe e1000
   # service network restart
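
   If you are unsure which module your NIC uses (e1000 above is only an example), query the driver first:
   # ethtool -i eth0 | grep driver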


 Why and what is happening:
   In RHEL 6, when udev detects a new network device it runs /lib/udev/write_net_rules to generate /etc/udev/rules.d/70-persistent-net.rules. This file contains rules that map a MAC address to a specific ethX name persistently. If the MAC address changes, the file still reflects the old MAC, so udev is unable to give the new device the desired ethX name. By removing this file and loading the module again, udev sees a device that is not listed and runs the script again, generating appropriate rules.
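
   For reference, a rule in 70-persistent-net.rules looks like the following (the MAC address shown is a made-up example):
   SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:12:34:56", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"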

Best Practice: VMWARE: Unpresenting a LUN on ESXi 5.0



Delete / Remove dead path http://raj2796.wordpress.com/2012/03/14/vmware-vsphere-5-dead-lun-and-pathing-issues-and-resultant-scsi-errors/


LIST of COMMANDS

~ # esxcli storage core device list|grep off -B12
~ # esxcli storage core device set --state=off -d naa.600c0ff00011d0284ca76c5001000000
~ # esxcli storage core device detached list
~ # partedUtil getptbl /vmfs/devices/disks/naa.600c0ff00011d028364bd54d01000000
~ # esxcli storage core adapter rescan -A vmhba1
~ # esxcli storage core adapter rescan -A vmhba2
~ # esxcli storage core adapter rescan --all


STEPS

  1. Getting the NAA ID of the LUN to be removed
    # esxcli storage vmfs extent list

  2. Unpresenting a LUN from vSphere Client
    • If the LUN is an RDM, skip to next step. Otherwise, in the Configuration tab of the ESXi host, click Storage. Right-click the datastore being removed, and click Unmount.

      Note: To unmount a datastore from multiple hosts, from the vSphere Client select Hosts and Clusters, Datastores and Datastore Clusters view (Ctrl+Shift+D)
    •  Choose the Devices View (Under Configuration > Storage > Devices tab):
      Right-click the NAA ID of the LUN (as noted above) and click Detach. A Confirm Device Unmount window is displayed. Once the prerequisite criteria are met, click OK. Perform this individually on all hosts.

In our case vCenter wasn't an option since the hosts were unresponsive and vCenter couldn't communicate; also, the LUNs were already detached since they were never used. So:
List the permanently detached devices:
# esxcli storage core device detached list
Look at the output for LUNs with state off, e.g.:
Device UID                            State
------------------------------------  -----
naa.50060160c46036df50060160c46036df  off
naa.6006016094602800c8e3e1c5d3c8e011  off
Next, permanently remove the device configuration information from the system:
# esxcli storage core device detached remove -d NAA_ID
e.g.
# esxcli storage core device detached remove  -d naa.50060160c46036df50060160c46036df

OR

To detach a device/LUN, run this command:
# esxcli storage core device set --state=off -d NAA_ID
To verify that the device is offline, run this command:
# esxcli storage core device list -d NAA_ID


Monday, October 1, 2012

Understanding RHEL daemons (RedHat)

http://magazine.redhat.com/2007/03/09/understanding-your-red-hat-enterprise-linux-daemons/

acpid

This is the daemon for the Advanced Configuration and Power Interface (ACPI). ACPI is an open industry standard for system control related actions, most notably plug-and-play hardware recognition and power management, such as startup and shutdown and putting systems into low power consumption modes.

You'll probably never want to shut down this daemon, unless you are explicitly instructed to do so to debug a hardware problem.

Learn more:
http://www.acpi.info

anacron

One of the problems with living on a laptop, as so many of us do these days, is that when you set up a cron job to run, you can't always be sure that your laptop will be running at the time that the job should run. anacron (the name refers to its being an "anachronistic cron") gets around this problem by scheduling tasks in days. For example, anacron will run a job if the job has not been run in the specified number of days.

When are you safe not running anacron? When your system runs continuously. Should you simply stop cron from running if you have anacron? No; anacron can only specify job intervals in days, not hours or minutes.
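
A minimal /etc/anacrontab sketch (the script paths are made-up examples); the fields are the period in days, a delay in minutes, a job identifier, and the command:

  # period  delay  job-identifier  command
  1         5      daily.backup    /usr/local/bin/daily-backup.sh
  7         10     weekly.backup   /usr/local/bin/weekly-backup.sh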

Learn more:
http://anacron.sourceforge.net

apmd

This is the daemon for the Advanced Power Management (APM) BIOS driver. The APM hardware standard and apmd are being replaced by ACPI and acpid. If your hardware supports ACPI, then you don't need to run apmd.

atd

This is the daemon for the at job processor (at enables you to run tasks at specified times). You can turn off this daemon if you don't use it.
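
For example, to schedule a one-off job for 11 PM tonight (the tar command is only an illustration):

  # echo "tar czf /tmp/etc-backup.tgz /etc" | at 23:00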

autofs

This daemon automatically mounts disks and file systems that you define in a configuration file. Using this daemon can be more convenient than explicitly mounting removable disks.
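
A minimal sketch of the configuration, mirroring the stock Red Hat CD-ROM example (adjust devices and mount points to your system): /etc/auto.master points a directory at a map file, and the map file lists what to mount there.

  # /etc/auto.master
  /misc  /etc/auto.misc

  # /etc/auto.misc  (accessing /misc/cd automounts the CD-ROM)
  cd  -fstype=iso9660,ro,nosuid,nodev  :/dev/cdrom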

Learn more:
http://freshmeat.net/projects/autofs


avahi-daemon and avahi-dnsconfd

The Avahi website defines Avahi as: 'a system which facilitates service discovery on a local network. This means that you can plug your laptop or computer into a network and instantly be able to view other people who you can chat with, find printers to print to, or find files being shared…' Avahi is a Zeroconf implementation. Zeroconf is an approach that enables users to create usable IP networks without having special configuration servers such as DNS servers.
A common use of the avahi-daemon is with Rhythmbox, so you can see music that is made available to be shared with others. If you're not sharing music or files on your system, you can turn off this daemon.
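
To see what Avahi actually discovers on your LAN, browse all advertised services and exit (avahi-browse comes with the avahi-tools package):

  # avahi-browse --all --terminate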

Learn more:
http://avahi.org
http://zeroconf.org


more ...here...