Friday, 14 December 2012

Host Profile Error: host state doesn't match specification SATP configuration for device


The host profile compliance check fails with the error: host state doesn't match specification SATP configuration for device.
To disable the PSA and NMP profiles in the vSphere Client:
  1. Log into the vCenter using the VI Client.
  2. Under the Home view, click Host Profiles under Management.
  3. In the Host Profiles view, right click the host profile and select the second option, Enable/Disable Profile Configuration.
  4. Expand Storage Configuration.
  5. Expand Pluggable Storage Architecture (PSA) configuration.
  6. Deselect the PSA Device Configuration profile.
  7. Expand Native Multipathing (NMP).
  8. Expand PSP and SATP Configuration for NMP Devices.
  9. Deselect the PSP configuration and SATP configuration entries for the affected devices.
  10. Click OK.
  11. In Profile Compliance, click Check Compliance to re-run the check (a command-line check of the host's settings is sketched below).
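
The settings the profile is comparing against can also be inspected on the host itself. A minimal command-line sketch for an ESXi 5.x host; naa.id is a placeholder for the affected device:

esxcli storage nmp satp list
esxcli storage nmp device list -d naa.id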

Tuesday, 11 December 2012

Creation of SQL DB for vCenter 5.1

http://pubs.vmware.com/vsphere-51/topic/com.vmware.vsphere.install.doc/GUID-36B92A8C-074A-4657-9938-62AB97225B91.html
From the website above do the following in this order:

1. Create a SQL Server Database and User for vCenter Server
2. Set Database Permissions By Manually Creating Database Roles and the VMW Schema
3. Set Database Permissions by Using the dbo Schema and the db_owner Database Role

Or, using the scripts below, make vpxuser the database owner (db_owner):

1. Create a SQL Server Database and User for vCenter Server


use [master] 
go 
CREATE DATABASE [VCDB] ON PRIMARY 
(NAME = N'vcdb', FILENAME = N'C:\VCDB.mdf', SIZE = 2000KB, FILEGROWTH = 10% ) 
LOG ON 
(NAME = N'vcdb_log', FILENAME = N'C:\VCDB.ldf', SIZE = 1000KB, FILEGROWTH = 10%) 
COLLATE SQL_Latin1_General_CP1_CI_AS 
go
use VCDB 
go 
sp_addlogin @loginame=[vpxuser], @passwd=N'vpxuser!0', @defdb='VCDB', @deflanguage='us_english'
go 
ALTER LOGIN [vpxuser] WITH CHECK_POLICY = OFF 
go 
CREATE USER [vpxuser] for LOGIN [vpxuser]
go
use MSDB
go
CREATE USER [vpxuser] for LOGIN [vpxuser]
go


2. Use a Script to Create a Microsoft SQL Server Database Schema and Roles

use [VCDB]
go
CREATE SCHEMA [VMW]
go
ALTER USER [vpxuser] WITH DEFAULT_SCHEMA =[VMW]
go

if not exists (SELECT name FROM sysusers WHERE issqlrole=1 AND name = 'VC_ADMIN_ROLE')
CREATE ROLE VC_ADMIN_ROLE;
GRANT ALTER ON SCHEMA :: [VMW] to VC_ADMIN_ROLE;
GRANT REFERENCES ON SCHEMA :: [VMW] to VC_ADMIN_ROLE;
GRANT INSERT ON SCHEMA ::  [VMW] to VC_ADMIN_ROLE;

GRANT CREATE TABLE to VC_ADMIN_ROLE;
GRANT CREATE VIEW to VC_ADMIN_ROLE;
GRANT CREATE Procedure to VC_ADMIN_ROLE;

if not exists (SELECT name FROM sysusers WHERE issqlrole=1 AND name = 'VC_USER_ROLE')
CREATE ROLE VC_USER_ROLE
go
GRANT SELECT ON SCHEMA ::  [VMW] to VC_USER_ROLE
go
GRANT INSERT ON SCHEMA ::  [VMW] to VC_USER_ROLE
go
GRANT DELETE ON SCHEMA ::  [VMW] to VC_USER_ROLE
go
GRANT UPDATE ON SCHEMA ::  [VMW] to VC_USER_ROLE
go
GRANT EXECUTE ON SCHEMA :: [VMW] to VC_USER_ROLE
go
sp_addrolemember VC_USER_ROLE , [vpxuser]
go
sp_addrolemember VC_ADMIN_ROLE , [vpxuser]
go
use MSDB
go
if not exists (SELECT name FROM sysusers WHERE issqlrole=1 AND name = 'VC_ADMIN_ROLE')
CREATE ROLE VC_ADMIN_ROLE;
go
GRANT SELECT on msdb.dbo.syscategories to VC_ADMIN_ROLE
go
GRANT SELECT on msdb.dbo.sysjobsteps to VC_ADMIN_ROLE
go
GRANT SELECT ON msdb.dbo.sysjobs to VC_ADMIN_ROLE
go
GRANT EXECUTE ON msdb.dbo.sp_add_job TO VC_ADMIN_ROLE
go
GRANT EXECUTE ON msdb.dbo.sp_delete_job TO VC_ADMIN_ROLE
go
GRANT EXECUTE ON msdb.dbo.sp_add_jobstep TO VC_ADMIN_ROLE
go
GRANT EXECUTE ON msdb.dbo.sp_update_job TO VC_ADMIN_ROLE
go
GRANT EXECUTE ON msdb.dbo.sp_add_jobserver TO VC_ADMIN_ROLE
go
GRANT EXECUTE ON msdb.dbo.sp_add_jobschedule TO VC_ADMIN_ROLE
go
GRANT EXECUTE ON msdb.dbo.sp_add_category TO VC_ADMIN_ROLE
go
sp_addrolemember VC_ADMIN_ROLE , [vpxuser]
go


3. Use a Script to Create a vCenter Server User by Using the dbo Schema and db_owner Database Role


use VCDB
go
sp_addrolemember @rolename = 'db_owner', @membername = 'vpxuser'
go
use MSDB
go
sp_addrolemember @rolename = 'db_owner', @membername = 'vpxuser'
go


Friday, 7 December 2012

VMware 5 Host Profiles prompts for MAC address


When applying the host profile, the user is prompted for the MAC address. Thanks to Paul Whitman for the workaround:
http://blog.eight02.com/2012/02/vmware-5-host-profiles-prompts-for-mac.html

Thursday, 6 December 2012

Host Profile error call hostprofilemanager.createprofile active directory all

The workaround for this error when trying to create a host profile is to enable the "Active Directory All" rule set in the host firewall (a command-line sketch follows the error text below):

call hostprofilemanager.createprofile
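
A sketch of the same workaround from the ESXi 5.x command line (confirm the exact ruleset id on your build with the list command first):

esxcli network firewall ruleset list | grep -i activedirectory
esxcli network firewall ruleset set --ruleset-id=activeDirectoryAll --enabled=true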

Monday, 10 September 2012

Unplanned Device Loss

Unplanned device loss is a condition that occurs when your ESXi host permanently loses connection to a storage device.

To verify the status of the device, see Check the Connection Status of a Storage Device.
To unmount a datastore, see Unmount VMFS or NFS Datastores.
To perform a rescan, see Perform Storage Rescan.


http://pubs.vmware.com/vsphere-50/topic/com.vmware.vsphere.storage.doc_50/GUID-BA39DB47-F8DB-4945-B061-1B6FF3DD12E1.html
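
A quick command-line check of a device's connection state on ESXi 5.x, followed by the unmount and rescan steps (naa.id and datastore_name are placeholders):

esxcli storage core device list -d naa.id
esxcli storage filesystem unmount -l datastore_name
esxcli storage core adapter rescan --all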

Thursday, 31 May 2012

power off / get state of vm vim-cmd


from VMware's own www

Using the ESXi command-line utility vim-cmd to power off the virtual machine

  1. On the ESXi console, enter Tech Support mode and log in as root. 
  2. Get a list of all registered virtual machines, identified by their VMID, Display Name, and path to the .vmx configuration file, using this command:

    vim-cmd vmsvc/getallvms

  3. To get the current state of a virtual machine
    vim-cmd vmsvc/power.getstate VMID
  4. Power off the virtual machine using the VMID found in Step 2 and run:

    vim-cmd vmsvc/power.off VMID

    Note: If the virtual machine fails to power off, use the following command:
    vim-cmd vmsvc/power.shutdown VMID
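
    A minimal sketch that strings the steps together, assuming the display name (here "myvm") is unique and contains no spaces:

    VMID=$(vim-cmd vmsvc/getallvms | awk '/myvm/ {print $1}')
    vim-cmd vmsvc/power.getstate $VMID
    vim-cmd vmsvc/power.off $VMID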

vSphere CLI


From VMware's own website...vSphere CLI commands
Documentation Description
esxcli command Lists descriptions of esxcli commands.
esxcli fcoe FCOE (Fibre Channel over Ethernet) commands.
esxcli hardware Hardware namespace. Used primarily for extracting information about the current system setup.
esxcli iscsi iSCSI namespace for monitoring and managing hardware and software iSCSI.
esxcli license License management commands.
esxcli network Network namespace for managing virtual networking including virtual switches and VMkernel network interfaces.
esxcli software Software namespace. Includes commands for managing and installing image profiles and VIBs.
esxcli storage Includes core storage commands and other storage management commands.
esxcli system System monitoring and management command.
esxcli vm Namespace for listing virtual machines and shutting them down forcefully.
svmotion Moves a virtual machine's configuration file and optionally its disks while the virtual machine is running. Must run against a vCenter Server system.
vicfg-advcfg Performs advanced configuration including enabling and disabling CIM providers. Use this command as instructed by VMware.
vicfg-authconfig Manages Active Directory authentication.
vicfg-cfgbackup Backs up the configuration data of an ESXi system and restores previously saved configuration data.
vicfg-dns.pl Specifies an ESX/ESXi host's DNS (Domain Name Server) configuration.
vicfg-dumppart Manages diagnostic partitions.
vicfg-hostops Allows you to start, stop, and examine ESX/ESXi hosts and to instruct them to enter maintenance mode and exit from maintenance mode.
vicfg-ipsec Supports setup of IPSec.
vicfg-iscsi Manages iSCSI storage.
vicfg-module Enables VMkernel options. Use this command with the options listed, or as instructed by VMware.
vicfg-mpath Displays information about storage array paths and allows you to change a path's state.
vicfg-mpath35 Configures multipath settings for Fibre Channel or iSCSI LUNs.
vicfg-nas Manages NAS file systems.
vicfg-nics Manages the ESX/ESXi host's NICs (uplink adapters).
vicfg-ntp Specifies the NTP (Network Time Protocol) server.
vicfg-rescan Rescans the storage configuration.
vicfg-route Lists or changes the ESX/ESXi host's route entry (IP gateway).
vicfg-scsidevs Finds available LUNs.
vicfg-snmp Manages the Simple Network Management Protocol (SNMP) agent.
vicfg-syslog Specifies the syslog server and the port to connect to that server for ESXi hosts.
vicfg-user Creates, modifies, deletes, and lists local direct access users and groups of users.
vicfg-vmknic Adds, deletes, and modifies virtual network adapters (VMkernel NICs).
vicfg-volume Supports resignaturing a VMFS snapshot volume and mounting and unmounting the snapshot volume.
vicfg-vswitch Adds or removes virtual switches or vNetwork Distributed Switches, or modifies switch settings.
vifs.pl Performs file system operations such as retrieving and uploading files on the remote server.
vihostupdate Manages updates of ESX/ESXi hosts. Use vihostupdate35 for ESXi 3.5 hosts.
vihostupdate35 Manages updates of ESX/ESXi version 3.5 hosts.
vmkfstools Creates and manipulates virtual disks, file systems, logical volumes, and physical storage devices on ESX/ESXi hosts.
vmware-cmd Performs virtual machine operations remotely. This includes, for example, creating a snapshot, powering the virtual machine on or off, and getting information about the virtual machine.
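
The esxcli namespaces above can also be run remotely from a vSphere CLI installation by adding connection options. A sketch (server name and credentials are placeholders; you are prompted for the password if it is omitted):

esxcli --server esx01.example.com --username root storage core device list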

Wednesday, 30 May 2012

viclient reghack

Edit the registry to populate the VI Client login drop-down with vCenter Server names:
HKEY_CURRENT_USER\Software\VMware\VMware Infrastructure Client\Preferences\recentconnections (change the IP to the hostname, or vice versa)

Thursday, 17 May 2012

ESX WWN & Firmware of HBA and NIC

Enable TSM
As root 
cd /proc/scsi/lpfc820
ls
3 4
cat 3
info given:
FC SCSI driver version
make and model, and which PCI bus it is on
firmware version
portname = WWN
link up or down

TO GET FIRMWARE VERSION OF NIC:
as root
ethtool -i vmnic0
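
A small sketch to dump the driver and firmware details for every uplink in one go (assumes a classic ESX service console with awk available; esxcfg-nics -l lists the uplinks and the awk skips its header line):

for nic in $(esxcfg-nics -l | awk 'NR>1 {print $1}'); do
  echo "== $nic =="
  ethtool -i $nic
done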


Wednesday, 2 May 2012


Results of tests conducted to see if there is a difference in performance for a 4 vCPU Windows 2008 R2 VM depending on the socket/core layout.



Summary: it makes no difference!

The test wanted to ascertain if there was a difference in performance between the following 3 configurations:
1 Socket 4 Cores
2 Sockets 2 Cores
4 Sockets 1 Core
A single Windows 2008 R2 VM was used.
PassMark PerformanceTest software was installed onto the VM – only the CPU Mark Test was used.
http://www.passmark.com/products/pt.htm
The TEST:
The VM started off configured with 1 Socket and 4 Cores.
The PassMark CPU test was run and the results collected (summarised in the table below).
The VM was then shut down, its settings changed to a 2 Socket 2 Core configuration, and the VM powered back on.
The CPU test was run again and the results collected before the VM was shut down and reconfigured with 4 Sockets and 1 Core.
After powering on, the PassMark CPU test was run again and the results collected.
The test was repeated a further two times.
There were no other VMs running on the ESX host at the time.



The following is a summary of the results:


Place  Configuration      Score
1      1 Socket 4 Cores   4811.7
2      2 Sockets 2 Cores  4811.3
3      4 Sockets 1 Core   4809.2
4      1 Socket 4 Cores   4808
5      2 Sockets 2 Cores  4806.9
6      4 Sockets 1 Core   4806.6
7      4 Sockets 1 Core   4789.4
8      2 Sockets 2 Cores  4651.6
9      1 Socket 4 Cores   4564.4


reactivate rearm windows eval licence

slmgr /rearm
reboot windows

slmgr /dlv
displays detailed info about licence

Friday, 27 April 2012

cpu stress for windows



powershell script to send cpu to 100% - open several powershell windows to spike all the cores - thanks to WWoIT

$result = 1; foreach ($number in 1..2147483647) {$result = $result * $number};

Wednesday, 25 April 2012

domain group policy settings

To see the applied domain group policy settings you can use RSoP or gpresult. Start by running rsop.msc.
Changed policies are applied when you update them manually, restart the computer, or wait for the automatic refresh.
To update group policy manually, open a command prompt and type gpupdate /force. Although this command reapplies all policy settings, by default only policy settings that have changed are applied.

Tuesday, 17 April 2012

LUN Masking on ESX host inc scripts

Cause
Too many RDMs and too many Microsoft clusters locking VMFS partitions on the SAN. The ESX hosts spend too much time trying to read these locked devices, vCenter times out and the datastores don't get added.
NB: this whole post is largely irrelevant in ESXi 5 because you can mark the RDMs as perennially reserved (see below).

Fix
(Before this blog was posted the fix was to do the LUN masking below; however, in ESXi 5 all you have to do is:
To mark the MSCS LUNs as permanently reserved on an already upgraded ESXi 5.1 host, set the permanently reserved flag in Host Profiles. For more information, see the vSphere documentation.

You can use esxcli command to mark the device as perennially reserved:

esxcli storage core device setconfig -d naa.id --perennially-reserved=true
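
To confirm the flag took effect (naa.id is the same placeholder as above), check the device details:

esxcli storage core device list -d naa.id | grep -i "perennially reserved"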


The above only works for ESXi 5. For ESX 4, do the LUN masking as detailed below.)

LUN MASKING

The solution is to mask the LUNs or RDMs from the ESX hosts.
This can be done in 2 ways - at the SAN level or at the ESX host level. It was not possible to do this at the SAN level due to SAN limitations regarding sharing RDMs between cluster groups.


Docs referenced
Masking a LUN from ESX and ESXi using the MASK_PATH plug-in: VMWare KB Article: 1009449

Unable to claim the LUN back after unmasking it: VMWare KB Article: 1015252

Unpresenting a LUN containing a datastore from ESX 4.x and ESXi 4.x: VMWare KB Article: 1015084


The steps to mask
LUN masking has to be done on the ESX command line using several different commands. The following example shows how to mask one RDM on one ESX host.
Towards the end of this document are scripts that have been implemented to mask off many RDMs from several ESX hosts, and scripts to unmask the RDMs.
It may be necessary to unmask the RDMs or LUNs if it is decided to run the VM on a host which currently has the masking rules applied to it.
 
1 Multipath Plug-ins: Look at the multipath plug-ins currently installed on your ESX host with the command:
# esxcfg-mpath -G
EG
[root@site1-intra-esx01 ~]# esxcfg-mpath -G
MASK_PATH
NMP
 
Verify that the MASK_PATH plugin is present
 
 
2 Claimrules: List all the claimrules currently on the ESX host with the command:
# esxcli corestorage claimrule list
For an unadulterated ESX host the output looks like:
EG
[root@site1-intra-esx01 ~]# esxcli corestorage claimrule list
Rule Class Rule Class Type Plugin Matches
MP 0 runtime transport NMP transport=usb
MP 1 runtime transport NMP transport=sata
MP 2 runtime transport NMP transport=ide
MP 3 runtime transport NMP transport=block
MP 4 runtime transport NMP transport=unknown
MP 101 runtime vendor MASK_PATH vendor=DELL model=Universal Xport
MP 101 file vendor MASK_PATH vendor=DELL model=Universal Xport
MP 65535 runtime vendor NMP vendor=* model=*
 
 
3 Identify eui numbers: Identify the euid of the RDMs that you want to mask. This can be done from the GUI or the command line.
To do it from the GUI, right-click the VM, go to "Edit Settings", highlight the RDM disk, then click the "Manage Paths" button and look for the euid (IBM XIVs use euid; other SANs use naa).
From the command line there are 2 steps to find the euid:
First query the VM's rdm mapping file to find the vml identifier:
vmkfstools -q vmfilename.vmdk
EG
[root@site1-intra-esx03 site1-INTRA-SQL04]# vmkfstools -q site1-INTRA-SQL04_6.vmdk
Disk site1-INTRA-SQL04_6.vmdk is a Passthrough Raw Device Mapping
Maps to: vml.01005c00003738303042354530413934323831305849
 
next list the vml to euid mapping:
ls -l /vmfs/devices/disks/ | grep -i vml.number_from_above
EG
root@site1-intra-esx03 site1-INTRA-SQL04]# ls -l /vmfs/devices/disks/ | grep -i vml.01005c00003738303042354530413934323831305849
lrwxrwxrwx 1 root root 20 Feb 15 09:42 vml.01005c00003738303042354530413934323831305849 -> eui.001738000b5e0a94
lrwxrwxrwx 1 root root 22 Feb 15 09:42 vml.01005c00003738303042354530413934323831305849:1 -> eui.001738000b5e0a94:1
 
eui.001738000b5e0a94 is the euid needed
 
 
4 Eui paths: Check all of the paths that the euid device has (vmhbaX:C0:TX:L92):
# esxcfg-mpath -L | grep euid
EG
[root@site1-intra-esx01 ~]#esxcfg-mpath -L | grep 001738000b5e0a94
vmhba5:C0:T0:L92 state:active eui.001738000b5e0a94 vmhba5 0 0 92 NMP active san fc.200000051e8bf5d2:100000051e8bf5d2 fc.500173800b5e0000:500173800b5e0142
vmhba5:C0:T1:L92 state:active eui.001738000b5e0a94 vmhba5 0 1 92 NMP active san fc.200000051e8bf5d2:100000051e8bf5d2 fc.500173800b5e0000:500173800b5e0152
vmhba4:C0:T0:L92 state:active eui.001738000b5e0a94 vmhba4 0 0 92 NMP active san fc.200000051e8bf5d1:100000051e8bf5d1 fc.500173800b5e0000:500173800b5e0180
vmhba4:C0:T1:L92 state:active eui.001738000b5e0a94 vmhba4 0 1 92 NMP active san fc.200000051e8bf5d1:100000051e8bf5d1 fc.500173800b5e0000:500173800b5e0170
vmhba3:C0:T0:L92 state:active eui.001738000b5e0a94 vmhba3 0 0 92 NMP active san fc.200000051e8bebb5:100000051e8bebb5 fc.500173800b5e0000:500173800b5e0172
vmhba3:C0:T1:L92 state:active eui.001738000b5e0a94 vmhba3 0 1 92 NMP active san fc.200000051e8bebb5:100000051e8bebb5 fc.500173800b5e0000:500173800b5e0182
vmhba2:C0:T0:L92 state:active eui.001738000b5e0a94 vmhba2 0 0 92 NMP active san fc.200000051e8bebb4:100000051e8bebb4 fc.500173800b5e0000:500173800b5e0140
vmhba2:C0:T1:L92 state:active eui.001738000b5e0a94 vmhba2 0 1 92 NMP active san fc.200000051e8bebb4:100000051e8bebb4 fc.500173800b5e0000:500173800b5e0150
 
note the output of this command returns the LUN ID of the RDM (L92)
 
 
5 Device paths: Check that no other devices are using the same parameters:
# esxcfg-mpath -L | egrep "vmhba X X X"
(As you will apply the rule with -A vmhbaX -C 0 -L 92, this verifies that there is no other device with those parameters. You can use the wildcard pattern "vmhba.*L92", where . means any character and * means zero or more of them.)
EG
[root@site1-intra-esx01 ~]#esxcfg-mpath -L | egrep "vmhba.*L92"
vmhba5:C0:T0:L92 state:active eui.001738000b5e0a94 vmhba5 0 0 92 NMP active san fc.200000051e8bf5d2:100000051e8bf5d2 fc.500173800b5e0000:500173800b5e0142
vmhba5:C0:T1:L92 state:active eui.001738000b5e0a94 vmhba5 0 1 92 NMP active san fc.200000051e8bf5d2:100000051e8bf5d2 fc.500173800b5e0000:500173800b5e0152
vmhba4:C0:T0:L92 state:active eui.001738000b5e0a94 vmhba4 0 0 92 NMP active san fc.200000051e8bf5d1:100000051e8bf5d1 fc.500173800b5e0000:500173800b5e0180
vmhba4:C0:T1:L92 state:active eui.001738000b5e0a94 vmhba4 0 1 92 NMP active san fc.200000051e8bf5d1:100000051e8bf5d1 fc.500173800b5e0000:500173800b5e0170
vmhba3:C0:T0:L92 state:active eui.001738000b5e0a94 vmhba3 0 0 92 NMP active san fc.200000051e8bebb5:100000051e8bebb5 fc.500173800b5e0000:500173800b5e0172
vmhba3:C0:T1:L92 state:active eui.001738000b5e0a94 vmhba3 0 1 92 NMP active san fc.200000051e8bebb5:100000051e8bebb5 fc.500173800b5e0000:500173800b5e0182
vmhba2:C0:T0:L92 state:active eui.001738000b5e0a94 vmhba2 0 0 92 NMP active san fc.200000051e8bebb4:100000051e8bebb4 fc.500173800b5e0000:500173800b5e0140
vmhba2:C0:T1:L92 state:active eui.001738000b5e0a94 vmhba2 0 1 92 NMP active san fc.200000051e8bebb4:100000051e8bebb4 fc.500173800b5e0000:500173800b5e0150
 
This shows that eui.001738000b5e0a94 is the only device using this path - so when you mask off this euid that's all you will be doing!
 
 
6 Rules: Add a rule to hide the LUN with the command:
# esxcli corestorage claimrule add --rule <number> -t location -A <hba_adapter> -C <channel> -T <target> -L <lun> -P MASK_PATH
Note the rule has to be applied to all HBAs (check the ESX HBA numbering; in the example below there is no vmhba1: only vmhba2 to vmhba5 exist, as discovered in the section above).
Rules must be numbered between 101 and 200; rule 101 is already in use by DELL (see section 2).
 
Rule to mask HARD DISK 2 scsi1:1 site1-INTRA-SQL04_6.vmdk (RDM) LUN ID 92 eui.001738000b5e0a94
EG
[root@site1-intra-esx01 ~]#esxcli corestorage claimrule add --rule 102 -t location -A vmhba2 -C 0 -L 92 -P MASK_PATH
[root@site1-intra-esx01 ~]#esxcli corestorage claimrule add --rule 103 -t location -A vmhba3 -C 0 -L 92 -P MASK_PATH
[root@site1-intra-esx01 ~]#esxcli corestorage claimrule add --rule 104 -t location -A vmhba4 -C 0 -L 92 -P MASK_PATH
[root@site1-intra-esx01 ~]#esxcli corestorage claimrule add --rule 105 -t location -A vmhba5 -C 0 -L 92 -P MASK_PATH
 
 
7 Reload rules: Reload your claimrules with the command:
EG
[root@site1-intra-esx01 ~]#esxcli corestorage claimrule load
 
 
8 Unclaim paths: Unclaim all paths to the device and then run the loaded claimrules on each of the paths to reclaim them.
EG
[root@site1-intra-esx01 ~]#esxcli corestorage claiming reclaim -d eui.001738000b5e0a94
 
 
9 Verification of masked device: Verify that the masked device is no longer used by the ESX host.
EG
[root@site1-intra-esx01 ~]#esxcfg-mpath -L | grep eui.001738000b5e0a94
 
Empty output indicates that the LUN is not active.
Refresh storage from the GUI - the LUN or RDM should disappear from there as well
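
A quick way to double-check which masking rules are in place on the host afterwards (ESX 4 esxcli syntax, as used throughout this post):
esxcli corestorage claimrule list | grep MASK_PATH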

Script to MASK RDMs of SQL03 & 04 cluster - applied to site1-INTRA-ESX01 & 02 
#!/bin/bash
# Guy Cowie
# 16 Feb 2012
# Script to mask site1-intra-sql03 rdms from ESX host
date
hostname
# Display the claim rules
esxcli corestorage claimrule list
# Rules to hide the RDMs
# Rule to mask HARD DISK 2 scsi1:1 site1-INTRA-SQL04_6.vmdk LUN ID 92 eui.001849000b5e0a94
esxcli corestorage claimrule add --rule 102 -t location -A vmhba2 -C 0 -L 92 -P MASK_PATH
esxcli corestorage claimrule add --rule 103 -t location -A vmhba3 -C 0 -L 92 -P MASK_PATH
esxcli corestorage claimrule add --rule 104 -t location -A vmhba4 -C 0 -L 92 -P MASK_PATH
esxcli corestorage claimrule add --rule 105 -t location -A vmhba5 -C 0 -L 92 -P MASK_PATH
# Rule to mask HARD DISK 3 scsi1:10 site1-INTRA-SQL04_5.vmdk LUN ID 93 eui.001849000b5e0a95
esxcli corestorage claimrule add --rule 106 -t location -A vmhba2 -C 0 -L 93 -P MASK_PATH
esxcli corestorage claimrule add --rule 107 -t location -A vmhba3 -C 0 -L 93 -P MASK_PATH
esxcli corestorage claimrule add --rule 108 -t location -A vmhba4 -C 0 -L 93 -P MASK_PATH
esxcli corestorage claimrule add --rule 109 -t location -A vmhba5 -C 0 -L 93 -P MASK_PATH
# Rule to mask HARD DISK 4 scsi1:11 site1-INTRA-SQL04_4.vmdk LUN ID 94 eui.001849000b5e0a96
esxcli corestorage claimrule add --rule 110 -t location -A vmhba2 -C 0 -L 94 -P MASK_PATH
esxcli corestorage claimrule add --rule 111 -t location -A vmhba3 -C 0 -L 94 -P MASK_PATH
esxcli corestorage claimrule add --rule 112 -t location -A vmhba4 -C 0 -L 94 -P MASK_PATH
esxcli corestorage claimrule add --rule 113 -t location -A vmhba5 -C 0 -L 94 -P MASK_PATH
# Rule to mask HARD DISK 5 scsi1:13 site1-INTRA-SQL03_13.vmdk LUN ID 86 eui.001849000b5e055d
esxcli corestorage claimrule add --rule 114 -t location -A vmhba2 -C 0 -L 86 -P MASK_PATH
esxcli corestorage claimrule add --rule 115 -t location -A vmhba3 -C 0 -L 86 -P MASK_PATH
esxcli corestorage claimrule add --rule 116 -t location -A vmhba4 -C 0 -L 86 -P MASK_PATH
esxcli corestorage claimrule add --rule 117 -t location -A vmhba5 -C 0 -L 86 -P MASK_PATH
# Rule to mask HARD DISK 6 scsi1:14 LUN ID 90 eui.001849000b5e0a98
esxcli corestorage claimrule add --rule 118 -t location -A vmhba2 -C 0 -L 90 -P MASK_PATH
esxcli corestorage claimrule add --rule 119 -t location -A vmhba3 -C 0 -L 90 -P MASK_PATH
esxcli corestorage claimrule add --rule 120 -t location -A vmhba4 -C 0 -L 90 -P MASK_PATH
esxcli corestorage claimrule add --rule 121 -t location -A vmhba5 -C 0 -L 90 -P MASK_PATH
# Rule to mask HARD DISK 7 scsi1:15 LUN ID 91 eui.001849000b5e0a97
esxcli corestorage claimrule add --rule 122 -t location -A vmhba2 -C 0 -L 91 -P MASK_PATH
esxcli corestorage claimrule add --rule 123 -t location -A vmhba3 -C 0 -L 91 -P MASK_PATH
esxcli corestorage claimrule add --rule 124 -t location -A vmhba4 -C 0 -L 91 -P MASK_PATH
esxcli corestorage claimrule add --rule 125 -t location -A vmhba5 -C 0 -L 91 -P MASK_PATH
# Load the claim rule into the PSA
esxcli corestorage claimrule load
# Unclaim and reclaim the RDMs using their eui numbers
esxcli corestorage claiming reclaim -d eui.001849000b5e0a94
esxcli corestorage claiming reclaim -d eui.001849000b5e0a95
esxcli corestorage claiming reclaim -d eui.001849000b5e0a96
esxcli corestorage claiming reclaim -d eui.001849000b5e055d
esxcli corestorage claiming reclaim -d eui.001849000b5e0a98
esxcli corestorage claiming reclaim -d eui.001849000b5e0a97
# Display the claim rules
esxcli corestorage claimrule list
 
 
Script to unMASK RDMs SQL03 & 04 cluster 
#!/bin/bash
# Guy Cowie
# 16 Feb 2012
# Script to unmask site1-intra-sql03 rdms from ESX host
date
hostname
# Display the claim rules
esxcli corestorage claimrule list
sleep 5s
# Rules to unmask the RDMs
# Rule to unmask HARD DISK 2 scsi1:1 site1-INTRA-SQL04_6.vmdk LUN ID 92 eui.001849000b5e0a94
esxcli corestorage claimrule delete --rule 102
esxcli corestorage claimrule delete --rule 103
esxcli corestorage claimrule delete --rule 104
esxcli corestorage claimrule delete --rule 105
# Rule to unmask HARD DISK 3 scsi1:10 site1-INTRA-SQL04_5.vmdk LUN ID 93 eui.001849000b5e0a95
esxcli corestorage claimrule delete --rule 106
esxcli corestorage claimrule delete --rule 107
esxcli corestorage claimrule delete --rule 108
esxcli corestorage claimrule delete --rule 109
# Rule to unmask HARD DISK 4 scsi1:11 site1-INTRA-SQL04_4.vmdk LUN ID 94 eui.001849000b5e0a96
esxcli corestorage claimrule delete --rule 110
esxcli corestorage claimrule delete --rule 111
esxcli corestorage claimrule delete --rule 112
esxcli corestorage claimrule delete --rule 113
# Rule to unmask HARD DISK 5 scsi1:13 site1-INTRA-SQL03_13.vmdk LUN ID 86 eui.001849000b5e055d
esxcli corestorage claimrule delete --rule 114
esxcli corestorage claimrule delete --rule 115
esxcli corestorage claimrule delete --rule 116
esxcli corestorage claimrule delete --rule 117
# Rule to unmask HARD DISK 6 scsi1:14 LUN ID 90 eui.001849000b5e0a98
esxcli corestorage claimrule delete --rule 118
esxcli corestorage claimrule delete --rule 119
esxcli corestorage claimrule delete --rule 120
esxcli corestorage claimrule delete --rule 121
# Rule to unmask HARD DISK 7 scsi1:15 LUN ID 91 eui.001849000b5e0a97
esxcli corestorage claimrule delete --rule 122
esxcli corestorage claimrule delete --rule 123
esxcli corestorage claimrule delete --rule 124
esxcli corestorage claimrule delete --rule 125
sleep 5s
# Load the claim rule into the PSA
esxcli corestorage claimrule load
sleep 5s
# Rule to unmask HARD DISK 2 scsi1:1 site1-INTRA-SQL04_6.vmdk LUN ID 92 eui.001849000b5e0a94
esxcli corestorage claiming unclaim -t location -A vmhba5 -C 0 -L 92
esxcli corestorage claiming unclaim -t location -A vmhba2 -C 0 -L 92
esxcli corestorage claiming unclaim -t location -A vmhba3 -C 0 -L 92
esxcli corestorage claiming unclaim -t location -A vmhba4 -C 0 -L 92
# Rule to unmask HARD DISK 3 scsi1:10 site1-INTRA-SQL04_5.vmdk LUN ID 93 eui.001849000b5e0a95
esxcli corestorage claiming unclaim -t location -A vmhba5 -C 0 -L 93
esxcli corestorage claiming unclaim -t location -A vmhba2 -C 0 -L 93
esxcli corestorage claiming unclaim -t location -A vmhba3 -C 0 -L 93
esxcli corestorage claiming unclaim -t location -A vmhba4 -C 0 -L 93
# Rule to unmask HARD DISK 4 scsi1:11 site1-INTRA-SQL04_4.vmdk LUN ID 94 eui.001849000b5e0a96
esxcli corestorage claiming unclaim -t location -A vmhba5 -C 0 -L 94
esxcli corestorage claiming unclaim -t location -A vmhba2 -C 0 -L 94
esxcli corestorage claiming unclaim -t location -A vmhba3 -C 0 -L 94
esxcli corestorage claiming unclaim -t location -A vmhba4 -C 0 -L 94
# Rule to unmask HARD DISK 5 scsi1:13 site1-INTRA-SQL03_13.vmdk LUN ID 86 eui.001849000b5e055d
esxcli corestorage claiming unclaim -t location -A vmhba5 -C 0 -L 86
esxcli corestorage claiming unclaim -t location -A vmhba2 -C 0 -L 86
esxcli corestorage claiming unclaim -t location -A vmhba3 -C 0 -L 86
esxcli corestorage claiming unclaim -t location -A vmhba4 -C 0 -L 86
# Rule to unmask HARD DISK 6 scsi1:14 LUN ID 90 eui.001849000b5e0a98
esxcli corestorage claiming unclaim -t location -A vmhba5 -C 0 -L 90
esxcli corestorage claiming unclaim -t location -A vmhba2 -C 0 -L 90
esxcli corestorage claiming unclaim -t location -A vmhba3 -C 0 -L 90
esxcli corestorage claiming unclaim -t location -A vmhba4 -C 0 -L 90
# Rule to unmask HARD DISK 7 scsi1:15 LUN ID 91 eui.001849000b5e0a97
esxcli corestorage claiming unclaim -t location -A vmhba5 -C 0 -L 91
esxcli corestorage claiming unclaim -t location -A vmhba2 -C 0 -L 91
esxcli corestorage claiming unclaim -t location -A vmhba3 -C 0 -L 91
esxcli corestorage claiming unclaim -t location -A vmhba4 -C 0 -L 91
esxcfg-rescan -A
sleep 5s
# Unclaim and reclaim the RDMs using their eui numbers
esxcli corestorage claiming reclaim -d eui.001849000b5e0a94
esxcli corestorage claiming reclaim -d eui.001849000b5e0a95
esxcli corestorage claiming reclaim -d eui.001849000b5e0a96
esxcli corestorage claiming reclaim -d eui.001849000b5e055d
esxcli corestorage claiming reclaim -d eui.001849000b5e0a98
esxcli corestorage claiming reclaim -d eui.001849000b5e0a97
# Display the claim rules
esxcli corestorage claimrule list
 
 
 
 

bad magic number in super-block - esx boot crash

ESX comes up but shows that one or several volumes are inaccessible
and shows their eui / naa identifiers
fsck.ext3: Unable to resolve UUID
make a note of the failing UUID

Could try this:
df lists all the mounted volumes as /dev/sdXN (device letter and partition number)
fsck.ext3 -f /dev/sdXN   run this command against each volume in the df list to check the filesystem

Still showing errors?
Then hash (comment) them out in the fstab file if applicable.
Confirm these same UUID values in the /etc/fstab file:

# cat /etc/fstab

  1. Log into the server using the root password.
  2. Remount the root filesystem in read-write mode with the command:

    # mount / -o remount,rw
  3. Open the /etc/fstab file in a text editor.
  4. Comment out or remove the line referring to the previous ESX installation by inserting a hash symbol (#) at the beginning of the line.
  5. Save the file and exit the editor.
  6. Reboot the ESX host.
You will probably still end up rebuilding the ESX host.

vmkernel storage device errors

Error messages in the /var/log/vmkernel file:
H:0x0 D:0x2 valid sense data 0x7 0x27 0x0
0x7 0x27 0x0 is the message the ESX host is getting back from the SAN
in this case it's showing that the device (RDM) is read only and therefore not really an error, unless the patch below isn't applied

Interpreting SCSI sense codes VMWARE KB: 289902


APPLY PATCH
  • Rescan or add-storage operations that you run from the vCenter Client might take a long time to complete or fail with a timeout, and a log spew of messages similar to the following is written to /var/log/vmkernel: Jul 15 07:09:30 <vmkernel_name>: 29:18:55:59.297 <cpu id>ScsiDeviceToken: 293: Sync IO 0x2a to device "naa.60060480000190101672533030334542" failed: I/O error H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.
    Jul 15 07:09:30 [vmkernel name]: 29:18:55:59.298 cpu29:4356)NMP: nmp_CompleteCommandForPath: Command 0x2a (0x4100b20eb140) to NMP device "naa.60060480000190101672533030334542" failed on physical path "vmhba1:C0:T0:L100" H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.
    Jul 15 07:09:30 [vmkernel_name]: 29:18:55:59.298 cpu29:4356)ScsiDeviceIO: 747: Command 0x2a to device "naa.60060480000190101672533030334542" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x7 0x27 0x0.


    VMFS continues trying to mount the volume even if the LUN is read-only. This issue is resolved by applying this patch. Now VMFS does not attempt to mount the volume when it receives the read-only status.
  • esxtop hba busy ?

    esxtop
    a
    (adapter)
    in 'a' choose hba 1 (or one that looks busy)
    DAVG less than 12 is good
    spikes are also OK
    problems arise when the value plateaus at a high level


    esxtop
    d
    (disk)
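
    If a record is needed rather than a live view, esxtop can also run in batch mode and the CSV reviewed later; a sketch (adjust the delay and iteration count to taste):

    esxtop -b -d 5 -n 12 > /tmp/esxtop-$(hostname).csv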

    32 bit odbc srm4 & trivia logging

    on the srm server use the 32 bit driver to connect to the db
    C:\Windows\SysWOW64\odbcad32.exe

    enable trivia logging in the vmware-dr.xml file on the srm servers

    vmware-cmd vm power status

    power off vm
    vmware-cmd /path/to/vm/file/vmxfilename.vmx stop trysoft
    list vms registered on esx host
    vmware-cmd -l
    what is the current power state of a vm
    vmware-cmd /path/to/vm/file/vmxfilename.vmx getstate
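
    A minimal sketch to report the power state of every registered VM (assumes the .vmx paths contain no spaces):

    for vmx in $(vmware-cmd -l); do
      echo "$vmx"
      vmware-cmd "$vmx" getstate
    done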

    vmware authentication

    • Unable to add and connect ESX host to VirtualCenter/vCenter Server inventory
    • VMware Infrastructure/vSphere Client connection directly to an ESX host works
    • This error is generated in vCenter Server:

      Failed to connect to host 
       
    • In some cases the vmware-vmkauthd daemon may have stopped responding. This prevents the required authentication from taking place when adding an ESX host to the inventory.
    service xinetd status   (then service xinetd start if it is not running)

    see hidden files in unix file system

    ls -a
    useful in VMware as it shows hidden directories such as
    /.dvsData/

    vSphere faster reboots

    Change in the Advanced settings
    Scsi.CRTimeoutDuringBoot = 1
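
    The same setting can be made from the command line, assuming it maps to the usual /Section/Option path used by esxcfg-advcfg:

    esxcfg-advcfg -s 1 /Scsi/CRTimeoutDuringBoot
    esxcfg-advcfg -g /Scsi/CRTimeoutDuringBoot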

    Searching SRM log files

    Look for
    Posting event to vCenter
    value = "name of recovery plan"
    eg
    value = "Recover SQL cluster to HX site"

    VMware find euid / naa of an RDM

    vmkfstools -q name_of_vmdk_file.vmdk
    command returns:
    DISK is a passthrough RDM....maps to vml.00123456789
    ls -l /vmfs/devices/disks/ | grep -i vml.00123456789
    command returns:
    vml.00123456789 -> eui.00456789123

    OR
    just look in the VM edit settings rdm disk properties....

    VI **********VI**********

    cheat
    set file format, number lines, search

    if the script/file has been written on a Windows machine (Notepad etc.) and it won't run on Unix / ESX, then the file format needs to be changed
    vi filename
    :set fileformat=unix

    want to see line numbers in vi
    vi filename
    :set number
    remove numbers
    :set nonumber

    search in vi
    search forward /
    search back ?
    repeat search n
    repeat search back N

    VMWare file copy between esx hosts

    1. start the SSH daemon on the source and destination ESX hosts:
    service sshd start
    2. enable firewall for ssh client and server in both hosts:
    enable ssh client:
    esxcfg-firewall -e sshClient
    enable ssh server:
    esxcfg-firewall -e sshServer
    3 On destination ESX host go to dir where file is to be copied to:
    cd /path_to_where_file_should_end_up/
    4 from above dir run copy command using scp:
    scp root@ipAddressofSourceEsxHost:/path/filename.txt ./

    eg to copy proxy.xml from esx01 to esx02 from /etc/vmware/hostd to /etc/vmware/hostd
    log on to esx02 as root
    cd /etc/vmware/hostd
    scp root@esx01:/etc/vmware/hostd/proxy.xml ./

    and to push a file out to another esx host:
    scp local_file_name root@remote_host_ip:/path_where_file_should_go/

    MS SQL change DB owner

    change database owner

    USE vCenterDBname
    EXEC sp_changedbowner 'username'
    eg
    USE srmdb
    EXEC sp_changedbowner 'testdomain\srmadmin'

    vmware troubleshoot HA

    /var/log/vmware/aam
    sort by most recently modified
    ls -ltr

    display datastores that can be mounted

    esxcfg-volume -l

    useful to find missing datastores
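
    Once a missing volume shows up in the list it can be mounted again; -m mounts it for this boot only, -M mounts it persistently:

    esxcfg-volume -M <VMFS UUID|label>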

    windows netstat find port in use

    example to find which port is being used by IP address (and anything else...) 145.16

    netstat -na | findstr 145.16

    vmware lun mapping to UID

    esxcfg-scsidevs -m | less

    vmware hostd restart fails

    cd /var/run/vmware

    ls -l vmware-hostd.PID watchdog-hostd.PID

    cat vmware-hostd.PID
    returns the PID number of hostd daemon eg 1191

    kill -9 1191

    delete the 2 above PID files:
    rm vmware-hostd.PID watchdog-hostd.PID

    service mgmt-vmware start
    OR ESXi = /sbin/services.sh start

    ADDITIONAL INFO:
    find hostd daemon PID
    ps -ef | grep hostd
    returns lines - look for the one that has hostd.config.xml in it - this is the correct PID
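
    the whole kill step as a one-liner (assumes the PID file exists):
    kill -9 $(cat /var/run/vmware/vmware-hostd.PID)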

    view unix files without truncation

    less -Si filename

    find out the other users on the esx system

    who
    users
    finger (don't think this one works)
    last

    scan hba for luns

    esxcfg-rescan vmhba4
    scans only hba 4
    -A scans all hbas

    vmware vm-support generating zipped esx log files for support team

    vm-support -l -w /vmfs/volumes/datastore_name -f

    -l  lists all the files being collected
    -w  working directory - where tar files will be deposited
    -f  force / allow use of vm datastores as target for -w

    vmware show path to luns, eui / naa - look for dead...

    esxcfg-mpath -b | less


    search for dead to reveal paths that are down

    esxcfg-mpath -L | grep -i dead -B6

    -B6 (or -B 6) also prints the 6 lines before each dead match

    VMware Find out what hba is being used

    cd /proc/scsi/bfa/   (Brocade HBA driver)
    less 4
    shows link status up or down

    change UNIX Linux console screen size

    the screen size is in a file called
    /etc/X11/xorg.conf

    you can either edit this or use the command if available

    # system-config-display --reconfigure --set-resolution=1024x768 --set-depth=16

    change UNIX / Linux hostname, ip address, subnet mask dns and default gateway

    The steps to change ethernet 0 ip configuration are:

    Stop the interface:
           # ifdown eth0

    Make the changes to the following files:
    change the default gateway and hostname:
          # vi /etc/sysconfig/network

    change the IP address and subnet mask:
         # vi /etc/sysconfig/network-scripts/ifcfg-eth0

    change the dns settings:
         # vi /etc/resolv.conf

    Start the interface:
       # ifup eth0
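
    On Red Hat style distributions (which the classic ESX service console is based on), the interface changes can also be applied in one go by restarting the network service:
       # service network restart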

    Friday, 6 April 2012

    Windows and Unix command line equivalents

    http://www.lemoda.net/windows/windows2unix/windows2unix.html

    diskpart add storage to hyper-v server

    create volume simple disk=1
    select disk=1
    create partition primary
    select volume 3
    format fs=ntfs label=vms quick
    assign letter=E

    paravirtual scsi adapter

    designed for SAN-based VM disks, not direct-attached storage.
    drivers for the adapter are located in /vmimages/floppies.
    can be used for the boot disk as well as data disks.
    KB: 1010398