HOWTO: Disk Mapping to an LPAR with Dual VIOS via VSCSI and MPIO


1. Update the FC adapter settings on both VIOS (if not done already; normally this is already set if you have disks attached to clients):

# chdev -l fscsi0 -a fc_err_recov=fast_fail
# chdev -l fscsi0 -a dyntrk=yes
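
To verify that both attributes took effect (a quick sanity check; the descriptions may differ slightly depending on your driver level):

# lsattr -El fscsi0 -a fc_err_recov -a dyntrk
fc_err_recov fast_fail FC Fatal Error Handling Policy True
dyntrk       yes       Dynamic Tracking of FC Devices True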

2. Make sure the SAN LUN is available on both VIOS.

Also, to be safe and to know exactly which disk you have to assign, do the following:
- discover the disk on one VIOS and add a PVID to it:

gzvio1 $ cfgdev
gzvio1 $ chdev -dev hdisk69 -attr pv=yes
gzvio2 $ cfgdev
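
Then confirm that you see the same PVID on both VIOS. The hdisk numbering is not guaranteed to match between the two, so on the second VIOS grep for the PVID itself rather than for the disk name (the PVID below is the one from our example):

gzvio1 $ lspv | grep hdisk69
hdisk69         00cf5e9ff75600eb                    None
gzvio2 $ lspv | grep 00cf5e9ff75600eb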

3. Set reserve_lock to no for the disks on both VIOS:

gzvioX $ chdev -dev hdiskX -attr reserve_lock=no

4. Set reserve_policy to no_reserve on both VIOS. Depending on the disk driver, the device exposes either reserve_lock or reserve_policy, so only one of steps 3 and 4 will apply to your disks:

gzvioX $ chdev -dev hdiskX -attr reserve_policy=no_reserve
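
To confirm the reservation setting from the padmin shell (query whichever attribute your driver exposes):

gzvioX $ lsdev -dev hdiskX -attr reserve_policy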

5. Not required if your disks are newly added, but it is always good to check that the disk(s) are not already used by another client or even by the VIOS itself:

gzvioX $ oem_setup_env
gzvioX # lqueryvg -Atp hdisk69
You should get this if the disk is empty:
0516-320 lqueryvg: Physical volume hdisk69 is not assigned to a volume group.
0516-066 lqueryvg: Physical volume is not a volume group member.
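
I would also check from the padmin shell that the disk is not already mapped to a vhost; grep -p prints the whole lsmap paragraph around a match, so no output means the disk is unmapped:

gzvioX # exit
gzvioX $ lsmap -all | grep -p hdisk69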

6. Map the disk to the client from both VIOS:

PS: I name virtual devices as VIO client hostname + disk size + sequence number, where the sequence number is the next available one (if the client already has 5 disks, the next number would be 6).

gzvioX $ mkvdev -vdev hdiskX -vadapter vhost1 -dev server_size_number
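
In our example the mapping could look like this on each VIOS (the 100g in the device name is only an assumption for illustration; pick the vhost that belongs to your client and verify the mapping afterwards):

gzvioX $ mkvdev -vdev hdisk69 -vadapter vhost1 -dev gzaix_100g_6
gzvioX $ lsmap -vadapter vhost1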

7. Run cfgmgr on the client and list the disks. The new ones are normally not assigned to any VG (None), and since you have already set up the PVID at the VIOS level, you should see the same one here too. In our example it is hdisk5:

gzaix:~# cfgmgr 
gzaix:~# lspv
hdisk0          00cf5e9f0aa3aa13                    rootvg          active      
hdisk1          00cf5e9f2d5fbe03                    datavg          active      
hdisk2          00cf5e9fbe1e95ff                    datavg          active      
hdisk3          00cf5e9fbe1eb2e0                    datavg          active      
hdisk4          00cf5e9ff7560088                    dsmonvg         active      
hdisk5          00cf5e9ff75600eb                    None

8. Check if you have both paths available:

gzaix:~# lspath -l hdisk5
Enabled hdisk5 vscsi0
Enabled hdisk5 vscsi1
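
If you want to know which VIOS is behind each vscsi adapter, lscfg shows the physical location including the client slot number, which you can match against the virtual adapter slots in the partition profiles on the HMC:

gzaix:~# lscfg -l vscsi0
gzaix:~# lscfg -l vscsi1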

9. At this point you may think you are free to include the disk in an existing volume group or create a new one. Not just yet.
There are a few steps to be done first, because once the disk is added to a VG, you cannot change these settings unless you take the disk offline again.

a. Set up the vscsi path priority according to your design. The goal is to balance the CPU/memory load across the VIOS. As an idea, my standard is to give path priority to vscsi0 for all even-numbered disks and to vscsi1 for all odd-numbered ones. It is up to you how you want to do it. If you do not know, ask the leader :). A loop that applies this convention to all disks is shown after the example below.
gzaix:~# lspath -AE -l hdisk5 -p vscsi0
   priority 1 Priority True
gzaix:~# lspath -AE -l hdisk5 -p vscsi1
   priority 1 Priority True
gzaix:~# chpath -l hdisk5 -a priority=2 -p vscsi0
gzaix:~# lspath -AE -l hdisk5 -p vscsi0
   priority 2 Priority True
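
If you have many disks, a small ksh loop can apply this even/odd convention in one shot (a minimal sketch, assuming standard hdiskN names and exactly two paths per disk, vscsi0 and vscsi1; adapt it to your own design):

for d in $(lspv | awk '{print $1}'); do
    n=${d#hdisk}
    if [ $((n % 2)) -eq 0 ]; then
        # even-numbered disk: demote vscsi1 so vscsi0 stays the preferred path
        chpath -l $d -a priority=2 -p vscsi1
    else
        # odd-numbered disk: demote vscsi0 so vscsi1 becomes the preferred path
        chpath -l $d -a priority=2 -p vscsi0
    fi
done
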
b. Set queue_depth to match the value on the VIOS (the value is 20 if you have SVC disks under SDDPCM).

More info about queue depth here: http://www-01.ibm.com/support/docview.wss?uid=isg3T1012636

gzaix:~# chdev -l hdisk5 -a queue_depth=20
gzaix:~# lsattr -El hdisk5 -a queue_depth
queue_depth 20 Queue DEPTH True
c. Set hcheck_interval (usually to 60 seconds) so the paths are re-enabled automatically when you bring a VIOS back; otherwise you have to enable them manually. If you have many disks/LUNs, this can be a pain (you know where), so set this up from the beginning.
gzaix:~# chdev -l hdisk5 -a hcheck_interval=60
gzaix:~# lsattr -El hdisk5 -a hcheck_interval
hcheck_interval 60 Health Check Interval True
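
Both attributes can go into a single chdev call, and if the disk is already in use you can append "-P" so the change is only written to the ODM and picked up at the next reboot (see the remarks below):

gzaix:~# chdev -l hdisk5 -a queue_depth=20 -a hcheck_interval=60 -P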

Remarks:

  • On the chdev commands above, if your disks are in use and you cannot take them offline, add "-P" at the end and the modifications will be applied at the next reboot.

    Do not forget to reboot before taking down any VIOS.

  • If you skip step 9 and put your disk in use, this is what you will have to do (a sample command sequence follows the list):
    • unmount the file systems after stopping any processes using them;
    • deactivate any paging spaces in that volume group (swapoff or just use SMIT);
    • deactivate the volume group using varyoffvg then run the chdev command without the "-P" flag;
    • activate the volume group using varyonvg;
    • mount the file systems, reactivate paging space if applicable.
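
For reference, the whole sequence would look something like this (datavg, /data and paging01 are hypothetical names used only for illustration):

gzaix:~# umount /data                      # after stopping any processes using it
gzaix:~# swapoff /dev/paging01             # only if the VG holds a paging space
gzaix:~# varyoffvg datavg
gzaix:~# chdev -l hdisk5 -a queue_depth=20 -a hcheck_interval=60
gzaix:~# varyonvg datavg
gzaix:~# mount /data
gzaix:~# swapon /dev/paging01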
