Wednesday, July 1, 2009

VIO Commands



VIO Server Commands


lsdev -virtual (list all virtual devices on VIO server partitions)
lsmap -all (lists mapping between physical and logical devices)
oem_setup_env (change to OEM [AIX] environment on VIO server)

Create Shared Ethernet Adapter (SEA) on VIO Server


mkvdev -sea {physical adapt} -vadapter {virtual eth adapt} -default {dflt virtual adapt} -defaultid {dflt vlan ID}
SEA Failover
ent0 – GigE adapter
ent1 – Virt Eth VLAN1 (Defined with a priority in the partition profile)
ent2 – Virt Eth VLAN 99 (Control)
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2
(Creates ent3 as the Shared Ethernet Adapter)
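To sanity-check the new SEA and its failover attributes, something like the following can be run on each VIO server. This is a hedged check, assuming the SEA was created as ent3 as above:

$ lsdev -dev ent3 -attr (shows the SEA attributes, including ha_mode and ctl_chan)
$ lsmap -all -net (shows the SEA-to-virtual-adapter mapping)
$ entstat -all ent3 (the State field shows whether this SEA is currently PRIMARY or BACKUP)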

Create Virtual Storage Device Mapping


mkvdev -vdev {LV or hdisk} -vadapter {vhost adapt} -dev {virt dev name}
Sharing a Single SAN LUN from Two VIO Servers to a Single VIO Client LPAR
hdisk3 = SAN LUN (on vioa server)
hdisk4 = SAN LUN (on viob, same LUN as vioa)
chdev -dev hdisk3 -attr reserve_policy=no_reserve (from vioa to prevent a reserve on the disk)
chdev -dev hdisk4 -attr reserve_policy=no_reserve (from viob to prevent a reserve on the disk)
mkvdev -vdev hdisk3 -vadapter vhost0 -dev hdisk3_v (from vioa)
mkvdev -vdev hdisk4 -vadapter vhost0 -dev hdisk4_v (from viob)
VIO Client would see a single LUN with two paths.
lspath -l hdiskx (where hdiskx is the newly discovered disk)
This will show two paths, one down vscsi0 and the other down vscsi1.
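On the AIX client it is also common to enable path health checking so that a failed VIO server is detected and the path recovers automatically. A hedged example follows; the device names and timer values are illustrative, and -P applies the change at the next boot:

# chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -P (MPIO health check on the virtual disk)
# chdev -l vscsi0 -a vscsi_path_to=30 -P (path timeout on the virtual SCSI client adapter)
# lspath -l hdisk0 (both paths should show Enabled)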





VIO command from HMC
#viosvrcmd -m <managed system> -p <VIO server partition> -c "lsmap -all"

(this works only with IBM VIO Server)

see man viosvrcmd for more information
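For example, assuming a managed system named p570-ITSO and a VIO server partition named VIOS1 (placeholder names), the full invocation from the HMC shell would look like:

hscroot@hmc:~> viosvrcmd -m p570-ITSO -p VIOS1 -c "lsmap -all"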

VIO Server Installation & Configuration


IBM Virtual I/O Server
The Virtual I/O Server is part of the IBM eServer p5 Advanced Power Virtualization hardware feature. Virtual I/O Server allows sharing of physical resources between LPARs including virtual SCSI and virtual networking. This allows more efficient utilization of physical resources through sharing between LPARs and facilitates server consolidation.

Installation
You have two options to install the AIX-based VIO Server:
1. Install from CD
2. Install from network via an AIX NIM-Server

Installation method
#1 is probably the more frequently used method in a pure Linux environment as installation method #2 requires the presence of an AIX NIM (Network Installation Management) server. Both methods differ only in the initial boot step and are then the same. They both lead to the following installation screen:

(The firmware boot splash fills the screen with rows of "IBM", then displays:)

    STARTING SOFTWARE
    PLEASE WAIT...

Elapsed time since release of system processors: 51910 mins 20 secs

-------------------------------------------------------------------------------
Welcome to the Virtual I/O Server.
boot image timestamp: 10:22 03/23
The current time and date: 17:23:47 08/10/2005
number of processors: 1    size of memory: 2048MB
boot device: /pci@800000020000002/pci@2,3/ide@1/disk@0:\ppc\chrp\bootfile.exe
SPLPAR info: entitled_capacity: 50    platcpus_active: 2
This system is SMT enabled: smt_status: 00000007; smt_threads: 2
kernel size: 10481246; 32 bit kernel
-------------------------------------------------------------------------------




The next step then is to define the system console. After some time you should see the following screen:


******* Please define the System Console. *******
Type a 1 and press Enter to use this terminal as the system console.


Then choose the installation language:


>>> 1 Type 1 and press Enter to have English during install.


This is the main installation menu of the AIX-based VIO-Server:



Welcome to Base Operating System
Installation and Maintenance
Type the number of your choice and press Enter. Choice is indicated by >>>.

>>> 1 Start Install Now with Default Settings
2 Change/Show Installation Settings and Install
3 Start Maintenance Mode for System Recovery

88 Help ? 99 Previous Menu

>>> Choice [1]:


Select the hard disk on which to install the VIO base operating system, just as you would for an AIX Base Operating System installation.


Once the installation is complete, you will get a login prompt similar to that of an AIX server.

The VIO server is essentially AIX with virtualization software loaded on top of it. Generally we do not host any applications on a VIO server; it is used purely for sharing I/O resources (disk and network) with the client LPARs hosted on the same physical server.


Initial setup
After the reboot you are presented with the VIO Server login prompt. You cannot log in as root; you have to use the special user ID padmin. No initial default password is set, and immediately after the first login you are forced to set a new password.


Before you can do anything you have to accept the I/O Server license.
This is done with the license command:

$ license -accept

Once you are logged in as user padmin you find yourself in a restricted Korn shell with only a limited set of commands. You can see all available commands with the command help. All these commands are shell aliases to a single SUID-binary called ioscli which is located in the directory /usr/ios/cli/bin. If you are familiar with AIX you will recognize most commands but most command line parameters differ from the AIX versions.
As there are no man pages available, you can see the options for each command by issuing help <command>. Here is an example for the command lsmap:

$ help lsmap
Usage: lsmap {-vadapter ServerVirtualAdapter | -plc PhysicalLocationCode |
-all}
[-net] [-fmt delimiter]
Displays the mapping between physical and virtual devices.
-all Displays mapping for all the server virtual adapter
devices.
-vadapter Specifies the server virtual adapter device
by device name.
-plc Specifies the server virtual adapter device
by physical location code.
-net Specifies supplied device is a virtual server
Ethernet adapter.
-fmt Divides output by a user-specified delimiter.
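
For instance, the -fmt flag is convenient for scripting; a couple of hedged invocations:

$ lsmap -vadapter vhost0 (mapping for a single server adapter)
$ lsmap -all -fmt : (all mappings, one colon-delimited record per line)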



A very important command is oem_setup_env, which gives you access to the regular AIX command line interface. This is provided solely for the installation of OEM device drivers.


Virtual SCSI setup

To map a LV
# mkvg: creates the volume group, where a new LV will be created using the mklv command
# lsdev: shows the virtual SCSI server adapters that could be used for mapping with the LV
# mkvdev: maps the virtual SCSI server adapter to the LV
# lsmap -all: shows the mapping information

To map a physical disk
# lsdev: shows the virtual SCSI server adapters that could be used for mapping with a physical disk
# mkvdev: maps the virtual SCSI server adapter to a physical disk
# lsmap -all: shows the mapping information

Client partition commands

No commands are needed on the client partition; the Linux kernel is notified of the new virtual disk immediately. The commands below are run on the VIO server.

Create new volume group datavg with member disk hdisk1
# mkvg -vg datavg hdisk1

Create new logical volume vdisk0 in volume group
# mklv -lv vdisk0 datavg 10G

Maps the virtual SCSI server adapter to the logical volume
# mkvdev -vdev vdisk0 -vadapter vhost0

Display the mapping information
#lsmap -all
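
If a mapping has to be undone later, only the virtual target device that mkvdev created needs to be removed; the backing logical volume is untouched. A hedged example, assuming the VTD was named vtscsi0 (lsmap -all shows the real name):

$ rmdev -dev vtscsi0 (removes the virtual target device only, not vdisk0)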

Virtual Ethernet setup

To list all virtual and physical adapters use the lsdev -type adapter command.

$ lsdev -type adapter

name status description
ent0 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent1 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
ide0 Available ATA/IDE Controller Device
sisscsia0 Available PCI-X Dual Channel Ultra320 SCSI Adapter
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter

Choose the virtual Ethernet adapter we want to map to the physical Ethernet adapter.

$ lsdev -virtual
name status description
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter

The command mkvdev maps a physical adapter to a virtual adapter, creates a layer 2 network bridge and defines the default virtual adapter with its default VLAN ID. It creates a new Ethernet interface, e.g., ent3.
Make sure the physical and virtual interfaces are unconfigured (down or detached).

Scenario A (one VIO server)
Create a shared ethernet adapter ent3 with a physical one (ent0) and a virtual one (ent2) with PVID 1:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
ent3 Available
en3
et3

This has created a new shared ethernet adapter ent3 (you can verify that with the lsdev command). Now configure the TCP/IP settings for this new shared ethernet adapter (ent3). Please note that you have to specify the interface (en3) and not the adapter (ent3).

$ mktcpip -hostname op710-1-vio -inetaddr 9.156.175.231 -interface en3 -netmask 255.255.255.0 -gateway 9.156.175.1 -nsrvaddr 9.64.163.21 -nsrvdomain ibm.com

Scenario B (two VIO servers)
Create a shared ethernet adapter ent3 with a physical one (ent0) and a virtual one (ent2) with PVID 1:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1


Configure the TCP/IP settings for the new shared ethernet adapter (ent3):

$mktcpip -hostname op710-1-vio -inetaddr 9.156.175.231 -interface en3 -netmask 255.255.255.0 -gateway 9.156.175.1 -nsrvaddr 9.64.163.21 -nsrvdomain ibm.com

Client partition commands
No new commands are needed; the typical TCP/IP configuration is simply done on the virtual Ethernet interface that is defined in the client partition profile on the HMC.
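
On an AIX client, for example, the virtual adapter simply appears as ent0/en0 and is configured with the usual tools; a hedged sketch with placeholder addresses on the same subnet as above:

# mktcpip -h aix_client1 -a 9.156.175.232 -m 255.255.255.0 -i en0 -g 9.156.175.1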

Creating LPAR from command line from HMC


Create new LPAR using command line

mksyscfg -r lpar -m MACHINE -i name=LPARNAME,profile_name=normal,lpar_env=aixlinux,shared_proc_pool_util_auth=1,min_mem=512,desired_mem=2048,max_mem=4096,proc_mode=shared,min_proc_units=0.2,desired_proc_units=0.5,max_proc_units=2.0,min_procs=1,desired_procs=2,max_procs=2,sharing_mode=uncap,uncap_weight=128,boot_mode=norm,conn_monitoring=1


Note: Use the man mksyscfg command for information on all flags.

Another method is to create LPARs from a configuration file; this is useful when we need to create more than one LPAR at the same time.

Here is an example for 2 LPARs, each definition starting at new line:

name=LPAR1,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_proc_units=0.3,desired_proc_units=1.0,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/11/1,7/client/9/vio2a/11/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1
name=LPAR2,profile_name=normal,lpar_env=aixlinux,all_resources=0,min_mem=1024,desired_mem=9216,max_mem=9216,proc_mode=shared,min_proc_units=0.3,desired_proc_units=1.0,max_proc_units=3.0,min_procs=1,desired_procs=3,max_procs=3,sharing_mode=uncap,uncap_weight=128,lpar_io_pool_ids=none,max_virtual_slots=10,"virtual_scsi_adapters=6/client/4/vio1a/12/1,7/client/9/vio2a/12/1","virtual_eth_adapters=4/0/3//0/1,5/0/4//0/1",boot_mode=norm,conn_monitoring=1,auto_start=0,power_ctrl_lpar_ids=none,work_group_id=none,shared_proc_pool_util_auth=1

Copy this file to HMC and run:

mksyscfg -r lpar -m SERVERNAME -f /tmp/profiles.txt

where profiles.txt contains all the LPAR information as mentioned above.

To change the settings of your LPAR, use the chsyscfg command as shown below.

Virtual SCSI creation & slot mapping
#chsyscfg -m Server-9117-MMA-SNXXXXX -r prof -i 'name=server_name,lpar_id=xx,"virtual_scsi_adapters=301/client/4/vio01_server/301/0,303/client/4/vio02/303/0,305/client/4/vio01_server/305/0,307/client/4/vio02_server/307/0"'

In the above command we are creating virtual SCSI adapters for the client LPAR and mapping their slots to the VIO servers. In this scenario there are two VIO servers for redundancy.


Slot Mapping

vio01_server (VSCSI server slot)    Client (VSCSI client slot)
Slot 301                            Slot 301
Slot 305                            Slot 305

vio02_server (VSCSI server slot)    Client (VSCSI client slot)
Slot 303                            Slot 303
Slot 307                            Slot 307


These slots are mapped in such a way that any disk or logical volume mapped to the virtual SCSI server adapter on the VIO server (with the VIO command "mkvdev") appears on the matching client slot.

Syntax for Virtual scsi adapter


virtual-slot-number/client-or-server/remote-lpar-ID/remote-lpar-name/remote-slot-number/is-required


So, in the chsyscfg command above, "virtual_scsi_adapters=301/client/4/vio01_server/301/0"

means

301 - virtual-slot-number
client - client-or-server (this adapter is on the AIX client side)
4 - remote-lpar-ID (partition ID of the vio01_server LPAR)
vio01_server - remote-lpar-name
301 - remote-slot-number (the virtual SCSI server slot on the VIO server)
0 - is-required: 1 means required (the slot cannot be removed by DLPAR operations), 0 means desired (it can be removed by DLPAR operations)
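
For completeness, the matching server-side slots are defined in the VIO server profiles the same way, with "server" as the adapter type and the client LPAR as the remote partition. A hedged sketch, where the client LPAR ID yy and name aix_client are assumptions:

#chsyscfg -m Server-9117-MMA-SNXXXXX -r prof -i 'name=vio01_server,lpar_id=4,"virtual_scsi_adapters=301/server/yy/aix_client/301/0,305/server/yy/aix_client/305/0"'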


To add Virtual ethernet adapter & slot mapping for above created profile

#chsyscfg -m Server-9117-MMA-SNxxxxx -r prof -i 'name=server_name,lpar_id=xx,"virtual_eth_adapters=596/1/596//0/1,506/1/506//0/1"'

Syntax for Virtual ethernet adapter


slot_number/is_ieee/port_vlan_id/"additional_vlan_id,additional_vlan_id"/is_trunk(number=priority)/is_required

This means that an adapter with the setting 596/1/596//0/1 is in slot_number 596, is IEEE 802.1Q capable, has port_vlan_id 596, has no additional VLAN IDs assigned, is not a trunk adapter, and is required.

Listing LPAR information from HMC command line interface


To list managed system (CEC) managed by HMC

# lssyscfg -r sys -F name

To list number of LPAR defined on the Managed system (CEC)

# lssyscfg -m SYSTEM(CEC) -r lpar -F name,lpar_id,state

To list the profile of an LPAR created on your system, use the lssyscfg command as mentioned below.

# lssyscfg -r prof -m SYSTEM(CEC) --filter "lpar_ids=X,profile_names=normal"

Flags

-m -> managed system name
lpar_ids -> LPAR ID (the numeric ID of each LPAR created on the managed system (CEC))
profile_names -> the LPAR profile to show


To start console of LPAR from HMC

# mkvterm -m SYSTEM(CEC) --id X

-m -> managed system (e.g., p5-570_xyz)
--id -> LPAR ID

To close a vterm, simply press ~ followed by a dot (.).

To disconnect console of LPAR from HMC

# rmvterm -m SYSTEM(CEC) --id x

To access the LPAR consoles of different managed systems from the HMC

#vtmenu


Activating Partition

hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxx -r lpar -F name,lpar_id,state,default_profile
VIOS1.3-FP8.0,1,Running,default
linux_test,2,Not Activated,client_default
hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar -o on -b norm --id 2 -f client_default

The above example would boot the partition in normal mode. To boot it into SMS menu use -b sms and to boot it to the OpenFirmware prompt use -b of.
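
For example, with the same placeholder machine and partition as above:

hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar -o on -b sms --id 2 -f client_default
hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar -o on -b of --id 2 -f client_default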

To restart a partition the chsysstate command would look like this:

hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar --id 2 -o shutdown --immed --restart

And to turn it off - if anything else fails - use this:
hscroot@hmc-570:~> chsysstate -m Server-9110-510-SN100xxxx -r lpar --id 2 -o shutdown --immed
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxxx -r lpar -F name,lpar_id,state
VIOS1.3-FP8.0,1,Running
linux_test,2,Shutting Down


Deleting Partition

hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxxx -r lpar -F name,lpar_id
VIOS1.3-FP8.0,1
linux_test,2
hscroot@hmc-570:~> rmsyscfg -m Server-9110-510-SN100xxxx -r lpar --id 2
hscroot@hmc-570:~> lssyscfg -m Server-9110-510-SN100xxxx -r lpar -F name,lpar_id
VIOS1.3-FP8.0,1

Enabling the Advanced POWER Virtualization Feature


Before we could use the virtual I/O, we had to determine whether the machine was enabled to use the feature. To do this, we right-clicked on the name of the target server in the HMC’s ‘Server and Partition’ view and looked at that server’s properties. Figure 4 shows it did not have the feature enabled.



Users can enable this feature by obtaining a key code from their IBM sales representative using information that the HMC gathers about their machine when the user navigates to Show Code Information in the HMC. Figure 5 shows how to navigate there as well as how to get to the HMC dialog box used to enter the activation code which renders the system VIO-capable. We obtained an access code and entered it in the dialog box in Figure

VIO server setup example


Virtual I/O Example
A user who currently runs applications on a POWER4 system may want to upgrade to a POWER5 system running AIX 5.3 in order to take advantage of virtual I/O. If so, do these three things:
- Create a Virtual I/O Server.
- Add virtual LANs.
- Define virtual SCSI devices.
In our example, we had an IBM eServer p5 550 Express with four CPUs that was running one AIX 5.3 database server LPAR, and we needed to create a second application server LPAR that uses a virtual SCSI disk as its boot disk. We wanted to share one Ethernet adapter between the database and application server LPARs and use this shared adapter to access an external network. Finally, we needed a private network between the two LPARs and we decided to implement it using virtual Ethernet devices (see Figure 3). We followed these steps to set up our system:
1. Enabled the Advanced POWER Virtualization feature.
2. Installed the Virtual I/O Server.

Virtual I/O Server installation overview


The Virtual I/O Server

The Virtual I/O Server is a dedicated partition that runs a special operating system called IOS. This special type of partition has physical resources assigned to it in its HMC profile. The administrator issues IOS commands on the server partition to create virtual resources, which present virtual LANs, virtual SCSI adapters, and virtual disk drives to client partitions. The client partitions' operating systems recognize these resources as physical devices. The Virtual I/O Server is responsible for managing the interaction between the client LPAR and the physical device supporting the virtualized service.

Once the administrator logs in to the Virtual I/O Server as the user padmin, he or she has access to a restricted Korn shell session. The administrator uses IOS commands to create, change, and remove these physical and virtual devices, as well as to configure and manage the VIO server. Executing the help command on the VIO server command line lists the commands that are available in padmin's restricted Korn shell session.

Virtual I/O Server installation


- VIO Server code is packaged and shipped as an AIX mksysb image on a VIO DVD
- Installation methods:
  - DVD install
  - HMC install – open an rshterm and type "installios"; follow the prompts
  - Network Installation Manager (NIM)
- VIO Server can support multiple client types:
  - AIX 5.3
  - SUSE Linux Enterprise Server 9 or 10 for POWER
  - Red Hat Enterprise Linux AS for POWER Version 3 and 4

Virtual I/O Server Administration
- The VIO server uses a command line interface running in a restricted shell – no smitty or GUI
- There is no root login on the VIO Server
- A special user – padmin – executes VIO server commands
- On first login after install, user padmin is prompted to change the password
- After that, padmin runs the command "license -accept"
- Slightly modified commands are used for managing devices, networks, code installation and maintenance, etc.
- The padmin user can start a root AIX shell for setting up third-party devices using the command "oem_setup_env"

We can list all available commands by executing help as the padmin user:

$ help
Install Commands: updateios, lssw, ioslevel, remote_management, oem_setup_env, oem_platform_level, license

LAN Commands: mktcpip, hostname, cfglnagg, netstat, entstat, cfgnamesrv, traceroute, ping, optimizenet, lsnetsvc

Device Commands: mkvdev, lsdev, lsmap, chdev, rmdev, cfgdev, mkpath, chpath, lspath, rmpath, errlog

Physical Volume Commands: lspv, migratepv

Logical Volume Commands: lslv, mklv, extendlv, rmlv, mklvcopy, rmlvcopy

Volume Group Commands: lsvg, mkvg, chvg, extendvg, reducevg, mirrorios, unmirrorios, activatevg, deactivatevg, importvg, exportvg, syncvg

Security Commands: lsgcl, cleargcl, lsfailedlogin

UserID Commands: mkuser, rmuser, lsuser, passwd, chuser

Maintenance Commands: chlang, diagmenu, shutdown, fsck, backupios, savevgstruct, restorevgstruct, starttrace, stoptrace, cattracerpt, bootlist, snap, startsysdump, topas, mount, unmount, showmount, startnetsvc, stopnetsvc

Virtual I/O Server Overview


What is Advanced POWER Virtualization (APV)
- APV – the hardware feature code for POWER5 servers that enables:
  - Micro-Partitioning – fractional CPU entitlements from a shared pool of processors, beginning at one-tenth of a CPU
  - Partition Load Manager (PLM) – a policy-based, dynamic CPU and memory reallocation tool
  - Virtual SCSI – physical disks can be shared as virtual disks to client partitions
  - Shared Ethernet Adapter (SEA) – a physical adapter or EtherChannel in a VIO Server can be shared by client partitions; clients use virtual Ethernet adapters
- Virtual Ethernet – an LPAR-to-LPAR virtual LAN within a POWER5 server
  - Does not require the APV feature code


Why Virtual I/O Server?
- POWER5 systems will support more partitions than physical I/O slots available
  - Each partition still requires a boot disk and network connection, but now they can be virtual instead of physical
- VIO Server allows partitions to share disk and network adapter resources
  - The Fibre Channel or SCSI controllers in the VIO Server can be accessed using virtual SCSI controllers in the clients
  - A Shared Ethernet Adapter in the VIO Server can be a layer 2 bridge for virtual Ethernet adapters in the clients
- The VIO Server further enables on demand computing and server consolidation


- Virtualizing I/O saves:
  - Gbit Ethernet adapters
  - 2 Gbit Fibre Channel adapters
  - PCI slots
  - Eventually, I/O drawers
  - Server frames?
  - Floor space?
  - Electricity, HVAC?
  - Ethernet switch ports
  - Fibre Channel switch ports
  - Logistics, scheduling, and delays of physical Ethernet and SAN attach
- Some servers run at 90% utilization all the time – everyone knows which ones.
- Average utilization in the UNIX server farm is closer to 25%; those servers don't all maximize their use of dedicated I/O devices
- VIO is a departure from the "new project, new chassis" mindset


Virtual I/O Server Characteristics

- Requires AIX 5.3 and POWER5 hardware with the APV feature
- Installed as a special purpose, AIX-based logical partition
- Uses a subset of the AIX Logical Volume Manager and attaches to traditional storage subsystems
- Inter-partition communication (client-server model) is provided via the POWER Hypervisor
- Clients "see" virtual disks as traditional AIX SCSI hdisks, although they may be a physical disk or a logical volume on the VIO Server
- One physical disk on a VIO server can provide logical volumes for several client partitions
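
As a hedged illustration of that last point (volume group name, LV names, sizes, and vhost numbers are all assumptions), one physical disk can be carved into virtual disks for two clients:

$ mkvg -vg clientvg hdisk2 (one physical disk on the VIO server)
$ mklv -lv lv_client1 clientvg 20G
$ mklv -lv lv_client2 clientvg 20G
$ mkvdev -vdev lv_client1 -vadapter vhost0 (virtual disk for client partition 1)
$ mkvdev -vdev lv_client2 -vadapter vhost1 (virtual disk for client partition 2)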


Virtual Ethernet
- Virtual Ethernet
  - Enables inter-LPAR communications without a physical adapter
  - IEEE-compliant Ethernet programming model
  - Implemented through inter-partition, in-memory communication
- VLANs split groups of network users on a physical network onto segments of logical networks
- The virtual switch provides support for multiple (up to 4K) VLANs
  - Each partition can connect to multiple networks, through one or more adapters
  - The VIO server can add a VLAN ID tag to the Ethernet frame as appropriate; the Ethernet switch restricts frames to ports that are authorized to receive frames with the specific VLAN ID
- A virtual network can connect to a physical network through "routing" partitions – generally not recommended


Why Multiple VIO Servers?
- A second VIO Server adds extra protection to client LPARs
- Allows two teams to learn VIO setup on a single system
- Having multiple VIO Servers will:
  - Provide multiple paths to your OS/data virtual disks
  - Provide multiple paths to your network
- Advantages:
  - Superior availability compared to other virtual I/O solutions
  - Allows VIO Server updates without shutting down client LPARs

Virtualization VIO basics


The Virtual I/O Server is part of the IBM System p Advanced Power Virtualization hardware feature. Virtual I/O Server allows sharing of physical resources between LPARs including virtual SCSI and virtual networking. This allows more efficient utilization of physical resources through sharing between LPARs and facilitates server consolidation.

The Virtual I/O Server is software that is located in a logical partition. This software facilitates the sharing of physical I/O resources between AIX® and Linux® client logical partitions within the server. The Virtual I/O Server provides virtual SCSI target and Shared Ethernet Adapter capability to client logical partitions within the system, allowing the client logical partitions to share SCSI devices and Ethernet adapters. The Virtual I/O Server software requires that the logical partition be dedicated solely for its use.
The Virtual I/O Server is available as part of the Advanced POWER™ Virtualization hardware feature.
Using the Virtual I/O Server facilitates the following functions:
-->Sharing of physical resources between logical partitions on the system
-->Creating logical partitions without requiring additional physical I/O resources
-->Creating more logical partitions than there are I/O slots or physical devices available with the ability for partitions to have dedicated I/O, virtual I/O, or both
-->Maximizing use of physical resources on the system
-->Helping to reduce the Storage Area Network (SAN) infrastructure
The Virtual I/O Server supports client logical partitions running the following operating systems:
-->AIX 5.3 or later
-->SUSE Linux Enterprise Server 9 for POWER (or later)
-->Red Hat® Enterprise Linux AS for POWER Version 3 (update 2 or later)
-->Red Hat Enterprise Linux AS for POWER Version 4 (or later)
For the most recent information about devices that are supported on the Virtual I/O Server, to download Virtual I/O Server fixes and updates, and to find additional information about the Virtual I/O Server, see the Virtual I/O Server Web site.
The Virtual I/O Server comprises the following primary components:
-->Virtual SCSI
-->Virtual Networking
-->Integrated Virtualization Manager
The following sections provide a brief overview of each of these components.


Virtual SCSI
Physical adapters with attached disks or optical devices on the Virtual I/O Server logical partition can be shared by one or more client logical partitions. The Virtual I/O Server offers a local storage subsystem that provides standard SCSI-compliant logical unit numbers (LUNs). The Virtual I/O Server can export a pool of heterogeneous physical storage as a homogeneous pool of block storage in the form of SCSI disks.
Unlike typical storage subsystems that are physically located in the SAN, the SCSI devices exported by the Virtual I/O Server are limited to the domain within the server. Although the SCSI LUNs are SCSI compliant, they might not meet the needs of all applications, particularly those that exist in a distributed environment.
The following SCSI peripheral-device types are supported:
-->Disks backed by a logical volume
-->Disks backed by a physical volume
-->Optical devices (DVD-RAM and DVD-ROM)
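
Optical devices are mapped the same way as disks; a hedged example, assuming the VIO server's DVD drive shows up as cd0:

$ mkvdev -vdev cd0 -vadapter vhost0 -dev vcd

The client then sees a virtual optical drive; moving it to another client is just a matter of removing the virtual target device and recreating the mapping on a different vhost adapter.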


Virtual networking
Shared Ethernet Adapter allows logical partitions on the virtual local area network (VLAN) to share access to a physical Ethernet adapter and to communicate with systems and partitions outside the server. This function enables logical partitions on the internal VLAN to share the VLAN with stand-alone servers.


Integrated Virtualization Manager
The Integrated Virtualization Manager provides a browser-based interface and a command-line interface that you can use to manage IBM® System p5™ and IBM eServer™ pSeries® servers that use the IBM Virtual I/O Server. On the managed system, you can create logical partitions, manage the virtual storage and virtual Ethernet, and view service information related to the server. The Integrated Virtualization Manager is packaged with the Virtual I/O Server, but it is activated and usable only on certain platforms and where no Hardware Management Console (HMC) is present.

Introduction to VIO



Prior to the introduction of POWER5 systems, it was only possible to create as many separate logical partitions (LPARs) on an IBM system as there were physical processors. Given that the largest IBM eServer pSeries POWER4 server, the p690, had 32 processors, 32 partitions were the most anyone could create. A customer could order a system with enough physical disks and network adapter cards so that each LPAR would have enough disks to contain operating systems and enough network cards to allow users to communicate with each partition.
The Advanced POWER Virtualization™ feature of POWER5 platforms makes it possible to allocate fractions of a physical CPU to a POWER5 LPAR. Using virtual CPUs and virtual I/O, a user can create many more LPARs on a p5 system than there are CPUs or I/O slots. The Advanced POWER Virtualization feature accounts for this by allowing users to create shared network adapters and virtual SCSI disks. Customers can use these virtual resources to provide disk space and network adapters for each LPAR they create on their POWER5 system
(see Figure ).



There are three components of the Advanced POWER Virtualization feature: Micro-Partitioning™, shared Ethernet adapters, and virtual SCSI. In addition, AIX 5L Version
5.3 allows users to define virtual Ethernet adapters permitting inter-LPAR communication. This paper provides an overview of how each of these components works and then shows the details of how to set up a simple three-partition system where one partition is a Virtual I/O Server and the other two partitions use virtual Ethernet and virtual SCSI to differing degrees. What follows is a practical guide to help a new POWER5 customer set up simple systems where high availability is not a concern, but becoming familiar with this new technology in a development environment is the primary goal.


Micro-Partitioning
An element of the IBM POWER Virtualization feature called Micro-Partitioning can divide a single processor into many different processors. In POWER4 systems, each physical processor is dedicated to an LPAR. This concept of dedicated processors is still present in POWER5 systems, but so is the concept of shared processors. A POWER5 system administrator can use the Hardware Management Console (HMC) to place processors in
a shared processor pool. Using the HMC, the administrator can assign fractions of a CPU to individual partitions. If one LPAR is defined to use processors in the shared processor pool, when those CPUs are idle, the POWER Hypervisor™ makes them available to other partitions. This ensures that these processing resources are not wasted. Also, the ability to assign fractions of a CPU to a partition means it is possible to partition POWER5 servers into many different partitions. Allocation of physical processor and memory resources on POWER5 systems is managed by a system firmware component called the POWER Hypervisor.


Virtual Networking
Virtual networking on POWER5 hardware consists of two main capabilities. One capability is provided by a software IEEE 802.1q (VLAN) switch that is implemented in the Hypervisor on POWER5 hardware. Users can use the HMC to add virtual Ethernet adapters to their partition definitions. Once these are added and the partitions are booted, the new adapters can be configured just like real physical adapters, and the partitions can communicate with each other without having to connect cables between the LPARs. Users can separate traffic from different VLANs by assigning different VLAN IDs to each virtual Ethernet adapter. Each AIX 5.3 partition can support up to 256 virtual Ethernet adapters.


In addition, a part of the Advanced POWER virtualization virtual networking feature allows users to share physical adapters between logical partitions. These shared adapters, called Shared Ethernet Adapters (SEAs), are managed by a Virtual I/O Server partition which maps physical adapters under its control to virtual adapters. It is possible to map many physical Ethernet adapters to a single virtual Ethernet adapter thereby eliminating a single physical adapter as a point of failure in the architecture.
There are a few things users of virtual networking need to consider before implementing it. First, virtual networking ultimately uses more CPU cycles on the POWER5 machine than when physical adapters are assigned to a partition. Users should consider assigning a physical adapter directly to a partition when heavy network traffic is predicted over a certain adapter. Secondly, users may want to take advantage of larger MTU sizes that virtual Ethernet allows if they know that their applications will benefit from the reduced fragmentation and better performance that larger MTU sizes offer. The MTU size limit for SEA is smaller than Virtual Ethernet adapters, so users will have to carefully choose an MTU size so that packets are sent to external networks with minimum fragmentation.


Virtual SCSI
The Advanced POWER Virtualization feature called virtual SCSI allows access to physical disk devices which are assigned to the Virtual I/O Server (VIOS). The system administrator uses VIOS logical volume manager commands to assign disks to volume groups. The administrator creates logical volumes in the Virtual I/O Server volume groups. Either these logical volumes or the physical disks themselves may ultimately appear as physical disks (hdisks) to the Virtual I/O Server’s client partitions once they are associated with virtual SCSI host adapters. While the Virtual I/O Server software is
packaged as an additional software bundle that a user purchases separately from the AIX 5.3 distribution, the virtual I/O client software is part of the AIX 5.3 base installation media, so an administrator does not need to install any additional filesets on a virtual SCSI client partition. Srikrishnan provides more details on how the Virtual SCSI feature works.