
FC SAN port virtualization: NPV and NPIV
Node Port Virtualization & Node Port ID Virtualization

Both features mentioned in the title play a special role and become very important when a data center grows, when new switches, hosts and virtual machines are needed. They allow us to expand the SAN infrastructure (fabric) without spending additional money along the way.


What is needed to establish a connection between ports within a fabric?

Each N_Port has two burnt-in World Wide Names:
World Wide Node Name (WWNN) – identifies the whole fabric device, e.g. a Host Bus Adapter card; I would compare the WWNN to the serial number of the device.
World Wide Port Name (WWPN) – identifies each individual port on a fabric device; it can be compared to a MAC address in the Ethernet world.

But the most important address is the FCID – Fibre Channel ID, aka Fibre Channel Address (N_Port ID) – which is assigned dynamically by the switch to each N_Port. The FCID is mapped to the World Wide Port Name.
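
To make the identifiers a bit more tangible, here is a minimal Python sketch with made-up example values (the WWNN, WWPNs and FCIDs below are illustrative only, not real addresses): one HBA is identified by a single WWNN, each of its ports by a WWPN, and the switch's name server maps every WWPN to the FCID it assigned at login.

# Minimal sketch (made-up example values): one HBA (WWNN) exposes two
# ports (WWPNs); the switch's name server maps each WWPN to the FCID
# that was dynamically assigned at fabric login.
hba = {
    "wwnn": "20:00:00:25:b5:00:00:01",      # identifies the whole HBA
    "wwpns": [
        "20:00:00:25:b5:00:0a:01",          # port 0 of the HBA
        "20:00:00:25:b5:00:0a:02",          # port 1 of the HBA
    ],
}

# Name-server view after both ports have logged in (FCIDs are dynamic).
name_server = {
    "20:00:00:25:b5:00:0a:01": 0x0A0100,
    "20:00:00:25:b5:00:0a:02": 0x0A0200,
}

for wwpn, fcid in name_server.items():
    print(f"WWPN {wwpn} -> FCID 0x{fcid:06X}")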

The FCID is a 24-bit number (3 octets of 8 bits each):
Domain ID – the first octet, acquired by a switch from the Principal Switch when it joins the fabric. The Domain ID is limited to values from 1 to 239.
Area ID – the second octet, identifies a group of fabric ports on a switch; an 8-bit value from 0 to 255.
Port ID – the third octet, identifies a particular port within the given Area ID; an 8-bit value from 0 to 255.
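
Since the layout is just three octets, it can be shown with a short Python sketch (the FCID value 0x0A01EF is an arbitrary example):

# Decompose a 24-bit FCID into its three octets (example value only).
def split_fcid(fcid: int):
    domain_id = (fcid >> 16) & 0xFF   # first octet  - Domain ID
    area_id = (fcid >> 8) & 0xFF      # second octet - Area ID
    port_id = fcid & 0xFF             # third octet  - Port ID
    return domain_id, area_id, port_id

domain, area, port = split_fcid(0x0A01EF)
print(f"Domain ID: 0x{domain:02X}, Area ID: 0x{area:02X}, Port ID: 0x{port:02X}")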


NPIV – Node Port ID Virtualization

N_Port ID Virtualization (NPIV) enables a single N_Port (such as an FC Host Bus Adapter port) to function as multiple virtual N_Ports. Each virtual N_Port has a unique WWPN identity in the FC SAN, which allows a single physical N_Port to obtain multiple FC addresses. Hypervisors use NPIV to create virtual N_Ports on the FC HBA and then assign them to virtual machines (VMs). A virtual N_Port acts as a virtual FC HBA port and gives the VM direct access to the LUNs assigned to it. NPIV also enables an administrator to restrict access to specific LUNs to specific VMs using security techniques such as zoning and LUN masking, similarly to the way a LUN is assigned to a physical compute system. To enable NPIV, both the FC HBAs and the FC switches must support it. Wasn't that clear enough? The picture below illustrates the description.

[Figure: NPIV]
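
To make the mechanism concrete, below is a simplified Python model (not any vendor's API; class names, WWPNs and the FCID allocation scheme are assumptions for illustration): the physical N_Port performs a fabric login (FLOGI) and receives one FCID, and each virtual N_Port created for a VM performs an additional login (FDISC) with its own WWPN and receives its own FCID over the same physical link.

# Simplified model of NPIV address assignment on one switch port.
# For illustration the switch hands out FCIDs sharing the same Domain ID
# and Area ID, incrementing only the Port ID octet.
class SwitchFPort:
    def __init__(self, domain_id: int, area_id: int):
        self.base = (domain_id << 16) | (area_id << 8)
        self.next_port_id = 0
        self.logins = {}                      # WWPN -> FCID

    def _assign(self, wwpn: str) -> int:
        fcid = self.base | self.next_port_id
        self.next_port_id += 1
        self.logins[wwpn] = fcid
        return fcid

    def flogi(self, wwpn: str) -> int:
        # Fabric login of the physical N_Port.
        return self._assign(wwpn)

    def fdisc(self, wwpn: str) -> int:
        # Additional login of a virtual N_Port (requires NPIV on the switch).
        return self._assign(wwpn)

f_port = SwitchFPort(domain_id=0x0A, area_id=0x01)
print(hex(f_port.flogi("20:00:00:25:b5:00:0a:01")))   # physical HBA port
print(hex(f_port.fdisc("20:00:00:25:b5:0b:00:01")))   # virtual N_Port for VM1
print(hex(f_port.fdisc("20:00:00:25:b5:0b:00:02")))   # virtual N_Port for VM2

In a real fabric the switch decides how the FCIDs are laid out; the key point is that every WWPN, physical or virtual, ends up with its own FC address on the same physical port.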

NPV – Node Port Virtualization

The proliferation of compute systems in a data center increases the use of edge switches in a fabric. As the edge switch population grows, the number of domain IDs may become a concern, because a fabric is limited to only 239 domain IDs. N_Port Virtualization (NPV) addresses this problem by reducing the number of domain IDs in the fabric. Edge switches operating in NPV mode do not require a domain ID, yet they are still able to pass traffic between the core switch and the compute systems. NPV-enabled edge switches do not perform any fabric services; instead they forward all fabric activity, such as login and name server registration, to the core switch. All ports on an NPV edge switch that connect to the core switch are brought up as NP_Ports (not E_Ports, i.e. not ISL links). The NP_Ports connect to an NPIV-enabled core director or switch. If the core director or switch is not NPIV-capable, the NPV edge switch does not function. When the switch enters or exits NPV mode, its configuration is erased and the switch reboots. Therefore, administrators should pay attention when enabling or disabling NPV on an edge switch.

[Figure: NPV]
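
The following purely conceptual Python sketch (class and method names are assumptions, not a real switch API) shows why an NPV edge switch needs no Domain ID of its own: its NP_Port logs in to the NPIV-capable core once, and every host FLOGI is relayed upstream as an FDISC, so all FCIDs are allocated from the core switch's domain.

# Conceptual sketch: an NPV edge switch owns no Domain ID and runs no
# fabric services; it relays host logins to the NPIV-enabled core.
class NpivCoreSwitch:
    # Core switch that owns the Domain ID and dynamically assigns all FCIDs.
    def __init__(self, domain_id: int):
        self.domain_id = domain_id
        self.next_id = 0

    def assign_fcid(self) -> int:
        area, port = divmod(self.next_id, 256)   # simplified allocation
        self.next_id += 1
        return (self.domain_id << 16) | (area << 8) | port

class NpvEdgeSwitch:
    # Edge switch in NPV mode: no Domain ID, no name server of its own.
    def __init__(self, core: NpivCoreSwitch):
        self.core = core
        self.np_port_fcid = core.assign_fcid()   # FLOGI of the NP_Port itself
        self.relayed = {}                        # WWPN -> FCID handed out by the core

    def host_flogi(self, wwpn: str) -> int:
        # A host FLOGI arriving at the edge is relayed upstream as an FDISC,
        # so the resulting FCID still comes from the core switch's domain.
        fcid = self.core.assign_fcid()
        self.relayed[wwpn] = fcid
        return fcid

core = NpivCoreSwitch(domain_id=0x0A)
edge = NpvEdgeSwitch(core)
for wwpn in ("21:00:00:24:ff:00:00:01", "21:00:00:24:ff:00:00:02"):
    print(f"{wwpn} -> FCID 0x{edge.host_flogi(wwpn):06X}")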
