

VMware vSphere Networking
virtual Standard and Distributed switches
features, utilization and migration

Even a very well-implemented server farm within VMware vSphere will work at a limited scale and inefficiently if the network part is not deployed correctly. With regard to networking, VMware provides many features which are good to know, and solutions such as the virtual Standard Switch and the Distributed Switch. How do we implement the networking part properly, what are virtual switches, what is the difference between them, and how do we safely change a vSS to a vDS in a working environment?

Before I go over Standard and Distributed switches, we need to understand how networking on VMware ESXi works, what types of ports we have, what their roles are, and what features they provide.

1. Ports

In a VMware environment we distinguish a few names and kinds of ports:

“VM Port Group” – L2 ports (vNICs) which provide connectivity between Virtual Machines and the Virtual Switch. “VM Network” is the default Port Group on vSS vSwitch0, created out of the box. VM Port Groups are used to separate policies, such as traffic shaping, between different Virtual Machines.

“VMkernel Port” – L3 interfaces (vmk) on the ESXi host, with assigned IP addresses, providing vSphere services such as vMotion, Management, Fault Tolerance, vSAN and network-based storage (iSCSI). Having many L3 ports assigned within the same subnet on the same virtual switch is allowed.

vmnic – physical NIC interface. We may have many virtual switches on a single ESXi hypervisor, but every vmnic may be assigned to only ONE virtual switch!

vNIC – virtual NIC; an L2 or L3 port on ESXi.

vmhba – physical HBA interface. May be added via a PCI expansion card (hardware iSCSI).

vHBA – virtual Host Bus Adapter, responsible for iSCSI or FCoE transport; can be created in a VMware environment (software iSCSI). It is not covered in this article because it relates to the SAN rather than the LAN, but FYI it exists.

Take a look at the picture below, which comes from my VMware ESXi environment (I hope it will clarify a few things). As you can see, I have 8 vmnics (physical NIC interfaces); in fact I have only 1, but we may virtualize whatever we want in VMware Workstation. I deployed 3 virtual switches: vSwitch0 for management only, vSwitch1 for SAN traffic and vSwitch2 for Virtual Machine traffic. Pay attention to the port names and addressing.
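The "one vmnic, one virtual switch" rule mentioned above can be sketched as a toy model. This is plain Python, not a VMware API; the class, method and port names are my own illustration:

```python
# Illustrative model (NOT a VMware API): enforcing the rule that a
# physical vmnic may be attached to only ONE virtual switch at a time.

class VirtualSwitch:
    _vmnic_owner = {}  # shared registry: vmnic name -> switch that claimed it

    def __init__(self, name):
        self.name = name
        self.uplinks = []

    def add_uplink(self, vmnic):
        owner = VirtualSwitch._vmnic_owner.get(vmnic)
        if owner is not None:
            # A vmnic already assigned to another switch cannot be reused.
            raise ValueError(f"{vmnic} is already assigned to {owner.name}")
        VirtualSwitch._vmnic_owner[vmnic] = self
        self.uplinks.append(vmnic)

switch0 = VirtualSwitch("vSwitch0")  # management
switch1 = VirtualSwitch("vSwitch1")  # SAN traffic
switch0.add_uplink("vmnic0")
switch1.add_uplink("vmnic1")
try:
    switch1.add_uplink("vmnic0")     # already owned by vSwitch0
except ValueError as e:
    print(e)                         # rejected, as ESXi would reject it
```

Many switches per host are fine; it is only the physical uplink that is exclusive.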


2. “Soup to nuts” – Security, Traffic Shaping and NIC Teaming

These settings provide the basic functionality of virtual switches. Below are the options which may be utilized and which influence the behaviour of a virtual switch and its clients.



Promiscuous mode – allows a port to intercept traffic originating from other Virtual Machines on the same virtual switch.

MAC Address Changes – the virtual switch accepts incoming traffic to a Virtual Machine whose effective MAC address has been changed from the one in its configuration (out-to-in direction).

Forged Transmits – the virtual switch accepts outgoing traffic from a Virtual Machine with a forged source MAC address (in-to-out direction).
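As a hedged illustration of the Forged Transmits policy, here is a toy Python check (not VMware code; the function name and MAC values are mine) of how a port could accept or reject an outgoing frame:

```python
# Toy sketch of the "Forged Transmits" policy on outgoing (in-to-out) frames.
# NOT VMware code; purely illustrative.

def allow_outbound(frame_src_mac, vm_effective_mac, forged_transmits):
    """Accept the frame unless it forges its source MAC while the
    Forged Transmits policy is set to Reject (forged_transmits=False)."""
    if frame_src_mac == vm_effective_mac:
        return True
    return forged_transmits  # True = Accept policy, False = Reject policy

vm_mac = "00:50:56:aa:bb:cc"  # hypothetical VM MAC address
print(allow_outbound("00:50:56:aa:bb:cc", vm_mac, forged_transmits=False))  # True
print(allow_outbound("de:ad:be:ef:00:01", vm_mac, forged_transmits=False))  # False
```

With the policy set to Accept, the second frame would pass even with the bogus source MAC.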

Traffic Shaping

[Image: vSwitch traffic shaping settings]

Allows us to shape traffic (slow it down) if required on specified ports (i.e. for given Virtual Machines). Traffic may be shaped in the inbound and outbound directions, but in the case of the Standard Switch only outbound traffic can be shaped.
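A vSwitch shaping policy is defined by an average bandwidth, a peak bandwidth and a burst size. A simplified way to think about it is a token bucket; the sketch below models only the average rate and burst size (it is my own toy model, not the actual ESXi implementation):

```python
# Simplified token-bucket sketch of outbound traffic shaping.
# Toy model only; real vSwitch shaping also has a peak-bandwidth parameter.

class TokenBucket:
    def __init__(self, avg_rate_bps, burst_bytes):
        self.rate = avg_rate_bps / 8.0      # refill rate in bytes/second
        self.capacity = burst_bytes         # burst size caps the bucket
        self.tokens = burst_bytes           # start with a full burst allowance
        self.last = 0.0

    def allow(self, frame_bytes, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bytes:
            self.tokens -= frame_bytes
            return True          # frame fits in the shaped rate: sent
        return False             # frame exceeds the shaped rate: dropped/queued

bucket = TokenBucket(avg_rate_bps=8000, burst_bytes=1500)  # 8 kbit/s average
print(bucket.allow(1500, now=0.0))   # burst allows one full frame: True
print(bucket.allow(1500, now=0.1))   # bucket not refilled enough yet: False
```

The burst size is what lets a quiet VM briefly exceed its average bandwidth.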

NIC Teaming


Defines the rules of connection with the “outside world”. Provides load balancing based on the originating virtual port, source MAC address or “IP hash”; the last works like EtherChannel (port aggregation) and bundles a few physical links (vmnics) into one logical link. We may also choose “Use explicit failover order” and set up active and standby adapters in case of failure. The remaining options define whether the virtual switch notifies the uplinked physical switch about a failover, how failures are detected, and what should happen when a failed vmnic recovers.
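The idea behind “Route based on IP hash” can be sketched in a few lines: the source and destination IP of a flow are hashed together, and the result modulo the number of bundled vmnics picks the uplink. The real ESXi hash function differs; this only illustrates the concept and the function name is mine:

```python
# Conceptual sketch of IP-hash uplink selection (NOT the actual ESXi hash).
import ipaddress

def pick_uplink(src_ip, dst_ip, uplinks):
    # XOR the two addresses, then map onto the available uplinks.
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return uplinks[h % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
# A given src/dst pair always lands on the same vmnic (flow affinity)...
print(pick_uplink("10.0.0.5", "10.0.0.9", uplinks))
# ...while different destinations may be spread across the bundle.
print(pick_uplink("10.0.0.5", "10.0.0.8", uplinks))
```

This is also why IP hash requires EtherChannel on the physical switch: both sides must treat the bundled links as one logical link.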

3. Virtual Switches

Virtual switches are the point of contact between the virtual machines placed on an ESXi host and the physical network. The picture below illustrates the behaviour of the network part of an ESXi host and the dependencies between it and the “real world”.


(Picture has been downloaded from www.vmware.com)

What is the difference between vSS and vDS? With a vSS we have to manage each switch on each host separately, even if the configuration of the switches on different ESXi hosts is the same. A vDS lets us manage the switches of all hosts as one single switch.

Some features are common for vSS and vDS :

VLANs – Port Groups may be assigned to VLANs
802.1Q – virtual switches are able to tag frames sent towards physical switches
NIC Teaming – load balancing between vmnics
Outbound Traffic Shaping – limiting traffic from ESXi to the “real world”
IPv6 support
Cisco Discovery Protocol – via CLI on vSS and via GUI on vDS

But some features are accessible only for vDS :

Centralized Management – we may manage many ESXi hosts and their switches from one place (vCenter)
Private VLANs – allow us, among other things, to separate Virtual Machines which belong to the same VM Port Group (like standard Private VLANs)
Load Based Teaming (LBT) – more advanced load balancing which takes the actual load of each vmnic into account when distributing traffic
Inbound Traffic Shaping – limiting traffic from the “real world” to ESXi
LLDP – Link Layer Discovery Protocol, for discovering other network devices
LACP – Link Aggregation Control Protocol; enables EtherChannel negotiation with physical switches
Network I/O Control – enables QoS, prioritizing and limiting particular kinds of traffic
NetFlow – gathers information about the traffic which passes through the vDS
Port Mirroring – sends copies of frames from one port to another in order to analyze the traffic
Configuration Backup/Restore

4. Migration from vSS to vDS

Migration to and management of a vDS is possible only via vCenter. Please don't treat this part of the article as a tutorial; it is only a 1,000-foot view! There are some steps which have to be performed in order to migrate from vSS to vDS successfully, and I am going to walk through them:

1. Create the Distributed Switch and rename the uplinks (to make them more understandable, for our convenience). Don't pay attention to the left side; the VMkernel ports and Port Groups with assigned addresses there come from another ESXi host which has already been added to the Distributed Switch, so pretend that the left side of the switch is blank. As you can see, I added the vMotion feature, which is necessary to freely transfer Virtual Machine state between ESXi hosts and is required by many vSphere features.

[Image: migration – creating the DS (step 1)]

2. Create the Distributed Switch Port Groups (which contain the VMkernel Ports and VM Port Groups)

[Image: migration – creating the DS Port Groups (step 2)]

3. Add the ESXi host to the Distributed Switch (mapping vmnics to uplinks)

[Image: migration – adding the host (step 3)]

4. Migrate the VMkernel Ports / Virtual Machines to the Distributed Switch Port Groups

[Image: migration – managing VMkernel network adapters (step 4)]

Finally, this is how my former vSS looks after the migration to vDS.

[Image: migration – final result (step 5)]
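The order of the steps above matters: to avoid losing connectivity, physical uplinks are moved in stages so that the management VMkernel port always has a working path. A toy Python planner sketches that idea (purely illustrative; the function and step wording are my own, not a VMware tool):

```python
# Illustrative planning sketch (NOT a VMware API): when migrating a host
# from vSS to vDS, move the physical uplinks one at a time so that the
# management VMkernel port always keeps at least one connected vmnic.

def plan_migration(vss_uplinks):
    steps = []
    remaining = list(vss_uplinks)
    for vmnic in vss_uplinks:
        if len(remaining) == 1:
            # Before moving the LAST uplink, migrate the management
            # VMkernel port to the vDS port group so it stays reachable.
            steps.append("migrate vmk0 (Management) to the vDS port group")
        steps.append(f"move {vmnic} from vSS to a vDS uplink")
        remaining.remove(vmnic)
    return steps

for step in plan_migration(["vmnic0", "vmnic1"]):
    print(step)
```

With a single uplink there is no safe staged order, which is one more reason to read the in-depth tutorials first.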

Networking is, in my opinion, the hardest, most demanding and most thankless part of virtualization, but an important one as well! If you are migrating from vSS to vDS for the first time (especially in a working environment), I strongly recommend reading a few in-depth tutorials first. If you do something wrong, you may lose connectivity with your ESXi host!

