VxLAN and DCI – explanation, implementation and configuration on IOS XE and NX-OS

The other way of providing Data Center Interconnectivity besides OTV is VxLAN. VxLAN has been developed over a few years, and although at the beginning it was considered a solution for spanning VLANs across a single Data Center, nowadays it is considered equal to or even better than OTV and enables us to span VLANs across different geographical locations.

 

Why would you want to use VxLAN?
Let me give you one example. VMware has a feature called vMotion that enables moving any virtual machine between hosts when the host resources are running out. Let’s assume we have 2 ESXi hosts (servers). One of them is running 50 virtual machines, the other one 10. vMotion is able to detect this and can move a VM from one host to the other. So far nothing fancy, especially if you are familiar with the potential of VMware. Nothing, really? Unfortunately, cool as it is, vMotion requires L2 connectivity between hosts. When we move a VM from one host to another across an L3 environment, such as the Spine-Leaf fabric in a modern Data Center, VxLAN comes into play, because it enables transport of L2 information over the L3 network via a logical VNI tunnel.

 

What characterizes VxLAN?
VxLAN stands for Virtual Extensible LAN. It is a technology that encapsulates an Ethernet frame in a UDP tunnel (a layer 2 in layer 3 overlay tunnel). The difference from old VLANs is that a VLAN ID is a 12-bit value, which gives up to 2^12 = 4096 VLANs (4094 of them usable in practice), whereas in VxLAN we use a 24-bit value (the VNI, VxLAN Network Identifier) and can create up to 2^24 = 16,777,216 segments. A VNI is a bridge domain, a separate segment of the network that works on top of layer 3.

 

As the underlay layer 3 protocol that carries VxLAN we may use IS-IS or OSPF. Because encapsulation adds 50 bytes of overhead, we have to run “jumbo frames”. VxLAN can be supported in hardware or in software (e.g. Hyper-V or ESXi). Since VxLAN works as an overlay on top of layer 3, we don’t have to worry about Spanning Tree Protocol, and we also get Equal Cost Multi-Path, because all links in the fabric are active layer 3 links and take part in the routing process.
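For example, a standard 1500-byte Ethernet frame grows to 1500 + 50 = 1550 bytes after VxLAN encapsulation, so the underlay must carry at least that. A minimal IOS XE sketch (9216 is just a common jumbo value, not a requirement):

! raise the underlay interface MTU so encapsulated frames fit
interface GigabitEthernet1
mtu 9216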

TRAFFIC

VTEP (aka NVE logical interface) – a switch interface that takes part in VxLAN information exchange. Each VTEP has an IP address and an assigned VNI (VxLAN ID). Stateless tunnels between VTEPs are created at the time packets are delivered. On the VTEP interface the frame is encapsulated in a UDP packet. How do particular VTEP interfaces get to know about each other? It happens in 2 ways: by Control Plane learning and by Data Plane learning.

 

BUM TRAFFIC
BUM traffic (Broadcast, Unknown unicast, Multicast) is traffic that is sent to more than one destination. In Ethernet networks ARP is an example of BUM traffic. VxLAN has 2 ways to handle BUM traffic: via multicast and via head end replication.

Multicast
Each VNI is mapped to a single multicast group, and each multicast group can be mapped to a few VNIs. When a VTEP interface comes online, it uses IGMP to join the multicast group that serves a specified VNI for a specified VLAN. A VTEP doesn’t have to join groups that contain VNIs it doesn’t use. Multicast scales very well.

 

Head end replication
This is an alternative to multicast when we only have access to a unicast network. When BUM traffic arrives, the VTEP creates several unicast packets and sends one to each remote VTEP that uses the given VNI. It is not as efficient as multicast and does not scale as well, but it is a fine solution for 20 VTEPs or less.
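For illustration, a minimal NX-OS sketch of head end (ingress) replication with a statically listed remote VTEP; the VNI and peer address are assumptions, not taken from the labs below:

interface nve1
  member vni 5000
    ! replicate BUM traffic as unicast to each listed peer
    ingress-replication protocol static
      peer-ip 2.2.2.2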

The learning process

Data Plane learning (Bridging) – similar to the Ethernet standard, using the same mechanism (Flood and Learn). It works only at layer 2; there is no support for routing, so if we want to reach another network we have to use an external router, and if we want to route traffic between VNIs, the solution is similar to “router on a stick”. Data Plane learning is easier to deploy than Control Plane learning.

 

 

Control Plane learning – MAC addresses of VTEP interfaces and of hosts in the network are learned before they are needed. Switches peer with each other using iBGP and share the addresses they know about. They use the EVPN address family, not the default IPv4 one. Each switch runs iBGP and maintains neighborships with the other switches, as usual in a full mesh topology. Thanks to this, switches learn where the other switches with VTEP interfaces are. If we want, we may use VTEP authentication for better security and to avoid rogue switches. Host addresses are added to the local BGP process on each switch and are then propagated to the other iBGP peers. If HostA wants to reach HostB, the switch connected to HostA knows where “the next hop VTEP” with HostB behind it is, because it maintains an entire MAC address database with all hosts and switches. This way switches don’t have to send local ARP queries to the other switches. If it happens that a switch doesn’t have the needed MAC address in its database, the ARP query is handled by multicast or head end replication.
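As an illustration, a minimal NX-OS sketch of such an iBGP EVPN peering; the AS number, addresses and VNI are assumptions and not part of the labs below:

feature bgp
feature nv overlay
nv overlay evpn

router bgp 65000
  router-id 1.1.1.1
  ! iBGP peering to the other VTEP loopback, EVPN address family
  neighbor 2.2.2.2
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community extended

! MAC reachability is now learned via BGP instead of flooding
interface nve1
  host-reachability protocol bgp

evpn
  vni 5000 l2
    rd auto
    route-target both auto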

 

Routing and multi tenancy

When we use the “Control Plane learning” method and BGP EVPN, we may use integrated routing and bridging. Unlike with “Data Plane learning”, we don’t need an external router to provide routing between VNIs. A VNI may be configured as an L3VNI or an L2VNI, because both L2 and L3 information is carried by BGP. An L2VNI is used for bridging, forwarding frames within the same VNI (segment); an L3VNI is used for routing between VNIs. VTEPs know about the L2VNIs they serve locally and about L3VNIs, and thanks to that they can support a feature called “Anycast Gateway”. Each switch presents the same default gateway (the same virtual IP and MAC address) within each VNI for the hosts. Thanks to this, each host within the same VNI has the same default gateway regardless of which switch it is connected to. This feature enables moving virtual machines between switches. In order to support multi-tenancy, Layer 3 VNIs are attached to VRFs, much as in MPLS L3 VPN.
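A minimal NX-OS sketch of the Anycast Gateway and L3VNI idea; the VRF name, VNI, VLAN and addresses are assumptions:

feature interface-vlan
feature fabric forwarding

! virtual gateway MAC shared by every switch in the fabric
fabric forwarding anycast-gateway-mac 0000.2222.3333

! tenant VRF with an L3VNI used for routing between segments
vrf context TENANT-A
  vni 9000

! anycast gateway SVI: the same virtual IP on every switch
interface vlan1000
  vrf member TENANT-A
  ip address 192.168.10.1/24
  fabric forwarding mode anycast-gateway

! attach the L3VNI to the VTEP
interface nve1
  member vni 9000 associate-vrf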

 

 

CONFIGURATION on CSR

First we are going to configure the VxLAN feature on CSR routers in GNS3; next I will show how it looks on Nexus switches.

MULTICAST CORE ROUTER

Plain PIM Sparse Mode builds unidirectional trees, while bidirectional PIM allows sending traffic back and forth; the rest is the same as in Sparse Mode. We also configure OSPF in the Core.

interface e1/1
no shutdown
ip address 80.80.80.2 255.255.255.0
ip pim sparse-mode
ip ospf 1 area 0

interface e1/0
no shutdown
ip address 70.70.70.2 255.255.255.0
ip pim sparse-mode
ip ospf 1 area 0

interface lo0
ip address 100.100.100.100 255.255.255.255
ip pim sparse-mode
ip ospf 1 area 0

! enable multicast routing and bidirectional PIM;
! the core loopback serves as the bidir rendezvous point
ip multicast-routing
ip pim bidir-enable
ip pim rp-address 100.100.100.100 bidir
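To make sure the RP mapping and multicast state look right, we can use the standard IOS checks, for example:

show ip pim rp mapping
show ip mroute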

 

CSR 1

First we start with the configuration of the interfaces, OSPF and bidirectional multicast.

interface gigabitethernet 1
ip address 70.70.70.1 255.255.255.0
no shut
ip ospf 1 area 0
ip pim sparse-mode

int lo0
ip address 1.1.1.1 255.255.255.255
ip pim sparse-mode
ip ospf 1 area 0

int lo1
ip address 10.10.10.10 255.255.255.255
ip pim sparse-mode
ip ospf 1 area 0

ip pim bidir-enable
ip pim rp-address 100.100.100.100 bidir

ip multicast-routing distributed

Under the inside interface GigabitEthernet 2 we configure service instances for each VLAN that we are going to carry over VxLAN. A service instance is a logical interface that connects a bridge domain to a physical interface.

interface gigabitethernet 2
 no shutdown
 service instance 10 ethernet
  encapsulation dot1q 10
 service instance 20 ethernet
  encapsulation dot1q 20

We create the NVE interfaces (VTEPs), a separate NVE for each VNI (each VxLAN). Each NVE also has its own IP address “borrowed” from the loopback interfaces. Each VNI is mapped to a different multicast group.

interface nve 1
member vni 5000 mcast-group 225.1.1.1
source-interface lo0
no shut

interface nve 2
member vni 5001 mcast-group 225.1.1.2
source-interface lo1
no shut

At the end we configure the bridge domains. We configure membership by pointing out the appropriate VxLANs and the logical interfaces (service instances) under the specified interfaces.

bridge-domain 10
member vni 5000
member gigabitethernet 2 service-instance 10

bridge-domain 20
member vni 5001
member gigabitethernet 2 service-instance 20

 

CSR 2

interface gigabitethernet 1
ip address 80.80.80.1 255.255.255.0
no shut
ip ospf 1 area 0
ip pim sparse-mode

int lo0
ip address 2.2.2.2 255.255.255.255
ip pim sparse-mode
ip ospf 1 area 0

int lo1
ip address 20.20.20.20 255.255.255.255
ip pim sparse-mode
ip ospf 1 area 0

ip pim bidir-enable
ip pim rp-address 100.100.100.100 bidir

ip multicast-routing distributed

interface gigabitethernet 2
 no shutdown
 service instance 10 ethernet
  encapsulation dot1q 10
 service instance 20 ethernet
  encapsulation dot1q 20

interface nve 1
member vni 5000 mcast-group 225.1.1.1
source-interface lo0
no shut

interface nve 2
member vni 5001 mcast-group 225.1.1.2
source-interface lo1
no shut

bridge-domain 10
member vni 5000
member gigabitethernet 2 service-instance 10

bridge-domain 20
member vni 5001
member gigabitethernet 2 service-instance 20

 

Now, let’s do some verification from the CSR1 point of view.

First let’s see the bridge domains, next the configuration of the NVE (VTEP) interfaces, and then the NVE (VTEP) to VNI (VxLAN) mappings. At the end we can look at the peering between VTEPs, but this only shows up once traffic has already gone through the tunnel (triggered, for example, by a ping).
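For reference, those checks correspond to the following IOS XE show commands (the exact output varies by release):

show bridge-domain
show nve interface
show nve vni
show nve peers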

 

 

CONFIGURATION on NEXUS platform (single Data center)

I am going to configure VxLAN on NX-OS using Data Plane learning (Flood and Learn). The 2 switches are linked directly (the underlay IP network).

We implement VxLANs in a few stages:

1st Enabling features
2nd Setting MTU
3rd Mapping VLAN to VNI
4th Routed link between Nexuses
5th Server ports on the switch
6th Loopback interfaces
7th Rendezvous points
8th VTEP/NVE interface

1st Let’s enable the necessary features and the OSPF process:
feature ospf – the underlay protocol
feature pim – multicast (BUM traffic)
feature nv overlay – enables VxLAN (nv stands for network virtualization)
feature vn-segment-lan-based – enables tagging frames with the VxLAN header

router ospf 10

2nd We have to set up jumbo frames by increasing the MTU
system jumbomtu 9216 

3rd Mapping VLAN to VNI. We define VLAN 1000 for the hosts and map it to VNI 5000.
vlan 1000 
vn-segment 5000 

 

Only on NX-OS releases before 7.3, we may also need to run the command below, which changes the routing template. It requires a reboot.

system routing template-vxlan-scale  

4th Configuration of the link between switches on the Outside port
interface eth1/49
no switchport
ip address 10.10.10.1/30
ip router ospf 10 area 0
ip pim sparse-mode
no shutdown

5th Configuration of the port towards the server
interface ethernet 1/1
switchport
switchport access vlan 1000
no shutdown

6th We configure loopback interfaces for 2 reasons:

as the address of the VTEP interface, different on each switch
interface loopback 0
ip address 1.1.1.1/32
ip router ospf 10 area 0
ip pim sparse-mode

and as rendezvous points in the multicast topology
interface loopback 1
ip address 3.3.3.3/32
ip router ospf 10 area 0
ip pim sparse-mode

7th And we configure IP multicast
ip pim rp-address 3.3.3.3 group-list 224.0.0.0/4
ip pim anycast-rp 3.3.3.3 1.1.1.1
ip pim anycast-rp 3.3.3.3 2.2.2.2

Let’s stop for a while. This is something different compared with the CSR. We have the same address 3.3.3.3 on loopback interfaces of both switches, configured as the Rendezvous Point. This is possible when we use “anycast-rp”. We state that we have redundant RPs and point out how to reach them (the next hop is a VTEP loopback, our own and the neighbor’s). Instead of having the RP somewhere in the Core, we now have the RP configured on the local switch and on the remote switch.

8th At the end we configure the NVE (VTEP) interface with the source IP address of loopback 0. We also set up membership of VxLAN (VNI) 5000 and point out which multicast group this VxLAN will use.

interface nve1
no shutdown
source-interface lo0
member vni 5000 mcast-group 230.1.1.1
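Afterwards the setup can be verified much like on the CSR, for example with:

show nve interface
show nve vni
show nve peers
show mac address-table vlan 1000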
