Over time the traditional data center model became inefficient: traffic patterns shifted from north-south (client to server) to east-west (server to server), and demanding applications outgrew Gigabit links bundled into port channels. Better redundancy with non-blocking links also became desirable. These problems were solved by the new Spine-Leaf architecture, with Application Centric Infrastructure (ACI) easing management on top of it.
The Spine-Leaf architecture was invented to provide high performance and low latency for east-west traffic, that is, traffic between servers. Virtualization changed the needs of the data center, and the model of the modern data center changed with it. Nowadays roughly 80% of data center traffic flows between servers, and only about 20% is north-south traffic between clients and servers. That is the reverse of 10-15 years ago, when the three-tier Core-Aggregation-Access architecture was the norm.
The Spine-Leaf architecture has only two layers: spine and leaf. Links between spine and leaf switches are typically 40G or 100G, and links between leaf switches and servers are 10G. The leaf layer also acts as the gateway to the outside world, which is quite different from the Core-Aggregation-Access design; the leaf switches that provide this functionality are called border leafs.

The main advantage of the Spine-Leaf architecture is that no links need to be blocked, not even between leafs and servers, because Spanning Tree is not used. We can freely use Virtual Port Channel (vPC) on the leaf switches, which makes two upstream switches appear to a downstream server as a single chassis. As you can see in the picture, there are always exactly two hops between any two servers. The Spine-Leaf topology is also very scalable: to grow it, we simply add another leaf or spine.

In this topology we run a Layer 3 underlay protocol, IS-IS or OSPF, between spines and leafs, and we can deploy VXLAN as an overlay protocol that provides Layer 2 functionality across the entire fabric, including other sites. MP-BGP with the L2VPN EVPN address family serves as the control-plane protocol, distributing information about the MAC addresses of the switches and hosts. VXLAN lets us move any virtual machine between leaf switches while it keeps its IP address, so we get full Layer 2 functionality, and even more, plus ECMP at Layer 3. The entire fabric can be managed as a whole with the ACI controller, so we do not have to manage each device individually.
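The underlay/overlay combination described above can be sketched as a minimal NX-OS leaf configuration (standalone mode, not ACI). The interface numbers, IP addresses, AS number, and VNI values below are made-up examples, not taken from any real fabric:

```
! Enable the required features
feature ospf
feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn

! Layer 3 underlay: OSPF between leaf and spine
router ospf UNDERLAY
interface loopback0
  ip address 10.0.0.11/32              ! VTEP source address (example)
  ip router ospf UNDERLAY area 0.0.0.0
interface Ethernet1/49
  no switchport
  ip address 172.16.1.1/31             ! point-to-point link to a spine (example)
  ip router ospf UNDERLAY area 0.0.0.0

! Map a server-facing VLAN to a VXLAN VNI
vlan 100
  vn-segment 10100

! VTEP interface: host reachability learned via BGP EVPN
interface nve1
  no shutdown
  source-interface loopback0
  host-reachability protocol bgp
  member vni 10100
    ingress-replication protocol bgp

! MP-BGP control plane with the L2VPN EVPN address family
router bgp 65000
  neighbor 10.0.0.1 remote-as 65000    ! spine as iBGP peer (example address)
    address-family l2vpn evpn
      send-community extended
evpn
  vni 10100 l2
    rd auto
    route-target import auto
    route-target export auto
```

With this in place, MAC and host routes for VNI 10100 are advertised between leafs over the OSPF-routed underlay, which is what allows a VM to move between leafs without changing its IP address.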
Application Centric Infrastructure
ACI is a good example of SDN (Software Defined Networking). The idea behind ACI is to simplify and unify infrastructure management. ACI lets us manage both virtual and physical devices. It is considered the most open standard Cisco has released so far, because with ACI we can manage not only Cisco gear but also Citrix, F5, Palo Alto, Fortinet and Check Point devices, and the list keeps growing. ACI is also very secure (zero-trust model): devices cannot talk to each other unless we allow it via policy. With ACI we create a completely stateless network, and we define in a profile how an application should behave on the network (security, QoS, Layer 4-7 services). ACI consists of three parts: the hardware, Nexus 9000 series switches (which can also run in NX-OS standalone mode and be brought into ACI later); the software controller (APIC) with a centralized policy model used for fabric management; and the policy model itself, which is used to manage everything.
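To give a feel for the policy model, the fabric is configured by POSTing policy objects to the APIC REST API rather than by per-device CLI. A minimal sketch follows, creating a tenant with an application profile and an endpoint group (EPG); the tenant, profile, and EPG names are hypothetical:

```
POST https://<apic>/api/mo/uni.json

{
  "fvTenant": {
    "attributes": { "name": "Demo" },
    "children": [
      {
        "fvAp": {
          "attributes": { "name": "WebApp" },
          "children": [
            {
              "fvAEPg": {
                "attributes": { "name": "web-servers" }
              }
            }
          ]
        }
      }
    ]
  }
}
```

Endpoints placed in this EPG cannot talk to any other EPG until a contract explicitly permits it, which is the zero-trust behavior described above.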