Over time, the old-fashioned Data Center model became inefficient, especially as the dominant traffic pattern shifted from North-South (client-server) to East-West (server-server) and demanding applications became “hungry” for more than the Gigabit links bundled via Port Channel. Better redundancy with non-blocking links also became desirable. All of these problems are solved by Application Centric Infrastructure (ACI) and the new Spine-Leaf architecture.
ACI was invented to provide high performance and low latency for East-West traffic, that is, the traffic between servers. Owing to virtualization, the needs in the Data Center have changed, and the model of the modern Data Center has changed with them. Nowadays roughly 80% of Data Center traffic flows between servers, and only about 20% is North-South traffic between client and server. This is the reverse of the situation 10-15 years ago, when the mandatory architecture was Core-Aggregation-Access.
The Spine-Leaf architecture provides only two layers: Spine and Leaf. Between the Spine and Leaf switches we deploy 40G or 100G links, and between the Leaf switches and the servers we have 10G links. The Leaf layer also works as a “gateway” to the outside world, so, as we see, this is completely different from the Core-Aggregation-Access architecture. The Leaf switches that provide this functionality are called Border Leafs. The main advantage of the Spine-Leaf architecture is that there is no need to block any links, even between Leafs and servers, because Spanning Tree doesn't take place here. We may freely use Virtual Port Channel (vPC) on the Leaf switches, thanks to which two upstream switches are visible to the downstream server as a single chassis. As you can see in the picture, there are always only two hops between any two servers. The Spine-Leaf topology is also very scalable: we just add another Leaf or Spine switch to the topology.
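The two properties above (a constant two-hop distance and no blocked links) can be sketched in a few lines of Python. The fabric below is hypothetical (2 spines, 4 leaves, switch names invented for illustration); every leaf connects to every spine, so each leaf-to-leaf path is leaf → spine → leaf, and the number of equal-cost paths equals the number of spines:

```python
from itertools import combinations

# Hypothetical fabric: 2 spines, 4 leaves, every leaf connected to every spine.
spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]

def paths(src_leaf, dst_leaf):
    # In a spine-leaf fabric, any two distinct leaves are reachable
    # through each spine: leaf -> spine -> leaf.
    return [(src_leaf, spine, dst_leaf) for spine in spines]

for src, dst in combinations(leaves, 2):
    for path in paths(src, dst):
        # Two links traversed between any pair of leaves, no matter which pair.
        assert len(path) - 1 == 2
    # ECMP: all of these paths are usable at once; none is blocked by STP.
    assert len(paths(src, dst)) == len(spines)

print("all leaf pairs: 2 hops,", len(spines), "equal-cost paths")
```

Adding a spine to the list adds one more equal-cost path between every pair of leaves without changing the hop count, which is exactly why scaling the fabric is just “add another switch”.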
Application Centric Infrastructure Fabric
All these advantages of the Spine-Leaf topology are achieved thanks to the ACI Fabric. Within the ACI Fabric we run a layer 3 underlay protocol, IS-IS or OSPF, between Spine and Leaf, and we may deploy VXLAN as an overlay protocol that provides layer 2 functionality across the entire ACI fabric and more. We also run MP-BGP (the l2vpn evpn address family) as a control plane protocol to share information about the MAC addresses of particular switches, servers and hosts. VXLAN gives us the possibility of moving any virtual machine between Leaf switches while retaining its IP address. So, amazingly, we have full layer 2 functionality, and even more, plus ECMP at layer 3. The entire Fabric can be managed as a whole with the Fabric Controller, so we don't have to manage each device individually.
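The overlay described above works by wrapping the original Ethernet frame in UDP with an 8-byte VXLAN header (RFC 7348); the 24-bit VXLAN Network Identifier (VNI) in that header is what lets a layer 2 segment stretch across the routed fabric. A minimal Python sketch of just the header (the VNI value 10100 is an arbitrary example):

```python
import struct

# IANA-assigned UDP destination port for VXLAN (RFC 7348).
VXLAN_UDP_PORT = 4789

def vxlan_header(vni: int) -> bytes:
    # Flags byte: the I bit (0x08) marks the VNI field as valid;
    # all reserved fields are zero.
    flags = 0x08
    # Layout: flags(8) + reserved(24) | VNI(24) + reserved(8) = 8 bytes total.
    return struct.pack("!II", flags << 24, vni << 8)

hdr = vxlan_header(10100)
assert len(hdr) == 8
# Recover the 24-bit VNI from the second 32-bit word.
assert struct.unpack("!I", hdr[4:])[0] >> 8 == 10100
```

Because the VNI is 24 bits, the overlay can carry about 16 million segments, far beyond the 4096-VLAN limit of classic layer 2, which is part of the “and even more” mentioned above.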