The Leaf-and-Spine Fabric Architectures webinar describes leaf-and-spine (Clos) fabric concepts, architecture, and single- and multi-stage designs that can be used to build large layer-2 or layer-3 all-point-equidistant data center networks.
Traditional data center networks used a 3-tier design that was
mostly mandated by hardware limitations, resulting in unequal
bandwidth between endpoints based on their locations. In the last
few years, the networking industry rediscovered the work of Charles
Clos (from 1953), and everyone started promoting leaf-and-spine fabrics.
After mastering the basic principles of leaf-and-spine fabrics
described in the Introduction section we're moving on to the
physical design: how do you build a leaf-and-spine fabric given the
number of edge ports and the desired oversubscription ratio? What if
you need fewer than 100 ports? What if you need 50,000 ports? What do
you do if you have to support low-speed edge interfaces?
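The port-count and oversubscription questions above come down to simple arithmetic. The following sketch shows the math for a two-tier fabric built from identical fixed-configuration switches; the concrete port counts are illustrative assumptions, not recommendations from the webinar:

```python
# Back-of-the-envelope sizing of a two-tier leaf-and-spine fabric,
# assuming one uplink from every leaf to every spine and equal
# port speeds on uplinks and edge ports.

def size_fabric(leaf_ports, spine_ports, uplinks_per_leaf):
    """Return (spines, max_leaves, edge_ports, oversubscription)."""
    edge_per_leaf = leaf_ports - uplinks_per_leaf
    spines = uplinks_per_leaf          # one uplink to each spine
    max_leaves = spine_ports           # each spine port serves one leaf
    edge_ports = max_leaves * edge_per_leaf
    oversub = edge_per_leaf / uplinks_per_leaf
    return spines, max_leaves, edge_ports, oversub

# Example: 48-port leaves with 12 uplinks, 32-port spines
# -> 12 spines, 32 leaves, 1152 edge ports at 3:1 oversubscription
print(size_fabric(48, 32, 12))
```

Scaling beyond what the spine port count allows is where the multi-stage designs covered later in the webinar come in.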
Layer-3 Fabrics with Non-Redundant Server Connectivity
We're starting the design part of the webinar with the simplest
possible scenario – each leaf switch is a single IP subnet
– and focus on routing protocol selection, route summarization,
leaf-to-spine link aggregation, and core link addressing.
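Core link addressing in particular lends itself to automation: every leaf-to-spine link needs its own point-to-point subnet, and /31s are commonly used to conserve address space. A minimal sketch, assuming a made-up infrastructure prefix and switch counts:

```python
# Carve /31 point-to-point subnets for every leaf-to-spine link
# out of a single infrastructure prefix (example values only).
import ipaddress

def core_links(prefix, leaves, spines):
    """Yield (leaf_id, spine_id, leaf_ip, spine_ip), one /31 per link."""
    subnets = ipaddress.ip_network(prefix).subnets(new_prefix=31)
    for leaf in range(leaves):
        for spine in range(spines):
            net = next(subnets)
            yield leaf, spine, net[0], net[1]

for link in core_links("10.0.0.0/24", leaves=4, spines=2):
    print(link)
```

Unnumbered interfaces are an alternative that avoids burning infrastructure prefixes altogether; both options are discussed in this section.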
Based on the work done by Petr Lapukhov at Microsoft, every vendor
talks about using BGP as the routing protocol in leaf-and-spine
fabrics. Does it make sense? You'll find some of the answers in
this section presented by Dinesh Dutt (Cumulus Networks).
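The design Petr Lapukhov documented (later published as RFC 7938) runs eBGP on every leaf-to-spine link, typically with a 4-byte private ASN per leaf and a shared ASN for the spine layer. The sketch below enumerates the resulting sessions; all concrete ASN values and device names are illustrative assumptions:

```python
# Sketch of an RFC 7938-style eBGP numbering scheme: one private
# 4-byte ASN per leaf, one shared ASN for all spines.

SPINE_ASN = 4200000000        # shared by the spine layer (example value)
LEAF_ASN_BASE = 4200000100    # leaf N gets LEAF_ASN_BASE + N (example)

def bgp_sessions(leaves, spines):
    """Enumerate the eBGP sessions of a full leaf-to-spine mesh."""
    sessions = []
    for leaf in range(leaves):
        for spine in range(spines):
            sessions.append({
                "leaf": f"leaf{leaf}", "leaf_asn": LEAF_ASN_BASE + leaf,
                "spine": f"spine{spine}", "spine_asn": SPINE_ASN,
            })
    return sessions

print(len(bgp_sessions(4, 2)))   # 8 sessions in a 4-leaf/2-spine fabric
```

Because every leaf uses a different ASN, the standard AS-path loop-prevention mechanism keeps valley-free forwarding without any extra configuration.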
We're leaving the stable world of L3-only fabrics and entering
the realm of large VLANs that most enterprise data centers have
to deal with. We'll cover numerous design scenarios, from
traditional bridging to routing on layer 2 and MAC-over-IP encapsulation.
Avaya is one of the few data center switching vendors that still
uses routing on layer 2 (SPB) technology instead of VXLAN
encapsulation. In this guest presentation Roger Lapuh (Avaya)
explains how SPB works and how you can use it to build
layer-2 or layer-2+3 data center fabrics.
Most data center fabrics have to combine elements of large VLANs
and routing. In this section we'll explore the various combinations,
from traditional routing on spine switches to anycast routing
on leaf switches.
Major data center switching vendors use VXLAN to build large
layer-2 domains across IP fabrics, and the EVPN control plane to
build flooding trees and exchange MAC address reachability information.
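At the transport level, VXLAN (RFC 7348) is just an 8-byte header carried over UDP, with a 24-bit VNI identifying the virtual layer-2 segment. A minimal encoding sketch:

```python
# Encode the 8-byte VXLAN header defined in RFC 7348:
# one flags byte (I bit set), three reserved bytes,
# a 24-bit VNI, and a final reserved byte.
import struct

VXLAN_FLAGS = 0x08          # "I" bit: the VNI field is valid

def vxlan_header(vni):
    """Return the 8-byte VXLAN header for a 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!I", VXLAN_FLAGS << 24) + struct.pack("!I", vni << 8)

hdr = vxlan_header(10100)
print(hdr.hex())            # 0800000000277400
```

The 24-bit VNI is what lifts the segment count far beyond the 4096-VLAN limit of classic 802.1Q bridging.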
In this section Lukas Krattiger (guest speaker from Cisco Systems)
explains how VXLAN transport and the EVPN control plane work on Nexus switches.
Multi-Site and Multi-Pod Fabrics Design Guidelines
Should you stretch a single fabric across multiple sites? Does it
make sense to split a large fabric into smaller fabrics (pods)? What
could you do to improve the scalability of VXLAN-based EVPN fabrics?
This section contains the design guidelines and technology details
you need to answer these questions.