Leaf-and-Spine Fabric Architectures
Last modified on 2024-01-07
1:24:47 Introduction
Traditional data center networks used a three-tier design that was mostly mandated by hardware limitations, resulting in unequal bandwidth between endpoints depending on where they were attached. In the last few years the networking industry rediscovered the work of Charles Clos (from 1953), and everyone started promoting leaf-and-spine fabrics.
Challenges of Traditional Data Center Networks | 16:04 | 2017-11-05
Clos Networks and Leaf-and-Spine Fabrics | 16:39 | 2017-11-05
Additional Resources
Slide deck | 1.3M | 2017-11-04
52:04 From the ipSpace.net Design Clinic
Leaf-and-Spine Fabrics Outside of Data Centers | 22:59 | 2021-12-27
Integrating Storage in Leaf-and-Spine Fabrics | 22:32 | 2022-04-02
Migrating to a Leaf-and-Spine Fabric | 6:33 | 2022-04-02
Further Reading
Demystifying DCN Topologies: Clos Networks
Demystifying DCN Topologies: Fat trees and leaf-and-spine fabrics
45:54 Physical Fabric Design
After mastering the basic principles of leaf-and-spine fabrics described in the Introduction section, we move on to the physical design: how do you build a leaf-and-spine fabric given the required number of edge ports and the target oversubscription ratio? What if you need fewer than 100 ports? What if you need 50,000? And what do you do when you have to support low-speed edge interfaces?
The examples in the following videos use switches with 10GE/40GE ports. Don't let that bother you; the same considerations and design calculations apply to fabrics built with higher-speed switches, for example switches with 100GE server ports and 400GE uplinks.
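To make those design calculations concrete, here is a minimal sizing sketch in Python. The switch models are assumptions (48×10GE + 6×40GE leaves, 32×40GE spines), not values taken from the videos:

```python
# A minimal two-tier fabric sizing sketch. Port counts are hypothetical.
def fabric_size(edge_ports_needed, leaf_ports=48, leaf_uplinks=6, spine_ports=32):
    # Oversubscription ratio: edge bandwidth vs. uplink bandwidth per leaf
    oversub = (leaf_ports * 10) / (leaf_uplinks * 40)
    leaves = -(-edge_ports_needed // leaf_ports)  # ceiling division
    spines = leaf_uplinks        # one uplink from every leaf to every spine
    max_leaves = spine_ports     # every spine needs one port per leaf
    assert leaves <= max_leaves, "need bigger spines or a three-tier fabric"
    return leaves, spines, oversub

print(fabric_size(1000))  # -> (21, 6, 2.0)
```

With 1,000 edge ports and those (assumed) port counts you end up with 21 leaves, 6 spines, and a 2:1 oversubscription ratio.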
Physical Leaf-and-Spine Fabric Design | 14:55 | 2017-11-05
Small Fabrics and Lower-Speed Interfaces | 9:07 | 2017-11-05
Building Very Large Fabrics | 21:52 | 2017-11-05
Slide Deck: Physical Leaf-and-Spine Fabric Design | 1.7M | 2017-11-04
2:39:15 Layer-3 Fabrics with Non-Redundant Server Connectivity
We start the design part of the webinar with the simplest possible scenario – each leaf switch is a single IP subnet – and focus on routing protocol selection, route summarization, leaf-to-spine link aggregation, and core link addressing. The second half of this section provides detailed guidelines on using BGP and OSPF as the underlay routing protocol in a leaf-and-spine fabric.
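As a preview of the core link addressing discussion: fabric links are usually numbered out of an infrastructure prefix, often with one /31 per leaf-to-spine link. A small Python sketch (the prefix and the 4-leaf/2-spine fabric size are hypothetical):

```python
import ipaddress

# Carve one /31 per leaf-to-spine link out of an infrastructure prefix.
core = ipaddress.ip_network("10.1.0.0/24")
link_blocks = core.subnets(new_prefix=31)
for leaf in range(1, 5):
    for spine in range(1, 3):
        link = next(link_blocks)
        leaf_ip, spine_ip = link.hosts()  # a /31 has two usable addresses
        print(f"leaf{leaf}-spine{spine}: {leaf_ip}/31 <-> {spine_ip}/31")
```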
44:27 Overview and Design Principles
Introduction to Leaf-and-Spine Designs | 4:39 | 2017-02-10
Layer-3 Fabric with Non-Redundant Server Connectivity | 6:52 | 2017-02-10
Routing Protocol Selection | 19:50 | 2017-02-10
Route Summarization and Link Aggregation | 6:40 | 2017-02-10
Core Link Addressing | 6:26 | 2017-02-10
Slide deck | 1.4M | 2016-03-25
1:04:14 Using BGP in Leaf-and-Spine Fabrics
Following the work done by Petr Lapukhov at Microsoft, every vendor now talks about using BGP as the routing protocol in leaf-and-spine fabrics. Does that make sense? You'll find some of the answers in this section, presented by Dinesh Dutt (Cumulus Networks).
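If you want to experiment along with the videos, here is a minimal config-generation sketch of the design usually associated with this work: one private ASN per leaf and an eBGP session to every spine, using FRRouting's BGP-unnumbered interface sessions. The ASN scheme, router IDs, and interface names are assumptions:

```python
# Generate FRRouting BGP configuration for one leaf switch.
# Hypothetical conventions: spines on swp51-swp54, private ASN
# 65000+leaf_id per leaf, loopback 10.0.0.<leaf_id>.

def leaf_bgp_config(leaf_id, uplinks=("swp51", "swp52", "swp53", "swp54")):
    lines = [f"router bgp {65000 + leaf_id}",
             f" bgp router-id 10.0.0.{leaf_id}"]
    for intf in uplinks:
        # BGP unnumbered: eBGP session over the interface's IPv6
        # link-local address; the peer ASN is discovered automatically
        lines.append(f" neighbor {intf} interface remote-as external")
    lines += [" address-family ipv4 unicast",
              "  redistribute connected",  # advertise loopback and server subnets
              " exit-address-family"]
    return "\n".join(lines)

print(leaf_bgp_config(1))
```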
Using BGP in Leaf-and-Spine Fabrics | 10:19 | 2017-02-10
Simplifying BGP Configurations | 19:30 | 2017-02-10
Troubleshooting and Managing BGP | 8:19 | 2017-02-10
BGP in Data Centers - Sample Deployments | 3:15 | 2017-02-10
BGP in 3-tier Clos | 8:45 | 2021-04-03
BGP QA | 14:06 | 2021-04-03
Slide Deck: Operationalizing BGP in the Data Center | 1.9M | 2016-03-04
Slide Deck: BGP in 3-Tier Clos Topology | 890K | 2021-03-13
50:34 Using OSPF in Leaf-and-Spine Fabrics
While everyone talks about using BGP or yet-to-be-implemented routing protocols in leaf-and-spine fabrics, OSPFv2 works just fine in some of the largest fabrics in the world. This section describes the design guidelines you should follow when deploying OSPF as the fabric IGP.
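As a companion to those guidelines, here is a minimal per-switch OSPF sketch in FRRouting syntax, assuming a single-area fabric with point-to-point links (router ID and interface names are hypothetical):

```python
# Generate minimal FRRouting OSPF configuration for a fabric switch.
# Point-to-point network type skips DR/BDR election and speeds up
# adjacency formation on leaf-spine links; everything lives in area 0.

def ospf_config(router_id, fabric_links=("swp51", "swp52")):
    lines = []
    for intf in fabric_links:
        lines += [f"interface {intf}",
                  " ip ospf area 0",
                  " ip ospf network point-to-point"]
    lines += ["router ospf",
              f" ospf router-id {router_id}",
              " passive-interface default"]  # only fabric links form adjacencies
    for intf in fabric_links:
        lines.append(f" no passive-interface {intf}")
    return "\n".join(lines)

print(ospf_config("10.0.0.1"))
```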
Why Would You Use OSPF | 16:01 | 2021-04-03
OSPF Design | 20:02 | 2021-04-03
Configuration Snippets | 10:04 | 2021-04-03
OSPF Footnotes | 4:27 | 2021-04-03
Slide Deck: OSPF in Clos Topology | 1.2M | 2021-03-13
1:06:44 Layer-3 Fabrics with Redundant Server Connectivity
After establishing the baseline in the Layer-3 Fabrics with Non-Redundant Server Connectivity section, we'll add complexity at the fabric edge: redundantly connected servers.
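For reference, the simplest redundant attachment covered in these videos, an active-standby server bond that needs no switch-side link aggregation, might look like this on a Linux server (interface names and addresses are hypothetical):

```python
# Active-standby (active-backup) bonding sketch: only one uplink is
# active at a time, so the leaf switches need no MLAG or LAG support.
# Interface names and the server address are hypothetical.

BOND_SETUP = [
    "ip link add bond0 type bond mode active-backup miimon 100",
    "ip link set eth0 down", "ip link set eth0 master bond0",
    "ip link set eth1 down", "ip link set eth1 master bond0",
    "ip addr add 192.0.2.11/24 dev bond0",
    "ip link set bond0 up",
]
print("\n".join(BOND_SETUP))
```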
Layer-3 Fabrics with Redundant Server Connectivity | 18:54 | 2016-12-13
Link Aggregation between Servers and Network | 5:26 | 2016-12-13
Active-Standby Server Connectivity | 8:18 | 2016-12-13
Slide deck | 1.7M | 2018-01-17
34:06 From the ipSpace.net Design Clinic
Multi-Homed Servers | 34:06 | 2022-01-18
31:15 Layer-3-Only Data Centers
Is it possible to build a pure layer-3 data center fabric that supports redundant server connectivity and IP address mobility? You'll find out in this section.
6:53 Design Guidelines
Host Routing | 6:53 | 2016-12-13
Slide deck | 1.3M | 2016-03-25
24:22 Building a Pure L3 Data Center with Cumulus Linux
Building a Pure L3 Data Center with Cumulus Linux | 24:22 | 2016-12-13
Slide deck | 949K | 2016-03-29
36:40 Routing on Servers
Another approach to building a pure layer-3 fabric is to extend the fabric routing protocol into the servers and announce the servers' loopback IP addresses using BGP.
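A hypothetical FRRouting configuration for a dual-homed server implementing this idea is sketched below: the server announces its loopback (service) address over BGP-unnumbered sessions to both leaf switches. The ASN, router ID, and interface names are assumptions:

```python
# FRR configuration (as a literal string) for a server announcing its
# loopback address to both upstream leaf switches. All values hypothetical;
# the /32 must be configured on the loopback for the network statement to fire.

HOST_FRR_CONFIG = """\
router bgp 65101
 bgp router-id 192.0.2.11
 neighbor eth0 interface remote-as external
 neighbor eth1 interface remote-as external
 address-family ipv4 unicast
  network 192.0.2.11/32
 exit-address-family
"""
print(HOST_FRR_CONFIG)
```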
Running Routing Protocols on Servers | 10:55 | 2016-12-13
Routing from Hosts - Deep Dive | 10:24 | 2016-12-13
Examples from Real World | 8:08 | 2016-12-13
7:13 From the ipSpace.net Design Clinic
VXLAN and EVPN on Linux Hosts | 7:13 | 2022-01-18
1:53:15 Layer-2 Fabrics
We're leaving the stable world of L3-only fabrics and entering the realm of large VLANs that most enterprise data centers have to deal with. We'll cover numerous design scenarios, from traditional bridging to routing on layer 2 and MAC-over-IP encapsulation.
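To make MAC-over-IP encapsulation less abstract: on Linux you can stretch a layer-2 segment across a routed fabric with a handful of VXLAN commands, sketched below. The VNI, VTEP addresses, and interface names are hypothetical, and the sketch uses static head-end replication instead of a control plane:

```python
# Sketch: stretch layer-2 segment 100 across an IP fabric with VXLAN.
# Static (head-end) replication to one remote VTEP -- no EVPN yet.
# All names and addresses are hypothetical.

VXLAN_SETUP = [
    # VXLAN device: VNI 100, sourced from the local VTEP loopback
    "ip link add vx100 type vxlan id 100 local 10.0.0.1 dstport 4789 nolearning",
    # Flood unknown/broadcast frames to the remote VTEP (static replication)
    "bridge fdb append 00:00:00:00:00:00 dev vx100 dst 10.0.0.2",
    # Bridge the VXLAN device with the server-facing port
    "ip link add br100 type bridge",
    "ip link set vx100 master br100",
    "ip link set eth1 master br100",
    "ip link set vx100 up", "ip link set br100 up",
]
print("\n".join(VXLAN_SETUP))
```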
1:00:15 Design Guidelines
Layer-2 Fabrics | 14:49 | 2017-03-22
Traditional Bridging | 10:05 | 2017-03-22
Routing on Layer-2 | 13:12 | 2017-03-22
MAC-over-IP Encapsulation | 13:25 | 2017-03-22
Redundant Server-to-Network Connectivity | 8:44 | 2017-03-22
Slide deck | 1.8M | 2016-04-01
53:00 Shortest Path Bridging in Avaya Fabric
Avaya is one of the few data center switching vendors that still uses a routing-on-layer-2 technology (SPB) instead of VXLAN encapsulation. In this guest presentation Roger Lapuh (Avaya) explains how SPB works and how you can use it to build layer-2 or layer-2+3 data center fabrics.
Introduction to SPB and Avaya Fabric Connect | 18:25 | 2017-03-22
SPB Deep Dive | 18:17 | 2017-03-22
Building Data Center Fabrics with SPB | 16:18 | 2017-03-22
Slide deck | 2.1M | 2016-04-06
1:53:31 Mixed Layer-2 + Layer-3 Fabrics
Most data center fabrics have to combine elements of large VLANs and routing. In this section we'll explore the various combinations, from traditional routing on spine switches to anycast routing on leaf switches.
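The anycast-routing-on-leaf-switches idea boils down to every leaf answering for the same first-hop gateway IP and MAC address, so a workload keeps its cached ARP entry when it moves to another leaf. A minimal Linux rendition is sketched below (the addresses and shared MAC are hypothetical; real fabrics would use a vendor's anycast gateway feature):

```python
# Anycast first-hop gateway sketch: configure the *same* gateway IP and
# virtual MAC on the VLAN-facing interface of every leaf switch.
# All values hypothetical; br100 is the bridge for the server VLAN.

ANYCAST_GW = [
    "ip link set dev br100 address 00:00:5e:00:01:64",  # shared virtual MAC
    "ip addr add 192.168.100.1/24 dev br100",           # shared gateway IP
    "ip link set br100 up",
]
print("\n".join(ANYCAST_GW))
```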
31:05 Design Guidelines
Layer-2+3 Fabrics | 6:45 | 2017-04-05
Routing on Spine Switches | 9:04 | 2017-04-05
Routing on Leaf Switches | 15:16 | 2017-04-05
Slide deck | 1.4M | 2016-04-20
1:18:52 VXLAN with BGP EVPN on Cisco Nexus OS
Major data center switching vendors use VXLAN to build large layer-2 domains across IP fabrics, and the EVPN control plane to build flooding trees and exchange MAC address reachability information. In this section Lukas Krattiger (guest speaker from Cisco Systems) explains how the VXLAN data plane and the EVPN control plane work on Nexus switches.
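For orientation while watching, the skeleton of a single-VNI VXLAN/EVPN leaf configuration on NX-OS looks roughly like the sketch below. The VLAN/VNI numbers, ASN, and neighbor address are assumptions; treat it as a sketch, not a verified configuration:

```python
# Rough NX-OS VXLAN/EVPN leaf skeleton (as a literal string).
# All numbers and addresses are hypothetical.

NXOS_EVPN_LEAF = """\
nv overlay evpn
feature bgp
feature nv overlay
feature vn-segment-vlan-based

vlan 100
  vn-segment 10100

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback0
  member vni 10100
    ingress-replication protocol bgp

router bgp 65000
  neighbor 10.0.0.201 remote-as 65000
    address-family l2vpn evpn
      send-community extended

evpn
  vni 10100 l2
    rd auto
    route-target import auto
    route-target export auto
"""
print(NXOS_EVPN_LEAF)
```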
Overlays in Data Center Fabrics | 15:07 | 2017-04-05
Overview of VXLAN with BGP EVPN | 15:59 | 2017-04-05
Introduction to BGP EVPN | 15:29 | 2017-04-05
BGP EVPN Deep Dive | 15:39 | 2017-04-05
EVPN Integrated Routing and Bridging | 16:38 | 2017-04-05
Slide deck | 12M | 2016-04-21
3:34 From the ipSpace.net Design Clinic
VXLAN and EVPN in Small Data Center | 3:34 | 2022-04-02
2:16:42 Multi-Pod and Multi-Site Fabrics
Should you stretch a single fabric across multiple sites? Does it make sense to split a large fabric into smaller fabrics (pods)? What could you do to improve the scalability of VXLAN-based EVPN fabrics? This section contains the design guidelines and technology details you need to answer these questions.
44:42 Multi-Site and Multi-Pod Fabrics Design Guidelines
What Problem Are We Trying to Solve? | 13:38 | 2019-01-28
Physical Multi-Site Topologies | 11:54 | 2019-01-28
Data-, Control- and Management-Plane Failure Domains | 17:05 | 2019-01-28
Conclusions | 2:05 | 2019-01-28
Slide Deck | 2.1M | 2018-03-29
Related Podcasts
Open-Source Hybrid Cloud Reference Architecture
Related Webinars
Data Center Interconnects | 4:31:00
Designing Active-Active and Disaster Recovery Data Centers | 3:37:00
VMware NSX, Cisco ACI or Standard-Based EVPN | 6:30:00
1:32:00 Using VXLAN and EVPN in Multi-Pod and Multi-Site Fabrics
Introduction to Multi-Pod and Multi-Site Fabrics | 6:58 | 2019-01-28
Multi-Pod Fabrics | 27:04 | 2019-01-28
Multi-Site Fabrics | 33:19 | 2019-01-28
Multi-Site Packet Forwarding | 19:10 | 2019-01-28
Conclusions | 5:29 | 2019-01-28
Slide Deck | 29M | 2018-03-27
Related Webinars
EVPN Technical Deep Dive | 11:49:00
VXLAN Technical Deep Dive | 3:42:00