Leaf-and-Spine Fabric Architectures


39:00 Introduction

Traditional data center networks used a 3-tier design that was mostly mandated by hardware limitations, resulting in unequal bandwidth between endpoints depending on their location. In the last few years, the networking industry rediscovered the work of Charles Clos (published in 1953), and everyone started promoting leaf-and-spine fabrics.
Challenges of Traditional Data Center Networks 18:55 2017-06-14
Clos Networks and Leaf-and-Spine Fabrics 20:05 2017-06-14

1:16:06 Physical Fabric Design

After mastering the basic principles of leaf-and-spine fabrics described in the Introduction section, we move on to the physical design: how do you build a leaf-and-spine fabric given the number of edge ports and the desired oversubscription ratio? What if you need fewer than 100 ports? What if you need 50,000 ports? What do you do if you have to support low-speed edge interfaces?
Physical Leaf-and-Spine Fabric Design 18:39 2017-06-14
Small Fabrics and Lower-Speed Interfaces 11:37 2017-06-14
Building Very Large Fabrics 29:03 2017-06-14
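As a back-of-the-envelope illustration of the sizing math covered in these videos (the port counts and speeds below are made-up examples, not vendor data), a two-stage fabric built from leaf switches with 48 10GE edge ports and 6 40GE uplinks, combined with 32-port spine switches, works out as follows:

```python
# Two-stage leaf-and-spine sizing sketch; all numbers are illustrative.
def fabric_size(leaf_down, down_gbps, leaf_up, up_gbps, spine_ports):
    """Return (total edge ports, oversubscription ratio, spine count)
    for a two-stage fabric where every leaf connects to every spine."""
    oversub = (leaf_down * down_gbps) / (leaf_up * up_gbps)
    spines = leaf_up               # one uplink per spine switch
    leaves = spine_ports           # each spine port serves one leaf
    return leaves * leaf_down, oversub, spines

edge_ports, oversub, spines = fabric_size(48, 10, 6, 40, 32)
print(edge_ports, oversub, spines)   # 1536 edge ports, 2.0:1 oversubscription, 6 spines
```

Larger fabrics require either bigger spine switches or a third stage, which is exactly the trade-off the Building Very Large Fabrics video explores.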

16:47 Implementation Examples

Leaf-and-Spine Clos Fabric with Dell Force10 switches 4:17 2012-11-12
Large Leaf-and-Spine Clos Fabric Using 10GE Links 5:14 2013-02-19
Multi-Stage Clos Fabrics With Dell Force10 Switches 7:16 2013-02-19

Additional resources

Slide deck 2.1M 2017-06-10

44:27 Layer-3 Fabrics with Non-Redundant Server Connectivity

We're starting the design part of the webinar with the simplest possible scenario – each leaf switch advertises a single IP subnet – and focusing on routing protocol selection, route summarization, leaf-to-spine link aggregation, and core link addressing.

44:27 Overview and Design Principles

Introduction to Leaf-and-Spine Designs 4:39 2017-02-09
Layer-3 Fabric with Non-Redundant Server Connectivity 6:52 2017-02-09
Routing Protocol Selection 19:50 2017-02-09
Route Summarization and Link Aggregation 6:40 2017-02-09
Core Link Addressing 6:26 2017-02-09
Slide deck 1.4M 2016-03-25
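To make the core link addressing topic concrete, here's a minimal sketch (the infrastructure prefix and the 4-leaf / 2-spine fabric size are made-up examples) of carving /31 point-to-point subnets (RFC 3021) for the leaf-to-spine links out of a single prefix:

```python
import ipaddress

# Carve /31 point-to-point subnets out of one infrastructure prefix
# and assign one /31 per (leaf, spine) core link. Illustrative values.
core = ipaddress.ip_network("10.255.0.0/24")
links = list(core.subnets(new_prefix=31))    # 128 /31 links available

addressing_plan = {}
link = 0
for leaf in range(4):                        # 4 leaves (assumption)
    for spine in range(2):                   # 2 spines (assumption)
        leaf_ip, spine_ip = links[link]      # the two addresses of a /31
        addressing_plan[(leaf, spine)] = (str(leaf_ip), str(spine_ip))
        link += 1

print(addressing_plan[(0, 0)])   # ('10.255.0.0', '10.255.0.1')
```

Using /31s instead of /30s halves the core address consumption, which matters once the number of leaf-to-spine links grows into the hundreds.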

41:23 Using BGP in Leaf-and-Spine Fabrics

Following the work done by Petr Lapukhov at Microsoft, every vendor now talks about using BGP as the routing protocol in leaf-and-spine fabrics. Does it make sense? You'll find some of the answers in this section, presented by Dinesh Dutt (Cumulus Networks).
Using BGP in Leaf-and-Spine Fabrics 10:19 2016-06-06
Simplifying BGP Configurations 19:30 2016-06-06
Troubleshooting and Managing BGP 8:19 2016-06-06
BGP in Data Centers - Sample Deployments 3:15 2016-06-06
Slide deck 1.9M 2016-03-04
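One of the simplification ideas covered in these videos is BGP unnumbered: running eBGP sessions over interface IPv6 link-local addresses so no per-link IP addressing or per-neighbor AS configuration is needed. A hedged FRR/Cumulus-style sketch of a leaf-switch configuration (the ASN, router ID, and interface names are made-up examples):

```
! Leaf-switch configuration sketch (FRR syntax; illustrative values only)
router bgp 65011
 bgp router-id 10.0.0.11
 ! BGP unnumbered sessions toward the two spine switches:
 ! the remote AS is discovered dynamically ("external" = any eBGP peer)
 neighbor swp49 interface remote-as external
 neighbor swp50 interface remote-as external
 address-family ipv4 unicast
  ! advertise locally attached server subnets
  redistribute connected
```

The same two `neighbor ... interface remote-as external` lines work on every leaf regardless of its position in the fabric, which is precisely what makes this approach easy to automate.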

32:38 Layer-3 Fabrics with Redundant Server Connectivity

After establishing the baseline in the Layer-3 fabrics with non-redundant server connectivity section, we'll add complexity at the fabric edge: redundantly connected servers.
Layer-3 Fabrics with Redundant Server Connectivity 18:54 2016-12-12
Link Aggregation between Servers and Network 5:26 2016-12-12
Active-Standby Server Connectivity 8:18 2016-12-12
Slide deck 1.3M 2016-03-25

31:15 Layer-3-Only Data Centers

Is it possible to build a pure layer-3 data center fabric that supports redundant server connectivity and IP address mobility? You'll find out in this section.

6:53 Design Guidelines

Host Routing 6:53 2016-12-12
Slide deck 1.3M 2016-03-25

24:22 Building a Pure L3 Data Center with Cumulus Linux

Building a Pure L3 Data Center with Cumulus Linux 24:22 2016-12-12
Slide deck 949K 2016-03-29

29:27 Routing on Servers

Another approach to building a pure layer-3 fabric is to extend the fabric routing protocol into the servers and announce servers' loopback IP addresses using BGP.
Running Routing Protocols on Servers 10:55 2016-12-12
Routing from Hosts - Deep Dive 10:24 2016-12-12
Examples from Real World 8:08 2016-12-12
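As a sketch of this approach (the ASN and addresses are made-up examples), a Linux server running FRR could announce its loopback /32 to the directly attached leaf switch with a configuration along these lines:

```
! /etc/frr/frr.conf on a server (illustrative ASN and addresses)
router bgp 65201
 bgp router-id 192.0.2.10
 ! BGP unnumbered session toward the directly attached leaf switch
 neighbor eth0 interface remote-as external
 address-family ipv4 unicast
  ! announce the loopback address previously configured with:
  !   ip addr add 192.0.2.10/32 dev lo
  network 192.0.2.10/32
```

Because the loopback address follows the BGP announcement rather than a physical link, the server (or its service address) can move anywhere in the fabric without renumbering.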

1:53:15 Layer-2 Fabrics

We're leaving the stable world of L3-only fabrics and entering the realm of large VLANs that most enterprise data centers have to deal with. We'll cover numerous design scenarios, from traditional bridging to routing on layer 2 and MAC-over-IP encapsulation.

1:00:15 Design Guidelines

Layer-2 Fabrics 14:49 2017-03-21
Traditional Bridging 10:05 2017-03-21
Routing on Layer-2 13:12 2017-03-21
MAC-over-IP Encapsulation 13:25 2017-03-21
Redundant Server-to-Network Connectivity 8:44 2017-03-21
Slide deck 1.8M 2016-04-01

53:00 Shortest Path Bridging in Avaya Fabric

Avaya is one of the few data center switching vendors that still use routing-on-layer-2 (SPB) technology instead of VXLAN encapsulation. In this guest presentation, Roger Lapuh (Avaya) explains how SPB works and how you can use it to build layer-2 or layer-2+3 data center fabrics.
Introduction to SPB and Avaya Fabric Connect 18:25 2017-03-21
SPB Deep Dive 18:17 2017-03-21
Building Data Center Fabrics with SPB 16:18 2017-03-21
Slide deck 2.1M 2016-04-06

1:49:57 Mixed Layer-2 + Layer-3 Fabrics

Most data center fabrics have to combine elements of large VLANs and routing. In this section we'll explore the various combinations, from traditional routing on spine switches to anycast routing on leaf switches.

31:05 Design Guidelines

Layer-2+3 Fabrics 6:45 2017-04-04
Routing on Spine Switches 9:04 2017-04-04
Routing on Leaf Switches 15:16 2017-04-04
Slide deck 1.4M 2016-04-20

1:18:52 VXLAN with BGP EVPN on Cisco Nexus OS

Major data center switching vendors use VXLAN to build large layer-2 domains across IP fabrics, and the EVPN control plane to build flooding trees and exchange MAC address reachability information. In this section, Lukas Krattiger (guest speaker from Cisco Systems) explains how the VXLAN transport and the EVPN control plane work on Nexus switches.
Overlays in Data Center Fabrics 15:07 2017-04-04
Overview of VXLAN with BGP EVPN 15:59 2017-04-04
Introduction to BGP EVPN 15:29 2017-04-04
BGP EVPN Deep Dive 15:39 2017-04-04
EVPN Integrated Routing and Bridging 16:38 2017-04-04
Slide deck 12M 2016-04-21
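To give a feel for the moving parts described in this section, here is a rough NX-OS-style sketch of a VXLAN/EVPN leaf configuration. The VNI, VLAN, ASN, and addresses are made-up examples, and the exact command set varies by platform and software release; consult Cisco's configuration guides for authoritative syntax.

```
! Illustrative NX-OS-style sketch only; all values are made up
nv overlay evpn
feature bgp
feature nv overlay
feature vn-segment-vlan-based

vlan 100
  vn-segment 10100

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback0
  member vni 10100
    ingress-replication protocol bgp

router bgp 65000
  neighbor 10.0.0.1
    remote-as 65000
    address-family l2vpn evpn
      send-community extended

evpn
  vni 10100 l2
    rd auto
    route-target import auto
    route-target export auto
```

The key idea: VLAN 100 is mapped to VNI 10100, the NVE interface handles VXLAN encapsulation, and BGP's l2vpn evpn address family distributes MAC/IP reachability instead of relying on flood-and-learn.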

Webinar description

The Leaf-and-Spine Fabric Architectures webinar describes the leaf-and-spine (Clos fabric) concepts, architecture, and single- and multistage designs that can be used to build large layer-2 or layer-3 all-point-equidistant Data Center networks.