
What the Move Toward Leaf-Spine Architecture Means for Data Center Cabling

Ron Tellas

As technology advances, more data center traffic is moving from server to server instead of moving in and out of the data center.

Because traditional networks were designed with a north-south traffic pattern in mind (traffic moving between the data center and the rest of the network), data centers are turning away from the conventional three-tier architecture and moving toward full-mesh leaf-spine architecture.

The Traditional Three-Tier Architecture

Historically, data centers relied on a three-tier hierarchical architecture. This approach segments servers into "pods" and deploys three separate layers:

1. Core layer: the network backbone, featuring high-performance routers that tie together geographically separated networks. Its routers move data quickly, and its switches switch packets as fast as possible.

2. Distribution/aggregation layer: implements access lists and filters to provide boundary definition and define network policy. It includes high-end Layer 3 switches to ensure that packets are properly routed between subnets and VLANs.

3. Access layer: where access switches connect end devices to the network and ensure that packets are delivered to them.

The three-tier hierarchical architecture accommodates traffic between servers connected to the same access switch, but traffic between different access switches must pass through the higher switch tiers in a north-south pattern.

This traffic pattern can introduce differences in speed and latency because of the additional switch-to-switch connections traffic must cross. In data centers with large data flows, driven by reliance on technologies like virtualization, software-defined networking and shared compute and storage resources that support emerging data-intensive and time-sensitive applications, those differences lead to performance problems.
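
To illustrate why hop counts, and therefore latency, vary in a three-tier design, here is a short Python sketch that models a generic pod layout. The hop-counting rules and the pod numbering are illustrative assumptions, not a description of any particular product or network.

    # Hypothetical three-tier hop counting: servers that share an access
    # switch stay local; otherwise traffic climbs north-south through the
    # aggregation tier and, between pods, the core tier before coming back down.
    def three_tier_switch_hops(src, dst):
        """src and dst are (pod, access_switch) tuples for two servers."""
        src_pod, _ = src
        dst_pod, _ = dst
        if src == dst:
            return 1   # same access switch
        if src_pod == dst_pod:
            return 3   # access -> aggregation -> access
        return 5       # access -> aggregation -> core -> aggregation -> access

    print(three_tier_switch_hops((0, 0), (0, 1)))   # same pod: 3 switches
    print(three_tier_switch_hops((0, 0), (3, 2)))   # different pods: 5 switches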

In smaller data centers where speed and latency aren’t absolutely critical, the traditional three-tier hierarchical architecture may still make sense for now.

The Move to Leaf-Spine Fabric Architecture

To reduce latency and improve server-to-server communications, many data centers are transitioning to full-mesh leaf-spine fabric architecture (a two-layer network topology).

This topology connects every leaf switch to every spine switch within the fabric, so any two leaf switches are always just one spine hop apart. The path for each flow is chosen at random so the traffic load is distributed evenly among the top-tier (spine) switches. If one of those switches fails, performance is only slightly degraded.

This approach reduces the number of switches needed and supports direct pathways between devices. The resulting east-west (internal) traffic pattern significantly improves performance in virtualized server environments, where resources are often distributed across many servers, because data can take a shortcut from where it is to where it’s actually going.

With a leaf-spine architecture, no matter which leaf switch a server is connected to, traffic always crosses the same number of devices to get to another server (unless the other server is on the same leaf). This maintains predictable latency because, to reach its destination, a payload only needs to hop to a spine switch and another leaf switch.
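
As a rough illustration of that hop-count property, the Python sketch below builds a generic fabric in which every path between servers on different leaves crosses exactly three switches, and a random spine choice (a simplified stand-in for the equal-cost path selection a real fabric would use) spreads flows roughly evenly across the spine tier. The switch counts are made up for the example.

    import random
    from collections import Counter

    LEAVES = [f"leaf{i}" for i in range(8)]    # example sizes, chosen arbitrarily
    SPINES = [f"spine{i}" for i in range(4)]

    def leaf_spine_path(src_leaf, dst_leaf):
        """Return the switches a flow crosses between two servers."""
        if src_leaf == dst_leaf:
            return [src_leaf]                  # same leaf: one switch
        spine = random.choice(SPINES)          # any spine works; pick one at random
        return [src_leaf, spine, dst_leaf]     # always exactly three switches

    # Every inter-leaf path is the same length, and the random choice
    # spreads the load roughly evenly across the spine switches.
    spine_load = Counter()
    for _ in range(10_000):
        src, dst = random.sample(LEAVES, 2)
        spine_load[leaf_spine_path(src, dst)[1]] += 1
    print(spine_load)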

Cabling to Support Leaf-Spine Architecture

With the shift to a leaf-spine architecture, the overall design of the cabling infrastructure is also experiencing a shift: Many large data centers are transitioning to end-of-row (EoR) deployments and structured cabling.

In this design, switches are placed in a cabinet and connected to servers across an entire row using balanced, twisted-pair Category 6A cabling. Any two servers in the row can communicate with low latency because they're connected to the same switch. EoR deployments require cable runs of about 30 m to reach servers in adjacent cabinets.

For longer-distance 25 Gbps EoR deployments, active optical cable (AOC) assemblies and fiber optic cabling with separate transceivers are also options. AOCs embed optical transceivers in their connectors and use fiber optic cable to support distances of up to 100 m. AOCs may appear less complex and consume slightly less power, but they also come with a few drawbacks:

  • They aren’t required to comply with industry interoperability standards
  • Their embedded transceivers don’t support multiple generations of applications, so they’ll need to be replaced when an upgrade is required
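
To make those trade-offs concrete, the sketch below picks candidate media for a 25 Gbps EoR run using the reach figures quoted above (about 30 m for Category 6A, up to 100 m for AOCs). The selection rules and parameter names are simplified assumptions, not a substitute for a proper link-budget or standards review.

    # Reach figures taken from the article; the decision rules are simplified.
    CAT6A_REACH_M = 30    # balanced twisted-pair EoR runs
    AOC_REACH_M = 100     # active optical cable assemblies

    def candidate_media(run_length_m, needs_standard_interop=True,
                        expects_speed_upgrades=True):
        """Return cabling options that plausibly cover a 25 Gbps EoR run."""
        options = []
        if run_length_m <= CAT6A_REACH_M:
            options.append("Category 6A twisted pair")
        if run_length_m <= AOC_REACH_M and not (needs_standard_interop
                                                or expects_speed_upgrades):
            options.append("AOC assembly")     # embedded, non-upgradable optics
        options.append("fiber + separate transceivers")   # modular, longest reach
        return options

    print(candidate_media(25))   # short run within the row
    print(candidate_media(80, needs_standard_interop=False,
                          expects_speed_upgrades=False))   # longer run, AOC acceptable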

The current trend is to deploy cross connects, where patch panels that mirror leaf switch ports connect via permanent cabling to patch panels that mirror spine switch ports. Connections between leaf and spine are then made at the patch panels with patch cords. This creates “all-to-all” connectivity, where any spine port can connect to any leaf port.
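
The short Python sketch below illustrates that all-to-all idea by generating a hypothetical patch-cord schedule between a leaf-facing panel and a spine-facing panel; the panel names, port numbering and fabric size are invented for the example.

    # Hypothetical cross connect: panel A mirrors leaf uplinks, panel B mirrors
    # spine ports, and patch cords between the two panels complete each link.
    LEAVES = [f"leaf{i}" for i in range(1, 5)]
    SPINES = [f"spine{j}" for j in range(1, 3)]

    def cross_connect_schedule(leaves, spines):
        """Pair every leaf uplink with every spine port via the two panels."""
        schedule = []
        for l, leaf in enumerate(leaves):
            for s, spine in enumerate(spines):
                leaf_port = f"panel-A port {l * len(spines) + s + 1}"
                spine_port = f"panel-B port {s * len(leaves) + l + 1}"
                schedule.append((leaf, leaf_port, spine_port, spine))
        return schedule

    for row in cross_connect_schedule(LEAVES, SPINES):
        print(" <-> ".join(row))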

Want to learn more about the future of data center architecture and how structured cabling is supporting high-speed, low-latency deployments? We worked with the Communications Cable & Connectivity Association (CCCA) to pen a recent Mission Critical article. Read the entire piece here.