For three-tiered designs, increasing the uplink speeds between
the access and distribution layer switches may also require increasing uplink
speeds between the distribution and core layer switches in order to maintain
the desired oversubscription ratio.
Determining the oversubscription ratio of the uplink between the distribution and core layer switches is fairly straightforward. You need to take into account the number of ports connecting the distribution layer switches to the access layer switches or switch stacks, as well as the speeds at which those ports operate.
For example, let’s say your distribution layer switch is a
StackWise Virtual pair that supports a building with 4 floors. Each floor
has two IDFs (wiring closets). Each IDF has an access layer switch stack
consisting of four 48-port switches along with a 2 x 25 Gbps uplink module in
two of the switches within the stack. The total number of 25 Gbps ports
required at the distribution layer switches is 4 uplinks x 2 IDFs per floor x 4
floors = 32 ports.
This configuration would provide up to 32 x 25 Gbps = 800 Gbps
bandwidth between the distribution layer and access layer switches.
Simply keeping existing 2 x 40 Gbps uplinks would only provide up to 80 Gbps
between the distribution layer and core layer switches. This would
provide an oversubscription ratio of 800:80 or 10:1 between the distribution
and core layers. Depending upon your business requirements, this may be
insufficient.
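To make this arithmetic easy to repeat for other port counts and uplink speeds, here is a minimal Python sketch of the same calculation (the figures simply restate the example above):

from math import gcd

def oversubscription(downstream_gbps: int, upstream_gbps: int) -> str:
    """Return the downstream:upstream bandwidth ratio in lowest terms."""
    g = gcd(downstream_gbps, upstream_gbps)
    return f"{downstream_gbps // g}:{upstream_gbps // g}"

# Access-facing bandwidth on the StackWise Virtual pair:
# 4 x 25 Gbps uplinks per IDF x 2 IDFs per floor x 4 floors = 32 ports
downstream = 4 * 2 * 4 * 25   # 800 Gbps toward the access layer

# Existing core-facing uplinks: 2 x 40 Gbps
upstream = 2 * 40             # 80 Gbps toward the core layer

print(oversubscription(downstream, upstream))   # prints 10:1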
Increasing Uplink Speeds
You could choose to add additional 40 Gbps links between the
distribution and core layer switches, possibly operating in a Layer 3
EtherChannel configuration. However, this would require additional 40
Gbps switch ports at every distribution layer and core layer switch. More
importantly, it would require additional fiber optic pairs between the
distribution layer switches and the core layer switches.
In a large campus deployment, the core layer switches may be
located in a centralized data center in a different building. If
insufficient optical pairs exist, then additional optical cabling would need to
be pulled between the centralized data center and each of the buildings.
This could be a very expensive proposition: existing conduit space between the buildings may not be able to accommodate additional cabling, and pulling new cable risks damaging the existing cabling in the conduit, resulting in an extended outage. Installing new conduit may involve obtaining the necessary right-of-way to trench and install underground conduit, on top of the cost of installing the new fiber optic cable.
An alternative may be to upgrade the uplink speeds between the distribution layer and core layer switches to 100 Gbps. Upgrading the existing pair of uplinks to 2 x 100 Gbps would provide an oversubscription ratio of 800:200, or 4:1, between the distribution and core layers.
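Plugging the 2 x 100 Gbps uplinks into the same quick calculation confirms the new ratio:

from math import gcd

downstream = 32 * 25    # 800 Gbps toward the access layer (unchanged)
upstream = 2 * 100      # 200 Gbps toward the core layer after the upgrade
g = gcd(downstream, upstream)
print(f"{downstream // g}:{upstream // g}")   # prints 4:1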
As with the access layer, when deciding to upgrade the uplink speeds between the distribution layer switches and the core layer switches, you should keep in mind the following:
● The optical transceiver modules that connect the distribution layer switches to the core layer switch platforms must interoperate with each other and must be compatible with the fiber optic cabling between buildings.
Due to the increased distances between buildings, single-mode fiber (SMF) may already be installed between the distribution and core layer switches. This may facilitate the migration from 40 Gbps to 100 Gbps between the distribution and core layers.
Automate the 5G Telco Cloud
5G transformations are challenging telecom providers to develop the data center networks of the future, which must seamlessly scale, automate, and integrate infrastructure from the edge to the central data center and across the transport network. This requires the adoption of an end-to-end, programmable, SDN-enabled approach across the data center applications and the SP transport backbone.
To meet 5G low-latency requirements, mobile services are moving closer to the subscriber edge, driving demand for distributed compute at the edges of the SP network. The new SP data center will be where the data is, and Cisco ACI delivers the automation capability needed for the 5G telco cloud. ACI 5.0 delivers:
● Support for Segment Routing MPLS (SR-MPLS) and EVPN handoff. Service providers can interconnect their ACI-based telco cloud to the 5G transport backbone network with end-to-end segmentation.
● Cross-domain policy that automates the mapping of 5G application and transport slices for an end-to-end SLA, differentiating low-latency applications from non-critical applications.
● The ability to simplify and scale to thousands of application slices between the data center and transport network using a single BGP EVPN peering.
● With ACI Multi-Site Orchestrator (MSO), SR-MPLS policies can be centrally automated across the 5G telco cloud sites (central, regional, and edge data centers).
The Cisco ACI 5.0 release delivers the tools to build a simple-to-manage, agile, and secure telco cloud.
Refer to Figure 1 for an example of a distributed ACI telco cloud leveraging an SR-MPLS transport.
Enable Simple To Manage Multicloud Deployments
Our customers are adopting multicloud architectures, and Cloud ACI provides the tools to maintain a consistent, policy-driven automation and security posture for their deployments.
Cloud ACI now supports AWS Transit Gateway (TGW) automation for efficient, high-performance interconnect between multiple AWS VPCs. The ACI 5.0 release supports automation of the TGW lifecycle, along with automated route programming on TGW route tables for all combinations of East-West and North-South traffic patterns. Figure 2 shows an example.
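Cloud ACI drives this TGW and route-table programming from the ACI controller itself; purely as an illustration of the underlying AWS objects involved, the boto3 sketch below attaches a VPC to a Transit Gateway and programs a static route toward that attachment (all resource IDs are hypothetical placeholders, not part of the ACI workflow):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach a VPC to an existing Transit Gateway (IDs are placeholders).
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],   # one subnet per availability zone
)
attachment_id = attachment["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"]

# Program a static route in a TGW route table pointing at the new attachment,
# the kind of route programming that Cloud ACI automates for East-West and
# North-South traffic.
ec2.create_transit_gateway_route(
    DestinationCidrBlock="10.20.0.0/16",
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",
    TransitGatewayAttachmentId=attachment_id,
)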
Coming soon for Azure is support for VNET peering, shared-service deployments, and native and third-party L4-L7 service automation. Cloud ACI support for Azure VNET peering enables customers to seamlessly connect virtual networks so that they behave as a single network, and to leverage the Azure backbone for low-latency, high-bandwidth interconnects between virtual networks. The solution will also enable customers to leverage a hub-and-spoke model for hosting their shared services in the hub VNET.
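Cloud ACI will automate this peering through its own policy model; as a rough, standalone illustration of the underlying Azure construct, the sketch below creates a hub-to-spoke VNET peering with the azure-mgmt-network SDK. The resource group, VNET names, and subscription ID are hypothetical placeholders, and method and parameter names follow recent SDK releases and may differ between versions.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Resource ID of the (hypothetical) spoke VNET to peer with the hub.
spoke_vnet_id = (
    "/subscriptions/" + subscription_id +
    "/resourceGroups/spoke-rg/providers/Microsoft.Network/virtualNetworks/spoke-vnet"
)

# Create the hub-to-spoke peering; traffic rides the Azure backbone.
poller = client.virtual_network_peerings.begin_create_or_update(
    resource_group_name="hub-rg",
    virtual_network_name="hub-vnet",
    virtual_network_peering_name="hub-to-spoke",
    virtual_network_peering_parameters={
        "remote_virtual_network": {"id": spoke_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,   # needed for shared services in the hub
        "use_remote_gateways": False,
    },
)
peering = poller.result()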
As customers begin to leverage native and third-party L4-L7 services in the cloud, they need automated traffic redirection to these services. That capability is already available for on-premises ACI fabrics, and the ACI 5.0 release extends similar service-chaining capabilities to Cloud ACI.
Cisco ACI 5.0 delivers for multicloud deployments:
● Enterprise-grade segmentation and multi-tenancy
● Policy-based L4-L7 services automation, including native services such as load balancers and third-party firewalls
● Automation of high-performance interconnect (i) between AWS VPCs and (ii) between Azure virtual networks
● Secure, automated connectivity from on-premises to public clouds, and across public clouds
Keep Pace with Customer Designs and Operations
400G Ready: Customers can now deploy 400G-capable Nexus 9508 chassis in their fabric spines and add 400G line cards later this year.
Per Leaf RBAC: Building upon the built-in multi-tenancy capabilities, ACI 5.0 adds new RBAC capabilities for physical multi-tenancy, allowing tenants to have management privileges at per-leaf (physical switch) granularity.
Ease of Use: The ACI 5.0 release continues to improve the ease of use of the ACI controller for daily operations:
● Centralized view of cloud resource inventory within AWS and Microsoft Azure
● Reduced time required for fabric upgrades, along with upgrade status indicators
● A new Day 0 wizard providing a guided way to complete Day 0 configuration for SNMP/Syslog policy
Security: Enhancements include expanded Role-Based Access Control (RBAC) for multi-tenancy, additional two-factor authentication (TFA) capabilities through integration with Cisco Duo, and improved security policy for ACI applications with App Center RBAC integration.
We are also introducing a new, flexible policy construct, the Endpoint Security Group (ESG), which gives you the ability to group endpoints based on L3 attributes, decoupled from the Bridge Domain, and to apply contracts between ESGs.
In addition, Policy-Based Redirect (PBR) has been enhanced to support additional service devices and symmetric PBR for L1/L2 devices in cluster mode.
Scale: ACI 5.0 now supports up to 500 leaf switches per site in a Multi-Pod data center and 15 virtual data centers in the VMware vCenter integration.
Kubernetes Orchestration: This new release enables several microservices deployment upgrades to support containerized workloads, including support for ACI-CNI with OpenShift 4.3 on OpenStack and AWS, Docker Enterprise Release 3, and ACI Neutron plugin support for bare-metal servers with OpenStack.
Customers are looking for proactive capabilities with deep insights into their networks to simplify their Day 2 operations. Cisco enhances its existing Network Insights product to include:
● Multi-fabric support: Monitor and troubleshoot multiple geographically distributed fabrics with a single instance of Network Insights.
● Multicast control plane visibility: Resolve issues through anomaly detection on PIM, IGMP, and IGMP snooping control plane protocols.
● Customizable dashboards: Customize the observable parameters to suit your preferred way of monitoring.
● AppDynamics integration: Detect, locate, and troubleshoot application connectivity issues faster by correlating network and application telemetry.
● Topology view (BETA): Explore the power of overlaying logical constructs such as Tenant, VRF, and EPG over the physical infrastructure to zoom in on problematic nodes and identify anomalies.
Through these innovations, customers can transform their Day 2 operations from reactive to proactive, and reduce their OPEX and downtime by automating the detection, location, and root-cause analysis of problems.
Keeping our eyes to the future
Innovation continues to thrive at Cisco and our customers rely on
our technology, partnership, and support to keep their businesses
running and enable their digital transformations.
Cisco ACI helps our customers to build for the future. Stay tuned
for new capabilities in upcoming releases in the months to come!
To learn more about Cisco ACI, ACI partners, and software licensing, visit Cisco's ACI homepage.