
Metro cloud requires a fundamental shift in network architecture

Thu, 27th Oct 2016

The evolution of cloud, the scale of video, and the explosion of mobile and the Internet of Things (IoT) are driving growth in telecom traffic and forcing fundamental changes in network design and service delivery. Cloud, data center interconnect (DCI) and video applications are also causing a traffic shift: although backbone traffic continues to grow, metro traffic will grow even faster because video content delivery and DCI applications are best served close to the customer. This article argues for a dynamic, flexible metro network – very different from the traditional hierarchical service provider network model – based on a two-layer architecture that recognizes the different requirements of metro DCI and metro aggregation. Jay Gill, principal product marketing manager at Infinera, explains.

Video and cloud/data center applications are major drivers for traffic growth. Sandvine's 2015 Global Internet Phenomena reports show the dominance of streaming video and audio traffic in most regions worldwide – accounting for over 70% of North American downstream traffic in the peak evening hours on fixed access networks. The majority of this traffic comes from leading internet content providers' cloud networks. Beyond overall growth, these applications are also driving a shift of traffic from backbone networks toward the metro. While backbone traffic will continue to grow significantly – witness the current boom in 100 gigabit per second (100G) long-haul transport – the need to be close to the customer means that a growing majority of traffic will remain in the metro.

For operators, the change will be even more significant, because the cloud model is based on sharing storage and computing resources that have been virtualized across distances. Efficient sharing, however, requires the communications network to be both dynamic and flexible – a dramatic change from enterprise networking's old model of static pipes.

That traditional way of building networks and delivering services is breaking down because static, proprietary telecom networks can no longer meet the needs of cloud services. After several years of struggling, operators are concluding that the best solution is to adopt the very same technologies that made cloud possible. Hence the emergence of carrier software-defined networking (SDN), network functions virtualization (NFV), software-based network automation and open source software initiatives for telecom network equipment – much of it driven by the network operators themselves.

This is a global trend, driven by common challenges worldwide and reflected in the global membership of ETSI's NFV Industry Specification Group (ISG) – including AT&T, BT, China Mobile, Verizon, Telefónica, NTT, Telstra and 31 other leading service providers around the world – as well as in service provider enthusiasm for open SDN initiatives such as OpenDaylight and the Open Network Operating System (ONOS). The network transformation is increasingly described in terms of two layers: the cloud services layer ("Layer C") and the intelligent transport layer ("Layer T").

Layer C

NFV takes functions that previously resided on purpose-built hardware and recreates them as software functions running on virtual machines on standard off-the-shelf server hardware. AT&T alone has identified 200 functions in its network with the potential for virtualization, and ETSI's NFV ISG is driving NFV standardization through the creation of appropriate proofs of concept. These virtualized functions form a significant part of the cloud services layer, Layer C.

Some commonly cited examples of virtualized functions are: evolved packet core, deep packet inspection, firewalls, load balancers, wide-area network accelerators, mobile network nodes, session border controllers, content delivery networks, customer premises equipment functions, and even some fundamental routing functions such as broadband remote access server (B-RAS) and provider edge (PE) routing.
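
To make the idea concrete, here is a minimal, purely illustrative sketch of how a virtualized firewall might be described and placed on general-purpose compute: the function becomes software plus a resource requirement, so it can be instantiated wherever capacity exists. The names used (VnfDescriptor, ComputeHost, instantiate) are hypothetical and do not reflect any vendor's or ETSI's actual orchestration interface.

    from dataclasses import dataclass

    @dataclass
    class VnfDescriptor:
        """Describes a network function as software: an image plus resource needs."""
        name: str
        image: str          # VM or container image implementing the function
        vcpus: int
        memory_gb: int

    @dataclass
    class ComputeHost:
        """A standard off-the-shelf server in the operator's cloud (Layer C)."""
        hostname: str
        free_vcpus: int
        free_memory_gb: int

    def instantiate(vnf: VnfDescriptor, hosts: list[ComputeHost]) -> str:
        """Toy placement: pick the first host with enough spare capacity."""
        for host in hosts:
            if host.free_vcpus >= vnf.vcpus and host.free_memory_gb >= vnf.memory_gb:
                host.free_vcpus -= vnf.vcpus
                host.free_memory_gb -= vnf.memory_gb
                return f"{vnf.name} instantiated on {host.hostname}"
        raise RuntimeError(f"no capacity available for {vnf.name}")

    # Example: a firewall that once required a dedicated appliance is now just software.
    firewall = VnfDescriptor(name="vFirewall", image="vfw-1.0.qcow2", vcpus=4, memory_gb=8)
    pool = [ComputeHost("server-01", free_vcpus=16, free_memory_gb=64)]
    print(instantiate(firewall, pool))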

Layer T

For efficiency and economy, all network functions that can effectively be virtualized to run on general-purpose hardware eventually will be. All other network functions will be left in Layer T, the transport layer. Layer T's job is to provide the most efficient and lowest-cost transport for the Layer C applications.

Optical communications cannot be virtualized in that way because they operate in the analog domain of photons. So wavelength-division multiplexing (WDM) transport and optical switching will be the foundation of Layer T, but some digital and packet processing functions will also remain in that layer to enable dynamic and efficient allocation of optical network capacity. Heavy Reading has defined a category of equipment, packet-optical transport systems (P-OTS), that integrates these transport functions in the same network element under the same management system – a global market that grew to $2 billion by 2014.

The key building blocks for Layer T in next-generation metro networks will be:

  • Scalable optics, including use of coherent electronics and photonic integration
  • Converged packet-optical transport capabilities with agile, efficient switching at optical, digital and packet layers
  • Open, programmable interfaces for rapid, operator-driven software innovation
  • Open SDN control 
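
As an illustration of what "open, programmable interfaces" and "open SDN control" can look like in practice, the sketch below queries a transport element over a RESTCONF-style HTTP API and lists its interfaces. The device address and credentials are hypothetical placeholders, and the exact data paths depend on which YANG models a given piece of equipment actually supports, so treat this strictly as an assumption-laden example rather than any product's documented API.

    import requests

    # Hypothetical Layer T transport element exposing a RESTCONF-style API.
    DEVICE = "https://198.51.100.10"           # documentation/example address only
    AUTH = ("operator", "secret")              # placeholder credentials
    HEADERS = {"Accept": "application/yang-data+json"}

    # Retrieve the device's interface inventory via the standard ietf-interfaces model.
    resp = requests.get(
        f"{DEVICE}/restconf/data/ietf-interfaces:interfaces",
        auth=AUTH,
        headers=HEADERS,
        verify=False,   # lab-only shortcut; production code should verify certificates
        timeout=10,
    )
    resp.raise_for_status()

    for interface in resp.json()["ietf-interfaces:interfaces"]["interface"]:
        print(interface["name"], interface.get("enabled"))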

Figure 1 illustrates this migration from the traditional, many-layered model to the new two-layer model.

Figure 1: Migration From Traditional Networking to Cloud and Intelligent Transport. Source: Infinera, 2015 

The Growing Metro Market

The metro transport market is not homogeneous. It can be divided into two main sub-markets – DCI and metro aggregation – each with its own characteristics and requirements, and different equipment is being developed to meet the needs of each. Where DCI equipment currently supports a single application – connecting data centers – metro aggregation equipment serves many different purposes. These include:

  • Mobile backhaul  
  • Residential broadband backhaul
  • Carrier Ethernet services for business
  • Video transport – both broadcast and on-demand
  • Time division multiplexing (TDM)-based private line services for business 

The DCI Market

DCI was initially driven by Webscale providers such as Google, Yahoo, Facebook, Amazon and Microsoft, but as traditional telecom operators move into data center businesses, they too are deploying DCI equipment. In either case the need is the same: to interconnect data centers within a provider's network or to connect a user's data center to a data center in the provider's network. So the main characteristics are similar:

  • Hyperscale – 100G is essential
  • Minimal power consumption and footprint  
  • Operational simplicity
  • Open application programming interfaces (APIs) for easy programming
  • Suitability for DCI without unnecessary extra features 

As a result, a wave of purpose-built DCI equipment is coming to market, built around high-speed optics and pared down to exclude the usual packet-switching fabrics, which DCI applications do not need and which would therefore add unnecessary cost and complexity.

The Metro Aggregation Market

In the last decade, P-OTS equipment has been developed to address the diverse needs of metro aggregation:

  • High-capacity packet switching/aggregation
  • Transport and aggregation for legacy private line, TDM, etc.
  • Superior operations, administration and management (OAM)
  • Carrier-class reliability and transport performance in terms of latency, jitter and synchronization support 

According to Heavy Reading, the combined P-OTS and Carrier Ethernet transport (CET) metro market totaled $3.3 billion in 2014, and this next-generation segment is expected to reach $5.4 billion by 2019. Meanwhile, the metro aggregation network has undergone several transitions, of which the most significant has been the migration since 2007 from multi-service provisioning platforms (MSPPs) to metro P-OTS. Since then, each generation of equipment has brought further changes, and we are now poised for another major transition, which Heavy Reading believes will be as significant as the move from MSPP to P-OTS.
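
For readers who want to sanity-check those market figures, the implied compound annual growth rate follows directly from the start and end values; the short calculation below simply applies the standard CAGR formula to the numbers quoted above.

    # Implied CAGR of the metro P-OTS + CET market from the figures cited above:
    # $3.3 billion in 2014 growing to a forecast $5.4 billion in 2019.
    start_value = 3.3   # USD billions, 2014
    end_value = 5.4     # USD billions, 2019 forecast
    years = 2019 - 2014

    cagr = (end_value / start_value) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")   # roughly 10% per year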

The reasons for the coming transition include the following:

  • Hardware modularity. P-OTS equipment integrates multiple functions within the same chassis/system, while legacy systems offer rigid solutions that waste valuable rack space and power
  • Over-emphasis on SONET/SDH. P-OTS networks were conceived to link existing SONET/SDH to the new Ethernet/IP networks. Today's operators need optical and packet-layer innovation, but current-generation equipment is focused too heavily on the old TDM capabilities
  • Proprietary, closed management. Many long-haul networks migrated to generalized multi-protocol label switching (GMPLS), but metro networks never made this transition and remained static. Cloud applications require flexible networks that can adjust to changes in traffic and application demands, so the way metro transport networks are managed today does not suit operators' needs or their customers' demands

A New Generation of Metro Aggregation Equipment

100G is already the default for long-haul networks, and we are beginning to see it appear in metro networks. Webscale providers drove initial demand for 100G, but now cloud, video, fixed broadband, Carrier Ethernet and mobile broadband are pushing 100G into metro aggregation networks. While DCI networks may only need high line rates, metro aggregation networks need a mix of 10G, 100G, 200G and more to serve a wide diversity of applications. Heavy Reading forecasts that 100G and higher line-side ports will increase at a 56 percent CAGR from 2014 to 2019.

As suggested, high-capacity packet switching and aggregation will remain a key part of Layer T, even as several packet functions are virtualized into Layer C. Some operators will require OTN switching in the metro core to bridge between legacy TDM and new Ethernet/IP services, while others will move straight to circuit emulation over packet networks. The need to add capacity and features as requirements change will drive greater modularity and flexibility.

With the advent of Layer C, many router functions will be converted to software running on standard hardware. Basic packet services such as multicast, Ethernet and MPLS will be performed by Layer T, eliminating the need for most traditional routers. This collapse of multiple layers will dramatically simplify the scaling and operation of the metro aggregation network.

All metro transport networks – whether DCI or metro aggregation – will aim to separate the control plane (in Layer C) from the data plane (in Layer T) using open, standards-based SDN. In this regard, metro networking is following the lead of the webscale providers, who were able to innovate with SDN without the burden of integrating with legacy networks and services.

As SDN is deployed, metro network operators will initially need to run their networks with hybrid control split between SDN and legacy element and network management systems (EMS/NMS) in order to protect existing services and ensure a smooth migration to the new architecture. Over time, all network OAM functions can be migrated to newer, SDN-compatible open APIs and become part of the virtualized application environment of Layer C.

Operators, suppliers and standards bodies must work together to drive widespread adoption of SDN in future metro networks. Even where SDN operation is not immediately needed, operators will look for roadmaps to SDN control and open APIs for metro aggregation networks.
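
The sketch below is a deliberately simplified illustration of that control-plane/data-plane split, not any particular controller or product: a Layer C controller holds the service logic and decides where connections should go, while the Layer T elements expose only a narrow programmatic interface for installing them. The class and method names (TransportElement, connect, SdnController, provision_service) are invented for illustration.

    class TransportElement:
        """Layer T: a packet-optical transport node with a minimal programmable interface."""

        def __init__(self, name: str):
            self.name = name
            self.cross_connects = []   # list of (ingress_port, egress_port) pairs

        def connect(self, ingress_port: str, egress_port: str) -> None:
            """Install a cross-connect; the element forwards traffic but holds no service logic."""
            self.cross_connects.append((ingress_port, egress_port))

    class SdnController:
        """Layer C: centralized service logic, kept out of the transport elements."""

        def __init__(self, elements: dict):
            self.elements = elements   # name -> TransportElement

        def provision_service(self, path: list) -> None:
            """path is a list of (element_name, ingress_port, egress_port) hops."""
            for element_name, ingress, egress in path:
                self.elements[element_name].connect(ingress, egress)

    # Example: provision a metro service across two transport elements.
    nodes = {n: TransportElement(n) for n in ("metro-1", "metro-2")}
    controller = SdnController(nodes)
    controller.provision_service([("metro-1", "client-1", "line-1"),
                                  ("metro-2", "line-1", "client-7")])
    print(nodes["metro-1"].cross_connects)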

Conclusion

The gap between the flexible, on-demand requirements of cloud and the static nature of legacy metro networks is unprecedented, and it is leading operators to radically rethink how networks are built and operated. Leading operators recognize the need to move toward a simplified model: Layer C, where every application and control function that can be virtualized is virtualized, and Layer T, built on scalable optics and providing efficient, scalable packet-optical transport, with the two layers linked by open SDN interfaces.
