Complexities & challenges with ever-changing data center landscape
Wed, 5th Apr 2017

IT, data center and networking professionals are regularly faced with complexities and challenges as business and customer requirements change over time.

Today, more than ever, they face unprecedented pressure to achieve greater agility and efficiency and to deliver projects faster than before.

But what are the key considerations that installers, consultants, and end users should be aware of when selecting a fibre optic cabling solution for their evolving data center?

Easy does it

Today, the enterprise data center needs infrastructure that is both easily installed and reconfigurable with minimum effort and disruption.

Time is a major factor during installations, giving rise to the need for plug-and-play cabling solutions that simplify and accelerate project rollouts, as well as quicker and simpler moves, adds and changes (MACs).

The requirement for faster project delivery runs concurrently with trends towards increasing storage capacity, more big data initiatives, and a greater number of connected devices.

This, in turn, drives demand for the economics of density – where organisations seek to squeeze more connection terminations into the same, increasingly expensive and sensitive dedicated area.

The onset of Internet of Things (IoT) based initiatives will only add to these pressures, impacting network designs and driving the need for more connectivity.

Enterprise IT also often looks to outsourced facilities or cloud providers to meet its needs. At the same time, these cloud companies, facility operators, and colocation providers are building larger and larger data centers.

These so-called mega, hyper-scale, or supersized data centers are built to realise economies of scale that can significantly reduce costs.

Size matters

The beauty of optical fibre connectivity is that individual fibres can be very thin and still provide high bandwidth capacity. The use of FOCIS 5-compliant MTP connectivity in structured cabling designs based on the data center design standards TIA-942-A and EN 50173-5 provides high flexibility. It's only natural that new technologies progress to support less risk, faster execution, and repeatable quality.

With the evolution of pre-terminated plug and play cabling solutions, many large enterprise data centers are enjoying faster installs and smoother ongoing MACs as a result.

The provision of factory-terminated and tested cables is a logical approach for projects where the prerequisite is high-quality connections and high availability. It also reduces the need for highly advanced installation skills, now that pre-terminated systems enable fast, clean, and simple connections.

Fears about network outages caused by cabling problems are also largely banished with technology that enables greater flexibility in network designs.

Next step

While the above problems have been addressed, others in the field will be wary of what tomorrow will bring. For cabling professionals, the pressure to embrace 40 Gigabit Ethernet and 100 Gigabit Ethernet now, with 400 Gigabit Ethernet beyond, all of which use SR4-based parallel optics, will also drive the need for 25 Gigabit Ethernet and 50 Gigabit Ethernet port disaggregation options.

However, there is real diversity in the types of transceivers that switch, server, and storage makers use, and in the optical transceiver roadmap guiding the industry from 10 Gigabit Ethernet to 40, 100, and eventually 400 Gigabit Ethernet.

Data center owners and operators need to understand how their choice(s) impact the underlying infrastructure.

With regard to IEEE standardisation of multimode versions, both 40 Gigabit Ethernet and 100 Gigabit Ethernet have migrated to eight-fibre techniques (four fibres for transmit, four for receive, as per SR4), and this pattern continues as we look further down the road towards 400 Gigabit Ethernet.
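
As a rough illustration of that eight-fibre pattern, the short sketch below works through the lane arithmetic behind the SR4-style interfaces. The lane rates shown are the commonly cited figures; it is a simple arithmetic illustration, not a quotation from any standard or vendor datasheet.

```python
# Illustrative sketch of the lane arithmetic behind SR4-style parallel optics.
# Lane rates are the commonly cited figures, not a vendor specification.

SR4_INTERFACES = {
    "40GBASE-SR4":  {"lanes": 4, "lane_gbps": 10},
    "100GBASE-SR4": {"lanes": 4, "lane_gbps": 25},
}

for name, spec in SR4_INTERFACES.items():
    fibres = spec["lanes"] * 2                      # one fibre per lane, each direction
    total_gbps = spec["lanes"] * spec["lane_gbps"]
    print(f"{name}: {spec['lanes']} x {spec['lane_gbps']}Gb/s lanes "
          f"= {total_gbps}Gb/s over {fibres} fibres (4 Tx + 4 Rx)")
```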

However, there are also duplex versions of 40 Gigabit Ethernet, which carry 40Gb/s over two fibres. Although deploying these technologies can save on fibres, they are incompatible with each other (and with parallel optics), which may add a level of management complexity in a multi-vendor environment.

Need for speed

When one aggregates 10Gb/s server links, the switch uplinks need to operate at 40Gb/s or higher. These higher speeds, currently 100 Gigabit Ethernet, with 400 Gigabit Ethernet and other parallel technologies planned, can be easily adopted through an eight-fibre design enabled by SR4 technology, which can provide further cost savings through port disaggregation.
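
To make that aggregation concrete, here is a minimal sketch of how a leaf switch's 10Gb/s server links translate into 40Gb/s uplinks and fibres. The port count and the oversubscription target are hypothetical, chosen only for illustration.

```python
# Hypothetical example: 40Gb/s uplinks needed to serve a leaf switch's
# 10Gb/s server ports at a target oversubscription ratio.
import math

server_ports = 48                 # hypothetical 10Gb/s server-facing ports
server_gbps = 10
uplink_gbps = 40                  # SR4-style parallel-optics uplinks
target_oversubscription = 3.0     # hypothetical 3:1 downlink-to-uplink ratio

downlink_gbps = server_ports * server_gbps
uplinks_needed = math.ceil(downlink_gbps / (target_oversubscription * uplink_gbps))
fibres_needed = uplinks_needed * 8          # eight fibres per SR4 parallel link

print(f"{downlink_gbps}Gb/s of server bandwidth -> "
      f"{uplinks_needed} x {uplink_gbps}Gb/s uplinks ({fibres_needed} fibres)")
```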

At the same time, we are seeing three-tier (core, distribution, and access) switching architectures evolve into two-tier (spine and leaf) designs in line with software-defined networking (SDN), with a corresponding need for increased fibre density.
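
The jump in fibre density that comes with a spine-and-leaf fabric can be sketched very simply. The switch counts below are hypothetical; the point is only that every leaf typically connects to every spine, and each parallel-optics link consumes eight fibres.

```python
# Hypothetical spine-and-leaf sizing: every leaf uplinks to every spine,
# and each SR4-style parallel link uses eight fibres (4 Tx + 4 Rx).

leaves = 16                  # hypothetical leaf switch count
spines = 4                   # hypothetical spine switch count
links_per_pair = 1           # parallel-optics links per leaf-spine pair

inter_switch_links = leaves * spines * links_per_pair
fibre_count = inter_switch_links * 8

print(f"{inter_switch_links} leaf-spine links -> {fibre_count} fibres between wiring areas")
```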

As such, data center managers with an eye on the future are deploying fibre trunks to their switches so that they can migrate to higher speeds and evolve their network architecture without major cabling disruption.

That means they can take advantage of the other fibre benefits now – lower utilisation of raceways, reduced impact on airflow, and reduced power consumption.

To add to the dilemma, the current discussions on 400 Gigabit Ethernet within the IEEE consider several options for new generations of transceiver design, all of which use parallel optics.

Therefore, it's important that data center managers consider the longevity of their cabling to ensure that there will be no major disruption when migrating to higher speeds, whether it's 40 Gigabit Ethernet today or 100 Gigabit Ethernet and beyond tomorrow.

This means that the structured cabling in place must provide a modular upgrade path, leaving the existing hardware and trunk cables in place; otherwise, major additional cost and disruption will be incurred.

The key consideration here is that, for the foreseeable future, transceivers will be dominated by two-fibre and eight-fibre solutions.

A future-ready infrastructure needs to be able to support a transition to higher speeds with any mix of transceiver types, serial and/or parallel, without costly, time-consuming or disruptive upgrades to the cabling infrastructure, and without compromising on density.

Ready and able

One approach to future-ready infrastructure that is already being taken advantage of is 40 Gigabit Ethernet port disaggregation for use with 10 Gigabit Ethernet applications.

Disaggregating 40 Gigabit Ethernet ports into four 10 Gigabit Ethernet ports – currently only possible with parallel optics – through harnesses or port breakout modules provides significant density advantages both in terms of the attached electronics and the housings in the wiring areas.

Transceiver vendors estimate that around half of the 40Gb/s QSFP ports shipped are being used to break out to four 10 Gigabit Ethernet ports.

For example, using 40Gb/s QSFP line cards instead of 10Gb/s line cards for 10 Gigabit Ethernet connectivity reduces the overall cost per port for the attached electronics and shrinks the footprint by a factor of two to three, and it also means that customers will already have the technology in place when they are ready to upgrade to 40 Gigabit Ethernet.
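
A back-of-the-envelope comparison shows where density and cost figures of that kind come from; the port counts and relative prices in the sketch below are purely hypothetical placeholders rather than vendor figures.

```python
# Hypothetical comparison: native 10Gb/s ports vs 40Gb/s QSFP ports
# broken out into 4 x 10 Gigabit Ethernet. All figures are placeholders.

native_10g = {"ports_per_ru": 48, "cost_per_port": 1.0}        # baseline, normalised
qsfp_breakout = {"qsfp_ports_per_ru": 32, "cost_per_qsfp": 2.8}

breakout_10g_ports = qsfp_breakout["qsfp_ports_per_ru"] * 4    # 4 x 10GbE per QSFP
breakout_cost_per_port = qsfp_breakout["cost_per_qsfp"] / 4

density_gain = breakout_10g_ports / native_10g["ports_per_ru"]
cost_ratio = breakout_cost_per_port / native_10g["cost_per_port"]

print(f"10GbE ports per RU: {native_10g['ports_per_ru']} native vs "
      f"{breakout_10g_ports} via breakout ({density_gain:.1f}x denser)")
print(f"Relative cost per 10GbE port via breakout: {cost_ratio:.2f}x the native port")
```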

So what about the hyper-scale data centers, typically used to deliver cloud services, that are driving Open Compute Project (OCP) standards?

Taking advantage of port disaggregation on leaf switches is a key consideration in providing economies of scale for connecting server ports. In addition, spine switches should be placed to optimise inter-switch connection reach so that lower-cost multimode optical electronics can be used.
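
For spine placement, one simple sanity check is to compare planned trunk lengths against multimode reach. The sketch below uses the commonly quoted maximum reach figures for SR4 optics over OM3 and OM4 fibre, and the link lengths are entirely hypothetical.

```python
# Simple reach check for multimode parallel optics between leaf and spine.
# Reach values are the commonly quoted maximums; link lengths are hypothetical.

REACH_M = {
    ("40GBASE-SR4", "OM3"): 100,
    ("40GBASE-SR4", "OM4"): 150,
    ("100GBASE-SR4", "OM3"): 70,
    ("100GBASE-SR4", "OM4"): 100,
}

links = [  # (description, fibre grade, planned trunk length in metres)
    ("leaf row A to spine", "OM4", 85),
    ("leaf row B to spine", "OM4", 120),
]

for description, grade, length in links:
    for interface in ("40GBASE-SR4", "100GBASE-SR4"):
        limit = REACH_M[(interface, grade)]
        verdict = "OK" if length <= limit else "exceeds reach"
        print(f"{description}: {length}m over {grade} for {interface} -> "
              f"{verdict} (limit {limit}m)")
```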

Ticking all the boxes 

It's clear that data center connectivity provision needs to remain ahead of the rising demand for applications, networking, server, and storage equipment.

Moreover, cost-effective, flexible capacity is needed to accommodate rapid scalability demands and to facilitate efficient migration to higher speeds with minimal disruption.