Technology enabling the next generation of data centres

Thu, 16th Aug 2018

Over the past 10 years, the rapid advancement of technology has produced not only an explosion of data, but a reliance on data centers never before seen. Disruptive technologies such as virtualisation, IoT devices and 5G are combining to drive enormous bandwidth demand, requiring data centers to evolve quickly to meet escalating requirements and support latency-sensitive communications.

For data centers to provide consistent performance to customers and easily meet their ever-growing demands, data center operators need to bring in new technologies and plan for their future capacity needs. Ensuring that the data center is future-ready requires intelligent design and implementation of new technology.

Objectives to consider to ensure future-ready design

Data center and enterprise network facilities today are under unrelenting pressure to deliver higher-capacity, highly reliable systems that remain technologically robust into the future. High data rate scalability, reduced pathway and space utilisation, low latency and ease of testing and installation are all critical in meeting these demands.

Demand for capacity is constantly increasing. By not planning for future capacity demands, data center operators risk "death by patch cord": constantly adding new, higher-capacity fibre cables into the same space. While this adds capacity in the short term, the practice erodes the data center's long-term ability to meet future needs. There will come a time when no space is left to add more cables, and the operator will need to overhaul the facility's entire cabling infrastructure.

Data center operators need to understand the throughput required between facilities, from the point-of-presence (POP) room into the data center itself. This interconnection is where density is key and capacity is required, so it's important to leverage options that provide high capacity within a single cable. High fibre count (HFC) trunks with 1728 fibres in one trunk, and now extreme-density cables with 3456 fibres, provide future-ready capacity while reducing duct utilisation in the outside plant.

What technologies are next generation?

As we look to the future, data center operators need to start exploring technology solutions to address the increasing need for greater capacity.

Vertical-cavity surface-emitting lasers (VCSELs) have long supported the low-cost deployment of multimode fibre in the data center. 10G and 25G lanes can be run in parallel via quad small form-factor pluggable (QSFP) transceivers to efficiently achieve 40G and 100G links that are well suited to breakout into their individual lanes. Short wavelength division multiplexing (SWDM) and bidirectional (BiDi) options exist to maximise legacy OM3/OM4 infrastructure; however, these do not support breakout. Roadmaps to 400G exist, and distances are typically less than 150 metres, placing these optics generally within the server area of the data center.
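As a rough illustration of the lane arithmetic involved, the Python sketch below shows how parallel lanes aggregate into 40G and 100G links and what breakout they allow. The transceiver names and lane rates are the commonly cited ones for SR4-style optics, used here purely for illustration.

```python
# Illustrative sketch: how parallel VCSEL lanes over multimode fibre aggregate
# into higher-speed links, and the breakout each supports. Values are the
# commonly cited lane rates for SR4-style optics, not a vendor specification.

PARALLEL_OPTICS = {
    # transceiver: (lane_rate_gbps, lane_count)
    "40G QSFP+ SR4": (10, 4),    # 4 x 10G lanes, breaks out to 4 x 10G
    "100G QSFP28 SR4": (25, 4),  # 4 x 25G lanes, breaks out to 4 x 25G
}

for name, (lane_rate, lanes) in PARALLEL_OPTICS.items():
    aggregate = lane_rate * lanes
    fibres = lanes * 2  # each lane uses a fibre pair, typically over an MPO connector
    print(f"{name}: {lanes} x {lane_rate}G = {aggregate}G over {fibres} fibres; "
          f"breakout = {lanes} x {lane_rate}G")
```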

When connecting at greater distances, single mode offers better options. Within the single mode camp there are two transceiver styles available: simplex and parallel. Dense or coarse wavelength division multiplexing (DWDM, CWDM) can deliver very high levels of traffic up to 10km. While the costs of these transceivers are falling, they remain a multiple of the cost of the 8-fibre parallel single mode 4-lane (PSM4) option, which offers connectivity up to 2km. For large data center campuses, PSM4 is the favoured option, driving facility connectivity fibre counts up towards extreme-density cables.
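One way to read these reach trade-offs is as a simple distance-based selection rule. The sketch below uses the approximate reaches mentioned above (150 metres for multimode, 2km for PSM4, 10km for CWDM/DWDM) as illustrative thresholds; real reach depends on fibre grade and transceiver specifications.

```python
def suggest_optics(distance_m: float) -> str:
    """Rough guide to transceiver style by link distance.

    Thresholds are the approximate figures cited in the article and are
    illustrative only, not engineering limits.
    """
    if distance_m <= 150:
        return "Multimode VCSEL parallel optics (e.g. SR4): lowest cost, supports breakout"
    if distance_m <= 2_000:
        return "Parallel single mode (PSM4, 8 fibres): favoured for large campuses"
    if distance_m <= 10_000:
        return "Simplex single mode with CWDM/DWDM: higher transceiver cost"
    return "Beyond typical campus reach: consider long-haul optics"


for d in (80, 500, 5_000, 40_000):
    print(f"{d} m -> {suggest_optics(d)}")
```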

The traditional 3-tier switching model is giving way to a 2-tier spine-and-leaf architecture across the data center industry. Spine-and-leaf architecture facilitates faster movement of data across the physical links in the network, significantly reducing latency when accessing data. Every spine switch is connected to every leaf switch, and the high-density design makes it easy to deploy additional cables when required. Spine-and-leaf is increasingly the networking architecture of choice for cloud providers because it is massively scalable, future-ready infrastructure.
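To see why the full mesh between spine and leaf switches drives cabling volume, consider a minimal sketch that counts the physical links in a 2-tier fabric. The switch counts below are hypothetical, chosen only to show how quickly link numbers grow.

```python
def spine_leaf_links(spines: int, leaves: int, links_per_pair: int = 1) -> int:
    """Every leaf connects to every spine, so the link count grows multiplicatively."""
    return spines * leaves * links_per_pair


# Hypothetical fabric sizes, purely for illustration.
for spines, leaves in [(4, 16), (8, 32), (16, 64)]:
    print(f"{spines} spines x {leaves} leaves = {spine_leaf_links(spines, leaves)} spine-leaf links")
```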

While spine-and-leaf architecture offers smarter, faster systems, the migration dramatically increases the number of fibres required to serve interconnection across the data center campus. Only a few years ago, 864 fibres were standard in campus networks; today, 1728 fibres are common, and even higher counts, such as 3456 fibres, are now available, all within standard duct systems. Extreme-density cables also offer easier and faster installation, as well as faster cable restoration in the event of cuts, reducing downtime.
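The jump from 864 to 1728 and 3456 fibres is easier to picture with a back-of-the-envelope count. The sketch below assumes each campus link uses 8-fibre PSM4 and applies a growth allowance; the link counts and growth factor are illustrative assumptions, not figures from this article.

```python
FIBRES_PER_PSM4_LINK = 8  # PSM4 uses 4 transmit and 4 receive fibres per link


def campus_fibre_demand(campus_links: int, growth_factor: float = 2.0) -> int:
    """Estimate fibres needed on a campus route, with headroom for growth.

    Both arguments are illustrative assumptions, not article figures.
    """
    return int(campus_links * FIBRES_PER_PSM4_LINK * growth_factor)


for links in (108, 216):
    print(f"{links} PSM4 links -> plan for ~{campus_fibre_demand(links)} fibres")
# 108 links -> 1728 fibres; 216 links -> 3456 fibres (with 2x headroom)
```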

Together, these technologies are enabling data center operators to optimise connectivity density, adopt next-generation architectures and plan for future capacity now.

How important is it to implement these technologies now?

Data center managers who fail to embrace available new technologies or provision ahead will find themselves behind the competition very quickly. Quite often, it's less about installing the actual cabling and more about provisioning the space and ducting within your data center.

When you're in the planning and design stages of a data center, it's important to consider your facility's desired life-span and end-point capacity. Technology has repeatedly demonstrated its capacity for rapid change and ever-increasing demand. Telecommunications companies plan ahead, often forecasting "day 1, day 2, day N" demand and then doubling it to address future congestion, and this concept is important for data center operators too.
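That "forecast day N, then double it" rule of thumb can be written as a simple planning calculation. The demand figures in the sketch below are hypothetical and exist only to illustrate the idea.

```python
def provision_target(day_forecasts_gbps, headroom_factor: float = 2.0) -> float:
    """Take the largest forecast ("day N") and double it, per the rule of thumb.

    The forecast values and headroom factor are assumptions for illustration.
    """
    return max(day_forecasts_gbps) * headroom_factor


# Hypothetical bandwidth forecasts for day 1, day 2 and day N (Gbps).
forecasts = [400, 1_600, 6_400]
print(f"Provision for roughly {provision_target(forecasts):.0f} Gbps of interconnect capacity")
```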

Consider the longevity of your cabling when migrating to higher speeds, and ensure your infrastructure can support all network architectures and speeds up to 400G; this will keep your data center comfortably ahead of future demands. If there's no real reason to delay, then don't delay.
