Thursday, June 30, 2016
Dealing with hyperscale - building the network for mega data centers
Hyperscale configurations are becoming more common as cloud and colocation services rise and the mega data center movement takes hold. These data-dense, large-scale configurations create new network challenges that push traditional connectivity boundaries. The ability to freely mix and match cabling formats, particularly between fiber-optic and copper infrastructure, is central to addressing the challenges of mega data centers.
Looking at the dynamics of the mega data center movement
Just a few years ago, much of the conversation in the data center industry focused on a few key topics: Moving strategically into the cloud, consolidating the infrastructure that remains and taking advantage of server virtualization to reduce the hardware footprint of facilities. Since then, big data has taken hold and the cloud has become such a powerful tool that many businesses are blending public and private cloud services to deliver cloud-like flexibility and scalability across the entire configuration.
This shift has created a fairly straightforward dynamic that is staggering in scale - businesses are shrinking their internal data centers to minimal levels and establishing hosted private clouds or public cloud subscriptions for many of their other needs. Enterprise-class organizations that depend heavily on in-house IT systems, meanwhile, are increasingly forced to adopt cloud-like internal configurations, leading to large, incredibly complex data center environments.
These data center developments have created a situation where cloud and colocation providers, not to mention major tech giants, increasingly need to maintain extremely large data centers to meet customer demands. This has fueled a rise of mega data centers - facilities where:
- Power demands skyrocket.
- Data moves at large scale both between systems in the facility and out to users.
- System density is extremely high, as creating value from every square foot of facility space is key.
- Virtualization across the entire configuration - not just servers - leads to higher data densities.
All of these factors add up to an incredibly powerful data center where data moves in diverse directions at breakneck pace. Network transformation is critical in hyperscale data centers.
3 network considerations for the mega data center
Supporting hyperscale configurations depends on having a network that can operate flexibly and deal with data workflows that are incredibly distinct from traditional layered network models. Three essential issues to keep in mind when supporting this infrastructure model are:
1. Aggregate networks
Many organizations have run into a situation where they have high-performance network links interconnecting different parts of their facility or providing a channel for data to move to external locations. In many cases, these are 100 Gbps connections, and fiber-optic cabling is often the easiest format to hit that performance mark, though copper is still an option. The problem is that there is a huge gap between 10 Gbps and 100 Gbps, and most organizations are stuck dealing with that chasm.
There has been significant discussion around new standards for 25 Gbps and 50 Gbps Ethernet, and the potential to simplify aggregate networks through that technology is key. These new standards are on the cusp of ratification, but they aren't settled yet. Even once they are, the cost of deploying equipment built on them could be high. In the meantime, being able to easily interconnect 10 Gbps copper links with 100 Gbps fiber is critical, and the media converters used there could still pay off as organizations implement 25 Gbps Ethernet. Don't let current standards limitations hold back data in your aggregate network. Media converters let you adopt solutions now without hampering your flexibility later.
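To see why 25 Gbps Ethernet simplifies aggregation, consider the lane math. A minimal sketch (the lane counts reflect common implementations of the era, e.g. 100GbE built from four 25 Gbps lanes and 40GbE from four 10 Gbps lanes; the helper name is illustrative, not from the article):

```python
def links_per_uplink(server_rate_gbps, uplink_rate_gbps=100):
    """How many server-facing links fully subscribe one aggregate uplink."""
    return uplink_rate_gbps // server_rate_gbps

# With 10G at the server edge, ten links must be aggregated per 100G uplink.
print(links_per_uplink(10))   # 10
# With 25G at the edge, only four links are needed - matching 100GbE's 4-lane design.
print(links_per_uplink(25))   # 4
```

The cleaner 4:1 ratio is part of why 25G generated so much interest: it maps directly onto the lane structure of 100 Gbps links instead of forcing awkward 10:1 aggregation.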
2. Flattened architectures
The need to handle traffic between systems and out to users simultaneously creates a situation where network architectures must be revised. Historically, data centers have focused on moving data out, not within themselves. As this is no longer the case, the traditional three-layer topology doesn't work as well, and many organizations are exploring new methodologies to get the job done. The need for flexibility in these configurations creates a situation where being able to use fiber and copper strategically based on changing circumstances at any time is incredibly valuable. Flattened networks can create flexibility, but only if the underlying cabling configuration provides enough bandwidth to create that wiggle room.
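One common way to reason about a flattened (leaf-spine) design is the oversubscription ratio: host-facing bandwidth versus fabric-facing bandwidth on each leaf switch. A minimal sketch, with assumed port counts and speeds chosen only for illustration:

```python
def oversubscription(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth on a leaf switch."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Example leaf: 48 x 10G copper server ports, 4 x 100G fiber uplinks.
ratio = oversubscription(48, 10, 4, 100)
print(ratio)  # 1.2 - close to non-blocking for east-west traffic
```

A ratio near 1:1 is what gives a flattened network its "wiggle room" for heavy east-west traffic; if the uplink cabling can't supply that bandwidth, the topology alone doesn't help.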
Hyperscale configurations are creating a wide range of new network challenges.
3. Physical space challenges
The sheer volume of systems in hyperscale configurations makes it incredibly difficult to effectively run cabling out to systems. Cable ducts and trays can fill quickly when trying to roll bundles of copper cabling out to hardware supporting high-performance workloads. Being able to strategically use fiber lets you replace groups of copper cabling with just one or two optical links. This saves important physical space, giving you more flexibility when it comes to system architectures and airflow design.
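The space savings from consolidating copper runs onto fiber can be estimated from cable cross-sections. A rough sketch, assuming typical outer diameters of about 7.5 mm for Cat6A copper and 3 mm for a duplex fiber cable (both figures are assumptions for illustration, not from the article):

```python
import math

def bundle_area_mm2(cable_od_mm, count):
    """Approximate combined cross-sectional area of a bundle of round cables."""
    return count * math.pi * (cable_od_mm / 2) ** 2

copper = bundle_area_mm2(7.5, 24)  # 24 copper runs to a rack
fiber = bundle_area_mm2(3.0, 2)    # consolidated onto 2 fiber links
print(round(copper / fiber, 1))    # 75.0 - the copper bundle needs ~75x the tray area
```

Even with generous allowances for connectors and bend radius, the difference is dramatic, which is why swapping copper bundles for a handful of fiber links frees up so much duct and tray capacity.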
Hyperscale data center configurations are creating new challenges in how data moves through facilities and out to users. Flexibility within the network is critical in meeting these demands, and the ability to blend fiber and copper into an adaptable architecture is essential.
Perle has an extensive range of Managed and Unmanaged Fiber Media Converters to extend copper-based Ethernet equipment over a fiber-optic link, supporting multimode-to-multimode and multimode-to-single-mode conversion at distances up to 160 km.