Monday, June 25, 2012
Low-latency iWARP networks becoming necessary in data centers
Cloud computing and virtualization have had an impact on many aspects of the data center. Historically, most data center traffic has been north-south, flowing from servers out to end users, with only a small portion traveling east-west, between servers. However, the rise of the cloud has created a need for much more east-west traffic, leading to a major shift in data center networking.
According to a recent EE Times report, Internet Wide Area RDMA Protocol (iWARP) networks have emerged as a popular networking option to deal with the rise in east-west data center traffic. The technology is an optimal fit for emerging data center requirements because it offloads network processing from the server's CPU and delivers the low latency needed for east-west data transmissions.
The news source explained that virtualization packs multiple virtual machines into a single physical server, pushing all of their data through a single network port. This means the infrastructure needs to be designed to handle much greater bandwidth requirements. However, simply adding more data throughput is not enough to address the unique challenges of east-west traffic. Instead, network adaptors are needed to control the flow of traffic and segregate data based on application priority, so that the most important information gets through the network without latency.
To provide this level of intelligence within the network, remote direct memory access (RDMA) technologies are necessary, the report said. Currently, iWARP is emerging as the most popular protocol to meet this need because it provides RDMA capabilities while running on a traditional TCP/IP infrastructure. This allows for lower latency and better performance than traditional Ethernet architectures.
While technologies like iWARP are handy solutions for unique demands in the data center, even advanced systems that optimize traffic flows cannot completely overcome a lack of bandwidth. Because of this, it is vital that organizations consider their data center networking infrastructure, especially their underlying cabling systems, and ensure they provide the foundation needed to support ongoing requirements for cloud computing and virtual server environments. Without the right technologies in the underlying infrastructure, the benefits of low-latency network solutions can be limited by inadequacies in the cabling, switching and adaptor architectures.
Perle’s wide range of 1 to 48 port Console Servers provides data center managers and network administrators with secure remote management of any device with a serial console port. Plus, they are the only truly fault tolerant Console Servers on the market with the advanced security functionality needed to easily perform secure remote data center management and out-of-band management of IT assets from anywhere in the world.