Data centers were once the primary driver of higher Ethernet speeds; today, applications play an ever-increasing role in defining new Ethernet incarnations. The need for Ethernet speeds beyond 100 Gb/s is being driven by exponentially increasing capacity demands across a wide variety of use cases. In response, the industry is moving rapidly to deploy new signaling types to achieve these higher data rates.
In 2013, Ethernet was the dominant traffic seen by backbone networks. Consumers, enterprises, data centers, and mobile usage all contributed to the expanding Ethernet ecosystem. During this time, Facebook hit one billion users (it has since more than doubled), and YouTube and Netflix were putting web traffic pipelines to the test. Work on 100G Ethernet was well underway, with products based on that technology projected to hit the market in 2017 to meet the demand. However, it was clear that more speed and reliability would be necessary to address the predicted explosion of data needs. Work began within the IEEE on a new 400 Gb/s Ethernet data rate.
In addition to speed, scalability also needed to be considered, given that the targeted use case for 400G Ethernet technology was the data center. The number of ports would be in the tens of thousands, and a smaller form factor would mean less power, less complexity, and ultimately lower cost. In 2015, it was decided to build on the 400G work already in progress and include a single-lane 50G option. The 50G speed could be packaged into a QSFP form factor to reach 200G. The variety of CPU architectures and CPUs per system, not to mention the software applications running on these servers, made it hard to fit everything into a “one size fits all” standard. Discussions began between those who wanted a brand-new 400G port and those who wanted to combine many of their older lanes to keep costs down. As a result, two standards - IEEE 802.3bs and IEEE 802.3cd - were drafted, covering 26 unique flavors of Ethernet to give implementers different options for trading off compute power and bandwidth. This variety will drive volume adoption in the long run.
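The arithmetic behind these flavors is simple: aggregate rate equals lane count times per-lane signaling rate. The short Python sketch below illustrates that relationship with a few representative 50G-lane configurations; the list is illustrative only and is not an enumeration of the 26 variants defined in the two standards.

```python
# Illustrative only: aggregate rate = lanes x per-lane signaling rate.
# These example configurations show how 50G lanes scale up; they are not a
# complete list of the port types in IEEE 802.3bs / 802.3cd.
EXAMPLE_CONFIGS = [
    # (description, lanes, per-lane Gb/s)
    ("single 50G lane",              1, 50),
    ("200G in a QSFP-style module",  4, 50),
    ("400G over eight 50G lanes",    8, 50),
]

for name, lanes, per_lane in EXAMPLE_CONFIGS:
    print(f"{name}: {lanes} x {per_lane}G = {lanes * per_lane}G aggregate")
```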
The tradeoffs between the options above led to a foray into a new signaling scheme - four-level Pulse Amplitude Modulation (PAM4). This signaling scheme put pressure on every aspect of the link and forced designers to build better products. When shifting to PAM4 over the same channel, you effectively lose about 10 dB of signal-to-noise ratio by squeezing three eyes into the voltage swing normally reserved for one. Making the standard work means better transmitters, better cables, and better receivers. Silicon and optics companies have been slowly announcing products since 2014 that hint at these capabilities, and module and cable manufacturers have been creating new form factors, such as OSFP and QSFP-DD, that address the scalability and performance requirements. All these enhancements also created the need for new tests and new test equipment.
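The roughly 10 dB figure follows from simple geometry: with the same overall swing, each of the three PAM4 eyes gets one third of the amplitude a single NRZ eye would have. A quick back-of-the-envelope check, assuming an amplitude-limited link and equal noise in both cases:

```python
import math

# Assumption: same total voltage swing for NRZ and PAM4, with equal noise.
# Each of the three PAM4 eyes then gets one third of the single NRZ eye's swing.
nrz_eye_amplitude = 1.0                       # normalized full swing
pam4_eye_amplitude = nrz_eye_amplitude / 3.0  # one of three stacked eyes

penalty_db = 20 * math.log10(nrz_eye_amplitude / pam4_eye_amplitude)
print(f"PAM4 eye-amplitude penalty: {penalty_db:.2f} dB")  # about 9.54 dB
```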
The paradigm shift to four levels meant that test and measurement companies also needed to rethink their measurements, because something as simple as a rise or fall time measurement becomes much more complex when there are three different eye openings and many possible level changes. Test and measurement (T&M) vendors had to create new hardware to generate and receive PAM4 signaling, as well as develop software to perform tests up to 400G. TDECQ (transmitter and dispersion eye closure quaternary) is the prime example of how T&M vendors adapted to the shift. This test emulates a reference receiver, which involves a large setup with many sources of variability and optimization. Nailing down the ranges of that variability has taken years of research and development by the IEEE 802.3 Working Group, implementers, and the T&M companies testing the parts. For these reasons, this necessary measurement will be discussed and disputed for years to come.
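One way to see why even a basic edge measurement gets complicated is simply to count the possible level transitions. The sketch below labels the levels 0 through 3 purely for illustration: NRZ has one rising and one falling edge, while PAM4 has twelve distinct edges spanning three different swing sizes.

```python
from itertools import permutations

# Levels are labeled 0..3 for illustration only; this is not a spec definition.
nrz_levels = [0, 1]
pam4_levels = [0, 1, 2, 3]

def edges(levels):
    """All ordered (from_level, to_level) pairs, i.e., every possible edge."""
    return [(a, b) for a, b in permutations(levels, 2)]

# Group PAM4 edges by swing size to show the three distinct edge amplitudes.
pam4_swings = sorted({abs(a - b) for a, b in edges(pam4_levels)})

print(f"NRZ edges:  {len(edges(nrz_levels))}")      # 2 (one rise, one fall)
print(f"PAM4 edges: {len(edges(pam4_levels))}")     # 12
print(f"Distinct PAM4 swing sizes: {pam4_swings}")  # [1, 2, 3]
```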
Rome wasn't built in a day, and neither were these standards. With IEEE 802.3bs finally concluded and IEEE 802.3cd nearly complete, the same implementers are already planning the next logical step: moving from 50G to 100G per lane. That work is still in its infancy and will likely be in development for years to come. Moving to 100G per lane is on the Ethernet roadmap as the path to the terabit-per-second mark. Meanwhile, people will still be developing 50/400G products and will continue to use and test 25/100G products, proving the long-term scalability and adaptability that Ethernet offers its users.