We began with standard throughput tests at various packet sizes. The smaller the packet, the harder the device under test had to work: it needed to inspect each header (for an Ethernet address at Layer 2, for an IP address at Layer 3) and then look up those addresses in their respective tables. Because every packet carries a header, and more packets fit in the same amount of time when they're small, shrinking the packet size drove up the number of headers requiring inspection per second. Both boxes handled these tests at both layers without a hiccup. As expected, they topped out at roughly 8 Gbps because of the bottleneck between each card and the backplane.
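To put numbers on that relationship, here's a quick back-of-the-envelope calculation in Python. It assumes standard Ethernet framing overhead (an 8-byte preamble plus a 12-byte interframe gap per frame) and the 8-Gbps ceiling we just mentioned; everything else is arithmetic.

```python
# How many frames (and thus header lookups) per second a link must
# sustain at a given frame size, assuming standard Ethernet overhead:
# 8 bytes of preamble plus a 12-byte interframe gap per frame.

LINE_RATE_BPS = 8e9      # the ~8-Gbps card-to-backplane ceiling
OVERHEAD_BYTES = 8 + 12  # preamble + interframe gap

for frame_bytes in (64, 128, 256, 512, 1024, 1518):
    bits_on_wire = (frame_bytes + OVERHEAD_BYTES) * 8
    fps = LINE_RATE_BPS / bits_on_wire
    print(f"{frame_bytes:5d}-byte frames: {fps / 1e6:5.2f} million lookups/sec")
```

At 64 bytes the device has to perform roughly 12 million lookups per second; at 1,518 bytes, well under a million. That difference is exactly the stress the small-packet runs apply.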
We ratcheted the test up by turning on a 100-line access list on each box. We set up the list so that every IP packet forced a lookup on all 100 lines, with the permit sitting on the very last line. We also cranked up the difficulty by configuring the Ixia box to cycle through 10,000 unique IP addresses as it ran the throughput tests. This challenges the device under test, which will try to cache IP flows to make its lookups more efficient. We were impressed that, in spite of this stress, both the Extreme and Foundry boxes nailed the test, maintaining exactly the same throughput they achieved with access lists turned off.
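To make the access-list test concrete, here's a simplified Python model of the worst case we configured. The addresses and list contents here are hypothetical, and this is not either vendor's CLI syntax; the point is only that every packet walks all 100 lines before matching, while the 10,000 rotating sources defeat any per-flow cache keyed on the IP header.

```python
# Simplified model of the worst-case access-list test: the first 99
# lines never match, so every packet walks all 100 lines before
# hitting the permit on line 100. The 10,000 rotating source
# addresses defeat per-flow caching. All addresses are hypothetical.
import ipaddress
import itertools

acl = [("deny", ipaddress.ip_network(f"192.0.2.{i}/32")) for i in range(99)]
acl.append(("permit", ipaddress.ip_network("0.0.0.0/0")))  # line 100

# 10,000 unique sources drawn from a made-up test range.
sources = [ipaddress.ip_address("10.0.0.0") + i for i in range(10_000)]

lookups = 0
for src in itertools.islice(itertools.cycle(sources), 50_000):
    for action, net in acl:   # linear walk, as in the test design
        lookups += 1
        if src in net:
            break             # only line 100 ever matches
print(f"{lookups / 50_000:.0f} line evaluations per packet")  # -> 100
```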
We then turned on QoS and sent alternating high- and low-priority packets, with the ToS (Type of Service)/DiffServ (Differentiated Services) bits set to match. We wanted to see how well the devices handled having their bandwidth oversubscribed. To accomplish this, we added an extra 1 gigabit of input, so the ports capable of 8 gigabits of output had to deal with the normal maximum traffic plus an extra gigabit's worth. We made sure there was always enough bandwidth to handle the high-priority traffic by itself, then checked whether all the high-priority traffic arrived. We also varied the number of high- and low-priority packets sent at one time. We started by alternating between three high- and three low-priority packets, the smallest burst the Ixia device could generate, and worked our way up to larger bursts of each kind of traffic. Both the BlackDiamond and the BigIron did fine ... until we started hitting 500-packet bursts. At that point, the Foundry box began dropping some high-priority packets. By the time we got to 10,000-packet bursts, the BigIron didn't appear to be giving any preference to the high-priority packets at all. Foundry said it ran a similar test (with better results) using a Spirent Communications SmartBits box, but with beta software. We suspect the SmartBits test was sending different traffic patterns.
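We can't see inside either box's queuing hardware, but a toy strict-priority model in Python illustrates one plausible mechanism for what we saw. Every parameter here is hypothetical, chosen only to show the shape of the failure: nine units of traffic arrive per tick into an eight-unit egress, split into alternating high- and low-priority bursts, and the high-priority queue has a finite depth.

```python
# Toy strict-priority queue under 9-into-8 oversubscription.
# hi_buf is a hypothetical buffer depth; the real hardware's
# buffering is unknown to us. Short alternating bursts never fill
# the high-priority queue, but a long enough burst overflows it,
# and high-priority packets start to drop.
from collections import deque

def run(burst, hi_buf=48, ticks=1000, arrive=9, drain=8):
    hi_q, lo_q = deque(), deque()
    hi_in = hi_dropped = 0
    pattern = ["hi"] * burst + ["lo"] * burst
    i = 0
    for _ in range(ticks):
        for _ in range(arrive):                # oversubscribed input
            pkt = pattern[i % len(pattern)]; i += 1
            if pkt == "hi":
                hi_in += 1
                if len(hi_q) < hi_buf:
                    hi_q.append(pkt)
                else:
                    hi_dropped += 1            # hi buffer overflow
            else:
                lo_q.append(pkt)
        for _ in range(drain):                 # strict priority out
            if hi_q:
                hi_q.popleft()
            elif lo_q:
                lo_q.popleft()
    return hi_dropped, hi_in

for burst in (3, 50, 500):
    dropped, total = run(burst)
    print(f"burst {burst:4d}: {dropped}/{total} high-priority dropped")
```

In the toy model, bursts of three and 50 sail through untouched, while 500-packet bursts start losing a few percent of the high-priority traffic. Whether the BigIron's actual drops came from buffer exhaustion, scheduler design or something else entirely, we can't say; the model just shows that long same-priority bursts are qualitatively harder than short alternating ones.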
It's worth mentioning that the Foundry box fared better in the QoS tests when we exceeded the 8-gigabit capacity on the input ports; the Ixia tester could easily generate a full 10 Gbps of traffic. Because the BigIron applies QoS to incoming traffic, it outperformed the BlackDiamond here. The BlackDiamond doesn't prioritize traffic as it enters the interface, though Extreme pointed out that it would have done better with flow control turned on. In reality, it's unlikely either box would be connected to another device capable of generating a full 10 Gbps of traffic, though each vendor will likely release next-generation equipment that doesn't have the 8-gigabit constraint. We also discovered that the Foundry BigIron always gives higher priority to traffic from 1-gigabit ports than to traffic from 10-gigabit ports. We'd prefer more flexibility.
Interoperability Tests
While it's all well and good to adhere to a standard, it's not worth much if you can't play nice with other vendors' devices in the real world. With this in mind, we plugged the Foundry and Extreme boxes directly into each other via one 10-gigabit port each. We used the remaining 10-gigabit ports to connect each box back to the Ixia tester. We then ran throughput tests for all the packet sizes and found no compromise in performance.