
Using 10GBase-T on Nexus 9000 switches

Way back at the start of my IT career in the mid-1990s, I bought a book (remember those?) about the hot new thing in computer networking: something called Fast Ethernet. Back then, we thought the dizzying speed of 100Mbps would be more than we’d ever need. Time marches on, however, and we’ve now reached 10Gbps over copper cable.

There are some challenges with this, however. Leaving aside the need for Cat6a or Cat7 cabling over any meaningful distance, the transceivers draw a fair bit of power to shoehorn that much data down the wire.

This presents a problem when you want to use them on standard top-of-rack switches to hook up servers in a datacentre. For example, the workhorse Cisco Nexus 93180YC-FX imposes limits on how many 10GBase-T transceivers can be used, where they can go, and what can sit next to them. The full detail is hidden away in the Hardware Installation Guide here, but briefly:

  • only ports 1, 4, 5, 8, 9, 12, 13, 16, 37, 40, 41, 44, 45 and 48 will support a 10GBase-T transceiver at all
  • if you install a 10GBase-T transceiver in any of these ports, the adjacent ports (above/below/left/right) will only support a passive copper DAC (see the sketch below the panel diagram)
[Panel diagram: Cisco 93180YC-FX front panel, odd ports 1-53 along the top row and even ports 2-54 beneath, highlighting the ports with the limitations above]
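
To make the adjacency rule concrete: on this faceplate, "adjacent" means the ports to the left, right, above and below in the two-row grid. Here’s a minimal Python sketch of that rule. The function name and the odd-top/even-bottom layout are my assumptions (read off the panel diagram), not anything Cisco publishes.

```python
# Minimal sketch, assuming the odd-top/even-bottom layout shown in the
# panel diagram. adjacent_ports is a made-up name, not a Cisco tool.
def adjacent_ports(port: int) -> set[int]:
    """Ports physically adjacent (left/right and above/below) to `port`
    on a 2 x 27 faceplate: odd ports 1-53 on top, even ports 2-54 beneath."""
    if not 1 <= port <= 54:
        raise ValueError("port must be between 1 and 54")
    neighbours = {port - 2, port + 2}                   # same row, left and right
    neighbours.add(port + 1 if port % 2 else port - 1)  # port in the other row
    return {p for p in neighbours if 1 <= p <= 54}
```

So adjacent_ports(5) gives {3, 6, 7}, which matches the worked example that follows.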

So, for example, if you install a 10GBase-T transceiver (SFP-10G-T-X, which draws 2.5W) in port 5, ports 3, 6 and 7 can only support a transceiver drawing at most 100mW. In practice that means passive DACs (10G or 25G, up to 5 metres only) and nothing else: even a 1000Base-T transceiver like the GLC-TE won’t work.

Also, note that ports 17-36 do not support 10GBase-T transceivers, full stop. Tough luck if you are running out of ports. (Ports 49-54 don’t either, but then you’d probably be using those QSFP ports for uplinks to your end-of-row or spine switches anyway.)
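
Combining the adjacent_ports helper above with the supported-port list from the bullets gives a quick sanity check for a planned layout. Again, this is only a sketch of the rule as described here, not a Cisco tool; check_plan and its arguments are made-up names.

```python
# Ports that accept a 10GBase-T transceiver at all (from the bullet list above).
TEN_GBASE_T_PORTS = {1, 4, 5, 8, 9, 12, 13, 16, 37, 40, 41, 44, 45, 48}

def check_plan(ten_gbase_t: set[int], other_optics: set[int]) -> list[str]:
    """Flag conflicts in a planned port layout.

    ten_gbase_t  -- ports you want to fit with 10GBase-T transceivers
    other_optics -- ports carrying anything that is NOT a passive DAC
                    (SR optics, AOCs, even a 1000Base-T GLC-TE)
    """
    problems = []
    for port in sorted(ten_gbase_t):
        if port not in TEN_GBASE_T_PORTS:
            problems.append(f"port {port}: no 10GBase-T support at all")
        for neighbour in sorted(adjacent_ports(port) & other_optics):
            problems.append(
                f"port {neighbour}: adjacent to 10GBase-T in port {port}, "
                "so passive copper DAC only"
            )
    return problems

# The example from the text: 10GBase-T in port 5 rules out anything
# other than a passive DAC in ports 3, 6 and 7.
print(check_plan({5}, {6}))
# -> ['port 6: adjacent to 10GBase-T in port 5, so passive copper DAC only']
```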

I’m guessing that the restrictions here are more to do with thermal management than power delivery. I find it hard to believe the switch couldn’t supply enough power for 48 10GBase-T transceivers, but (especially if the switch is configured for port-side intake airflow) the additional thermal load around the ASICs could be a problem.

So, the question I have is: why would you bother? A 10GBase-T transceiver costs about the same as the equivalent 10G-SR multimode optic, and both are significantly more expensive than a DAC (twinax) or AOC interconnect. The only reason I can see is if you’d bought servers or other hardware that had 10GBase-T ports only, rather than SFP slots.

So… maybe don’t do that, eh?