The determining factor for performance, power, and chip die size in servers, storage appliances, and network switches is the resistive and capacitive nature of interconnections. Copper is resistive, which limits chip speed and slows down signals. It also dissipates heat, which limits how many transistors can sit on a die.
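To make the copper problem concrete, here is a rough back-of-envelope sketch in Python. All component values are illustrative assumptions, not measurements from any real process; the point is only that delay scales with the resistance-capacitance product, and heat scales with capacitance, voltage squared, and toggle rate.

    # Back-of-envelope sketch of why copper wiring limits speed and creates
    # heat. All values are illustrative assumptions, not process data.
    R_WIRE = 200.0     # ohms: assumed total resistance of a long copper trace
    C_WIRE = 200e-15   # farads: assumed total capacitance of the same trace
    V_DD = 1.0         # volts: assumed supply voltage
    F_TOGGLE = 2e9     # hertz: assumed switching rate of the signal

    # The wire cannot switch much faster than its RC time constant.
    rc_delay = R_WIRE * C_WIRE
    print(f"RC delay: {rc_delay * 1e12:.0f} ps")            # -> 40 ps

    # Dynamic power of charging/discharging the wire: P = C * V^2 * f.
    power = C_WIRE * V_DD**2 * F_TOGGLE
    print(f"Power per wire: {power * 1e6:.0f} microwatts")  # -> 400 uW

Multiply that last figure by thousands of signal wires and the heat problem becomes obvious.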
Attempts to find a room-temperature superconductor haven't been very successful yet, though the temperature threshold has risen a great deal. A cryogenically cooled supercomputer could potentially be built within five years, but that's some way from production systems. Despite these problems, chips have grown in complexity because the drive current and capacitance of the transistors have fallen with each iteration of die shrinking. That on-die progress makes the chip-to-chip interconnect problem all the worse, since the connections between chips haven't scaled the same way.
Complex chips need lots of connections to do useful things, and the electrical characteristics of chip-to-chip connections demand relatively high-power signals. This forces chip designers to build very large driver transistors into each interconnection, consuming a good deal of die area and power.
As these interconnects get faster to keep up with workloads, the power budget continues to grow, effectively placing limits on designs. We've standardized on differential signaling for most connections, which allows lower-power operation at speeds up to 25 Gbit/s, but we are running into a speed ceiling. The result is a search for alternatives.
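A simple energy-per-bit calculation shows why the power budget climbs with link speed. The pJ/bit figures below are assumed, illustrative values, not vendor specifications; the point is that link power is bit rate times energy per bit, so doubling speed doubles power unless the energy cost per bit falls.

    # Sketch: power of a multi-lane backplane as lane speed rises.
    # Energy-per-bit figures are assumed for illustration only.
    def link_power_watts(gbit_per_s, pj_per_bit):
        """Power of one serial link: bit rate times energy per bit."""
        return gbit_per_s * 1e9 * pj_per_bit * 1e-12

    LANES = 64                 # assumed lane count across a backplane
    COPPER_PJ_PER_BIT = 10.0   # assumed energy cost of a copper SerDes lane
    OPTICAL_PJ_PER_BIT = 2.0   # assumed energy cost of a photonic lane

    for rate in (10, 25, 50):  # Gbit/s per lane
        copper = LANES * link_power_watts(rate, COPPER_PJ_PER_BIT)
        optical = LANES * link_power_watts(rate, OPTICAL_PJ_PER_BIT)
        print(f"{rate:>2} Gbit/s x {LANES} lanes: "
              f"copper ~{copper:.1f} W, optical ~{optical:.1f} W")

At 50 Gbit/s per lane, the assumed copper backplane burns 32 W on signaling alone, which is exactly the kind of budget pressure that drives the search for alternatives.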
This is where photonics comes into the picture. Rather than using electrical connections, the idea is to send data between chips using beams of light. We already do that, at a much coarser level, with fiber optic links, and that experience has taught us many of the characteristics needed for efficient interchip connectivity.
However, there are a number of challenges in implementing photonic networks. One is how to replace the traditional multi-layer circuit board with a multi-layered fiber or waveguide system. Handling the sharp bends seen in board layouts is another. But a major issue is the poor compatibility of silicon substrates with light emitters: silicon's indirect bandgap makes it an inefficient light source, so lasers or light-emitting diodes (LEDs) must be made from other materials and integrated onto the silicon.
Still, silicon photonics has been a focal point of the effort to harness optical interconnects, with good success. Intel has demonstrated 100 Gbit/s connections, and IBM has announced manufacturable devices. Work on waveguides has proceeded, too, and multiple-wavelength connections are promising. A broader European effort, the Phoxtrot project, aims to resolve the packaging issues of photonic interconnects, fostering work on motherboard waveguide systems and on embedded micro-mirrors to solve the corner-turning problem.
Taken together, this suggests we are on the verge of moving photonics from the lab to the datacenter. Most likely, the first products will be backplane interconnect systems. Mesh-type connections such as we see in blade servers are an obvious candidate, especially as the short distances involved lower the technology threshold for devices and waveguides.
After blade server backplanes, we’ll see rack uplinks migrating to fast photonic links. Demonstrations of terabit-plus links indicate the technology direction, and the demand for ever-growing bandwidth dictates the need.
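Terabit-class links come from wavelength-division multiplexing (WDM): each color of light is an independent channel on the same fiber or waveguide, so aggregate bandwidth scales with the channel count. A minimal sketch of the arithmetic follows; the channel counts and per-channel rate are illustrative, not any vendor's actual configuration.

    # Sketch of WDM scaling: aggregate bandwidth = wavelengths x channel rate.
    # Channel counts and per-channel rate are illustrative assumptions.
    def wdm_bandwidth_gbps(wavelengths, gbps_per_channel):
        """Aggregate bandwidth of a WDM link in Gbit/s."""
        return wavelengths * gbps_per_channel

    for n_channels in (4, 16, 64):
        total = wdm_bandwidth_gbps(n_channels, 25.0)
        print(f"{n_channels:>2} wavelengths x 25 Gbit/s = {total:>6.0f} Gbit/s")
    # 4 channels reach 100 Gbit/s; 64 channels pass the terabit mark.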
Inter-chip photonic connections will likely come from the main CPU vendors first, since they all need to save power and die space while speeding up interconnects. Efforts such as the Hybrid Memory Cube architecture will deliver a major increase in bandwidth over copper-connected modules, so the pressure on CPU and system vendors to go photonic may be held off for a few years. With technical approaches evolving rapidly, that delay might actually allow a better set of solutions to mature.
Photonic approaches offer great future potential. Multi-wavelength connections will multiply link bandwidth dramatically. Graphene, a form of carbon that comes in sheets just one atom thick, may give us faster chips with much lower power, allowing 3D stacking in computer dies, with better photonic devices providing the needed connection speeds.
Production of devices with through-silicon vias (TSVs) is leading to piggyback chip designs with the optical modules on a mezzanine above the primary chip. This approach could allow greater chip functionality and more powerful or faster photonics. There is also a lot of research into direct beam switching, which could make very dense switches possible.
One thing is clear: While photonics hasn’t arrived yet, it’s well on its way, and it will change computing. It’s also clear that we are at the very start of a technical revolution with consequences that we can’t even guess at yet.