By: Chris A. Ciufo, Editor-in-Chief, Embedded, Extension Media Publishing
Outside the box USB is ubiquitous; inside the box? It’s PCI Express. Here’s why it’s essential to bridge PCIe to all kinds of digital channels like USB.
Before there were digital devices, the world’s most common electrical interconnect was the AC power cord. In North America, the 2- and 3-prong analog cord was (and remains) attached to perhaps billions of devices¹. In the digital realm, USB is the most common external interconnect. However, inside the box, PCI Express reigns supreme between ICs. Bridging the two digital channels is a common desire. Here’s what you need to know to make it happen in today’s embedded designs.
PCI Express Bridge or Switch?
In embedded systems, PCI Express Gen 2 has become the most common interoperable, on-board way to add peripherals such as SATA ports, CODECs, GPUs, WiFi chipsets, USB hubs and even legacy peripherals like UARTs. But new CPUs, SoCs, MCUs and system controllers often lack sufficient PCI Express (PCIe) ports for all the peripheral devices designers need. Plus, as IC geometries shrink, system controllers also have lower drive capability per PCIe port, and signals degrade rather quickly, eating into PCIe margin and potentially slowing channel speeds.
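As a sanity check on those Gen 2 numbers: the raw per-lane bandwidth is rarely the bottleneck; port count is. Here is a minimal sketch of the arithmetic (the 5 GT/s line rate and 8b/10b encoding are PCIe 2.0 spec figures; the helper function itself is ours, for illustration only):

```python
# Back-of-the-envelope PCIe Gen 2 bandwidth, per lane.
# PCIe 2.0 runs at 5 GT/s with 8b/10b encoding, i.e. 8 payload
# bits for every 10 bits on the wire.
LINE_RATE_GTPS = 5.0  # Gen 2 line rate per lane, gigatransfers/s

def lane_bandwidth_mbps(lanes=1):
    """Usable bandwidth in MB/s for a link of the given width,
    after 8b/10b encoding overhead."""
    usable_gbps = LINE_RATE_GTPS * lanes * 8 / 10  # Gbit/s of payload
    return usable_gbps * 1000 / 8                  # Gbit/s -> MB/s

print(lane_bandwidth_mbps(1))  # 500.0 MB/s per x1 lane
print(lane_bandwidth_mbps(4))  # 2000.0 MB/s for a x4 link
```

Even a single x1 lane comfortably outruns a SATA-II disk or a USB 2.0 hub; the problem is simply not having enough lanes to go around.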
The solution to these host-controller problems is a PCIe switch to increase fanout by adding two, three, or even eight additional PCIe ports with ample per-lane current-sourcing capability. And since USB is common outside the box, bridging PCIe to USB with some embedded intelligence—we’ll call it a “swidge”—might just be an embedded designer’s killer app. But what’s a bridge, and how does it differ from a switch?
No Bridge Too Far
In network-speak, a “bridge” has a specific meaning: a device that joins two or more LAN segments at the two lowest layers of the OSI reference model—Physical and Data Link (Figure 1). Bridges can be used to subdivide large networks into smaller segments, or to join separate segments to facilitate communications. The terminology gets confusing because “switch” is also used to denote a bridging capability; however, a switch often has multiple ports while a bridge typically contains only two (segment 1 and segment 2). Think of a switch as a “multi-port bridge”.
Figure 1: OSI reference model showing a bridge connecting two LAN segments. A bridge joins the PHY and MAC layers (layers 1 and 2). (Courtesy: Wikipedia and Wiki Commons.)
Bridges may or may not have intelligence: a simple bridge allows all traffic to pass between its ports, while a smarter one inspects frame-header information to decide whether a frame should pass. Bridges come in various types, as shown in Table 1. While the original network bridge had the same kind of media on both sides (Ethernet), the advent of PCI, PCI Express and USB has created a need to bridge multiple channel types.
For example, a bridge that passes one PCI Express port’s data to a single USB 2.0 port could be considered a simple transparent bridge that makes an electrical conversion (from PCIe to USB) and passes the data payload between ports. Alternatively, the bridge might include intelligence such that only USB-destined data are allowed to pass while other data present at the PCIe port are blocked.
Table 1: Different kinds of traditional network bridges.
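The transparent-versus-intelligent distinction above can be modeled in a few lines of code. This is purely an illustrative sketch, not any vendor’s silicon or firmware: the `Frame` type and its `dest` tag are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    dest: str      # hypothetical destination tag, e.g. "usb" or "pcie"
    payload: bytes

def transparent_bridge(frames):
    """Transparent bridge: everything passes; the far port sees all traffic."""
    return list(frames)

def intelligent_bridge(frames, allowed_dest="usb"):
    """Intelligent bridge: inspect each header and forward only
    frames actually bound for the far side."""
    return [f for f in frames if f.dest == allowed_dest]

traffic = [Frame("usb", b"keyboard"), Frame("pcie", b"gpu"), Frame("usb", b"camera")]
print(len(transparent_bridge(traffic)))  # 3: all frames pass
print(len(intelligent_bridge(traffic)))  # 2: only USB-destined frames pass
```

The real win of the intelligent variant is exactly what the filter shows: non-USB traffic never wastes cycles (or bus bandwidth) on the USB side.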
Similarly, switches can also bridge multiple ports and media flavors (such as PCI to PCI Express), or fan a single x4 PCIe link out to multiple x1 lanes—and all permutations thereof. Some examples of how PCI Express switches can be used as lane changers are shown in Figure 2.
Figure 2: Examples of PCI Express “lane changer” switches. (Courtesy: Pericom Semiconductor.)
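The lane-changer permutations in Figure 2 amount to partitioning a link’s lanes into allowed widths. Here is a hypothetical sketch that enumerates the possibilities for a four-lane device (the function and its rules are ours for illustration, not a Pericom API):

```python
def lane_configs(total=4, widths=(4, 2, 1)):
    """Enumerate the ways `total` lanes can be partitioned into links
    of the allowed widths. Widths are kept non-increasing so each
    configuration appears only once."""
    if total == 0:
        return [[]]
    configs = []
    for w in widths:
        if w <= total:
            # Recurse with widths no wider than the one just chosen.
            narrower = tuple(x for x in widths if x <= w)
            for rest in lane_configs(total - w, narrower):
                configs.append([w] + rest)
    return configs

print(lane_configs(4))  # [[4], [2, 2], [2, 1, 1], [1, 1, 1, 1]]
```

Those four partitions correspond to the kinds of x4-to-x1 rearrangements a lane-changer switch performs in hardware.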
Any Port in a Storm?
As previously mentioned, consumer embedded devices use USB cables on the outside, while inside those same embedded boxes PCI Express is the preferred communications channel. Just about any standalone peripheral a system designer could want is available with a PCIe interface. Even esoteric peripherals—such as 4K complex FFT, range-finding, or OFDM algorithm IP blocks—usually come with a PCIe 2.0 interface.
Unfortunately, modern device/host controllers are painfully short on PCIe ports. If a designer chooses an Intel or AMD CPU, they’re in good shape. A 4th Gen Intel Core i7 with an Intel 8 Series Chipset has six PCIe 2.0 ports spread across 12 lanes, which is plenty for most systems. Similarly, an AMD A10 APU has four PCIe lanes, configurable as one x4 port or four x1 ports. But these are desktop/laptop/server processors, and they’re not so common in embedded designs.
AMD’s new G-Series SoC for embedded is an APU with a boatload of peripherals, but it’s got just one PCIe Gen 2 port (x4). As for Intel’s new Bay Trail-based Atom processors running the latest red-hot laptop/tablet 2-in-1s, the E3800 versions contain two to four PCIe Gen 2 ports.
Similarly, what about a Qualcomm Snapdragon 800, an Nvidia Tegra 4, or even the new Nvidia K1? Datasheets on these devices are closely held, for customers only, but the developer references I found point to at best one PCIe port in these embedded SoCs/GPUs. ARM-based Freescale processors such as the i.MX6, popular in set-top boxes from Comcast and others, have one lone PCIe 2.0 port (Figure 3).
What’s a designer to do if more PCIe ports are needed? A PCIe switch solves the one-to-many dilemma.
Figure 3: Freescale i.MX ARM-based CPU is loaded with peripheral I/O, yet has only one PCIe 2.0 port. (Courtesy: Freescale Semiconductor.)
The answer to the fanout dilemma is to use either a PCIe-to-PCIe switch (one to many), or a PCIe-to-xxx bridge or switch. For instance, if the peripheral uses USB, then a PCIe to USB bridge or switch is the answer. (Recall: the bridge has two ports; the switch has many.) Add in a ReDriver at the Tx or Rx end, and signal integrity problems over long traces and connectors all but disappear. Switches from companies like Pericom come in many flavors, from simple lane switches that are essentially PCIe muxes, to packet switches with intelligent routing functions as described in Table 1.
One simple example of a Pericom PCIe switch is the 9X2G303EL. This PCIe 2.0 three-port/three-lane switch has one x1 upstream port and two x1 downstream ports, and would add two ports to the i.MX6 shown in Figure 3. Multiple 9X2G303ELs can be ganged together to add more PCIe ports or for flexible PCI Express lane configurations. This particular device, aimed at the low-power consumer devices mentioned earlier, boasts some advanced power-saving modes and consumes under 0.7W.
Examples of a switch combined with a bridge are Pericom’s PI7C9X442SLB/PI7C9X440SLB devices. The so-called “swidge” ICs—switch and bridge—allow system architects to fan out PCIe ports while also adding USB 2.0 ports without having to gang together multiple ICs. That is: one IC does it all (Figure 4).
Four integrated high-speed USB channels bridge to a single PCIe port. In full-duplex mode, the IC operates at up to 2.5 Gbps and supports intelligent bridging (Table 1) via several programmable modes.
Figure 4: This is an example of a “swidge”—a bridge and switch combination. Here one PCIe x1 lane fans out to two, plus four USB 2.0 channels. This device saves real estate and increases the PCI Express fanout on an embedded CPU/GPU/SoC.
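A quick budget check suggests why a single PCIe lane can plausibly feed four USB 2.0 channels. The sketch below uses nominal line rates (480 Mbps for high-speed USB 2.0 and the 2.5 Gbps per-direction figure cited above); real throughput is lower once protocol overhead is subtracted:

```python
# Aggregate USB 2.0 demand vs. one PCIe lane, using nominal line rates.
USB2_HS_MBPS = 480      # USB 2.0 high-speed line rate, Mbit/s
PCIE_LANE_GBPS = 2.5    # per-direction lane rate cited for the swidge
USB_CHANNELS = 4        # channels on the swidge in Figure 4

usb_aggregate_gbps = USB_CHANNELS * USB2_HS_MBPS / 1000
print(usb_aggregate_gbps)                      # 1.92 Gbit/s
print(usb_aggregate_gbps < PCIE_LANE_GBPS)     # True: the lane has headroom
```

Even with all four USB channels saturated, the lane retains roughly 0.58 Gbps of nominal headroom for the bridged traffic’s framing overhead.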
ReDrivers Open the Eye
In the examples above, PCIe bridges and switches are used to add PCIe ports to processors to increase fanout, to put non-PCIe peripherals onto a PCIe channel via a conversion bridge (Figure 4), or to swap the configuration of PCIe lanes (Figure 2). But as the notional set-top box system diagram in Figure 5 shows, there’s often a notable electrical distance between the signal’s source and destination. And modern CPUs don’t have the electrical capability to drive long signal traces.
Figure 5: Typical embedded set-top box example showing a CPU and multiple peripherals. Long traces between peripherals and the CPU degrade signals. (Courtesy: Pericom Semiconductor.)
This diagram shows a single PCIe channel on the host CPU that has to connect to four peripherals: SATA, GbE, WiFi (via USB) and a co-processor. The PCB trace connections to all of these ICs will be subject to noise, cross-talk and other EMI-related signal integrity challenges as they route around an embedded PCB.
Even if the PCIe bridge or switch is mounted adjacent to the processor’s PCIe bus, PCI Express Gen 2’s 5 GT/s signals will have degraded before they reach the bridge, and they become further attenuated the longer they travel from the source. One simple solution is to use active signal amplifiers called ReDrivers at the destination.
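To see why distance matters, consider a rough insertion-loss budget. Every number below is an assumed ballpark chosen for the sake of the arithmetic, not a spec value; consult the PCI-SIG channel budget and your board stack-up before making real design decisions:

```python
# Illustrative insertion-loss budget for a PCIe Gen 2 channel.
# All three constants are assumptions for this example only.
LOSS_DB_PER_INCH = 0.5   # assumed FR-4 trace loss near the Gen 2 Nyquist
CONNECTOR_LOSS_DB = 1.0  # assumed loss per connector in the path
BUDGET_DB = 6.0          # assumed total channel loss budget

def channel_loss_db(trace_inches, connectors=0):
    """Total channel loss for a trace of the given length plus connectors."""
    return trace_inches * LOSS_DB_PER_INCH + connectors * CONNECTOR_LOSS_DB

print(channel_loss_db(8))      # 4.0 dB: inside the assumed budget
print(channel_loss_db(14, 1))  # 8.0 dB: over budget; a ReDriver can help
```

Under these assumptions, a long route through one connector blows the budget, which is exactly the situation where a ReDriver placed near the destination restores margin.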
Designed specifically for PCIe, ReDrivers open the eye diagram by recovering signal integrity, cleaning up PCIe signals, and increasing the margin in designs (Figure 6). When used in conjunction with PCIe bridges or switches, PCI Express signals have a greater chance of remaining within PCI-SIG specifications. ReDrivers can also give designers flexibility in board layout when distance between PCI Express devices and peripheral controllers is unavoidable.
Figure 6: Using a ReDriver at the end of a PCB trace—such as prior to a PCIe bridge that’s far from a processor’s PCIe port—can recover signal integrity. The original signal on the left is degraded (top right) but recovered via the ReDriver (bottom right). (Courtesy: Pericom Semiconductor.)
PCI Express remains the most common interconnect inside embedded systems. Because many embedded processors, microcontrollers, SoCs and GPUs have limited PCIe channels, bridges and switches are used to increase one-to-many fanout. Some switches, such as those from Pericom, bridge from PCIe to USB and other common I/O types. Combining a bridge with an intelligent switch creates a “swidge”, a unique IC that saves real estate in constrained designs. Finally, special signal integrity amplifiers called ReDrivers improve degraded signals caused by long PCB traces, connectors, or the challenges of tight embedded designs.