Optical communications systems have been a key enabler of the buildout of our information infrastructure. The data centers used to store and transmit information contain miles of fiber and thousands of laser transmitters and photodetector receivers to send and receive information across that fiber. There is relentless commercial pressure to increase capacity, and the development of new systems that operate at higher data rates continues. This is not simply a matter of designing systems that move more information; the cost of these systems also needs to drop. Data centers are often described in terms of acres and megawatts, an indication of how much power they consume. There is strong motivation to find ways not only to operate at higher capacities, but to do so while using less energy.

The basic optical communications system has a laser transmitter that converts electrical data into modulated light, an optical fiber, and a photodiode receiver that converts the modulated light back to an electrical signal. Designing the system is complicated by the fact that, in the data center environment, there is rarely a requirement that the optical link come from a single vendor. The transmitter, fiber, and receiver are likely to be produced by three different companies. This concept, known as interoperability, gives the data center designer flexibility and facilitates competition among vendors, leading to more innovation and lower costs. The downside is that designing the system and specifying the components within it becomes more complex.

A standards organization such as IEEE 802.3 offers a public forum to define communications systems. Meetings are open to all and are attended by data center designers as well as the manufacturers of network equipment, transceivers, and fiber. Since the standard defines both performance and how it is verified, test and measurement companies also participate. One of the essential outputs of the standards group is a set of specifications for the transmitters and a set of specifications for the receivers; again, the two sets exist to promote interoperability. Most recently, the IEEE 802.3cu task force released its draft specification for 100Gbps-per-wavelength operation, which will be a key specification for future fiber-based interconnects.

Specifications typically start with the receiver, where limits on signal strength determine how reliably a photodetector can convert the optical signal to electrical data. If the signal level drops too low, the receiver makes too many mistakes, observed as bit errors. The threshold below which this happens is known as the receiver sensitivity limit.

There is usually an objective for the distance the signal must travel, perhaps as short as 100 meters or as long as 40 kilometers. The attenuation caused by the fiber is well known, so working backwards from the receiver sensitivity and accounting for the expected fiber loss defines the minimum signal power a transmitter must produce. In reality it is more complicated, because a variety of mechanisms beyond the power simply dropping below the receiver sensitivity limit can cause a system to generate bit errors.
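
As a rough illustration of this working-backwards arithmetic, the short Python sketch below adds the expected channel loss and a safety margin to the receiver sensitivity to arrive at a minimum transmitter launch power. The attenuation, loss, sensitivity, and margin values are purely illustrative and are not taken from any particular specification.

    # Illustrative link-budget arithmetic; the numbers are examples, not standard limits.
    def min_launch_power_dbm(rx_sensitivity_dbm, fiber_atten_db_per_km, length_km,
                             connector_loss_db=0.0, margin_db=0.0):
        """Work backwards from the receiver: the transmitter must launch at least the
        receiver sensitivity plus every loss and margin in the path (all in dB/dBm)."""
        channel_loss_db = fiber_atten_db_per_km * length_km + connector_loss_db
        return rx_sensitivity_dbm + channel_loss_db + margin_db

    # Example: a 10 km single-mode link with 0.4 dB/km attenuation, 1 dB of connector
    # loss, 1 dB of design margin, and an assumed -9 dBm receiver sensitivity.
    print(min_launch_power_dbm(-9.0, 0.4, 10.0, 1.0, 1.0))  # -> -3.0 dBm minimum launch power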

From the perspective of a receiver, two lasers operating at the same power level may generate very different signals. State-of-the-art systems today operate in excess of 50GBaud using PAM4 modulation. That is, the transmitter must switch among four optical amplitude levels up to 50 billion times a second, with each symbol carrying two bits, for 100Gbps on a single lane. The receiver must decide which level was sent at each symbol time, and a lower-quality transmitter may be too slow or may produce a signal that has not settled when the receiver makes its decision. Thus the quality of the laser signal needs to meet a minimum level. Similarly, we cannot expect perfect transmitters, so receivers need some tolerance for non-ideal input signals. This leads to some important requirements for transmitters and receivers:

Optical Transmitter Evaluation

  • Optical Modulation Amplitude (OMA): The difference between the transmitter logic levels (a small numerical sketch follows this list).

  • Relative intensity noise (RIN): A measure of the amount of noise a transmitter generates.

  • Transmitter dispersion and eye closure: TDEC or TDECQ (for PAM4 modulation) is a statistical measure of the signal quality, indicating the likelihood that the signal will generate errors in the receiver (Figure 1).

  • Overshoot/Undershoot: New metrics recently defined in IEEE 802.3cu to protect receivers from severe transients on the input signal.
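
The amplitude-related metrics above are linked by a few simple relationships. The hedged Python sketch below computes OMA and extinction ratio from the one and zero power levels, and shows the common conversion between average power, extinction ratio, and OMA. The numbers are illustrative only; for PAM4 signals the analogous quantity is normally the outer OMA, measured between the lowest and highest of the four levels.

    import math

    # Illustrative amplitude relationships for a two-level optical signal.
    def oma_mw(p1_mw, p0_mw):
        """Optical Modulation Amplitude: difference between the one and zero power levels."""
        return p1_mw - p0_mw

    def extinction_ratio_db(p1_mw, p0_mw):
        """Extinction ratio: ratio of the one level to the zero level, in dB."""
        return 10.0 * math.log10(p1_mw / p0_mw)

    def oma_from_avg_and_er(p_avg_mw, er_linear):
        """OMA from average power and linear extinction ratio: 2 * Pavg * (ER - 1) / (ER + 1)."""
        return 2.0 * p_avg_mw * (er_linear - 1.0) / (er_linear + 1.0)

    # Example: P1 = 0.8 mW and P0 = 0.2 mW give OMA = 0.6 mW and ER of about 6 dB (4:1 linear);
    # the same OMA is recovered from Pavg = 0.5 mW and ER = 4.
    print(oma_mw(0.8, 0.2), extinction_ratio_db(0.8, 0.2), oma_from_avg_and_er(0.5, 4.0))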

Optical Receiver Impaired Signals

  • Stressed receiver sensitivity (SRS): Verifies that the bit-error-ratio (or expected frame-loss ratio) stays below the specified limit when the signal entering the receiver is the worst-case signal expected from the transmitter and channel.

Figure 2. Stressed Receiver Sensitivity (SRS) impaired optical signal.

Test instruments have been developed to offer instrument-grade optical impairments, calibrated to specific TDECQ, ER, and OMA targets, for stressed receiver testing. Figure 2 illustrates a typical optical SRS signal generated for test purposes.

Transmitters are generally tested with a specialized digital communications analyzer oscilloscope. These instruments have built-in optical reference receivers and firmware to execute the measurements required by the standards. Similarly, for receivers, SRS test systems (Figure 3), comprising a calibrated 'impaired' signal source and a bit-error-ratio tester (BERT), are available to verify standards conformance.

Figure 3. Typical 400G Electro/Optic test setup.

Links at 100Gbps per lane, whether electrical or optical, operate at higher raw bit error ratios than their lower-speed 25Gbps and 50Gbps counterparts. Current 100Gbps interfaces operate at native link error ratios as high as 2E-4 BER and rely on Reed-Solomon forward error correction (RS-FEC) to correct the random, isolated bit errors that naturally occur in transmission.
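
To see why a raw error ratio as high as 2E-4 is tolerable, the sketch below estimates the probability that a single RS(544,514) "KP4" codeword contains more symbol errors than the code can correct, assuming purely random, independent bit errors. Real links also see burst errors and rely on interleaving, so this is only a back-of-the-envelope view, and the function and parameter names are ours, not from the standard.

    from math import comb

    # KP4 RS-FEC parameters: 544 ten-bit symbols per codeword, up to t = 15 correctable.
    def uncorrectable_codeword_prob(ber, n=544, t=15, bits_per_symbol=10):
        p_sym = 1.0 - (1.0 - ber) ** bits_per_symbol  # probability a 10-bit symbol is hit
        # A codeword is lost only if more than t of its n symbols are in error.
        return sum(comb(n, k) * p_sym**k * (1.0 - p_sym)**(n - k) for k in range(t + 1, n + 1))

    print(uncorrectable_codeword_prob(2e-4))  # vanishingly small: random errors at 2E-4 are comfortably correctable
    print(uncorrectable_codeword_prob(2e-3))  # a tenfold worse pre-FEC BER leaves far less headroom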

Forward error correction encoding is applied to data at the Physical Coding Sublayer (PCS) before the data passes to the Physical Medium Attachment (PMA). This PCS/PMA interface handles error coding, interleaving, scrambling, and alignment. The encoding poses challenges for error-rate analysis, because the physical root cause of a bit error is now obscured beneath a considerable amount of digital error correction and interleaving circuitry. Tracing the physical errors in an optical transmission that lead to unrecoverable data frames is therefore a complex process, and one that test instrumentation suppliers are actively advancing today. Specialized tools such as layer-1 BERTs and KP4 FEC multiport analysis systems now play an integral part in receiver tolerance testing and general FEC-aware debug (Figure 4).

Figure 4. FEC Aware physical layer analysis.

The PCS/PMA gap between a FEC-corrected data stream and its raw physical transmission can be bridged with Keysight's 400G FEC-aware receiver test system. It analyzes FEC-encoded data streams and can direct an oscilloscope to trigger on the physical optical interface at the locations where errors are occurring, giving system designers, for the first time, a tool that connects post-FEC error analysis with side-by-side analysis and visualization of the physical transmission.

Summary

Currently, the highest-capacity directly modulated data communications systems operate at 400Gbps. These systems use multiple lanes of 100Gbps, either four transmitters on four fibers or four wavelengths multiplexed onto a single fiber. First-generation 800Gbps links will be 400Gbps systems scaled up by a factor of two, enabled by higher-density connectors such as QSFP-DD and OSFP. In this scenario, with simply more lanes of 100Gbps aggregated to reach 800Gbps, the specifications and test methods will remain similar to those for 400Gbps systems. Native four-lane 800Gbps links will depend on advancements in both electrical and optical specifications that are currently underway. This next speed class will most likely move to a native 200Gbps per-lane rate, both electrically and optically, while answering strong market pressure to reduce overall power consumption and cost.

When 200Gbps single-lane transmission is achieved, the test methods and techniques developed for 100Gbps will likely be heavily leveraged. However, 200Gbps links will also likely employ advances in modulation methods, since increased transmission efficiency and managing known bandwidth bottlenecks are key pressure points for the industry. The measurement experts at Keysight are integral contributors to these cutting-edge standards efforts, ensuring that effective test solutions continue to be available as these technologies evolve toward 800Gbps and 1.6Tbps for next-generation data center architectures.

This article was written by Greg D. Le Cheminant, Measurement Applications Specialist, Digital Communications Analysis, Internet Infrastructure Solutions; and John Calvin, Strategic Planner and Datacom Technology Lead, IP Wireline Solutions; Keysight Technologies (Santa Rosa, CA). For more information, visit here.