Modern oscilloscopes come equipped with a host of attributes, and many vendors tout their latest additions as “must have” features. Amid so many attributes and marketing messages, the importance of a long-standing attribute such as memory depth can get lost in the noise. However, any engineer who has grappled with shallow memory on an oscilloscope will be vocal about the frustration of the experience. Those who aren’t vocal simply haven’t yet stumbled on an issue that required it.

Figure 1: The memory depth or record length needed on an oscilloscope is directly tied to sample rate/resolution and acquisition time.

Memory depth is the primary determinant of how much “time” an oscilloscope can record; it is often referred to as record length for this reason, and the relationship is simple: capture time equals memory depth divided by sample rate. Many modern oscilloscopes come standard with “megasamples” of memory, which, considering that 10 years ago this was “kilosamples,” may seem adequate. But as devices combine elements that operate in “human time” (seconds and milliseconds), such as power supplies or control buses, with elements that operate in “machine time” (nanoseconds and picoseconds), such as I/O drivers, you can quickly run out of memory or end up with a setup that is improper for your signal. Because memory depth is intimately tied to the sample rate of the oscilloscope (see Figure 1), the common action of turning the scale knob on your scope to see more time may affect your measurements in several ways, including lowering your sample rate.
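That relationship is simple enough to capture in a few lines. The sketch below is a minimal illustration, in plain Python with made-up numbers not tied to any particular scope, of how memory depth caps the sample rate for a given time window:

```python
# Minimal sketch of the record-length relationship. Plain Python with
# illustrative numbers; nothing here is tied to a particular oscilloscope.

def capture_time(memory_depth_sa: float, sample_rate_sa_s: float) -> float:
    """Seconds of signal that fit in acquisition memory at a given sample rate."""
    return memory_depth_sa / sample_rate_sa_s

def max_sample_rate(memory_depth_sa: float, time_window_s: float) -> float:
    """Highest sample rate that still spans the desired time window."""
    return memory_depth_sa / time_window_s

# A 20 ms window with only 2 MSa of memory forces the rate down to 100 MSa/s:
print(max_sample_rate(2e6, 20e-3))    # 100000000.0 -> 100 MSa/s
# The same window backed by 200 MSa of memory supports 10 GSa/s:
print(max_sample_rate(200e6, 20e-3))  # 10000000000.0 -> 10 GSa/s
```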

Issues During Power Supply Boot-Up

The boot time of a power supply is a classic example of something that occurs in “human time.” It is a common source of frustration: project managers often add months to a project schedule to account for testing a new or untested power supply used in a design. Deep memory in an oscilloscope can unravel problems faster when something taking place during power supply turn-on impacts other parts of the system. Electromagnetic interference (EMI), crosstalk/cross-coupling, and related issues can plague many different parts of a design. Often, the device impacted is operating at full speed, but the source of the problem, the power supply, is operating in milliseconds or seconds. The power supply boot sequence shown in Figure 2 takes place across a few milliseconds. With a shorter record length of 2 or 4 MSa, you might be limited to 100 or 200 MSa/s to catch this same span of time, and the higher-speed issue on an adjacent channel could be obscured.

Figure 2: The reboot sequence of a 5V-to-1.8V buck converter (yellow trace) and an adjacent I2S signal (green trace) showing cross-coupled high-frequency noise.

Figure 2 shows an example where noise was spotted on an I2S audio signal (green, at top). The signal displays a somewhat repetitive burst of noise that, when examined in a zoom window, appears to be high-frequency noise causing an issue in the system. Investigating the surrounding signals with shallow memory, we are unable to locate the source of the high-frequency noise. But expanding the time record while preserving the high sample rate reveals it: the nearby 5V-to-1.8V buck (step-down) converter is emitting this high-frequency noise, at the same interval as the noise on the I2S signal, as it goes through a reboot sequence. The ability to cross-correlate the phase of power supply bring-up with high-speed channels can often be the debug method that saves the day (or the project schedule).
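As a rough sketch of that cross-correlation idea, the NumPy snippet below builds two synthetic deep-memory captures (stand-ins for the I2S and converter channels, with made-up sample rate, thresholds, and burst timing) and shows that the burst spacing on the victim channel matches the spacing on the aggressor:

```python
import numpy as np

def burst_times(samples: np.ndarray, threshold: float, fs: float) -> np.ndarray:
    """Timestamps (s) where |signal| first crosses the threshold (burst starts)."""
    hot = np.abs(samples) > threshold
    starts = np.flatnonzero(hot[1:] & ~hot[:-1]) + 1  # rising edges of 'hot'
    return starts / fs

# Synthetic stand-ins for two deep-memory captures: 100 MSa/s over 10 ms,
# i.e. 1 MSa per channel, with bursts injected every 2.5 ms on both channels.
fs = 100e6
n = int(10e-3 * fs)
rng = np.random.default_rng(0)
ch_i2s = 0.01 * rng.standard_normal(n)   # quiet I2S baseline
ch_buck = 0.02 * rng.standard_normal(n)  # converter channel baseline
for start_ms in (1.0, 3.5, 6.0, 8.5):
    i = int(start_ms * 1e-3 * fs)
    ch_buck[i:i + 500] += 1.0            # switching burst on the converter
    ch_i2s[i:i + 500] += 0.2             # coupled noise on the I2S line

print(np.diff(burst_times(ch_i2s, threshold=0.1, fs=fs)))   # ~2.5 ms spacing...
print(np.diff(burst_times(ch_buck, threshold=0.5, fs=fs)))  # ...on both channels
```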

Debugging Low-Speed Serial Buses

Figure 3: Zoom window of an I2C bus with no glitch visible at a 10 MSa/s sample rate.

Low-speed serial buses such as I2C, SPI, CAN, or LIN often serve as the control elements of digital designs. Because they are frequently the source of change or action in the system, they are used to troubleshoot and understand system behavior. While protocol triggers can help, gaining insight often requires visibility across many bursts or packets of data. Two commonly used methods to view multiple bursts, reducing the sample rate or capturing multiple packets with segmented memory, can be fraught with issues. These methods are often driven by a lack of sufficiently deep memory on an oscilloscope (or a misunderstanding of how to use the deep memory available).

Method 1: Reducing the sample rate on the scope to view a long span of time. Since all oscilloscope measurements share a single time base, compromising the time base to view multiple serial packets also compromises the fidelity of the very item you might need to debug. This mix of high-speed and slower items again presents a problem that deep memory can solve. In Figure 3, the I2C bus is operating at 100 kHz, with a packet of data sent about every 12 ms. Because the data and clock rates are slow, we can use a lower sample rate, but we need to be careful not to lower it too much. In this example, we are viewing 20+ packets of data. The two captures in Figures 3 and 4 expose a glitch that is visible only with a high enough sample rate. The first capture (Figure 3) is taken at 10 MSa/s. We might be forced to use a slow rate like this because of shallow memory on our scope, or we might end up with it automatically via the Autoset (or Autoscale) button on the oscilloscope. Unfortunately, a glitch is present that cannot be seen because the sample rate is too low to catch it.
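A back-of-the-envelope calculation, assuming roughly 33 packets spaced 12 ms apart (a ~0.4 s window, consistent with the figures here), shows why the deeper capture needs the memory it does; the helper function is purely illustrative:

```python
def memory_needed(sample_rate_sa_s: float, window_s: float) -> float:
    """Samples required to hold a given time window at a given sample rate."""
    return sample_rate_sa_s * window_s

window = 0.4                          # ~33 packets spaced ~12 ms apart
print(memory_needed(10e6, window))    # 4.0e6  -> 4 MSa, fits a shallow scope
print(memory_needed(100e6, window))   # 4.0e7  -> 40 MSa, needs deep memory
```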

Figure 4: Zoom window of the same I2C bus shown in Figure 3 with glitch detected at 100 MSa/s sample rate.

In Figure 4, the sample rate has been dialed up to 100 MSa/s. This requires additional memory depth (40 MSa used here) to capture the same number of data packets, but the result is much different. Note that both captures will accurately reconstruct the serial data packets; the difference is what can be seen “inside” the serial traffic when a higher sample rate is backed by deeper memory.

Method 2: Viewing multiple packets with segmented memory. Because segmented memory does not store information between packets, a high sample rate can be paired with a long overall time span. This method might even capture the glitch shown in Figure 4. However, two additional considerations are important when using it. First, any asynchronous event occurring between packets will not be visible; since the debug process involves looking for events that impact a system, we want to be sure we can detect issues between packets. Second, the segment re-arm time may not allow sequential packets to be captured. Oscilloscope re-arm times are typically on the order of microseconds. Using this technique requires knowing enough about your serial bus operation to be confident that nothing is left to chance.
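A quick sanity check along these lines can be scripted; the numbers below are illustrative rather than any particular scope's specification:

```python
def can_capture_sequential(packet_gap_s: float, rearm_time_s: float) -> bool:
    """True if the scope re-arms before the next packet arrives."""
    return rearm_time_s < packet_gap_s

# Illustrative: packets every 12 ms leave ample re-arm margin...
print(can_capture_sequential(packet_gap_s=12e-3, rearm_time_s=5e-6))  # True
# ...but back-to-back packets 2 us apart would be missed with a 5 us re-arm.
print(can_capture_sequential(packet_gap_s=2e-6, rearm_time_s=5e-6))   # False
```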

Fortunately, having deep memory in your oscilloscope conquers both of these challenges. With 20 MSa or more of memory, many common serial buses can be captured across multiple sequential packets at high resolution.

Viewing High-Frequency Noise Impacting Slower Signals

Figure 5: Zoom of a low-speed signal captured via Autoset, which maximizes the sample rate within a memory depth of 4 MSa.

There are times when a noisy device in the system or a bad ground connection causes high-frequency noise to appear on a signal. Although we might try to filter it, a common debug or validation task is to probe many different signals on the board to ensure we have done this properly and aren’t sending a noisy signal to the rest of the system. Take, for example, the output of a slow-moving digital-to-analog converter (DAC) that controls part of a positioner or robotic arm. We want to ensure a clean signal is presented downstream of the filter in place, and a quick look with an oscilloscope should give this confidence. In Figure 5, a slow-speed signal is viewed with common settings and presents a clean signal at first glance.

However, by increasing the sample rate, which uses deeper memory for the same signal view, we see that the slower sample rate was obscuring higher-speed noise injected into the signal. A second capture of the same signal with the same setup but a higher sample rate (Figure 6) shows spikes of high-frequency noise that were hidden by the undersampling of the original Autoset capture. This simple change highlights the power of deep memory: it lets you sample at a high rate even on slow signals.
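The effect is easy to reproduce numerically. This sketch, using an invented ramp-plus-spike waveform rather than anything vendor-specific, samples a slow signal carrying a 100 ns spike at two rates; note that holding the 2 ms window at 500 MSa/s requires 1 MSa of memory versus only 2 kSa at 1 MSa/s:

```python
import numpy as np

def capture(rate_sa_s: float, duration_s: float) -> np.ndarray:
    """Slow ramp (a DAC-like output) with one 100 ns, 1 V spike near t = 1 ms."""
    t = np.arange(int(duration_s * rate_sa_s)) / rate_sa_s
    ramp = t / duration_s                          # slow-moving signal, 0..1 V
    spike = (np.abs(t - 1.0005e-3) < 50e-9) * 1.0  # 100 ns wide spike, off-grid
    return ramp + spike

slow = capture(1e6, 2e-3)    # 1 MSa/s: samples 1 us apart straddle the spike
fast = capture(500e6, 2e-3)  # 500 MSa/s: ~50 samples land inside the spike
print(slow.max())            # ~1.0 -> looks clean
print(fast.max())            # ~1.5 -> spike revealed
```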

Figure 6: The same signal taken in Figure 5 at a higher sample rate (500 MSa/s) reveals high-frequency noise.

In conclusion, it is worth being reminded of a core oscilloscope attribute that may have been crowded out by large screens and fast update rates: memory. Bandwidth and sample rate are the primary considerations in oscilloscope selection, but as shown, the memory depth of an oscilloscope defines how that sample rate can actually be used for real-world measurements. Choosing a scope without enough memory, or failing to properly set up a scope that has enough memory, can have far-reaching measurement and debug impacts.

For today’s measurements, oscilloscopes with 10-20 MSa as a standard configuration should suffice. The ability to expand this to hundreds of megasamples is nice to have in case your problem requires it. Just as important as having enough memory is ensuring that you are maximizing your sample rate within the time scale of your signal. The automatic configuration (Autoset or Autoscale) often will not do this. A quick check that the maximum sample rate is selected before critical measurements can save hours of frustration later.
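Scripting that check is one way to make it routine. The PyVISA sketch below is only an outline: the VISA address and the sample-rate query string are placeholders, since SCPI command names vary by vendor and should be taken from your scope's programming manual:

```python
# Minimal pre-measurement check using PyVISA. The address and the
# sample-rate query are placeholders to adapt to your instrument.
import pyvisa

MIN_RATE_SA_S = 500e6  # assumed sample-rate floor for the measurement at hand

rm = pyvisa.ResourceManager()
scope = rm.open_resource("TCPIP::192.0.2.10::INSTR")  # placeholder address
print(scope.query("*IDN?"))                           # universal identity query
rate = float(scope.query(":ACQuire:SRATe?"))          # placeholder query string
if rate < MIN_RATE_SA_S:
    print(f"Warning: {rate:.3e} Sa/s is below the {MIN_RATE_SA_S:.0e} Sa/s "
          "floor; increase memory depth or shorten the time base first.")
```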

This article was written by Dave Rishavy, Rohde & Schwarz America’s Director of Oscilloscope Product Family (Columbia, MD).