Like many people in this industry, I love technology. Working for Tektronix, I am excited when we are the first to introduce new technology, like the Tektronix MDO4000 Mixed-Domain Oscilloscope. But I am also interested in innovation from other test and measurement suppliers.
Last year, Teledyne LeCroy introduced the WaveRunner HRO, or High-Resolution Oscilloscope. They rebranded it this year as the HDO, or High-Definition Oscilloscope. The idea is to use a 12-bit digitizer instead of an 8-bit digitizer to acquire a waveform. With an 8-bit digitizer, there are 256 voltage levels that define the wave shape. In theory, 12 bits means that there are 4096 distinct voltage levels, a great improvement in resolution. More recently, Agilent introduced the DSO9000H, a competing product that also promises 12 bits of performance through oversampling and processing. Such an oscilloscope should allow you to see small signals in the presence of big ones, provide greater DC gain accuracy, and show less noise on a signal.
So why doesn’t Tektronix provide a 12-bit oscilloscope? Aren’t more bits always better? Let’s review.
For those who are new to the industry, higher-bit oscilloscopes aren’t a new idea. Below are examples from Nicolet (12 bits) and LeCroy (10 bits), but the idea never really caught on.
Why not? Let me start with a simplified example. Below is a photo that I took back in 2006 when I started with Tektronix and visited the factory in Oregon for the first time. It is a photo of the garden and fountain in front of the main building on campus. The source of the photo was a high-resolution RAW image taken on my digital camera, downsized to fit on this blog. The resolution is high enough that you can clearly see the sign reads “Howard Vollum Plaza Dedicated 2005” (Howard Vollum was the founder of Tektronix).
Below, I took the same photo and reduced the resolution even further. There are now fewer pixels in it and less detail. You can no longer decipher the sign in the garden.
With a simple photo editing tool, I was able to take the low-resolution picture and “enhance” it back up to full resolution. However, despite the image having full resolution, you still can’t read the sign. The difference between the image below and the top image is the source: in this case, the source lacked the detail needed to make out the sign, even at full resolution. More resolution did not give me more detail, because the upscaling only introduced new sources of error.
More resolution doesn’t mean more information
Now let’s take a look at how this applies to 12-bit oscilloscopes.
The assertion with a 12-bit digitizer is that the individual voltage steps are smaller, so the waveform will have greater fidelity. In the figure below, the staircase is the digitizing level, or quanta. Suppose the signal was 100mV peak to peak, filling the screen of the oscilloscope. In theory, an 8-bit oscilloscope could only display a signal feature as small as 390uV (100mV / 256), but a 12-bit digitizer could show one as small as 24uV (100mV / 4096). But how does this theory work in the real world?
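The arithmetic behind those step sizes is simple. Here is a short Python sketch; the 100mV full-scale value is the example figure from the text:

```python
def lsb_size(full_scale_v: float, bits: int) -> float:
    """Smallest voltage step an ideal N-bit digitizer can resolve."""
    return full_scale_v / (2 ** bits)

full_scale = 0.100  # 100 mV peak-to-peak signal filling the screen

print(f"8-bit step:  {lsb_size(full_scale, 8) * 1e6:.0f} uV")   # ~391 uV
print(f"12-bit step: {lsb_size(full_scale, 12) * 1e6:.1f} uV")  # ~24.4 uV
```

Note that these are the *ideal* step sizes; as discussed next, real digitizers never achieve them at full bandwidth.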
The first thing to remember is that while high-resolution digitizers can be effective at low frequencies, they have far fewer effective bits at full bandwidth. Effective number of bits (ENOB) is the true resolution of the A/D once imperfections are included, such as non-linearities, gain errors, distortion, and noise. Just as the image above is a high-resolution representation of a low-resolution source, the same is true of an oscilloscope that has a high-resolution digitizer but other sources of error. If there is noise on the signal, that extra resolution is just extra bits of noise, and the waveform, like the text on the sign above, will remain blurry.
Hence, the first thing I did when I saw the new datasheets was look for “effective bits,” a common measurement you can find in Tektronix technical references. Unfortunately, neither LeCroy nor Agilent publishes an effective-bits specification. The effective-bit figure reflects the true quantization size once errors are considered. With increased bandwidth comes increased noise, and hence fewer effective bits. Without this number, it is hard to evaluate the 12-bit oscilloscopes on their merits.
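For readers who want to see how noise translates into lost bits, ENOB is conventionally estimated from a measured signal-to-noise-and-distortion (SINAD) figure using the standard relation SINAD = 6.02·N + 1.76 dB for an ideal N-bit converter. The sketch below applies that formula; the 55dB input is purely illustrative, not a claim about any specific product’s SINAD:

```python
def enob(sinad_db: float) -> float:
    """Effective number of bits estimated from measured SINAD, using the
    standard relation SINAD = 6.02 * N + 1.76 dB for an ideal N-bit ADC."""
    return (sinad_db - 1.76) / 6.02

# An ideal 12-bit converter would measure a SINAD of about 74 dB:
print(enob(74.0))   # ~12.0 bits
# A digitizer whose measured SINAD is only 55 dB delivers ~8.8 effective bits,
# regardless of how many bits its ADC nominally has:
print(enob(55.0))
```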
Making RF Spectral Measurements
While I could not find any reference to effective bits, I did find a Signal-to-Noise Ratio (SNR) specification of 55dB on the LeCroy WaveRunner HDO datasheet. SNR is a figure of merit similar to effective bits; it indicates how small a signal can be discerned in the presence of a large one. It would stand to reason that if the quantization step were smaller, then a smaller signal could be seen riding on a large one. Curiously, this specification is nowhere to be found on the Agilent DSO9000H datasheet. Since more digitizing resolution should mean the ability to see smaller details (such as the Howard Vollum sign above), a “high resolution” oscilloscope should have an improved SNR. The Tektronix MDO4104-6 uses a specially optimized 8-bit digitizer in its RF path and achieves a typical 60dB spurious-free dynamic range. In narrow spans, the actual signal-to-noise ratio (excluding spurs) can approach 100dB. To learn more about how Tektronix accomplishes this, there is an excellent paper available here: http://www.tek.com/document/whitepaper/secrets-behind-mdo4000-series-spectrum-analyzer-dynamic-range
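To put those decibel figures in perspective, a quick conversion from dB to an amplitude ratio shows how much smaller a discernible signal can be than a full-scale one (assuming the specs are expressed as voltage ratios, the usual convention):

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Convert a dB figure to the equivalent voltage (amplitude) ratio,
    using the voltage convention ratio = 10 ** (dB / 20)."""
    return 10 ** (db / 20)

for db in (55, 60, 100):
    print(f"{db} dB  ->  about {db_to_amplitude_ratio(db):,.0f} : 1")
```

A 55dB floor means the smallest discernible signal is roughly 1/560th of the large one, while 100dB corresponds to a 100,000:1 ratio.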
So from an SNR perspective, the 8-bit Tektronix digitizer in the MDO RF path is outperforming the reported performance of the 12-bit LeCroy HDO, and the performance of the Agilent 9000H is not reported at all.
DC Gain Accuracy
Looking at other specifications, one would hope that since both LeCroy and Agilent advertise 16x the resolution of an 8-bit oscilloscope, the measurements would be 16x better. Consider a specification that they do report: DC vertical gain accuracy. Shouldn’t an oscilloscope with 16x the resolution be 16x more accurate? Yet the 12-bit Agilent DSO9000H has a reported 2% gain accuracy, versus 1% for the 8-bit Tektronix DPO7000 series. The LeCroy HDO does improve on this with a laudable gain accuracy of 0.5%, but that is hardly 16x better.
However, even the gain accuracy specification doesn’t tell the entire story. Suppose you put in a small positively biased signal (no negative swing) with the oscilloscope set to 10mV/div. On the LeCroy oscilloscope, the screen spans -40mV to +40mV, since there are 8 divisions. You need to apply a -40mV offset to use the full digitizer range. According to their datasheet, this 40mV offset can be off by up to 8.5%. With the same full-scale settings, the Tektronix DPO7000 would be off by at most 5.9%. So even a 0.5% gain accuracy advantage can vanish once you use the offset knob to center a signal.
What about those screenshots?
But doesn’t the marketing literature from these two companies show 8-bit vs 12-bit versions of the same waveform, with the 12-bit one showing less noise and more detail? They do, but the screenshots are often cropped so that the timebase is not visible. The timebase is critical, because a 12-bit oscilloscope will only outshine an 8-bit oscilloscope running in normal mode when noise is not an issue, namely on low-frequency signals such as those found in power supplies.
So does that mean 12-bit oscilloscopes are the tool of choice for power testing? Lost in this discussion is that 8-bit oscilloscopes do have a method for viewing low-frequency data with 12-bit resolution, called “hi-res mode.” Below is the exact same impressive screenshot you are used to seeing from those 12-bit oscilloscopes, but it was made with an 8-bit oscilloscope (a Tektronix DPO7000) being used properly. In this case, the main image shows that the top of the square wave is fuzzy due to being sampled with an 8-bit oscilloscope in regular sample mode. The inset image is the same signal captured with an 8-bit oscilloscope in Hi-Res mode, displaying the small ripple that increased resolution can discover.
Hi-Res Mode on an 8-bit Oscilloscope
In hi-res mode, the data is significantly oversampled, and a boxcar average is then performed in acquisition hardware, providing real-time averaging even on single-shot data. In layman’s terms, the 8-bit oscilloscope can provide up to 12 bits of detail thanks to its high oversampling. This makes the high-frequency 8-bit oscilloscope extremely versatile: you can use the high sample rate for high-speed signals such as USB 2.0, and hi-res mode for fine measurements requiring up to 12 bits.
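Hi-res mode itself lives in acquisition hardware, but the principle behind it, boxcar averaging of oversampled data, is easy to simulate. The sketch below is my own illustration, not Tektronix firmware: it quantizes a noisy, heavily oversampled signal to 8 bits, averages blocks of 256 samples (and since 256 = 4^4, white-noise theory allows up to 4 extra bits), and shows the residual error dropping well below one 8-bit step:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 262_144                   # raw oversampled points
box = 256                     # boxcar length: averaging 4^k samples can add up to k bits
t = np.arange(n) / n
clean = 0.3 * np.sin(2 * np.pi * 2 * t)        # slow sine on a +/-0.5 V full scale
analog = clean + rng.normal(0.0, 0.01, n)      # analog noise spanning a few LSBs

lsb = 1.0 / 256                                # 8-bit step over 1 V full scale
raw = np.round(analog / lsb) * lsb             # 8-bit quantized single samples

# Boxcar average: each output point is the mean of `box` consecutive raw samples.
hires = raw.reshape(-1, box).mean(axis=1)
target = clean.reshape(-1, box).mean(axis=1)   # what a perfect capture would give

err_raw = np.sqrt(np.mean((raw - clean) ** 2))
err_hires = np.sqrt(np.mean((hires - target) ** 2))
print(f"RMS error, single 8-bit samples: {err_raw:.5f} V")
print(f"RMS error, after boxcar:         {err_hires:.5f} V")
```

With enough noise present to dither the quantizer, the averaged trace resolves features far smaller than one raw 8-bit step, which is the effect hi-res mode exploits.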
From this, it’s not clear that 12-bit oscilloscopes offer notable real-world advantages. Moreover, an oscilloscope is more than just its digitizer: there is the front end, the track-and-hold, and, critically, the probe technology. Consider the new ASIC Tektronix developed for its front end, which enables 1GHz passive probes (read more here: http://www.tek.com/document/application-note/improve-measurement-accuracy-and-reduce-cost-tektronix-passive-probes ). This allows probing of voltages up to 300V with a simple passive probe. The same technology enables the TPP0502, a 500MHz probe with a mere 2x attenuation. For more detail, see my blog post on probe attenuation here: http://www.effectivebits.net/2011/09/probe-attenuation-overlooked.html. If you knock the signal down by 10x before the front end, you are adding noise to the signal and reducing your effective bits. While 12-bit ADCs are off-the-shelf technology today, the ASIC that delivers this passive-probe dynamic range is unique to Tektronix.
Head-to-head – Noise Performance
I recently had the chance to test a LeCroy HRO with its 10x passive probe against a Tektronix DPO5000 using its 2x passive probe in hi-res mode. Both oscilloscopes were matched in bandwidth at 500MHz. The first thing I decided to test was noise performance, because one of the biggest sources of error reducing effective bits is vertical noise. If a 12-bit oscilloscope is to have more effective bits than an 8-bit oscilloscope, its vertical noise should be much better. In a head-to-head test, I found that the noise advantage of the LeCroy was marginal without the probe, less than 0.1% of full scale, despite its 16x quantization advantage. When I put a 10x probe on the LeCroy and a 2x probe on the Tektronix, the noise of the LeCroy was far higher at some settings because it was amplifying a more attenuated signal. The Tektronix DPO5000 was able to discern smaller signal details with less noise because an oscilloscope is more than just a digitizer – it is the sum of everything in the signal path.
I have yet to test an Agilent DSO9000H, but their datasheet does indicate noise specifications at a single volts-per-division setting, 100mV/div. At this setting, it has a mere 0.2% noise advantage over an equivalent 8-bit Tektronix DPO7000C, and near-identical noise performance to the 8-bit Tektronix DPO70000C series.
The conclusion here is that with no input signal attached, the 12-bit oscilloscopes appear to have marginally better noise performance, but hardly enough to make a difference in real-world measurements. Add on a probe, and that marginal noise advantage can disappear quickly.
Probes make a huge difference
Similarly, many want to measure small amounts of current, and hope that a 12-bit oscilloscope can better resolve the top and bottom of a switching supply’s waveform. For a regular 100MHz 30A current probe, the Tektronix TCP0030 has a minimum full-scale setting of just 10mA, versus 160mA for the LeCroy CP031. Once again, the less sensitive the probe, the higher the noise of the total measurement.
For extremely small voltages, the Agilent 9000H has 8 vertical divisions on the screen, and a minimum sensitivity of 5mV/div in high-impedance mode. Anything below 5mV/div is just a software zoom and adds no bits of resolution. Put on a 10x probe, and the most sensitive setting is 8 divisions * 5mV * 10x, or 400mV full screen. If every one of the 4096 levels were perfect, the minimum quantization level would be 400mV/4096, or 97µV. The Tektronix DPO5000 has 10 vertical divisions and a minimum sensitivity of 1mV/div. Add in the 2x probe, and the most sensitive setting is 10 divisions * 1mV * 2x, or 20mV full screen. Divide that by 256, and the minimum quantization level is 20mV/256, or 78µV.
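The arithmetic in that comparison can be captured in a few lines; the division counts, sensitivities, and probe attenuations are the ones quoted above:

```python
def min_quantum_uV(divisions: int, volts_per_div: float,
                   probe_atten: int, bits: int) -> float:
    """Smallest ideal quantization step, in microvolts, referred to the
    probe tip: full-screen voltage divided by the number of ADC levels."""
    full_screen = divisions * volts_per_div * probe_atten
    return full_screen / 2 ** bits * 1e6

# 12-bit scope: 8 divisions, 5 mV/div minimum, 10x probe
print(f"{min_quantum_uV(8, 5e-3, 10, 12):.1f} uV")   # ~97.7 uV
# 8-bit scope: 10 divisions, 1 mV/div minimum, 2x probe
print(f"{min_quantum_uV(10, 1e-3, 2, 8):.1f} uV")    # ~78.1 uV
```

The scope with fewer bits ends up with the finer step at the probe tip, purely because of the more sensitive front end and lower probe attenuation.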
In short, even if you assume the 12 bits are perfect and ENOB is not a concern, an 8-bit oscilloscope with a more versatile front end and more sensitive probes can see smaller signals with less noise than a 12-bit oscilloscope.
Often there are other surprises, such as finding that the LeCroy HDO series does not appear to have a mixed-signal option for bringing in digital inputs. If a SPI bus is used in a power system, there simply are not enough inputs to trigger on the SPI bus while monitoring voltage and current.
The conclusion here is that 12-bit oscilloscopes make for cool marketing, but they haven’t yet shown themselves to solve any measurement challenges over well-designed and properly used 8-bit oscilloscopes. The digitizer may be 12 bits, but the system still has its own limitations. It’s possible that ADC technology will improve and the market will see high-frequency 12-bit oscilloscopes with benefits in some areas. When that day comes, keep your eye out for datasheets that reflect improved SNR, higher effective bits at full bandwidth, significantly lower noise, and more accurate DC gain measurements.