07-05-2012, 05:13 PM
Design of a parallel A/D converter system on PCB – For high-speed sampling and timing error estimation
Abstract
The goal for most of today's receiver systems is to sample at high speed, with high resolution and
with as few errors as possible. This master thesis describes the design of a high-speed sampling
system built from state-of-the-art components available on the market. The system is designed with a
parallel analog-to-digital converter (ADC) architecture, also called time interleaving, which aims to
increase the sampling speed of the system. The system described in this report uses four 12-bit
ADCs in parallel. Each ADC can sample at 125 MHz, so the total sampling speed theoretically
becomes 500 Ms/s. The system has been implemented and manufactured on a printed
circuit board (PCB). Up to four boards can be connected in parallel for a theoretical rate of 2 Gs/s.
To increase the system's performance even further, a timing-error estimation
algorithm is applied to the sampled data. This algorithm estimates the timing errors that occur
when the time intervals between samples are non-uniform. After the estimation, the
sampling clocks can be adjusted to correct the errors.
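As a rough illustration of the time-interleaving idea described above, the sketch below merges four per-channel sample streams round-robin into one combined stream at four times the per-channel rate. The function name `interleave` and the toy integer streams are illustrative assumptions, not code from the thesis:

```python
# Hypothetical sketch of time interleaving: four ADCs sample round-robin,
# so the combined stream runs at N_ADCS times the per-channel rate.
import numpy as np

N_ADCS = 4                        # four ADCs in parallel, as in the thesis
F_CHANNEL = 125e6                 # each ADC samples at 125 MHz
F_COMBINED = N_ADCS * F_CHANNEL   # theoretical aggregate rate: 500 Ms/s

def interleave(channels):
    """Merge per-ADC sample streams (equal-length arrays) in round-robin order."""
    return np.stack(channels, axis=1).reshape(-1)

# Toy example: channel k holds the samples taken at indices k, k+4, k+8, ...
streams = [np.arange(k, 16, N_ADCS) for k in range(N_ADCS)]
combined = interleave(streams)
# combined recovers the uniform sample order 0, 1, 2, ..., 15
```

The same round-robin merge is why timing errors matter: if one channel's clock edge is slightly offset, every fourth sample in the combined stream is taken at the wrong instant.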
ANALOG-TO-DIGITAL CONVERSION - THEORY
Today, almost every communication system works with signals in the digital
domain. This creates a need for a good analog-to-digital conversion unit,
called an analog-to-digital converter (ADC). The ADC is a key
component in systems for radio communication, digital signal processing and
measurement. It also plays a key role in many other systems that take an analog
signal as input. Ideally, the ADC has infinite resolution and is error free, but in
reality the resolution is limited and errors are unavoidable, even if they can be
very small.
Sampling
An ADC converts an analog, time-continuous signal waveform to a time-discrete
signal by sampling. Sampling is the technique of representing a continuous-time
signal with a sequence of time-discrete values (in this case, binary values). The
signal is usually bandlimited with bandwidth B and sampled at uniform time
intervals, TS. In the frequency domain this corresponds to a sample
frequency fS = 1/TS. To ensure that the signal can be reconstructed
exactly from the samples, the sample frequency, fS, must be at least twice
the signal's highest frequency component. This requirement is known as
the Nyquist theorem, and fS/2 is called the Nyquist frequency [1]. Sampling at
twice the signal frequency is called Nyquist sampling. If an analog signal has
frequency components above the Nyquist frequency, the sampling will cause image
overlap and aliasing distortion [2], see the upper part of
figure 1. With signals below the Nyquist frequency, these phenomena are
easily avoided: lowpass filtering the signal before sampling is enough to
overcome the problem. This is called anti-alias filtering, see figure 1.
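The aliasing effect can be demonstrated numerically. In the sketch below (an illustrative example, not from the thesis), a tone above fS/2 produces exactly the same samples as a tone folded back below the Nyquist frequency, which is why it cannot be told apart after sampling:

```python
# Aliasing demo: a 7 Hz tone sampled at 8 Hz is indistinguishable from the
# folded-back 1 Hz tone, because cos(2*pi*7*n/8) == cos(2*pi*1*n/8) for integer n.
import numpy as np

fs = 8.0                  # sample rate (Hz); Nyquist frequency is fs/2 = 4 Hz
f_tone = 7.0              # input tone above the Nyquist frequency
f_alias = fs - f_tone     # image folds back to 1 Hz
n = np.arange(16)
t = n / fs                # uniform sample instants, TS = 1/fs

high = np.cos(2 * np.pi * f_tone * t)
low = np.cos(2 * np.pi * f_alias * t)
# The two sampled sequences coincide sample for sample.
```

An anti-alias filter removes the 7 Hz component before sampling, so only genuine sub-Nyquist content reaches the ADC.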
Oversampling, undersampling and IF-sampling
Sampling at a rate higher than twice the analog signal frequency is called
oversampling. One advantage of oversampling is that only a simpler
anti-aliasing filter is required: with a higher sampling rate, the
mirrored signal images are further apart, which relaxes
the requirements on the transition band of the filter, see the lower part of figure 1.
If an oversampled signal is digitally decimated to a rate closer to Nyquist, an
advantage called conversion gain is obtained. A conversion gain of 3 dB is
achieved for every factor-of-two decimation, due to a 3 dB reduction in
quantization noise [2].
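The 3 dB-per-octave figure follows from the in-band quantization noise power dropping in proportion to the decimation factor. A minimal sketch of that arithmetic (the function name is an assumption, not the thesis's notation):

```python
# Conversion gain from decimating an oversampled signal by a factor R
# (after lowpass filtering): in-band quantization noise power falls by R,
# i.e. a gain of 10*log10(R) dB, about 3 dB per factor of two.
import math

def conversion_gain_db(decimation_factor):
    """SNR improvement in dB when decimating by the given factor."""
    return 10 * math.log10(decimation_factor)

# conversion_gain_db(2) is about 3.01 dB; decimating by 4 gives about 6.02 dB
```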
Quantization and SNR
Quantization is the mapping of the time-continuous analog signal, which has an
infinite number of amplitude levels, onto a finite set of values represented by a
limited number of bits. Quantization always introduces an error, because some
information in the signal is lost. When the number of levels in the quantizer is
large and the input signal is sufficiently random, the quantization error is well
approximated as a random white noise process, uncorrelated with the
input signal. Most commonly used is uniform quantization, where the input
thresholds and output values are evenly spaced. The quantization step size, q,
determines the resolution of the quantization process.
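A short numerical sketch of uniform quantization, assuming the white-noise error model above: a full-scale sine is quantized with a 12-bit uniform quantizer (matching the ADCs in this thesis), and the measured SNR lands near the textbook value 6.02·N + 1.76 dB. The quantizer function and signal choice are illustrative assumptions:

```python
# Uniform quantization of a full-scale sine and the resulting SNR,
# compared against the ideal formula SNR ~= 6.02*N + 1.76 dB for N bits.
import numpy as np

def quantize(x, n_bits, full_scale=1.0):
    """Uniform quantizer with step size q = 2*full_scale / 2**n_bits."""
    q = 2 * full_scale / 2**n_bits
    return q * np.round(x / q)

rng = np.random.default_rng(0)
n_bits = 12
phase = rng.uniform(0, 1, 100_000)      # random phases -> "sufficiently random" input
x = np.sin(2 * np.pi * phase)           # full-scale sine samples
e = quantize(x, n_bits) - x             # quantization error, bounded by +/- q/2
snr_db = 10 * np.log10(np.mean(x**2) / np.mean(e**2))
# expected near 6.02*12 + 1.76 = 74.0 dB
```

The close agreement with the formula is exactly the white-noise approximation at work: the error power is q²/12, independent of the signal, as long as the input exercises many quantizer levels.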