25-05-2012, 03:58 PM
FPGA IMPLEMENTATION OF HIGH PERFORMANCE FLOATING POINT MULTIPLIER
FPGA IMPLEMENTATION OF HIGH PERFORMANCE FLOATING POINT MULTIPLIER.pdf (Size: 2.92 MB / Downloads: 154)
INTRODUCTION
Digital arithmetic operations are very important in the design of digital processors and
application-specific systems. Arithmetic circuits form an important class of circuits in digital
systems. With the remarkable progress in the very large scale integration (VLSI) circuit
technology, many complex circuits, unthinkable yesterday, have become easily realizable
today. Algorithms that once seemed impossible to implement now have attractive
implementation possibilities. This means that not only the conventional computer arithmetic
methods, but also the unconventional ones, are worth investigating in new designs.
MOTIVATION
As the scale of integration keeps growing, more and more sophisticated signal
processing systems are being implemented on a VLSI chip. These signal processing
applications not only demand great computation capacity but also consume a considerable
amount of energy and area on the chip. While performance and area remain the two
major design goals, power consumption has also become a critical concern in today’s VLSI
system design. The need for low-power VLSI systems arises from two main forces. First, with
the steady growth of operating frequency and processing capacity per chip, large currents
have to be delivered and the heat due to large power consumption must be removed by proper
cooling techniques. Second, battery life in portable electronic devices is limited. Low power
design directly leads to prolonged operation time in these portable devices.
DESIGN APPROACH
The basic goal of our project was to study and develop a high performance floating
point multiplier in terms of speed, area and power. As the name suggests, the primary target
was optimization for speed. The basic building blocks of a floating point multiplier are
the exponent adder circuit and the fixed point fraction multiplier, so we turned our focus
to the adder first. We studied the area occupied and the time delay incurred by different
adders and established the relation between the time and area complexity of all the adders
under consideration. From these figures we computed the area-delay product, a single factor
that captures the area/delay trade-off and let us choose the best adder for the
circumstances at hand.
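To make the two building blocks named above concrete, here is a minimal Python sketch of the single-precision multiply datapath: the sign logic, the exponent adder and the 24×24-bit fraction multiplier, followed by normalisation. The function name and all simplifications (truncation instead of rounding, no NaN/infinity/subnormal handling) are ours for illustration; the project itself targets a hardware implementation, and this only mirrors the datapath in software.

```python
import struct

def fp32_mul(a: float, b: float) -> float:
    """Toy IEEE-754 single-precision multiply, mirroring the hardware
    datapath: sign XOR, exponent adder, fraction multiplier, normalise.
    Illustrative sketch only: truncates instead of rounding and ignores
    NaN, infinity and subnormal inputs."""
    ia = struct.unpack('>I', struct.pack('>f', a))[0]
    ib = struct.unpack('>I', struct.pack('>f', b))[0]

    sign = (ia >> 31) ^ (ib >> 31)                 # sign logic: XOR of signs
    ea = (ia >> 23) & 0xFF                         # biased 8-bit exponents
    eb = (ib >> 23) & 0xFF
    ma = (ia & 0x7FFFFF) | 0x800000                # restore hidden leading 1
    mb = (ib & 0x7FFFFF) | 0x800000

    exp = ea + eb - 127                            # exponent adder: remove one bias
    prod = ma * mb                                 # 24x24 -> 48-bit fraction product

    # Normalise: the product of two significands in [1, 2) lies in [1, 4).
    if prod & (1 << 47):                           # product >= 2: shift, bump exponent
        frac = (prod >> 24) & 0x7FFFFF
        exp += 1
    else:
        frac = (prod >> 23) & 0x7FFFFF

    out = (sign << 31) | ((exp & 0xFF) << 23) | frac
    return struct.unpack('>f', struct.pack('>I', out))[0]
```

For example, `fp32_mul(1.5, 2.0)` returns 3.0 and `fp32_mul(3.0, 3.0)` returns 9.0, exercising both the no-shift and the shift-and-increment normalisation paths.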
FLOATING POINT NUMBERS
The term floating point is derived from the fact that there is no fixed number of
digits before and after the decimal point, that is, the decimal point can float. There are
also representations in which the number of digits before and after the decimal point is
set, called fixed-point representations. In general, floating point representations are
slower and less accurate than fixed-point representations, but they can handle a larger
range of numbers. Floating Point Numbers are numbers that can contain a fractional part.
For example, the following are floating point numbers: 3.0, -111.5, ½ and 3E-5.
FLOATING POINT FORMATS
Several different representations of real numbers have been proposed, but by far the
most widely used is the floating-point representation. Floating-point representations have a
base b (which is always assumed to be even) and a precision p. If b = 10 and p = 3 then the
number 0.1 is represented as 1.00 × 10^-1. If b = 2 and p = 22, then the decimal number 0.1
cannot be represented exactly; it is approximately 1.100110011001100110011 × 2^-4. In
general, a floating point number is represented as ± d.dd…d × b^e, where d.dd…d is
called the significand and has p digits. More precisely, ± d0.d1d2…dp-1 × b^e represents the
number ±(d0 + d1·b^-1 + … + dp-1·b^-(p-1)) · b^e, with 0 ≤ di < b.
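The claim that 0.1 has no exact binary representation can be checked directly, assuming Python is available: `float.hex()` prints the exact double-precision value that is actually stored.

```python
# Python stores 0.1 as a binary (base-2) floating point value;
# float.hex() reveals the exact significand and exponent that are stored.
stored = (0.1).hex()
print(stored)   # 0x1.999999999999ap-4

# The hex fraction 0x1.999... is binary 1.1001 1001 1001 ..., and the
# exponent is -4, i.e. 0.1 ≈ 1.100110011001100110011... × 2^-4 — the
# infinitely repeating pattern has to be cut off at the precision p.
```

The repeating `9` digits (binary `1001`) show why the representation must be truncated at p digits, exactly as described above for b = 2.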