25-10-2012, 04:56 PM
Area and Power Performance Analysis of a Floating-Point Based Application on FPGAs
ABSTRACT
Almost all signal processing algorithms are initially represented in double-precision floating-point in languages such as
Matlab. For hardware implementation, these algorithms have to be converted to large-precision fixed-point to retain a
sufficiently large dynamic range. However, the inevitable quantization effects and the complexity of converting a
floating-point algorithm into a fixed-point one limit the use of fixed-point arithmetic for high-precision embedded computing.
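As a rough illustration of the quantization effect described above (a sketch only: the function and the Q-format parameters are illustrative, not taken from the paper), converting a double-precision value to fixed-point bounds both its precision and its dynamic range:

```python
def to_fixed(x, frac_bits, total_bits):
    """Quantize x to signed fixed-point with frac_bits fractional bits,
    saturating at the representable range, then convert back to a real
    value so the quantization error can be measured directly."""
    scale = 1 << frac_bits
    max_int = (1 << (total_bits - 1)) - 1
    min_int = -(1 << (total_bits - 1))
    q = max(min_int, min(max_int, round(x * scale)))  # quantize + saturate
    return q / scale

x = 3.14159265358979
for total, frac in [(16, 8), (32, 24)]:
    fx = to_fixed(x, frac, total)
    print(f"Q{total - frac}.{frac}: {fx:.12f}  error={abs(x - fx):.2e}")
```

Widening the format shrinks the quantization error, which is one reason high-precision fixed-point conversions become costly in hardware.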
FPGAs have become an attractive option for implementing computationally intensive applications. However, the common
perception has been that efficient FPGA implementations of floating-point arithmetic incur large performance, area, and
power overheads compared to fixed-point arithmetic. With recent technology advances, FPGA densities are increasing at a rate
at which area considerations are becoming less significant. These advances have also reduced the performance and power
overheads of floating-point arithmetic. With appropriate designs, floating-point applications can even be more efficient than
fixed-point ones for large bitwidths, and the overheads in the context of the overall application can be quite low.
In this paper, we present a preliminary area and power performance analysis of double-precision matrix multiplication, an
extensively used kernel in embedded computing, and show that FPGAs are good candidates for implementing high-precision
floating-point based applications when compared to a general-purpose processor. Many FPGA-based floating-point units, both
open source [2] and commercial [1], are currently available. However, most of them support only single-precision
floating-point operations and do not exploit recent advances in FPGAs. Moreover, an area and power performance analysis of
floating-point units in the context of a common application is lacking.
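For reference, a minimal sketch of the double-precision matrix-multiplication kernel discussed above (plain Python floats are IEEE 754 binary64; this is an illustrative software reference, not the paper's FPGA implementation):

```python
def matmul(A, B):
    """C = A x B for dense row-major matrices given as lists of lists."""
    n, k, m = len(A), len(A[0]), len(B[0])
    assert len(B) == k, "inner dimensions must match"
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0                       # double-precision accumulator
            for p in range(k):
                acc += A[i][p] * B[p][j]    # row-column dot product
            C[i][j] = acc
    return C

I = [[1.0, 0.0], [0.0, 1.0]]
M = [[2.0, 3.0], [4.0, 5.0]]
print(matmul(I, M))  # identity times M returns M
```

The inner accumulation loop is the part that maps onto floating-point multiply-accumulate units in a hardware design, which is why this kernel is a common benchmark for floating-point performance, area, and power.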