07-07-2012, 04:57 PM
energy aware compiler
Introduction
The compiler is a critical component in determining the types, order (to some extent), and number of instructions executed for a given application; it therefore wields significant influence over the power consumed by the system. However, most compiler optimizations focus on metrics such as performance and code size, and do not account for power during optimization or code generation. A few studies have focused on specific low-level optimizations for reducing power, such as instruction scheduling [9] and register allocation [3]. High-level (source-level) optimizations can complement these techniques and, more importantly, as shown in a few recent simulation-based studies (e.g., [8, 5]), can have a larger impact on system power consumption. To develop and evaluate new energy-conscious compiler optimizations, we need mechanisms to estimate energy consumption quickly and accurately.
Literature Survey
It is anticipated that, in many array-dominated codes, the highest energy gains will come from high-level code optimizations [1, 10]. The framework I propose can be used by compiler designers and system architects in the following ways:
• Given a fixed architecture and technology parameters, various performance-oriented compiler optimization techniques can be evaluated from an energy viewpoint. For example, the energy impact of loop tiling, loop distribution, unroll-and-jam, and other loop and data transformations can be investigated using EAC (the energy-aware compiler framework).
• For specific compiler optimization techniques preferred for performance reasons, the potential energy savings from architectural enhancements/modifications can be evaluated. Conversely, compiler optimizations designed to exploit energy-efficient architectural features can be studied.
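To make the first use case concrete, below is a minimal sketch of one of the transformations mentioned above, loop tiling, applied to a matrix transpose. The tile size T is a hypothetical tunable parameter, and the point of the transformation from an energy perspective is that blocking the iteration space keeps each tile cache-resident, reducing accesses to high-capacitance off-chip memory:

```c
#define N 64
#define T 8  /* tile size: an illustrative, tunable parameter */

/* Original loop nest: column-wise writes to b stride through memory,
 * touching a new cache line on almost every iteration for large N. */
void transpose_plain(const int a[N][N], int b[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            b[j][i] = a[i][j];
}

/* Tiled version: the iteration space is blocked into T x T tiles so
 * both the reads from a and the writes to b stay within a small,
 * cache-resident working set. The result is identical; only the
 * memory access pattern (and hence energy behavior) changes. */
void transpose_tiled(const int a[N][N], int b[N][N]) {
    for (int ii = 0; ii < N; ii += T)
        for (int jj = 0; jj < N; jj += T)
            for (int i = ii; i < ii + T; i++)
                for (int j = jj; j < jj + T; j++)
                    b[j][i] = a[i][j];
}
```

Both versions compute the same transpose (N is chosen divisible by T, so no remainder loops are needed); an energy-aware evaluation would compare their per-component access counts rather than their outputs.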
Modeling Energy Consumption of Software
Energy consumption depends on how the different components of a system are exercised by the software. In this work, we focus on dynamic energy consumption. The dynamic energy consumed in a system can be expressed as the sum of the energies consumed in its components, such as the datapath, caches, clock network, buses, and main memory. The activity, and consequently the energy consumed in these components, is determined by the software being executed on the system. The software can modify the number of transitions in the nodes (which affects switching power) by altering the input patterns, reduce effective capacitance by reducing the absolute number of accesses to high-capacitance components (e.g., large off-chip memories), or scale the voltage and clock frequency to adapt the energy behavior of the code to the needs of the application.
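The per-component sum described above can be sketched with the standard CMOS approximation that one access to a component switches an effective capacitance C_eff at supply voltage Vdd, costing roughly C_eff * Vdd^2 joules. The structure and the numbers in the usage note are illustrative, not taken from any particular energy model:

```c
/* One entry per system component (datapath, cache, bus, memory, ...).
 * Per the standard CMOS approximation, the switching energy of a
 * single access is about c_eff * vdd^2, so a component's total
 * dynamic energy is accesses * c_eff * vdd^2. */
typedef struct {
    const char *name;
    double accesses;  /* access count supplied by compiler analysis */
    double c_eff;     /* effective switched capacitance (farads)    */
    double vdd;       /* supply voltage (volts)                     */
} component;

/* Sum the dynamic energy over all n components (joules). */
double dynamic_energy(const component *c, int n) {
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += c[i].accesses * c[i].c_eff * c[i].vdd * c[i].vdd;
    return total;
}
```

A caller would fill in one `component` per hardware unit, e.g. a small on-chip cache with a low c_eff and a large off-chip memory with a much higher one, which is exactly why reducing accesses to the latter pays off.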
Extracting Parameters for Energy Models
In order to compute the energy expended in different hardware units, the compiler should analyze the program and extract the application-dependent parameters required by the energy models. The second column in Figure 2 gives a list of these compiler-supplied parameters.
In this section, we explain the techniques used to extract these parameters from the nested-loop-based codes studied in this work. The first step in developing the automated process is to identify the high-level constructs used in these codes and correlate them with the actual machine instructions. The constructs vital to the studied codes include a typical loop, a nested loop, assignment statements, array references, and scalar variable references within and outside loops. To compute datapath energy, we need to estimate the number of instructions of each type associated with the actual execution of these constructs. To achieve this, the assembly equivalents of several codes were obtained using our back-end compiler (a variant of gcc) at the -O2 optimization level.
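A hedged sketch of how such per-construct instruction counts might be rolled up for a loop nest follows. It assumes each loop level charges a fixed per-iteration control overhead (increment, compare, branch) observed from the -O2 assembly; the overhead constant is purely illustrative, not measured from any particular gcc version:

```c
#define LOOP_OVERHEAD 3  /* assumed insns per iteration: add, cmp, branch */

/* Estimate total instructions executed by a loop nest given the trip
 * count of each level (outermost first) and the instruction count of
 * the innermost body. Control overhead is charged at every level for
 * every iteration of that level; the body runs in the innermost loop. */
long nest_instructions(const long *trips, int depth, long body_insns) {
    long total = 0, iters = 1;
    for (int d = 0; d < depth; d++) {
        iters *= trips[d];               /* iterations of level d */
        total += iters * LOOP_OVERHEAD;  /* control insns at level d */
    }
    total += iters * body_insns;         /* body in the innermost loop */
    return total;
}
```

For a 10 x 20 nest with a 5-instruction body this gives 10*3 + 200*3 + 200*5 instructions; counts like these, multiplied by per-instruction-type energy costs, feed the datapath energy estimate.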