17-12-2012, 05:43 PM
Advanced Topics in Computer Architecture
Advanced Topics.pdf (Size: 1.96 MB / Downloads: 111)
Goals of the course
Understanding the design techniques, machine structures, technology factors, and evaluation methods that will determine the form of computers in the 21st century.
Learning outcomes
At the end of the course the student should be able to:
A. Understand and evaluate the hardware components of advanced architectures.
B. Understand and analyze the performance of architectures and select among different ones for particular use scenarios.
C. Understand and analyze the most important parallel architectures in order to distinguish their main differences.
D. Understand and use simulation and evaluation tools for advanced computer architectures.
Computer Technology
• Computer technology has made incredible progress in the roughly 60 years since the first general-purpose electronic computer was created.
• Today, less than $500 will purchase a personal computer with more performance, more main memory, and more disk storage than a computer bought in 1985 for 1 million dollars.
• This rapid improvement has come from two sources:
– advances in the technology used to build computers, and
– innovation in computer design.
Microprocessor
• Although technological improvements have been
fairly steady, progress arising from better computer
architectures has been much less consistent.
• During the first 25 years of electronic computers,
both forces made a major contribution, delivering
performance improvement of about 25% per year.
• The late 1970s saw the emergence of the
microprocessor.
• The ability of the microprocessor to ride the
improvements in integrated circuit technology led to
a higher rate of improvement—roughly 35% growth
per year in performance.
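These compound growth rates are easy to quantify. As a rough sketch (the 25% and 35% figures come from the text above; the arithmetic is ordinary compound growth, not something stated in the slides):

```python
import math

def years_to_double(annual_rate):
    """Years needed for performance to double at a compound annual rate."""
    return math.log(2) / math.log(1 + annual_rate)

def cumulative_speedup(annual_rate, years):
    """Total speedup after a given number of years of compound improvement."""
    return (1 + annual_rate) ** years

# At 25%/year (the pre-microprocessor rate), performance doubles about
# every 3.1 years; at 35%/year, about every 2.3 years.
print(round(years_to_double(0.25), 1))        # ~3.1
print(round(years_to_double(0.35), 1))        # ~2.3

# Over a decade the small difference in annual rate compounds dramatically:
print(round(cumulative_speedup(0.25, 10), 1)) # ~9.3x
print(round(cumulative_speedup(0.35, 10), 1)) # ~20.1x
```

The takeaway is that a seemingly modest jump in annual rate (25% to 35%) more than doubles the speedup accumulated over ten years.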
Successful Architectures
• Two significant changes in the computer marketplace
made it easier than ever before to be commercially
successful with a new architecture.
– First, the virtual elimination of assembly language
programming reduced the need for object-code
compatibility.
– Second, the creation of standardized, vendor-independent operating systems, such as UNIX and its clone Linux, lowered the cost and risk of bringing out a new architecture.
RISC Machines
• These changes made it possible to successfully develop a new set of architectures with simpler instructions, called RISC (Reduced Instruction Set Computer) architectures, in the early 1980s.
• The RISC-based machines focused designers' attention on two critical performance techniques:
– the exploitation of instruction-level parallelism, initially through pipelining and later through multiple instruction issue, and
– the use of caches (initially in simple forms and later using more sophisticated organizations and optimizations).
RISC: Faster Machines
• The RISC-based computers raised the performance
bar, forcing prior architectures to keep up or
disappear.
– The Digital Equipment VAX could not, and so it was replaced by a RISC architecture.
– Intel rose to the challenge, primarily by translating x86
(or IA-32) instructions into RISC-like instructions
internally, allowing it to adopt many of the innovations
first pioneered in the RISC designs.
• As transistor counts soared in the late 1990s, the
hardware overhead of translating the more complex
x86 architecture became negligible.
The end of an era
• The 16-year renaissance is over.
• Since 2002, processor performance improvement has
dropped to about 20% per year due to the triple hurdles of:
– maximum power dissipation of air-cooled chips,
– little instruction-level parallelism left to exploit efficiently, and
– almost unchanged memory latency.
• Indeed, in 2004 Intel canceled its high-performance
uniprocessor projects and joined IBM and Sun in declaring
that the road to higher performance would be via multiple
processors per chip rather than via faster uniprocessors.
• This signals a historic switch from relying solely on instruction-level parallelism (ILP) to exploiting thread-level parallelism (TLP) and data-level parallelism (DLP).
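The distinction among the three forms of parallelism can be made concrete with a toy sketch (not from the slides). A dot product is used as the workload; note that Python threads illustrate only the *structure* of TLP, since the interpreter's global lock prevents a real speedup here, and true DLP relies on hardware vector units:

```python
from concurrent.futures import ThreadPoolExecutor

a = list(range(1000))
b = list(range(1000))

# ILP-style: a single sequential instruction stream; any overlap of the
# multiplies and adds is discovered by the hardware, not by the programmer.
serial = sum(x * y for x, y in zip(a, b))

# DLP-style: the same operation applied across all data elements at once,
# which is what a SIMD/vector unit would execute in hardware.
products = [x * y for x, y in zip(a, b)]  # element-wise, data-parallel shape
data_parallel = sum(products)

# TLP-style: the work split into independent threads, each owning a chunk.
def partial_dot(lo, hi):
    return sum(a[i] * b[i] for i in range(lo, hi))

with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = [(i, i + 250) for i in range(0, 1000, 250)]
    thread_parallel = sum(pool.map(lambda c: partial_dot(*c), chunks))

assert serial == data_parallel == thread_parallel
```

All three compute the same result; the switch the slides describe is about which of these structures the hardware is designed to accelerate.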