AUTONOMIC COMPUTING
A SEMINAR REPORT
Submitted by
Pushkar Kumar
in partial fulfillment of the requirements for the Degree
of
Bachelor of Technology (B.Tech)
IN
COMPUTER SCIENCE AND ENGINEERING
SCHOOL OF ENGINEERING
COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
KOCHI- 682022
SEPTEMBER 2008
DIVISION OF COMPUTER SCIENCE & ENGINEERING
SCHOOL OF ENGINEERING
COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
KOCHI-682022
Certificate
Certified that this is a bonafide record of the seminar report titled
"AUTONOMIC COMPUTING"
done by
Pushkar Kumar
of VIIth semester Computer Science & Engineering in the year
2008 in partial fulfillment of requirement for the Degree of
Bachelor of Technology in Computer Science & Engineering of
Cochin University of Science and Technology.
Mrs. Sheena S.                          Dr. David Peter S.
Seminar Guide                           Head of Division
Date:
Acknowledgement
At the outset, I thank the Lord Almighty for the grace, strength and hope
to make my endeavor a success.
I also express my gratitude to Dr. David Peter, Head
of the Department, for providing me with adequate facilities, ways and
means by which I was able to complete this seminar. I express my sincere
gratitude to him for his constant support and valuable suggestions without
which the successful completion of this seminar would not have been
possible.
I thank Ms. Sheena S., my seminar guide, for her
boundless cooperation and help extended for this seminar. I express my
immense pleasure and thankfulness to all the teachers and staff of the
Department of Computer Science and Engineering, CUSAT for their
cooperation and support.
Last but not least, I thank all others, especially
my classmates and my family members, who in one way or another helped
me in the successful completion of this work.
PUSHKAR KUMARPage 4
ABSTRACT
The increasing scale, complexity, heterogeneity, and dynamism of networks,
systems and applications have made our computational and information
infrastructure brittle, unmanageable and insecure. This has necessitated the
investigation of an alternate paradigm for system and application design,
which is based on strategies used by biological systems to deal with similar
challenges - a vision that has been referred to as autonomic computing. The
overarching goal of autonomic computing is to realize computer and
software systems and applications that can manage themselves in accordance
with high-level guidance from humans. Meeting the grand challenges of
autonomic computing requires scientific and technological advances in a
wide variety of fields, as well as new software and system architectures that
support the effective integration of the constituent technologies.
TABLE OF CONTENTS

List of Figures
1 Introduction
1.1 The Complexity Problem
1.2 The Evolution Problem
2 Foundations and Concepts
2.1 The Ubiquitous Control Loop
2.2 Autonomic Elements
2.3 Characteristics of Autonomic Systems
2.4 Policies
2.5 Issues of Trust
2.6 Evolution Rather Than Revolution
3 Analysis and Benefits of Current AC Work
3.1 AC Framework
3.2 Quality Attributes and Architecture Evaluation
3.3 Standards
3.4 Curriculum Development
4 Conclusions
5 References
List of Figures
2.1: Control Loop
2.2: Autonomic Element
2.3: Autonomic Characteristics
2.4: Increasing Autonomic Functionality
3.1: Interface Standards Within an Autonomic Element [Miller 05b]
1. Introduction
Computer systems develop organically. A computer system usually starts as a simple,
clean system intended for a well-defined environment and set of applications. However, in
order to deal with growth and new demands, storage, computing, and networking
components are added to, replaced in, and removed from the system, while new
applications are installed and existing ones are upgraded.
Some changes to the system are intended to enhance its functionality but result in loss of
performance or other undesired secondary effects. In order to improve performance or
reliability, resources are added or replaced. The particulars of such development cannot
be anticipated; growth simply happens the way it does.
The autonomic computing effort aims to make systems self-configuring and self-
managing. However, for the most part the focus has been on how to make system
components self-configuring and self-managing. Each such component has its own policy
for how to react to change in its environment.
Autonomic computing is not a new field but rather an amalgamation of selected theories
and practices from several existing areas including control theory, adaptive algorithms,
software agents, robotics, fault-tolerant computing, distributed and real-time systems,
machine learning, human-computer interaction (HCI), artificial intelligence, and many
more. The future of autonomic computing is heavily dependent on the developments and
successes in several other technology arenas that provide an infrastructure for autonomic
computing systems including Web and grid services, architecture platforms such as
service-oriented architecture (SOA), Open Grid Services Architecture (OGSA), and
pervasive and ubiquitous computing.
1.1 The Complexity Problem
The increasing complexity of computing systems is overwhelming the capabilities of
software developers and system administrators who design, evaluate, integrate, and
manage these systems. Today, computing systems include very complex infrastructures
and operate in complex heterogeneous environments. With the proliferation of handheld
devices, the ever-expanding spectrum of users, and the emergence of the information
economy with the advent of the Web, computing vendors have difficulty providing an
infrastructure to address all the needs of users, devices, and applications. SOAs with Web
services as their core technology have solved many problems, but they have also raised
numerous complexity issues. One approach to deal with the business challenges arising
from these complexity problems is to make the systems more self-managed or autonomic.
For a typical information system consisting of an application server, a Web server,
messaging facilities, and layers of middleware and operating systems, the number of
tuning parameters exceeds human comprehension and analytical capabilities. Thus, major
software and system vendors endeavor to create autonomic, dynamic, or self-managing
systems by developing methods, architecture models, middleware, algorithms, and
policies to mitigate the complexity problem. In a 2004 Economist article, Kluth
investigates how other industrial sectors successfully dealt with complexity [Kluth 04].
He and others have argued that for a technology to be truly successful, its complexity has
to disappear. He illustrates his arguments with many examples including the automobile
and electricity markets. Only mechanics were able to operate early automobiles
successfully. In the early 20th century, companies needed a position of vice president of
electricity to deal with power generation and consumption issues. In both cases, the
respective industries managed to reduce the need for human expertise and simplify the
usage of the underlying technology. However, usage simplicity comes with an increased
complexity of the overall system (e.g., what is "under the hood"). Basically for every
mouse click or return we take out of the user experience, 20 things have to happen in the
software behind the scenes. Given this historical perspective with this predictable path of
technology evolution, maybe there is hope for the information technology sector.
1.2 The Evolution Problem
By attacking the software complexity problem through technology simplification and
automation, autonomic computing also promises to solve selected software evolution
problems. Instrumenting software systems with autonomic technology will allow us to
monitor or verify requirements (functional or nonfunctional) over long periods of time.
For example, self-managing systems will be able to monitor and control the brittleness of
legacy systems, provide automatic updates to evolve installed software, adapt safety-
critical systems without halting them, immunize computers against malware
automatically, facilitate enterprise integration with self-managing integration
mechanisms, document architectural drift by equipping systems with architecture analysis
frameworks, and keep the values of quality attributes within desired ranges.
2 Foundations and Concepts
2.1 The Ubiquitous Control Loop
At the heart of an autonomic system is a control system, which is a combination of
components that act together to maintain actual system attribute values close to desired
specifications. Open-loop control systems (e.g., automatic toasters and alarm clocks) are
those in which the output has no effect on the input. Closed-loop control systems (e.g.,
thermostats or automotive cruise-control systems) are those in which the output has an
effect on the input in such a way as to maintain a desired output value. An autonomic
system embodies one or more closed control loops. A closed-loop system includes some
way to sense changes in the managed element, so corrective action can be taken. The
speed with which a simple closed-loop control system moves to correct its output is
described by its damping ratio and natural frequency. Properties of a control system
include spatial and temporal separability of the controller from the controlled element,
evolvability of the controller, and filtering of the controlled resource.
Fig 2.1: Control Loop
Numerous engineering products embody open-loop or closed-loop control systems. The
AC community often refers to the human autonomic nervous system (ANS) with its
many control loops as a prototypical example. The ANS monitors and regulates vital
signs such as body temperature, heart rate, blood pressure, pupil dilation, digestion, blood
sugar, breathing rate, immune response, and many more involuntary, reflexive responses
in our bodies. The ANS consists of two separate divisions called the parasympathetic
nervous system, which regulates day-to-day internal processes and behaviors, and the
sympathetic nervous system, which deals with stressful situations. Studying the ANS
might be instructive for the design of autonomic software systems. For example,
physically separating the control loops that deal with normal and abnormal situations
might be a useful design idea for autonomic software systems.
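The closed-loop behavior described above can be sketched in a few lines of Python. The proportional controller, the setpoint, and the gain value below are illustrative assumptions for the sketch, not details from the report:

```python
# Minimal sketch of a closed-loop control system: a proportional
# controller nudges a sensed value toward a desired setpoint.
# All names and constants here are illustrative, not from the report.

def run_closed_loop(setpoint, initial, gain=0.5, steps=20):
    """Repeatedly sense the output, compare it to the desired value,
    and apply a correction proportional to the error."""
    value = initial
    history = []
    for _ in range(steps):
        error = setpoint - value      # sense: deviation from desired output
        correction = gain * error     # decide: proportional control action
        value += correction           # act: effector adjusts the element
        history.append(value)
    return history

history = run_closed_loop(setpoint=22.0, initial=15.0)
# Each step removes a fixed fraction of the remaining error, so the
# output converges toward the setpoint.
print(round(history[-1], 3))  # → 22.0
```

Because the output feeds back into the next correction, this is a closed loop in the sense defined above; an open-loop system would apply corrections without ever reading the output.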
2.2 Autonomic Elements
IBM researchers have established an architectural framework for autonomic systems
[Kephart 03]. An autonomic system consists of a set of autonomic elements that contain
and manage resources and deliver services to humans or other autonomic elements. An
autonomic element consists of one autonomic manager and one or more managed
elements. At the core of an autonomic element is a control loop that integrates the
manager with the managed element. The autonomic manager consists of sensors,
effectors, and a five-component analysis and planning engine, as depicted in Figure 2.2.
The monitor observes the sensors, filters the data collected from them, and then stores the
distilled data in the knowledge base. The analysis engine compares the collected data
against the desired sensor values, also stored in the knowledge base. The planning engine
devises strategies to correct the trends identified by the analysis engine. The execution
engine finally adjusts parameters of the managed element by means of effectors and
stores the affected values in the knowledge base.
Fig 2.2: Autonomic Element
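The monitor-analyze-plan-execute cycle around a shared knowledge base might be rendered roughly as follows. The class names, the load metric, and the actions are invented for illustration and are not IBM's actual interfaces:

```python
# Toy rendering of an autonomic manager's loop around a shared
# knowledge base. Names and the load-shedding policy are assumptions.

class AutonomicManager:
    def __init__(self, desired_load):
        self.knowledge = {"desired_load": desired_load}  # knowledge base

    def monitor(self, sensor_reading):
        # Filter raw sensor data and store the distilled value.
        self.knowledge["observed_load"] = round(sensor_reading, 1)

    def analyze(self):
        # Compare observed data against the desired value in the knowledge base.
        return self.knowledge["observed_load"] - self.knowledge["desired_load"]

    def plan(self, deviation):
        # Devise a corrective strategy for the trend the analysis found.
        if deviation > 0:
            return "shed_load"
        if deviation < 0:
            return "accept_load"
        return "no_action"

    def execute(self, action, managed_element):
        # Adjust the managed element through its effector, record the result.
        managed_element.apply(action)
        self.knowledge["last_action"] = action

class ManagedElement:
    def __init__(self):
        self.actions = []
    def apply(self, action):          # effector interface
        self.actions.append(action)

manager = AutonomicManager(desired_load=0.7)
element = ManagedElement()
manager.monitor(sensor_reading=0.92)       # monitor
action = manager.plan(manager.analyze())   # analyze + plan
manager.execute(action, element)           # execute
print(element.actions)  # → ['shed_load']
```

Note how every stage reads from or writes to the same knowledge dictionary, mirroring the role of the knowledge base in the description above.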
An autonomic element manages its own internal state and its interactions with its
environment (i.e., other autonomic elements). An element's internal behavior and its
relationships with other elements are driven by the goals and policies the designers have
built into the system. Autonomic elements can be arranged as strict hierarchies or graphs.
Touch points represent the interface between the autonomic manager and the managed
element. Through touch points, autonomic managers control a managed resource or
another autonomic element. It is imperative that touch points are standardized, so
autonomic managers can manipulate other autonomic elements in a uniform manner. That
is, a single standard manageability interface, as provided by a touch point, can be used to
manage routers, servers, application software, middleware, a Web service, or any other
autonomic element. This is one of the key values of AC: a single manageability interface,
rather than the numerous sorts of manageability interfaces that exist today, to manage
various types of resources [Miller 05e]. Thus, a touch point constitutes a level of
indirection and is the key to adaptability. A manageability interface consists of a sensor
and an effector interface. The sensor interface enables an autonomic manager to retrieve
information from the managed element through the touch point using two interaction
styles:
(1) request-response for solicited (queried) data retrieval and
(2) send-notification for unsolicited (event-driven) data retrieval.
The effector interface enables an autonomic manager to manage the managed element
through the touch point with two interaction types:
(1) perform-operation to control the behavior (e.g., adjust parameters or send commands)
(2) solicit-response to enable call-back functions.
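As a rough sketch, the four interaction styles could map onto an interface like the one below. The method names mirror the styles listed above, while the payloads, the resource dictionary, and the callback wiring are assumptions made for illustration:

```python
# Sketch of a touch point exposing the sensor and effector interfaces
# with the four interaction styles named in the text. The managed
# resource is modeled as a plain dictionary for illustration only.

class TouchPoint:
    def __init__(self, resource):
        self.resource = resource          # the managed element's state
        self.subscribers = []

    # --- sensor interface ---
    def request_response(self, key):
        """Solicited (queried) data retrieval."""
        return self.resource.get(key)

    def send_notification(self, event):
        """Unsolicited (event-driven) data delivery to subscribed managers."""
        for callback in self.subscribers:
            callback(event)

    # --- effector interface ---
    def perform_operation(self, key, value):
        """Control the element's behavior, e.g., adjust a parameter."""
        self.resource[key] = value

    def solicit_response(self, callback):
        """Register a call-back the element may invoke later."""
        self.subscribers.append(callback)

events = []
tp = TouchPoint({"cpu_limit": 0.8})
tp.solicit_response(events.append)         # manager registers a call-back
tp.perform_operation("cpu_limit", 0.6)     # manager tunes a parameter
tp.send_notification("cpu_limit_changed")  # element notifies the manager
print(tp.request_response("cpu_limit"), events)
```

Because the manager only ever touches the resource through these four methods, the same manager code could, in principle, sit in front of any resource that implements the interface, which is the standardization argument made above.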
IBM has proposed interface standards for touch points and developed a simulator to aid
the development of autonomic managers. The Touch point Simulator can be used to
simulate different managed elements and resources and to verify standard interface
compliance.
2.3 Characteristics of Autonomic Systems
An autonomic system can self-configure at runtime to meet changing operating
environments, self-tune to optimize its performance, self-heal when it encounters
unexpected obstacles during its operation, and, of particular current interest, protect
itself from malicious attacks. Research and development teams concentrate on
developing theories, methods, tools, and technology for building self-healing, self-
configuring, self-optimizing, and self-protecting systems, as depicted in Figure 2.3. An
autonomic system can self-manage anything, including a single property or multiple
properties.
Fig 2.3: Autonomic Characteristics
An autonomic system has the following characteristics:
• reflexivity: An autonomic system must have detailed knowledge of its
components, current status, capabilities, limits, boundaries, interdependencies
with other systems, and available resources. Moreover, the system must be aware
of its possible configurations and how they affect particular nonfunctional
requirements.
• self-configuring: Self-configuring systems provide increased responsiveness by
adapting to a dynamically changing environment. A self-configuring system must
be able to configure and reconfigure itself under varying and unpredictable
conditions. Varying degrees of end-user involvement should be allowed, from
user-based reconfiguration to automatic reconfiguration based on monitoring and
feedback loops. For example, the user may be given the option of reconfiguring
the system at runtime; alternatively, adaptive algorithms could learn the best
configurations to achieve mandated performance or to service any other desired
functional or nonfunctional requirement. Variability can be accommodated at
design time (e.g., by implementing goal graphs) or at runtime (e.g., by adjusting
parameters). Systems should be designed to provide configurability at a feature
level with capabilities such as separation of concerns, levels of indirection,
integration mechanisms (data and control), scripting layers, plug and play, and
set-up wizards. Adaptive algorithms have to detect and respond to short-term and
long-term trends.
• self-optimizing: Self-optimizing systems provide operational efficiency by tuning
resources and balancing workloads. Such a system will continually monitor and
tune its resources and operations. In general, the system will continually seek to
optimize its operation with respect to a set of prioritized nonfunctional
requirements to meet the ever changing needs of the application environment.
Capabilities such as repartitioning, reclustering, load balancing, and rerouting
must be designed into the system to provide self-optimization. Again, adaptive
algorithms, along with other systems, are needed for monitoring and response.
• self-healing: Self-healing systems provide resiliency by discovering and
preventing disruptions as well as recovering from malfunctions. Such a system
will be able to recover, without loss of data or noticeable delays in processing,
from routine and extraordinary events that might cause some of its parts to
malfunction. Self-recovery means that the system will select, possibly with user
input, an alternative configuration to the one it is currently using and will switch
to that configuration with minimal loss of information or delay.
• self-protecting: Self-protecting systems secure information and resources by
anticipating, detecting, and protecting against attacks. Such a system will be
capable of protecting itself by detecting and counteracting threats through the use
of pattern recognition and other techniques. This capability means that the design
of the system will include an analysis of the vulnerabilities and the inclusion of
protective mechanisms that might be employed when a threat is detected. The
design must provide for capabilities to recognize and handle different kinds of
threats in various contexts more easily, thereby reducing the burden on
administrators.
• adapting: At the core of the complexity problem addressed by the AC initiative is
the problem of evaluating complex tradeoffs to make informed decisions. Most of
the characteristics listed above are founded on the ability of an autonomic system
to monitor its performance and its environment and respond to changes by
switching to a different behavior. At the core of this ability is a control loop.
Sensors observe an activity of a controlled process, a controller component
decides what has to be done, and then the controller component executes the
required operations through a set of actuators. The adaptive mechanisms to be
explored will be inspired by work on machine learning, multi-agent systems, and
control theory.
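A minimal sketch of the self-healing characteristic described above: on a detected malfunction, the system switches to an alternative configuration while preserving in-flight work. The configuration names and the buffering scheme are invented for this sketch:

```python
# Hedged sketch of self-healing: recovery selects an alternative
# configuration and switches to it with minimal loss of information.
# The configuration list and buffer are illustrative assumptions.

class SelfHealingService:
    def __init__(self, configurations):
        self.configurations = list(configurations)  # in order of preference
        self.active = self.configurations[0]
        self.buffer = []                            # in-flight work preserved

    def process(self, item, healthy=True):
        self.buffer.append(item)
        if not healthy:
            self.recover()
        self.buffer.pop()      # item handled (or re-handled) successfully
        return self.active

    def recover(self):
        # Drop the failed configuration and switch to the next alternative;
        # buffered work is retained, so no data is lost during the switch.
        failed = self.configurations.pop(0)
        if not self.configurations:
            raise RuntimeError(f"no alternative to {failed}")
        self.active = self.configurations[0]

svc = SelfHealingService(["primary", "replica", "degraded"])
print(svc.process("req-1"))                 # normal operation → primary
print(svc.process("req-2", healthy=False))  # malfunction → replica
```

The same skeleton could host the other characteristics: a self-optimizing variant would reorder the configuration list by a measured cost, and a self-protecting one would trigger recovery from an intrusion detector rather than a health flag.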
2.4 Policies
Autonomic elements can function at different levels of abstraction. At the lowest levels,
the capabilities and the interaction range of an autonomic element are limited and hard-
coded. At higher levels, elements pursue more flexible goals specified with policies, and
the relationships among elements are flexible and may evolve. Recently, Kephart and
Walsh proposed a unified framework for AC policies based on the well-understood
notions of states and actions [Kephart 04]. In this framework, a policy will directly or
indirectly cause an action to be taken that transitions the system into a new state. Kephart
and Walsh distinguish three types of AC policies, which correspond to different levels of
abstraction, as follows:
• action policies: An action policy dictates the action that should be taken when the
system is in a given current state. Typically this action takes the form of "IF
(condition) THEN (action)," where the condition specifies either a specific state
or a set of possible states that all satisfy the given condition. Note that the state
that will be reached by taking the given action is not specified explicitly.
Presumably, the author knows which state will be reached upon taking the
recommended action and deems this state more desirable than states that would be
reached via alternative actions. This type of policy is generally necessary to
ensure that the system is exhibiting rational behavior.
• goal policies: Rather than specifying exactly what to do in the current state, goal
policies specify either a single desired state, or one or more criteria that
characterize an entire set of desired states. Implicitly, any member of this set is
equally acceptable. Rather than relying on a human to explicitly encode rational
behavior, as in action policies, the system generates rational behavior itself from
the goal policy. This type of policy permits greater flexibility and frees human
policy makers from the "need to know" low-level details of system function, at
the cost of requiring reasonably sophisticated planning or modeling algorithms.
• utility-function policies: A utility-function policy is an objective function that
expresses the value of each possible state. Utility-function policies generalize goal
policies. Instead of performing a binary classification into desirable versus
undesirable states, they ascribe a real-valued scalar desirability to each state.
Because the most desired state is not specified in advance, it is computed on a
recurrent basis by selecting the state that has the highest utility from the present
collection of feasible states. Utility-function policies provide more fine-grained
and flexible specification of behavior than goal and action policies. In situations
in which multiple goal policies would conflict (i.e., they could not be
simultaneously achieved), utility-function policies allow for unambiguous,
rational decision making by specifying the appropriate tradeoff. On the other
hand, utility-function policies can require policy authors to specify a
multidimensional set of preferences, which may be difficult to elicit; furthermore
they require the use of modeling, optimization, and possibly other algorithms.
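The three policy types can be contrasted with a small sketch. The states, thresholds, and utility weights below are made up; only the shape of each policy follows the Kephart-Walsh framework described above:

```python
# Illustrative encodings of the three AC policy types over a toy set of
# system states. All numbers and names here are invented assumptions.

states = [
    {"name": "low_power", "response_ms": 250, "watts": 40},
    {"name": "balanced",  "response_ms": 120, "watts": 70},
    {"name": "turbo",     "response_ms": 60,  "watts": 120},
]

# Action policy: IF (condition) THEN (action); the resulting state is
# implicit, and the author is presumed to know it is desirable.
def action_policy(current):
    if current["response_ms"] > 200:
        return "increase_clock_speed"
    return "no_action"

# Goal policy: a criterion characterizing a set of desired states, any
# member of which is equally acceptable.
def goal_policy(state):
    return state["response_ms"] <= 150

# Utility-function policy: a real-valued desirability for every state;
# the best state is recomputed as whichever maximizes utility.
def utility(state):
    return -state["response_ms"] - 0.5 * state["watts"]

print(action_policy(states[0]))                    # → increase_clock_speed
print([s["name"] for s in states if goal_policy(s)])
print(max(states, key=utility)["name"])
```

Note how the goal policy cannot rank "balanced" against "turbo" (both satisfy it), while the utility function resolves exactly that tradeoff, which is the unambiguous-decision argument made above.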
2.5 Issues of Trust
Dealing with issues of trust is critical for the successful design, implementation, and
operation of AC systems. Since an autonomic system is supposed to reduce human
interference or even take over certain heretofore human duties, it is imperative to make
trust development a core component of its design. Even when users begin to trust the
policies hard-wired into low-level autonomic elements, it is a big step to gain their trust
in higher level autonomic elements that use these low-level elements as part of their
policies. Autonomic elements are instrumented to provide feedback to users beyond what
they provide as their service. Deciding what kind of feedback to provide and how to
instrument the autonomic elements is a difficult problem. The trust feedback required by
users will evolve with the evolution of the autonomic system. However, the AC field can
draw on the experience of the automation and HCI communities to tackle these problems.
Autonomic systems can become more trustable by actively communicating with their
users. Improved interaction will also allow these systems to be more autonomous over
time, exhibiting increased initiative without losing the users' trust. Higher trustability
and usability should, in turn, lead to improved adoptability.
2.6 Evolution Rather Than Revolution
Most existing systems cannot be redesigned and redeveloped from scratch to engineer
autonomic capabilities into them. Rather, self-management capabilities have to be added
gradually and incrementally, one component (i.e., architecture, subsystem, or service) at
a time.
With the proliferation of autonomic components, users will impose increasingly more
demands with respect to functional and nonfunctional requirements for autonomicity.
Thus, the process of equipping software systems with autonomic technology will be
evolutionary rather than revolutionary. Moreover, the evolution of autonomic systems
will happen at two levels:
(1) the introduction of autonomic components into existing systems and
(2) the change of requirements with the proliferation and integration of autonomic
system elements.
IBM has defined five levels of maturity as depicted in Figure 2.4 to characterize the
gradual injection of autonomicity into software systems.
Basic (Level 1): manual analysis and problem solving
Managed (Level 2): centralized tools, manual actions
Predictive (Level 3): cross-resource correlation and guidance
Adaptive (Level 4): system monitors, correlates, and takes action
Autonomic (Level 5): dynamic business management

Fig 2.4: Increasing Autonomic Functionality
3 Analysis and Benefits of Current AC Work
3.1 AC Framework
The AC framework outlined in the previous section provides methods, algorithms,
architectures, technology, and tools to standardize, automate, and simplify myriad system
administration tasks. Just a few years ago, the installation or uninstallation of an
application on a desktop computer required the expertise of an experienced system
administrator. Today, most users can install applications using standard install shields with
just a handful of mouse clicks. By building self-managing systems using the AC framework,
developers can accomplish similar simplifications for many other system administration
tasks (e.g., installing, configuring, monitoring, tuning, optimizing, recovering, protecting,
and extending).
3.2 Quality Attributes and Architecture Evaluation
The architectural blueprint introduced in Section 2 constitutes a solid foundation for
building AC systems. But so far, this blueprint has not come with a software analysis and
reasoning framework to facilitate architecture evaluation for self-managing applications.
The DEAS project, mentioned above, proposes to develop such a framework based on
ABASs [Klein 99]. When the system evolves, engineers can use this analysis framework
to revisit, analyze, and verify certain system properties.
Quality attributes for autonomic architectures should include not only traditional quality
criteria such as variability, modifiability, reliability, availability, and security but also
autonomicity-specific criteria such as support for dynamic adaptation, dynamic upgrade,
detecting anomalous system behavior, how to keep the user informed, sampling rate
adjustments in sensors, simulation of expected or predicted behavior, determining the
difference between expected and actual behavior, and accountability (i.e., how can users
gain trust by monitoring the underlying autonomic system).
Traditionally, for most quality attributes, applying stimuli and observing responses for
architectural analysis is basically a thought exercise performed during design and system
evolution. However, the DEAS principal investigators envision that many of the
autonomicity-specific quality attributes can be analyzed by directly stimulating events
and observing responses on the running application, which is already equipped with
sensors/monitors and executors/effectors as an autonomic element.
Codifying the relationship between architecture and quality attributes not only enhances
the current architecture design but also allows developers to reuse the architecture
analysis for other applications. The codification will make the design tradeoffs, which
often exist only in the chief architect's mind, explicit and aid in analyzing the impact of
an architecture reconfiguration to meet certain quality attributes during long-term
evolution. The fundamental idea is to equip the architecture of autonomic applications
with predictability by attaching, at design time, an analysis framework to the architecture
of a software system to validate and reassess quality attributes regularly and
automatically over long periods of time.
This codification will also aid in the development of standards and curricula materials,
which are discussed in more detail in subsequent sections.
3.3 Standards
Many successful solutions in the information technology industry are based on standards.
The Internet and World Wide Web are two obvious examples, both of which are built on
a host of protocols and content formats standardized by the Internet Engineering Task
Force (IETF) and the World Wide Web Consortium (W3C), respectively.
Before AC technology can be widely adopted, many aspects of its technical foundation
have to be standardized. IBM is actively involved in standardizing the protocols and
interfaces within an autonomic element as well as among elements, as depicted in
Figure 3.1 [Miller 05b].
In March 2005, the Organization for the Advancement of Structured Information
Standards (OASIS) standards body approved the Web Services Distributed Management
(WSDM) standard, which is potentially a key standard for AC technology. The
development of standards for AC and Web services is highly competitive and politically
charged. The Autonomic Computing Forum (ACF) is a European organization that is
open and independent. Its mandate is to generate and promote AC technology [Popescu-
Zeletin 04].
Fig 3.1: Interface Standards Within an Autonomic Element [Miller 05b]
3.4 Curriculum Development
Control systems are typically featured prominently in electrical and mechanical
engineering curricula. Historically, computer science curricula do not require control
theory courses. Recently developed software engineering curricula, however, do require
control theory [UVIC 03].
Current software architecture courses cover control loops only peripherally. The
architecture of autonomic elements is not usually discussed. Note that event-based
architectures, which are typically discussed in a computer science curriculum, are
different from the architectures for autonomic systems. Courses on self-managed systems
should be introduced into all engineering and computing curricula along the lines of real-
time and embedded systems courses.
How to build systems from the ground up as self-managed computing systems will likely
be a core topic in software engineering and computer science curricula.
4. Conclusions
The time is right for the emergence of self-managed or autonomic systems. Over the past
decade, we have come to expect that "plug-and-play" for Universal Serial Bus (USB)
devices, such as memory sticks and cameras, simply works, even for technophobic
users. Today, users demand and crave simplicity in computing solutions.
With the advent of Web and grid service architectures, we begin to expect that an average
user can provide Web services with high resiliency and high availability. The goal of
building a system that is used by millions of people each day and administered by a half-
time person, as articulated by Jim Gray of Microsoft Research, seems attainable with the
notion of automatic updates. Thus, autonomic computing seems to be more than just a
new middleware technology; in fact, it may be a solid solution for reining in the
complexity problem.
Historically, most software systems were not designed as self-managing systems.
Retrofitting existing systems with self-management capabilities is a difficult problem.
Even if autonomic computing technology is readily available and taught in computer
science and engineering curricula, it will take another decade for the proliferation of
autonomicity in existing systems.
5. References
1. Autonomic Computing: IBM's Perspective on the State of Information Technology.
2. Jeffrey O. Kephart and David M. Chess (2003), "The Vision of Autonomic Computing."
3. http://www.research.ibm.com/autonomic/research/index.html
4. http://autonomiccomputing
5. http://www.ibm.com/developerworks/autonomic