AN EVENT-BASED FRAMEWORK FOR OBJECT-ORIENTED ANALYSIS, COMPUTATION OF METRICS AND IDENTIFICATION OF TEST SCENARIOS

Software is everywhere. It lets us get cash from an Automatic Teller Machine (ATM), make a phone call and drive our cars. An average company spends about 4 to 5 percent of its revenue on Information Technology (IT), whereas highly IT-dependent companies, such as those in finance and telecommunications, spend more than 10 percent. In other words, IT is now one of the largest corporate expenses outside employee costs. Much of that money goes into hardware and software upgrades and software license fees, but a big chunk is for new software projects meant to create a better future for an organization and its customers.

Software projects are inherently complex and risky and require careful planning. Proper planning ensures that a project does not fail, while at the same time customers get a clear definition of the project, know the project status and have ready access to project deliverables at any point in time. Recent surveys [1, 5] have shown that inadequate planning and specifications, ill-defined requirements, poor requirements analysis and testing processes, and a lack of metrics and measures to gauge a project's size and complexity together lead to numerous change requests, delays, significant added costs and an increased possibility of errors. Thus a good requirements analysis method, proper management of software complexity and proper testing techniques are three important factors that play a vital role in avoiding software failures. This has been the motivation for the work carried out in this thesis. The subsequent sections focus on these three factors.


SOFTWARE REQUIREMENTS ANALYSIS

Understanding the problem domain and its requirements is key to a successful project. The Institute of Electrical and Electronics Engineers (IEEE) defines a requirement as a condition or capability that must be met or possessed by a system to satisfy a contract, standard, specification or other formally imposed document. Sommerville and Sawyer in [14] define requirements as a specification of what should be implemented.

Requirements engineering plays a vital role in understanding requirements. The requirements engineering process encompasses systematic and repeatable techniques for discovering, documenting, modeling and maintaining a set of requirements for a system as a whole, or specifically for the software components of a system.

Findings on failures due to Poor Requirement Analysis
Many studies have recently been conducted to quantify the cost and causes of software failures [1, 5]. Statistics presented in [3] show that 60%-80% of project failures can be directly attributed to poor requirements gathering, analysis and management. Article [3] also cites that 68% of IT projects fail primarily due to poor requirements.
In [5], it is projected that companies pay a premium of as much as 60% on time and budget when they use poor requirements practices in their projects. On average, poor requirements consume over 41% of the IT development budget for software, staff and external professional services when a company employs analysts of average skill. Sloppy development practices are also a rich source of failure, and they can cause errors at any stage of an IT project. Moreover, the cost of errors that are introduced during the requirements phase and fixed later in the Software Development Life Cycle (SDLC) increases exponentially [6].

Conceptual Modeling for Requirement Analysis

Software process models produce conceptual system models after systematic analysis and documentation of requirements. These conceptual models are an important bridge between the analysis and design processes. Some of the popular conceptual modeling techniques are the Data Flow Diagram (DFD) and its variants, the Relational model, the Entity-Relationship (ER) model, the Extended Entity-Relationship (EER) model, the E2R diagram, the Higher-Order Entity Relationship Model (HERM), the Semantic Database Model (SDM), the Semantic Object Model (SOM), Object Role Modeling (ORM), the Conceptual Schema Language (CSL), DATAID-1 data schema integration, the REMORA methodology, the Booch method, Object-Oriented Software Engineering (OOSE), the Object Modeling Technique (OMT) and the Unified Modeling Language (UML).

Function-Oriented (F-O) methods [15] and Data-Oriented (D-O) methods [16] suffer from a paradigm mismatch between analysis and design, which is reduced by Object-Oriented (OO) software development methods such as the Booch method [17], OOSE (the Jacobson method) [18], OMT (the Rumbaugh method) [19], the Coad and Yourdon method [21] and the Wirfs-Brock method [22]. OO conceptual modeling provides a number of benefits, such as modularity, abstraction, information hiding and reusability, that are absent from traditional requirements approaches, and it uses the same model consistently from analysis to implementation. The next section discusses Object-Oriented Analysis (OOA) and some of its techniques and tools.

Object-Oriented Requirement Analysis

In OO conceptual modeling, a system is modeled as a group of interacting objects, each characterized by its class, its state (data elements) and its behavior. In 1997, Grady Booch, Jim Rumbaugh and Ivar Jacobson unified the Booch, OMT and OOSE methods to develop UML, which became an industry standard created under the auspices of the Object Management Group (OMG) [20]. UML is a graphical language for visualizing, specifying and constructing the artifacts of a software-intensive system and has become a de facto standard for OO conceptual modeling. In UML, use case diagrams are considered effective for modeling the functional requirements of a system in general and of a software component in particular.

Techniques and Tools used for OOA

A vital step in OO conceptual modeling is to identify objects and classes during OOA and then build a conceptual model of the problem domain. In this section, we describe various techniques that have been used in the past to extract components from requirements for building class and object models. The techniques proposed have either used Natural Language Processing (NLP) or employed use cases to identify classes. Some of these approaches have been automated through prototype tools.
Techniques for OOA

Grady Booch popularized the concept of Russell and used singular nouns and nouns of direct reference to identify objects, and plural and common nouns to identify classes [17]. However, this is an indirect approach to finding objects or classes, and not all nouns are classes or objects; some refer to an entire assembly, a subassembly, an attribute or a service. Several parsers were built using this approach to extract nouns and noun phrases from large-scale texts.
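As a rough illustration of this noun-extraction style of analysis, the sketch below (not taken from any of the cited parsers) uses the NLTK part-of-speech tagger to list candidate nouns from a requirements statement; the library choice, the sample text and the frequency heuristic are assumptions made purely for illustration.

```python
# Minimal sketch of noun-based candidate-class extraction in the spirit of the
# approach described above. Assumes the NLTK library with its tokenizer and
# part-of-speech tagger models already downloaded (nltk.download(...)).
from collections import Counter
import nltk

def candidate_classes(requirements_text: str) -> Counter:
    """Return a frequency count of nouns found in the requirements text.

    Any word tagged as a noun (NN, NNS, NNP, ...) is kept as a candidate
    class or object; this is a deliberate simplification of the heuristic.
    """
    tokens = nltk.word_tokenize(requirements_text)
    tagged = nltk.pos_tag(tokens)
    nouns = [word.lower() for word, tag in tagged if tag.startswith("NN")]
    return Counter(nouns)

if __name__ == "__main__":
    sample = ("The customer inserts a card into the ATM. "
              "The ATM validates the card and dispenses cash.")
    print(candidate_classes(sample).most_common(5))
```

Even in this toy run, an analyst would still have to prune nouns that actually denote attributes, services or whole assemblies, which is precisely the weakness noted above.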

In [23], the authors used computerized classification systems and thesauri for the OOA of a valid Slovenian earthquake code. This approach is also not easy or straightforward: it is burdened with checking candidate classes against a thesaurus, it depends heavily on the structure and completeness of the thesaurus, and it is always difficult to find a one-to-one mapping between a thesaurus term and a class. In 1991-1992, pioneers such as Coad and Yourdon, Shlaer and Mellor, and Ross identified certain categories, such as persons, roles and organizations, which define application domain entities. These categories help experienced analysts identify classes or objects, but the approach only finds tangible objects and fails to identify abstract classes [24]. Ed Seidewitz and Mike Stark developed a technique in [25] to identify objects, classes and services from the terminators, data stores and data flows of a DFD. This approach also suffers from several problems, such as the use of data abstraction instead of object abstraction, pieces of objects scattered across several DFDs, and fragmented objects and classes [25]. In [26], the authors presented an integration of several approaches under one head, called the taxonomic class modeling (TCM) methodology. This methodology has been neither tested nor validated by a controlled experiment, nor does it provide any automation support. In [27], several approaches based on NLP of textual requirements are presented for extracting the components of an OOA model.

Work done in [28] presents a use-case-driven development process and its validation. However, empirical findings report that this technique leads to problems such as developers missing requirements and mistaking requirements for design. Work in [29] identifies classes from the goals of each use case, without descriptions, instead of from scenarios. In [30], a set of artifacts and methodologies is used to automate the transition from requirements to detailed design. In [31], a process is proposed for generating formal OO specifications in the Object Constraint Language (OCL) and class diagrams from a use case model of a system through a clearly defined sequence of model transformations. Work in [32] presents a methodology and a CASE tool named the Use-Case driven Development Assistant (UCDA) to automate natural language requirements analysis and class model generation based on the Rational Unified Process (RUP).


Tools for OOA

Several authors have used the techniques described in the previous section to develop automation support for analysts. Some of these tools are the Use-Case driven Development Assistant (UCDA), MOSYS (a Methodology for Automatic Object Identification from System Specification), RARE (Reference Architecture Representation Environment), CM-Builder (Class Model Builder), LIDA (Linguistic assistant for Domain Analysis), OOExpert, AURA (Automated User Requirements Acquisition) and GOOAL (a Graphic Object Oriented Analysis Laboratory). Work in [33] presents automated approaches that use NLP techniques to extract the elements of an OO system, namely classes, attributes, methods, relationships between classes, sequences of actions, use cases and actors.


Limitations of existing Object-Oriented Requirement Analysis Techniques
The techniques described in the previous section use long descriptive requirements, expressed in natural language, as a starting point. Apart from the limitations of each approach discussed earlier, a natural language description of requirements often suffers from incompleteness, inaccuracy and ambiguity. On the other hand, use case approaches, although quite popular, deviate from the basic OO concept of visualizing a system in terms of objects for object modeling. Moreover, several arguments against use cases have recently been cited in the literature [34].

Use cases are document-centric, time-consuming and declarative in nature. They suffer from invisible scope creep and from an inability to differentiate the dynamic and static elements of a specification. Although use-case-based requirements analysis has been improved [35] by automating use-case-based model generation, improving use case templates or enhancing use-case-based analysis with scenarios, these solutions do not propose any alternative; their starting point is still use cases. In [36], it is claimed that event modeling infuses rigor and discipline into use case modeling by helping analysts identify what constitutes a use case. According to [36, 37], event modeling helps in determining use cases. Thus, one can say that event modeling complements use case modeling.

COMPLEXITY METRICS

Complexity is probably the most important attribute of software because it influences a number of other attributes such as maintainability, understandability, modifiability, testability and cost [7]. Basili defines complexity as a measure of the resources expended by a system while interacting with a piece of software to perform a given task [8]. Work in [46] distinguishes three kinds of complexity: computational, psychological and representational, of which psychological complexity is the only one perceived by humans. Structural complexity is a psychological complexity that has been studied extensively because it can be assessed objectively. Complexity is assessed using metrics. Metrics measure the quality of a product, the productivity of people and the benefits of new software tools, or form a baseline for various estimations.

Different metrics are proposed for different phases of software development. For instance, function points are used in the requirements phase to estimate the size of the resulting system. Similarly, cohesion and coupling metrics are used in the design phase. The suite of metrics for OO design, Lines of Code, Software Science and the Cyclomatic Number are helpful instruments for managing the software process effectively. Other popular software metrics are Henry and Kafura's structural complexity, McClure's invocation complexity, Woodfield's review complexity, Yau and Collofello's stability measure, Yin and Winchester's architectural metrics based on analysis of a system's design structure chart, and the spatial complexity of Douce et al.
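As a concrete illustration of one of the metrics mentioned above, the following sketch computes McCabe's cyclomatic number V(G) = E - N + 2P directly from a control flow graph; the graph encoding and function name are illustrative assumptions and not taken from any of the cited tools.

```python
# Minimal sketch: cyclomatic complexity V(G) = E - N + 2P, where E is the
# number of edges, N the number of nodes and P the number of connected
# components of the control flow graph.
from typing import Dict, List

def cyclomatic_complexity(cfg: Dict[str, List[str]], components: int = 1) -> int:
    """cfg maps each node to the list of nodes it has outgoing edges to."""
    nodes = set(cfg)
    for targets in cfg.values():
        nodes.update(targets)
    edges = sum(len(targets) for targets in cfg.values())
    return edges - len(nodes) + 2 * components

if __name__ == "__main__":
    # Control flow graph of a small routine with one if/else and one loop.
    cfg = {
        "entry": ["cond"],
        "cond": ["then", "else"],       # if/else decision
        "then": ["loop"],
        "else": ["loop"],
        "loop": ["loop_body", "exit"],  # loop decision
        "loop_body": ["loop"],
    }
    print(cyclomatic_complexity(cfg))   # prints 3: two decisions plus one
```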
Findings on failures due to Complexity
In [4], it is reported that a project's sheer size is also a fountainhead of failure. Studies indicate that large-scale projects fail three to five times more often than small ones. Greater complexity increases the possibility of errors because no one really understands all the interacting parts of the whole or has the ability to test them. Roger S. Pressman pointed out in his book Software Engineering that "Even a small 100-line program with some nested paths and a single loop, executing less than twenty times, may require 10 to the power of 14 possible paths to be executed." To test all of those 100 trillion paths, he noted, assuming each could be evaluated in a millisecond, would take 3170 years. According to a report published in December 2009 [9], the primary cause of software project failures is complexity. Complexity can create delays and cost overruns and lead to a system's inability to meet business needs.
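Pressman's figure is easy to check with back-of-the-envelope arithmetic; the snippet below is only a sanity check of the numbers quoted above, not part of any cited study.

```python
# Quick check: 10**14 paths at one millisecond per path.
paths = 10 ** 14
seconds = paths * 1e-3                  # one millisecond per path
years = seconds / (365.25 * 24 * 3600)  # approximate seconds per year
print(f"{years:,.0f} years")            # about 3,169 years, matching the ~3170 quoted
```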

SOFTWARE TESTING

Testing is a process of detecting errors injected into software during any phase of its development [10]. Testing has been found to consume at least half of the effort expended on total development [11].

Several approaches to test case generation can be found in the literature; these have been classified into random, path-oriented, goal-oriented and intelligent approaches [47]. Even though varied test case generation approaches are available, the Model-Based Testing (MBT) approach has attracted many researchers, and research is still being carried out to optimize the generation of test cases with minimum human effort. The next section surveys related work in the area of MBT using UML models.

Model Based Testing
A wide range of models, such as UML, SDL, Z, state diagrams, data flow diagrams and control flow diagrams, have been used in MBT. Some approaches have used an Extended Finite State Machine (EFSM) or a Finite State Machine (FSM). Others have used activity diagrams and the I/O explicit Activity Diagram (IOAD) model, sequence diagrams, state diagrams and statecharts, or an integrated approach using more than one UML model.
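As a minimal illustration of FSM-based MBT, the sketch below hand-codes a small state model (a toy ATM session, assumed purely for illustration) and derives event sequences that together cover every transition; real MBT tools would extract such a model from UML artifacts rather than hard-coding it.

```python
# Minimal sketch of FSM-based test scenario generation: breadth-first search
# over the transition graph yields one shortest event sequence per transition,
# giving all-transition coverage of the model.
from collections import deque
from typing import Dict, List, Optional

# state -> {event: next_state}; a toy ATM session model (illustrative assumption)
FSM: Dict[str, Dict[str, str]] = {
    "idle":       {"insert_card": "await_pin"},
    "await_pin":  {"valid_pin": "menu", "invalid_pin": "idle"},
    "menu":       {"withdraw": "dispensing", "eject_card": "idle"},
    "dispensing": {"take_cash": "idle"},
}

def _shortest_events(start: str, goal: str) -> Optional[List[str]]:
    """Shortest event sequence leading from start to goal, or None if unreachable."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, events = queue.popleft()
        if state == goal:
            return events
        for event, nxt in FSM.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, events + [event]))
    return None

def transition_covering_paths(start: str) -> List[List[str]]:
    """One test scenario per transition: reach its source state, then fire its event."""
    paths = []
    for src, moves in FSM.items():
        for event in moves:
            prefix = _shortest_events(start, src)
            if prefix is not None:
                paths.append(prefix + [event])
    return paths

if __name__ == "__main__":
    for scenario in transition_covering_paths("idle"):
        print(" -> ".join(scenario))
```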

Findings on failures due to Poor Testing
In article [2], according to Gartner Research, "The lack of testing and Quality Assurance (QA) standards, as well as a lack of consistency, often lead to software failure and business disruption, which can be costly." Results of a survey conducted in June 2010 [12] have also shown that the majority of software bugs are attributed to poor testing procedures or infrastructure limitations rather than to design problems. A report that cites six common failures of IT projects [13] shows that poor testing and the absence of proper change management are two main reasons for software failure.

TOWARDS AN EVENT-DRIVEN PARADIGM

Einstein, in his theory of relativity, used the term 'event' to signify the fundamental entity of observed physical reality: a single point in the space-time continuum. Just as events are used to model physical reality, the basic notion of an event is also widely used to model software in different contexts and applications. The next subsections discuss the various existing applications and roles of events.

Role of Events in Requirements Analysis & Specification, Modeling and Testing

Work in [38] has reiterated the fact that event thinking is an equally important way of modeling a system. This section describes some of the approaches that have used events for the purpose of requirements analysis and specification. One classical example is the event partitioning approach described in [39]. The event partitioning approach has also been applied to real-time systems analysis to model 'non-event' events.

Events have been used to deliver OO architectures for systems of all sizes, to blend event and object partitioning, to change the state of objects, to identify and define requirements-process and data patterns that capture processing policy, to specify requirements using the event calculus, to promote easier maintenance and implementation of specifications for e-commerce application development, to model the relevant facts and abstract behavior of an application, and to construct composite events from simple events.

Events have also played a very important role in modeling object interaction, in developing a metric suite for domain modeling, and in the analogical reuse of structural and behavioral aspects of event-based object-oriented domain models. Events are modeled in terms of object-oriented structures such as entities, and as four different kinds of events in UML. Events are also used to model information structures and related activities in information systems, and to model the static and dynamic aspects of information and communication systems.

Various event-based models have been used for GUI testing [40], such as the Event and Message Driven Petri Network (EMDPN) model and an EMDPN-based interaction graph, the Event InterActions Graph (EIAG). Events are also used to define a scalable and reusable model of GUIs, a GUI automation test model and a test coverage criterion for GUIs, as well as for fault modeling.

Events in Complex Event Processing Systems
Event processing is an emerging area. Its academic roots lie in multiple disciplines: artificial intelligence, databases, simulation, verification, sensor handling, distributed computing, programming languages, business process management and many more. The Event Processing Technical Society (EPTS) [41] was launched in June 2008 as a consortium to promote understanding of the event processing discipline, incubate standards and foster collaboration between industry and academia. The ACM conference on Distributed Event-Based Systems (DEBS), recognized as the "flagship" conference of the community, covers all topics relevant to event-based computing. Complex Event Processing (CEP), described in the book "The Power of Events" [42], has been shown to enhance systems that deal with events; the book's author showed how one can use the power of events to automate management without compromising managers' control.
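To make the CEP idea concrete, the sketch below (independent of the EPTS and DEBS work cited above) derives a composite event from a pattern of simple events in a stream; the event names and the time-window rule are illustrative assumptions only.

```python
# Minimal sketch of complex event processing: a composite event is emitted
# whenever a 'withdraw' follows an 'invalid_pin' within a time window.
from dataclasses import dataclass
from typing import Iterable, Iterator, Optional

@dataclass
class Event:
    name: str
    timestamp: float  # seconds since some epoch

def detect_suspicious(events: Iterable[Event], window: float = 60.0) -> Iterator[Event]:
    """Yield a composite 'suspicious_withdrawal' event for each matching pattern."""
    last_failure: Optional[Event] = None
    for ev in events:
        if ev.name == "invalid_pin":
            last_failure = ev
        elif ev.name == "withdraw" and last_failure is not None:
            if ev.timestamp - last_failure.timestamp <= window:
                yield Event("suspicious_withdrawal", ev.timestamp)
            last_failure = None

if __name__ == "__main__":
    stream = [Event("invalid_pin", 0.0), Event("valid_pin", 10.0),
              Event("withdraw", 25.0), Event("withdraw", 400.0)]
    for composite in detect_suspicious(stream):
        print(composite)   # only the withdrawal at t=25 triggers the composite event
```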
There are a few approaches that have used events for representing system requirements, identifying classes, and generating class diagrams and use case diagrams. A critical review of these approaches reveals some drawbacks and scope for improvement. This critical review is presented in the next section.