A Framework for Runtime Testing of Web Services
Abstract
Software testers are confronted with great challenges in testing Web Services (WS), especially when integrating with services owned by other vendors. They must deal with the diversity of implementation techniques used by the other services and meet a wide range of test requirements, yet they lack access to software artefacts, means of controlling test executions, and ways of observing the internal behaviour of the other services. An automated testing technique must therefore be capable of testing on the fly, non-intrusively and non-disruptively. Addressing these problems, this paper proposes a framework of collaborative testing in which test tasks are completed through the collaboration of various test services that are registered, discovered and invoked at runtime using STOWS, an ontology of software testing. The composition of test services is realized by test brokers, which are themselves test services but specialized in coordinating other test services. The ontology can be extended and updated through an ontology management service so that it can support an open range of test activities, methods, techniques and types of software artefacts. The paper presents a prototype implementation of the framework in semantic WS and demonstrates its feasibility with examples of wrapping a testing tool as a test service, developing a service for test executions of a WS, and composing existing test services for more complicated testing tasks. Experimental evaluation has also demonstrated the framework's scalability.
INTRODUCTION
The research on testing Web Services (WS) has been
growing rapidly in recent years [1, 2, 3, 4]. Most research
efforts fall into the following classes.
A. Generation of test cases. Techniques have been developed to generate test cases from syntactic definitions of WS in WSDL [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], from business process and behavioural models in BPEL [18, 19, 20, 21, 22, 23, 24, 25, 26], from ontology-based descriptions of semantics in OWL-S [27, 28, 29], and from other formal models of WS such as finite state machines and labelled transition systems [30, 31, 32], grammar graphs [33, 34], and first-order logic [35]. These techniques have addressed various WS-specific issues, such as robustness in dealing with invalid inputs and erroneous invocation sequences, fault tolerance in the face of failures of dependent services and broken communication connections, and security in environments vulnerable to malicious attacks.
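As a minimal sketch of the first class of techniques, the following Java fragment generates random test inputs from a simplified in-memory representation of a WSDL operation signature. The operation, parameter names and type vocabulary are our own illustration, not taken from the paper or from any cited tool.

    import java.util.*;

    // Sketch of test case generation from a WSDL-style operation signature.
    // Names and the type vocabulary are illustrative assumptions.
    public class WsdlTestGen {
        record Param(String name, String xsdType) {}
        record Operation(String name, List<Param> params) {}

        static final Random RNG = new Random(42);

        // Generate a random value conforming to a (simplified) XSD type.
        static Object randomValue(String xsdType) {
            return switch (xsdType) {
                case "xsd:int"     -> Integer.valueOf(RNG.nextInt(2000) - 1000);
                case "xsd:boolean" -> Boolean.valueOf(RNG.nextBoolean());
                case "xsd:string"  -> "s" + RNG.nextInt(10000);
                default            -> null; // unknown type: probes robustness handling
            };
        }

        // Each test case maps parameter names to generated values.
        static List<Map<String, Object>> generate(Operation op, int n) {
            List<Map<String, Object>> cases = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                Map<String, Object> tc = new LinkedHashMap<>();
                for (Param p : op.params()) tc.put(p.name(), randomValue(p.xsdType()));
                cases.add(tc);
            }
            return cases;
        }

        public static void main(String[] args) {
            Operation quote = new Operation("getQuote", List.of(
                new Param("carModel", "xsd:string"),
                new Param("driverAge", "xsd:int"),
                new Param("comprehensive", "xsd:boolean")));
            generate(quote, 3).forEach(System.out::println);
        }
    }

Model-based techniques, such as those working from BPEL or finite state machines, would replace the random value generation with test sequences derived from the model.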
FRAMEWORK FOR TESTING WS
This section elaborates the framework, illustrates it with a
typical scenario and identifies the technical challenges.
A Typical Scenario
Suppose that a fictitious car insurance broker, CIB, is developing a web-based system that provides online car insurance services. In particular, it provides the following services to its end users.
End users submit car insurance requirements to CIB, receive quotes from the various insurers that CIB is connected to, and then select one to insure the car. To do so, CIB takes information about the car, its usage, and the payment.
It uses the WS of its bank B to check the validity of the user's payment information, to pass the payment to the selected insurer, and to take commissions from the insurer and/or the user. The broker's software system has a user interface to enable interactive use, and a WS interface to enable other programs to connect as service requesters. Its binding to the bank's WS is static. However, because insurance is an active business domain in which new insurance providers may emerge and existing ones may leave the market from time to time, the broker's software binds dynamically to multiple insurance providers so that the business stays competitive. The structure of the system is shown in Fig. 1.
Fig. 1 Structure of Car Insurance Broker Services
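To make the dynamic binding concrete, the sketch below shows, under our own assumptions (the registry abstraction and operation names are not part of the paper), how CIB might aggregate quotes from whichever insurer services happen to be bound at a given moment.

    import java.util.*;

    // Illustrative sketch of CIB's dynamic binding: insurers join and leave
    // at runtime; each quote request queries whoever is bound right now.
    public class CibBroker {
        interface InsurerService {           // stands in for a discovered WS port
            String name();
            double quote(String carInfo);
        }

        // Runtime registry playing the role of service discovery (e.g. UDDI).
        private final List<InsurerService> registry = new ArrayList<>();

        void bind(InsurerService s)   { registry.add(s); }
        void unbind(InsurerService s) { registry.remove(s); }

        // Collect quotes from all currently bound insurers.
        Map<String, Double> quotes(String carInfo) {
            Map<String, Double> result = new LinkedHashMap<>();
            for (InsurerService s : registry) result.put(s.name(), s.quote(carInfo));
            return result;
        }

        public static void main(String[] args) {
            CibBroker cib = new CibBroker();
            cib.bind(new InsurerService() {
                public String name() { return "InsurerA"; }
                public double quote(String c) { return 320.0; }
            });
            System.out.println(cib.quotes("sedan, low mileage"));
        }
    }

In a real deployment, the registry role would be played by a service discovery mechanism and InsurerService by dynamically generated WS stubs; it is exactly this runtime variability that makes integration testing hard.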
The developer of CIB's service must test not only its own code, but also its integration with other WS, i.e. the WS of the insurers and the bank. This paper focuses on integration under dynamic binding. The following discusses how these challenges can be resolved in the proposed framework.
The Proposed Framework
The key notion of the framework is the test service (T-service for short), a service designated to perform various test tasks [44]. A T-service may be provided by the same organization that provides the corresponding normal service, or by a third party that is independent of the normal service provider but specializes in testing. For clarity, we use the term functional service (F-service for short) to denote the normal services in the sequel.
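The contrast between the two kinds of service can be sketched as a pair of interfaces. The operation names below are our own illustration, not the framework's actual WSDL.

    // Hypothetical sketch contrasting an F-service with its companion
    // T-service; all names here are illustrative assumptions.
    public class ServicePair {
        interface QuoteFService {                 // the normal, functional service
            double getQuote(String carInfo);
        }

        interface QuoteTService {                 // the accompanying test service
            // Run one test case against the same logic, without real-world effects.
            double runTestCase(String carInfo);
            // Publish a machine-readable capability description, e.g. in STOWS terms.
            String describeCapability();
        }

        public static void main(String[] args) {
            QuoteTService t = new QuoteTService() {
                public double runTestCase(String c) { return 320.0; } // canned logic
                public String describeCapability() { return "activity: test-execution"; }
            };
            System.out.println(t.describeCapability() + " -> " + t.runTestCase("sedan"));
        }
    }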
Service-Specific T-services
Ideally, each F-service should be accompanied by a special T-service so that test executions of the F-service can be performed on the corresponding T-service. In this way, the normal operation of the original F-service is not disturbed by test requests, and the cost of testing is not charged as real invocations of the F-service. The F-service provider can distinguish real requests from test requests, so that test requests cause no real-world effects.
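One way to realize this separation, sketched below purely as our own assumption (the paper does not prescribe a mechanism), is for the T-service to share the F-service's business logic while routing execution down a side-effect-free path.

    // Sketch: a T-service sharing the F-service's logic but suppressing
    // real-world effects. The test-mode mechanism is our assumption.
    public class PaymentService {
        private final boolean testMode;              // true for the T-service

        PaymentService(boolean testMode) { this.testMode = testMode; }

        String processPayment(String account, double amount) {
            boolean valid = validate(account, amount);   // shared business logic
            if (!valid) return "REJECTED";
            if (testMode) {
                // T-service path: report the outcome to the tester,
                // but move no real money and charge no real invocation.
                return "TEST-ACCEPTED";
            }
            transferFunds(account, amount);              // real-world effect
            return "ACCEPTED";
        }

        private boolean validate(String account, double amount) {
            return account != null && !account.isBlank() && amount > 0;
        }

        private void transferFunds(String account, double amount) {
            /* real banking side effect, reached only in production mode */
        }

        public static void main(String[] args) {
            PaymentService tService = new PaymentService(true);
            System.out.println(tService.processPayment("ACC-1", 99.0)); // TEST-ACCEPTED
        }
    }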
To ensure that testing carried out on a T-service faithfully represents the functional service, the following two principles should be observed in the design and implementation of T-services.
Key Technical Issues
From the illustrative scenario given above, we can identify
a number of technical issues that are crucial to the
practical implementation of the framework.
Semantic complexity of communications
The various parties that participate in the registration, discovery and invocation of T-services communicate with each other through SOAP messages. These messages are semantically complex. In particular, a T-service must publish itself with a clear and accurate description of its capability so that a capability-based search for testers can be performed. The diversity of testing methods, test activities, test environments, and software artefacts used and produced in testing makes the description of capability very complicated. Searching for appropriate T-services for a test task requires matching test tasks against T-service capabilities. This is also complicated, since test tasks are not in one-to-one correspondence with capabilities. Finally, test tasks must be submitted to T-services with a wide range of parameters. Typically, a test task involves multiple software artefacts, such as test cases, the service to be tested, the output of test executions, the test oracle used to check the correctness of the output, and so forth.
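As a rough illustration of capability-based matchmaking, the sketch below treats a capability as an activity plus the methods and artefact types it supports; these fields are our simplification of what an STOWS-style description might contain, not the ontology's actual vocabulary.

    import java.util.*;

    // Simplified sketch of matching a test task against T-service capabilities.
    // The capability fields are a stand-in for an STOWS-style description.
    public class Matchmaker {
        record Capability(String activity, Set<String> methods, Set<String> artefacts) {}
        record TestTask(String activity, String method, Set<String> artefacts) {}

        // A capability covers a task if it supports the task's activity and
        // method and accepts every artefact type the task supplies.
        static boolean matches(Capability c, TestTask t) {
            return c.activity().equals(t.activity())
                && c.methods().contains(t.method())
                && c.artefacts().containsAll(t.artefacts());
        }

        public static void main(String[] args) {
            Capability tester = new Capability("test-execution",
                Set.of("random", "boundary"), Set.of("test-case", "oracle"));
            TestTask task = new TestTask("test-execution", "random",
                Set.of("test-case"));
            System.out.println(matches(tester, task));   // true
        }
    }

In the actual framework, matching would rely on reasoning over the STOWS ontology (e.g. subsumption between concepts) rather than the string equality of this toy version, which is precisely why the semantic complexity of these messages is a key technical issue.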