Web-based Parameterized Questions for Object-Oriented Programming
Abstract
Web-based questions for the assessment and self-assessment of students' knowledge
are important components of modern e-learning. Parameterized questions enhance
the value of Web-based questions while also decreasing authoring costs. This paper
discusses the problems of implementing and evaluating online parameterized questions
for the domain of object-oriented programming. We present QuizJET, a system supporting
the authoring, delivery, and automatic assessment of parameterized online quizzes and
questions, aimed at a non-formula-based domain: the Java programming language. A
classroom evaluation of QuizJET demonstrated that by working with the system students
were able to improve their scores on in-class weekly quizzes. We also observed a significant
relationship between students' amount of work and success rate in the system and their
scores on the final exam.
Introduction
Online quizzes formed by questions of different kinds are now the primary tool for the evaluation and
self-evaluation of student knowledge in the context of Web-based education (Brusilovsky & Miller 2001). Yet
traditional Web-based questions have several shortcomings, which are now the focus of modern e-learning
research. One of these directions, known as parameterized or individualized questions, addresses the issues of
authoring cost and plagiarism. Unlike a regular question, which is presented to all students exactly as it was
authored, a parameterized question is a pattern of a question created by an author. At presentation time, the
pattern is instantiated with randomly generated parameters. As a result, every question pattern is able to produce
a large or even unlimited number of different questions. In an assessment context, a reasonably small number of
question patterns can be used to produce individualized assessments semester after semester, even for large classes,
resolving potential plagiarism problems, as first shown in CAPA (Kashy et al. 1997). In a self-assessment
context, the same question can be used again and again with different parameters, allowing every student to
achieve mastery. Parameterized questions deliver these benefits in both contexts without increasing
authoring costs. A number of pioneering systems, such as CAPA (Kashy et al. 1997), WebAssign (Titus, Martin &
Beichner 1998), EEAP282 (Merat & Chung 1997), and Mallard (Graham, Swafford & Brown 1997), have
explored the use of individualized questions across a range of topics and demonstrated the benefits of this approach.
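
To make the idea of pattern instantiation concrete, the sketch below shows, in Java, one way a question pattern can be filled with random parameters at presentation time. The placeholder syntax and the ParameterizedQuestion class are our own minimal illustration, not taken from any of the systems cited above.

import java.util.concurrent.ThreadLocalRandom;

// Minimal sketch of parameterized question instantiation.
// A "pattern" holds placeholders ({a}, {b}) that are filled with fresh
// random values on every delivery, so one pattern yields many questions.
public class ParameterizedQuestion {

    private final String pattern;

    public ParameterizedQuestion(String pattern) {
        this.pattern = pattern;
    }

    // Draw random parameters, substitute them into the pattern,
    // and derive the correct answer from the same parameters.
    public String instantiate() {
        int a = ThreadLocalRandom.current().nextInt(1, 10);
        int b = ThreadLocalRandom.current().nextInt(1, 10);
        String text = pattern.replace("{a}", Integer.toString(a))
                             .replace("{b}", Integer.toString(b));
        int answer = a + b;  // key for this particular instantiation
        return text + "  [key: " + answer + "]";
    }

    public static void main(String[] args) {
        ParameterizedQuestion q = new ParameterizedQuestion(
                "int x = {a}; x += {b}; What is the value of x?");
        // Two calls on the same pattern produce two different questions.
        System.out.println(q.instantiate());
        System.out.println(q.instantiate());
    }
}

Because the correct answer is derived from the same randomly drawn parameters, the system can grade every individualized instance automatically, which is what keeps authoring costs flat as the number of delivered questions grows.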
QuizJET: A System to Author, Deliver, and Assess Parameterized Questions for Java
QuizJET (http://adapt2.sis.pitt.edu/quizjet/) was designed as a generic system to author, deliver, and
assess a range of parameterized questions for the Java programming language. QuizJET can work in both
assessment and self-assessment modes and is able to cover a whole spectrum of Java topics, from Java language
basics to such critical topics as objects, classes, interfaces, inheritance, and exceptions. At the time of
writing, QuizJET includes 101 question templates grouped into 21 quizzes. Altogether, these questions cover the
key topics of an introductory programming class. Each question or quiz can be accessed through a unique URL
and is delivered from the QuizJET server, which supports both the generation and the assessment of questions. In our
classes, students access the questions through the course portal (Figure 1). The course portal allows a teacher to
structure the course as a sequence of lectures or topics and to add links to interactive content relevant to a specific
topic directly to the topic folder on the portal. This arrangement allows students working on a specific lecture or
topic to easily access all relevant content.
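
As an illustration of the style of template QuizJET delivers, consider the fragment below. It is our own example in the prediction-of-output style, not one of the 101 actual templates: the value of n stands in for a parameter that would be generated anew for each student, who is asked to predict what the program prints, while the server evaluates the fragment with the same value to obtain the key.

// Hypothetical QuizJET-style question: predict the printed result.
// The initial value of n stands in for a randomly generated parameter.
public class LoopQuestion {
    public static void main(String[] args) {
        int n = 4;               // randomly generated per delivery, e.g. 2..9
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;            // accumulates 1 + 2 + ... + n
        }
        System.out.println(sum); // the student predicts this value (here, 10)
    }
}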
Basic Statistics
Out of 31 students in the course, 16 chose to work with QuizJET. Some of them tried the system
only once; others used it on a regular basis. Eleven of these 16 students used QuizJET quite actively (answering
30 or more questions). In the following analysis, these students are referred to as active users.
Table 1 presents a summary of student work with the system. Student performance was analyzed at
two levels of granularity: overall performance and session performance. At each level we explored two groups of
student performance parameters: student activity (attempts, success rate) and course coverage (topics,
questions). The attempts variable measures the total number of questions attempted by a student in QuizJET,
while success rate represents the percentage of correctly answered questions, calculated as the number of correctly
answered questions divided by the total number of questions attempted. As the table shows, on average, system
users made 41.71 attempts to answer 17.23 distinct questions, which indicates rather active use of the system
and of its parameterized nature. The average success rate was relatively high: 32.15%. Users tried
4.94 distinct topics on average.
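
For clarity, the success rate described above is a simple ratio; the fragment below restates the computation in code, using hypothetical counts rather than figures from the study.

// Success rate = correctly answered / attempted, as a percentage.
// The counts below are hypothetical, not data from the study.
public class SuccessRate {
    public static void main(String[] args) {
        int attempted = 42;   // total questions attempted by one student
        int correct = 13;     // of those, answered correctly
        double successRate = 100.0 * correct / attempted;
        System.out.printf("success rate: %.2f%%%n", successRate); // 30.95%
    }
}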
Educational Value of QuizJET
In order to find relationships between students' work with QuizJET and their learning, we examined
the correlations between two QuizJET activity parameters (attempts and success rate) and three parameters of
course performance: weekly quiz scores, assignment scores, and final exam scores. Quiz, assignment, and exam
scores were used as three different dependent variables, all measured in points (for all three variables, the
maximum was 100 points and the minimum was 0). Since QuizJET was introduced to the
class in the middle of the term, only the weekly quizzes administered after the system's introduction were taken
into account in the analysis.
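
The correlations themselves are standard Pearson coefficients between per-student measures. The sketch below shows that computation under this assumption; the arrays are placeholders for illustration, not the study's data.

// Minimal sketch of the Pearson correlation used to relate a QuizJET
// activity parameter (e.g., attempts) to a course score (e.g., final exam).
// The arrays are hypothetical placeholders, not data from the study.
public class Correlation {

    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;   // scaled covariance
        double vx = sxx - sx * sx / n;    // scaled variance of x
        double vy = syy - sy * sy / n;    // scaled variance of y
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        double[] attempts  = { 31, 45, 12, 80, 55 };  // per-student attempts
        double[] examScore = { 72, 85, 60, 95, 88 };  // per-student exam points
        System.out.printf("r = %.3f%n", pearson(attempts, examScore));
    }
}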
A Brief Review of Similar Work
Our work presented in this paper combines two ideas: parameterized question generation and
automatic assessment of student answers. The problem of automatic assessment in the area of programming has
been explored in a number of projects. A review and classification of this direction of work can be found in
(Brusilovsky & Higgins 2005). The majority of projects, however, have focused on the assessment of programming
assignments, which is the most time-consuming activity for instructors and graders. A good review of this work
can be found in (Ala-Mutka 2005; Higgins et al. 2005). Automatic assessment of smaller-scale questions and
quizzes has received less attention, most likely because this need is supported to some extent by traditional quizzes.
The style of questions used in QuizPACK and QuizJET, prediction of program execution results, is among the
most popular types of questions in programming. The importance of these questions was stressed in a recent
SIGCSE working group report (Lister et al. 2004). Despite this known importance, only a handful of projects
have focused on automatic assessment of this style of question. In addition to QuizGUIDE, the most notable projects
include the Ramapo College Problets (Dancik & Kumar 2003; Krishna & Kumar 2001; Singhal & Kumar 2000) and
Web-To-Test (Arnow & Barshay 1999), which evolved into a commercial product (http://turingscraft.com).
In contrast, parameterized question generation has received relatively little attention in the domain of
programming. As mentioned in the introduction, the majority of work on question generation has been done in
such “formula-based” domains as math and physics. Yet a number of Ramapo College Problets (Krishna & Kumar
2001; Kumar 2005) explore the idea of parameterized generation on a larger scale than QuizPACK and
QuizJET and demonstrate the effectiveness of this technology.
Summary & Future Work
In this paper, we reported our work on web-based parameterized questions for object-oriented
programming. We presented QuizJET, a tool that supports the authoring, generation, and assessment of online
questions for the Java programming language. The classroom evaluation of QuizJET uncovered a relationship
between QuizJET activity and both weekly quiz and final exam scores. The activity and success rate of
active users contributed significantly to a regression model predicting final exam scores. In addition,
subjective survey questions showed that students assessed the system and its key features positively.
We plan to continue our study of QuizJET in a classroom context. In our future studies, we expect to
collect a more representative sample and to be more specific about the way we measure the knowledge gained
through the course. We also plan to perform a cross-comparison between QuizJET and QuizPACK, from course
coverage to the structure and complexity of the questions.