06-09-2017, 10:26 AM
The growth of the Internet, the rapid development of technology, and the high demand for higher education, lifelong learning, and new approaches to content delivery have led educational institutions to equip themselves with a variety of information and communication technologies (Sancar Tokmak, 2013). In 2000, Moe and Blodget predicted that the number of online education students could reach 40 million by 2025. One reason for the growing demand for online education is the expectation that, in order to succeed, people must stay abreast of both new technologies and new information. Because online instruction offers a viable and more flexible alternative to face-to-face education, educational institutions have strived to offer online courses to meet society's demands for lifelong learning. However, online education differs from face-to-face education in many ways and therefore requires different strategies to succeed.
Educators and other researchers have expressed numerous concerns about the quality of online education courses (Lou, 2004), and as researchers such as Thompson and Irele (2003) and Kromrey, Hogarty, Hess, Rendina-Gobioff, Hilbelink, and Lang (2005) have noted, as online courses flourish, meaningful evaluation is essential to improve the quality of such offerings. Different types of evaluation models address different goals of students and educators. Eseryel (2002) lists six basic approaches to evaluation: goal-based evaluation, goal-free evaluation, responsive evaluation, systems evaluation, professional review, and quasi-legal evaluation, and points out that researchers and other evaluators should be familiar with the different models and choose the one most suitable to their goals. Hew et al. (2004) have categorized evaluation models as macro, meso, and micro, with "Context, Input, Process, Product (CIPP)" included in the macro-level category as a useful model for answering important questions about online education programs. Bonk (2002) also advocates the CIPP model for examining online learning within a broader system or context.
The CIPP is an evaluation model based on decision-making (Boulmetis & Dutwin, 2005). Since this study aimed to make decisions regarding the improvement of an online master's program, it used the CIPP model within the framework of a mixed-methods design. This process involved identifying stakeholder needs (students, managers, and instructors), after which decisions were made on how to improve the course, and students were surveyed regarding their perceptions of changes in the program.