A comparison of the practices used by human resource development professionals to evaluate web-based and classroom-based training programmes within seven Korean companies
Younghee Jessie Konga* and Ronald Lynn Jacobsb*
aCollege of Business, Franklin University, Columbus, OH, USA; bCollege of Education, University of Illinois, Champaign, IL, USA
(Received 28 September 2010; final version received 25 October 2011)
The purpose of the study was to compare the practices used by human resource development (HRD) professionals to evaluate web-based and classroom-based training (CBT) programmes within seven Korean companies. The study used four components of evaluation and three factors of evaluation barriers to compare the two training approaches. It also explored the key decision factors that determine how HRD professionals evaluate their web-based and CBT programmes. Two data sets were used: one was gathered from a survey questionnaire distributed to HRD professionals and the other from interviews with HR/HRD directors within the seven companies. The results showed that web-based and CBT programmes were not meaningfully different on most components of evaluation and evaluation barriers. The results also identified six key decision factors determining evaluation for web-based and CBT programmes.
Keywords: training evaluation; training evaluation barriers; primary factors determining evaluation
Since many organizations consider training a way of improving their performance in today’s competitive environment (Yamnill and McLean 2001), they want to select the best training programmes for their employees from a wide variety of training approaches (Bartley and Golek 2004). Classroom-based training (CBT) is one of the most frequently used training approaches in organizations, since learners can communicate directly with an instructor and peers and share information with each other (Kapp and McKeague 2002). In addition to the use of CBT, the development of web-based training (WBT) has increased in the human resource development (HRD) field due to rapid advances in the capabilities and distribution of technologies (Bassi and Van Buren 1998).
Since investment in these training programmes has been rapidly increasing in organizations, senior management asks HRD professionals to evaluate their WBT
*Corresponding author. Email: firstname.lastname@example.org; email@example.com
Human Resource Development International
Vol. 15, No. 1, February 2012, 79–98
ISSN 1367-8868 print/ISSN 1469-8374 online
© 2012 Taylor & Francis http://dx.doi.org/10.1080/13678868.2012.658632
and CBT programmes in order to determine the best training programmes for their employees (Olson and Wisher 2002).
Many previous researchers (Curtain 2002; Jung and Rha 2003; Rumble 2001; Whalen and Wright 1999) have stressed that evaluating web-based and classroom-based training should be considered and performed differently, because their learning environments are entirely different. For instance, WBT is delivered via the Internet or an intranet using a computer, while CBT is delivered face-to-face by an instructor. In addition, WBT allows trainees to learn at their own pace at a convenient time and location. On the other hand, CBT is typically group-based, requiring that trainees meet their instructor and peers at a fixed time and location.
Following these differences, researchers (Curtain 2002; Jung and Rha 2000; Rumble 2001; Trentin 2000; Whalen and Wright 1999) have suggested that evaluating WBT programmes also needs to be approached differently from the evaluation of CBT programmes. They have found that there are more difficulties and challenges to evaluating WBT programmes than classroom-based training, due to the wide variation in WBT forms. The evaluation of WBT is influenced by a number of factors, such as IT infrastructure used, the number of learners, flexibility of learning times, frequency of course revision, type and amount of media used, type and amount of learner support, type and amount of interaction, learners’ experience level with technology, or unexpected technical issues.
For example, WBT structures change at an exceptional pace, since technologies have been changing so quickly. Consequently, it can be difficult to thoroughly understand the range of different types and organizational settings in which WBT is delivered among organizations (Bransford, Brown, and Cocking 1999). In addition, when an organization provides different types of online courses that require different preparation times and levels of online interaction, it is more difficult to evaluate the workload and extra time spent by administrative staff and instructors to support online courses (Curtain 2002; Rumble 2001).
However, despite calls for the evaluation of WBT programmes to be conducted differently, there is limited information on whether HRD professionals actually follow this suggestion in practice. Strother (2002) suggested that useful and sound comparative studies of evaluating web-based and CBT have been extremely limited over the years, since evaluating WBT programmes has not been treated differently. Wisher and Olson (2003) also pointed out the lack of empirical studies comparing the evaluation of WBT and CBT programmes. They reviewed over 500 articles published between 1996 and 2002 and found that most dealt with design issues or technology concerns rather than with strategies or tools for successful evaluation. Hence, due to the insufficient number of previous evaluation studies, they had difficulty assessing evaluation practices in their survey.
If WBT and CBT approaches differ from each other in ways that have implications for how they should be evaluated, and if there is limited information on whether organizations actually differ in their evaluation practices for these training approaches, then more should be known about how HRD professionals actually evaluate their web-based and CBT programmes. Understanding the practices used by HRD professionals to evaluate different training approaches will help us identify elements to evaluate that are
missing from current practice, examine enablers and disablers of the evaluation of these elements, and conduct appropriate evaluations of them.
The purpose of the study is to compare the practices used by HRD professionals to evaluate web-based and CBT programmes within seven Korean companies. More specifically, this study (1) examines how HRD professionals evaluate web-based and CBT programmes in their organizations, and (2) compares the differences between these training approaches. Also, this study (3) identifies the barriers that might prevent HRD professionals from evaluating web-based and CBT programmes, and (4) compares the differences between these training approaches with the evaluation barriers. Finally, this study (5) explores the important decision factors that HRD professionals use to determine how to evaluate web-based and CBT programmes in their organizations.
The purpose of evaluation
Kraiger (2002) suggests three primary reasons to evaluate training programmes: decision-making, feedback, and marketing. The first purpose for conducting evaluation is to provide input for making decisions about training programmes, such as course retention, course revision, or personnel decisions (e.g. quality of instructor). The second purpose for evaluation is to provide feedback to course designers, trainers, or the trainees themselves that would allow them to design or engage with the course more effectively. The final purpose for collecting training evaluation data is for the purpose of marketing the training programmes (e.g. demonstrating the value of training to upper management or helping future sponsors or trainees understand the beneficial changes resulting from training).
Training evaluation models
On reviewing the work on training evaluation in HRD literature, Kirkpatrick’s four-level framework and the CIPP model emerge as particularly common in organizations.
Kirkpatrick’s four-level framework
Kirkpatrick’s framework provides the oldest and most widely used evaluation criteria in organizations. Many of the other evaluation models found in the literature were designed or revised based on Kirkpatrick’s framework (Russ-Eft and Preskill 2001).
The Kirkpatrick (1998) framework consists of a series of four levels: reaction, learning, behaviour, and results. In this model, reaction refers to trainees’ level of satisfaction, measured via surveys or interviews designed to identify their perceptions of the training programme. Reaction is measured during or immediately after training. The next level, learning, refers to the extent of trainees’ changes in knowledge, skills, and attitudes. Learning is measured before and after training. The next level is behaviour, which refers to the extent to which trainees transfer the knowledge, skills, and attitudes to their jobs. Behaviour is measured after training. The final level is results, which refers to organizational results or changes brought about by the enhanced behaviours.
In Kirkpatrick’s framework, reactions to training are related to learning, learning is related to behaviour, and behaviour is related to results.
However, Kirkpatrick’s framework has been routinely criticized or revised in the HRD literature. For example, Alliger and Janak (1989) question its hierarchical evaluation approach. According to their critiques, reactions to training should be viewed as unrelated to learning, and learning can have a direct influence on results criteria as opposed to only indirect influence through behavioural change. In addition, Holton (1996) argues that Kirkpatrick’s evaluation is nothing more than a taxonomy of outcomes. He removes reactions from the level of primary outcomes of training and defines it as a moderating variable between trainees’ motivation to learn and their actual learning. In addition, he states that learning is related to transfer and transfer is related to results.
The CIPP model
Another of the most frequently used evaluation models for training in organizations is the systems-based model (Phillips 1997). Under the systems approach, Stufflebeam’s (1971) CIPP model is the best known (Fitzpatrick, Sanders, and Worthen 2004). Stufflebeam points out that, since evaluation should inform decision makers, the systems approach can be effectively applied to evaluation. The CIPP acronym stands for the four components of a training system: context, input, process, and product.
The following are the main characteristics of these four types of evaluation:
(1) Context evaluation involves obtaining information about the organizational context to determine training needs and to identify objectives for the programme.
(2) Input evaluation involves determining the availability of resources, alternative programme strategies, and plans for the instructional design process.
(3) Process evaluation involves assessing the plans and barriers for implementing the programme and monitoring procedural events and activities.
(4) Product evaluation involves making judgments about the outcomes of the programme in relation to its objectives and to context, input, and process information, interpreting their worth and merit.
Evaluating web-based and CBT based on a systems approach
In practice, Jacobs (2003) provides an interactive evaluation process, from the systems view, for evaluating structured on-the-job training (SOJT). He points out that this systems view of training evaluation enables HRD professionals to see each relevant element separately, and to see how the elements come together to form each component, when evaluating training programmes. In addition, it enables HRD professionals to examine the relationships among the four components within a system (e.g. SOJT) and between that system and other systems in the organization, allowing them to determine the value of each component and of the system as a whole.
As a result, Jacobs suggests that the evaluation questions for SOJT should be asked based on system inputs, processes, and outputs. The questions also need to be asked about the organizational context in which the training takes place. For
example, the training input questions include units of material to be learned, the training locations, and the characteristics of trainers and trainees at the time of the training. The training process questions include the amount of time required to complete the training, the availability of training resources, and the behaviours of the trainer and trainee during the training. The training output questions include whether the training goals have been achieved, whether the training met the needs of trainees and whether the training is more effective or efficient than other training approaches in terms of financial benefits after the training. Lastly, the organizational context questions include the extent of management commitment to the use of the training and the interaction of the training with other systems in the organization.
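The four sets of questions above can be pictured as a simple checklist keyed by component. The sketch below is purely illustrative: the data structure and function names are our own, and the question wording only paraphrases the examples in the text; nothing here is part of Jacobs' framework itself.

```python
# Illustrative sketch only: organizing Jacobs' (2003) four evaluation
# components as a question checklist. The dict and function names are
# inventions; the question wording paraphrases the examples above.
EVALUATION_QUESTIONS = {
    "input": [
        "What units of material were to be learned?",
        "Where did the training take place?",
        "What were the characteristics of the trainers and trainees?",
    ],
    "process": [
        "How much time was required to complete the training?",
        "Were training resources available?",
        "How did the trainer and trainee behave during the training?",
    ],
    "output": [
        "Were the training goals achieved?",
        "Did the training meet the needs of trainees?",
        "Was the training more effective or efficient than alternatives?",
    ],
    "organizational_context": [
        "How committed is management to the use of the training?",
        "How does the training interact with other systems in the organization?",
    ],
}


def pending_questions(answered):
    """Return, per component, the checklist questions not yet answered."""
    return {
        component: [q for q in questions if q not in answered]
        for component, questions in EVALUATION_QUESTIONS.items()
    }
```

A checklist like this makes the systems view concrete: an evaluation is complete only when every component, not just trainee reactions, has had its questions addressed.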
The evaluation of SOJT from a systems view can be applied to the evaluation of other training approaches such as web-based and CBT, because they can also be viewed as systems themselves and, at the same time, as one of many subsystems in a larger system (i.e. the organization). Following the systems approach proposed by Jacobs (2003), Figure 1 explains how to evaluate WBT programmes; Figure 2 illustrates how to evaluate CBT programmes.
Barriers to training evaluation
Previous research found that organizations do not actually conduct evaluation well (Curtain 2002; Holton and Naquin 2005; Jung and Rha 2000; Kirkpatrick 1998;
Figure 1. The evaluation of WBT programmes from a systems view.
Kraiger 2002; Rumble 2001; Whalen and Wright 1999). The primary barriers inhibiting evaluation proposed by these authors are:
(1) Lack of expertise in evaluation: lack of knowledge and skills or lack of experience in evaluation
(2) Lack of organizational support for evaluation: unavailability of resources, organizational confidentiality practices or policies, fear of negative financial return, and limited budget and time
(3) Lack of evaluation methods and tools: evaluation activities limited to reaction sheets and statements of learning outcome, lack of common cost framework, and methodological limitations of financial returns measurement
Three barriers affecting evaluation
Previous studies have endeavoured to find appropriate ways of conducting effective evaluation (Hofstetter and Alkin 2003; Johnson 1998; Leviton 2003; Shulha and Cousins 1997). Taut and Alkin (2003) propose that this body of research can be used to counter the barriers that inhibit evaluation. In their study, they examined project staff’s perceptions of the barriers that inhibited evaluation implementation and its eventual use, based on Alkin’s (1985) evaluation framework. In this framework, Alkin (1985) divides the factors affecting the conduct of
Figure 2. The evaluation of CBT programmes from a systems view.
evaluation and its use into three categories: human factors, context factors, and evaluation factors.
First, ‘human factors describe evaluator and user characteristics’ (Taut and Alkin 2003, 251). For example, human factors include evaluators’ commitment to facilitating and enhancing evaluation, their willingness to actively involve stakeholders in the evaluation process, their competence and reliability in evaluation, stakeholders’ organizational positions, stakeholders’ experience in helping the evaluator and their interest in the evaluation, stakeholders’ commitment to using evaluation results, and so on.
Second, ‘context factors refer to the context in which the evaluated program exists’ (Taut and Alkin 2003, 251). For example, context factors include written requirements (e.g. federal/state requirements and organizational policies), fiscal constraints (e.g. the amount of money and time available for the evaluation), the context outside of the actual training programme and the organization (e.g. external funding agencies), and programme characteristics (e.g. age/maturity of the programme, innovativeness of the programme, and overlap with other projects).
Third, evaluation factors involve the evaluation itself. For example, evaluation factors include evaluation procedures (e.g. the appropriateness and accuracy of the methods used and use of a general model to guide evaluation), the information dialogue (e.g. amount and quality of interaction between the evaluator and stakeholders), substance of evaluation information (e.g. information relevancy and specificity), and evaluation-reporting factor (e.g. timing of information, style of oral presentations, and format of reports).
Method

This study employed a mix of quantitative and qualitative non-experimental research methods. Two data sets were used. One set was gathered from a survey questionnaire distributed online to Korean HRD professionals in October 2008 to compare web-based and CBT programmes on evaluation practices.
The other data set was gathered from interviews with HR/HRD directors in February 2009 to explore the research questions in more depth. The researcher adapted Jacobs’ (2003) systems-based framework for the four components of evaluation and Alkin’s (1985) framework for evaluation barriers, and then developed new construct scales for the survey questionnaire and interview questions based on these established constructs to fit the study’s purpose.
The researcher selected seven companies in Korea where web-based and CBT programmes were implemented and evaluated. The seven companies selected represent a variety of industries in Korea, including food, chemical manufacturing, automobile manufacturing, financial services, IT services, and educational services industries.
The total number of employees in each organization varies from 400 to 75,000, and the companies’ total revenues range, as of 2007, from $1.5 million to
$30 billion. All the companies are headquartered in Seoul, Korea, with branch offices in many regional cities across the nation and around the world.
Survey to HRD professionals
The population for this study was all HRD professionals (N = 147) who currently evaluated web-based and/or CBT programmes within the seven companies in Korea; the total number of respondents was 73. Of the 73 respondents, 69 had experience in the evaluation of CBT programmes, whereas 43 had experience in the evaluation of WBT programmes.
For the survey questionnaire, the researcher designated one independent variable with two levels: web-based and CBT programmes. Two dependent variables were measured in this study: the four components of evaluation and three factors of evaluation barriers. Six demographic characteristics were also measured in this study: type of industry, size of organization, an organization’s years of conducting training programmes, employee’s job level, employee’s educational level, and percentage of employee’s work that is related to training evaluation.
Respondents were asked to identify the extent to which input, process, output, and organizational context components of evaluation were used by HRD professionals to evaluate their web-based and CBT programmes, using a five-point Likert-type scale ranging from 1 (never) to 5 (always), with higher scores indicating more frequent use of evaluation questions. Respondents were also asked to assess the degree of barriers to evaluating web-based and CBT programmes based on three factors affecting evaluation (i.e. human, context, and evaluation factors), using a five-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree), with lower scores indicating more barriers perceived by HRD professionals.
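As a rough illustration of how such Likert-type responses are typically scored, a component score can be computed as the mean of that component's item ratings. The scoring rule and the sample ratings below are assumptions for illustration only, not the study's actual instrument or data:

```python
# Hypothetical illustration of the scoring rule: a component score is
# the mean of that component's item ratings on the 1-5 Likert scale.
# The example ratings below are invented, not data from the study.
def component_score(item_ratings):
    """Average a list of 1-5 Likert ratings into a component score."""
    if not item_ratings:
        raise ValueError("at least one rating is required")
    if any(not 1 <= r <= 5 for r in item_ratings):
        raise ValueError("ratings must lie on the 1-5 scale")
    return sum(item_ratings) / len(item_ratings)


# One respondent's ratings on four (invented) process-evaluation items:
print(component_score([4, 5, 4, 4]))  # 4.25
```

Under this rule, higher scores indicate more frequent use of the evaluation questions, while for the barrier items the scale is interpreted in reverse, with lower scores indicating more perceived barriers.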
Validity and reliability
To ensure the instrument’s content validity, a panel of five experts reviewed and commented on the items to ensure that they accurately represented the intended content area. The English version of the instrument was then translated into Korean by the researcher, and the Korean version was reviewed by Korean Ph.D. students and HRD professionals to gain consensus on the items. To ensure the reliability of the survey questionnaire, Cronbach’s alpha coefficients were used to measure the internal consistency of the items; the alpha values were between .8 and .9.
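Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from item-level responses using the standard formula α = k/(k−1) · (1 − Σ var(item) / var(total)). The sketch below is a minimal plain-Python version; the response matrix is invented for illustration and is not the study's data.

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency statistic
# reported for the instrument (alpha between .8 and .9). Uses the standard
# formula: alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)).
# The response matrix below is invented for illustration.
def cronbach_alpha(items):
    """items: one list of responses per item, respondents in the same order."""
    k = len(items)      # number of items
    n = len(items[0])   # number of respondents

    def pvariance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(pvariance(it) for it in items)
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))


# Three items answered by four hypothetical respondents:
responses = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
print(round(cronbach_alpha(responses), 2))  # 0.82
```

Values in the .8–.9 range, as reported for this instrument, are conventionally read as good internal consistency.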
Interviews with HR or HRD directors
To obtain qualitative data, the researcher conducted individual interviews with HR or HRD directors. The participants were all current directors in the departments of HR or HRD in the seven companies selected. Five out of seven HR/HRD directors agreed to participate in this study and were interviewed. Interviews were conducted
by phone, email, or face to face, whichever was most convenient for them, and supplementary follow-up phone calls were made when necessary.
The interview was composed of four main questions including demographic information. Respondents were asked about how web-based and CBT programmes were implemented and evaluated differently in their organizations based on the four components of evaluation. Respondents were also asked about the factors that inhibited the evaluation of web-based and CBT programmes and whether there were differences between these training approaches based on the three factors of evaluation barriers. Finally, they were asked what kind of information they looked for from the evaluations and how they used the results of the evaluation in order to explore the key decision factors determining evaluation for their web-based and CBT programmes.
Credibility and dependability
Maxwell (2005) states that, in qualitative research, credibility refers to internal validity; credibility in this study was enhanced through rich data and the control of bias. The researcher audio-taped and transcribed all interviews verbatim and took field notes recording the interviewees’ actions during the interviews. To ensure consensus and reduce bias, the verbatim transcripts were shared with interviewees or another researcher to confirm that the researcher’s interpretations of the interviewees’ statements were consistent with the transcripts.
Lincoln and Guba (1985) state that, in qualitative research, dependability closely corresponds to the notion of reliability in quantitative research, and ‘since there can be no validity without reliability, a demonstration of validity is sufficient to establish reliability’ (p. 316).
Data analysis

First, the data from the survey questionnaire were analysed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics, such as frequency, percentage, the population mean (μ), and the population standard deviation (σ), were used to describe the evaluation practices used by Korean HRD professionals and to compare web-based and CBT programmes on those practices. In addition, multiple regression analysis was conducted to examine the relationship between the demographic variables of HRD professionals and the evaluation practices used for web-based and CBT programmes. Second, the interview data were read, coded, and analysed by the researcher to seek a deeper understanding of these relationships in their natural setting.
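The descriptive step of this analysis can be approximated outside SPSS. The following plain-Python sketch computes the population mean and standard deviation for a set of component scores; the scores shown are invented for illustration and are not the study's data.

```python
import math


# Rough plain-Python equivalent of the descriptive step (the study itself
# used SPSS). Returns the population mean and population standard
# deviation of a list of component scores; the scores below are invented.
def describe(scores):
    """Return (population mean, population standard deviation)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)
    return mean, sd


mean, sd = describe([4.0, 4.5, 3.5, 4.0, 4.5])
print(round(mean, 2), round(sd, 2))  # 4.1 0.37
```

Computing such means per training approach (WBT vs. CBT) and per component is what underlies the comparisons reported in Tables 1 and 2.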
Four components of evaluation
As shown in Tables 1 and 2, the results of the survey showed that web-based and CBT programmes were most frequently evaluated on the process evaluation component (μw = 4.02, μc = 4.29), followed by the input evaluation component
(μw = 3.70, μc = 3.78). On the other hand, web-based and CBT programmes were least frequently evaluated on the organizational context evaluation component (μw = 3.03, μc = 2.93), followed by the output evaluation component (μw = 3.12, μc = 3.46).
The results also showed that the input evaluation component produced a higher mean score for CBT (μ = 3.78, σ = 0.59) than for WBT programmes (μ = 3.70, σ = 0.86). The process evaluation component produced a higher mean score for CBT (μ = 4.29, σ = 0.56) than for WBT programmes (μ = 4.02, σ = 0.75). The output evaluation component produced a higher mean score for CBT (μ = 3.46, σ = 0.74) than for WBT programmes (μ = 3.12, σ = 0.86). On the other hand, the organizational context evaluation component produced a lower mean score for CBT (μ = 2.93, σ = 0.93) than for WBT programmes (μ = 3.03, σ = 0.85).
The results of the interview (Table 3) produced the same results as the survey; web-based and CBT programmes were most frequently evaluated for the process evaluation component, whereas they were least frequently evaluated for the organizational context evaluation component.
These results also showed that, in the input and process components of evaluation, the five interviewees all said that their HRD professionals used existing survey forms to evaluate trainees’ satisfaction with training. However, they said that they included different questions for web-based and CBT programmes on the survey questionnaire, although the main categories for the survey were the same. All five interviewees stated that the online survey had more detailed questions included on the questionnaire than the survey for the classroom. In addition, they mentioned that WBT was more complicated to evaluate than CBT, because web-based courses were specialized and individualized.
Table 2. Descriptive statistics on CBT programmes for the four components of evaluation (N = 69).

                        Included            Excluded           Total
Component  Mean  SD     N    Per cent   N    Per cent   N    Per cent
Input      3.78  .59    69   94.5       4    5.5        73   100.0
Process    4.29  .56    69   94.5       4    5.5        73   100.0
Output     3.46  .74    68   93.2       5    6.8        73   100.0
Context    2.93  .93    69   94.5       4    5.5        73   100.0
Table 1. Descriptive statistics on WBT programmes for the four components of evaluation (N = 43).

                        Included            Excluded           Total
Component  Mean  SD     N    Per cent   N    Per cent   N    Per cent
Input      3.70  .86    43   58.9       30   41.1       73   100.0
Process    4.02  .75    43   58.9       30   41.1       73   100.0
Output     3.12  .86    43   58.9       30   41.1       73   100.0
Context    3.03  .85    43   58.9       30   41.1       73   100.0
Meanwhile, the HRD professionals at the five companies rarely evaluated their web-based and CBT programmes for the output evaluation component, but they more frequently evaluated WBT programmes than CBT programmes for the organizational context evaluation component in order to receive reimbursement of tuition money from the Korean government.
In summary, the results of both the survey and the interviews showed that web-based and CBT programmes were meaningfully different on the process evaluation component. However, web-based and CBT programmes were not meaningfully different on the input, output, and organizational context components of evaluation. The results of both the survey and the interviews also showed that the demographic characteristics of HRD professionals were not meaningfully related to the practices used to evaluate web-based and CBT programmes for each evaluation component.
Three factors of evaluation barriers
As shown in Tables 4 and 5, the results of the survey showed that the respondents perceived more barriers in the context (μw = 3.17, μc = 3.11) and evaluation factors
Table 3. The differences between web-based and CBT programmes for the four components of evaluation.

Categories                            Web-based                           Classroom-based

Input evaluation
  1. Survey questionnaire             Different questions: adding         Different questions
                                      more questions
Process evaluation
  1. Survey questionnaire             Different questions: adding         Different questions
                                      more questions
  2. Trainees' participation and      Checking through online:            Checking through direct observation:
     learning progress                . Number of clicking slides         . Class
                                      . Login times                       . CC-TV
                                      . Discussion board                  . Interview
                                      . Pop-up window                     . Homework or discussion in class
                                      . Homework
Output evaluation
  1. Learning tests                   Self-check                          Self-check
                                      Multiple choices                    Multiple choices
                                      Pass/fail                           Pass/fail
                                      Simple essay                        Essay
                                      Reflection paper                    Reflection/project paper
  2. Transfer of training             Action plan                         Projects
                                      Supervisor's observation            Action plan
                                      Number of sales                     Supervisor's observation
                                      360 degree feedback                 Number of sales
                                      Statistics                          360 degree feedback
                                                                          Statistics
Organizational context evaluation
  1. Survey questionnaire             Little difference                   Little difference
  2. Korean government support        Report evaluation results to        N/A
                                      the government
(μw = 3.14, μc = 3.22) than in the human factors (μw = 3.68, μc = 3.81) when evaluating their web-based and CBT programmes. The results also showed that the human factors produced a lower mean score for WBT (μ = 3.69, σ = 0.58) than for CBT programmes (μ = 3.81, σ = 0.55). When comparing web-based and CBT programmes on the context and evaluation factors of evaluation barriers, the context factors produced a higher mean score for WBT (μ = 3.17, σ = 0.74) than for CBT programmes (μ = 3.11, σ = 0.73). The evaluation factors produced a lower mean score for WBT (μ = 3.14, σ = 0.80) than for CBT programmes (μ = 3.22, σ = 0.71).
The results of the interviews (Table 6) produced the same results as the survey; the context and evaluation factors were the primary barriers preventing HRD professionals from evaluating web-based and CBT programmes. Table 6 presents the barriers that inhibited the evaluation of these training approaches in each area – human, context, and evaluation factors.
The interview results also showed that evaluation barriers differed slightly between web-based and CBT programmes in the area of human factors, due to a lack of credibility of external evaluators or of responsibility on the part of vendors for WBT programmes. Most of the companies studied purchased online courses from professional vendors. Three interviewees stated that the vendors did not take much responsibility for further evaluation after the training was completed. One interviewee also stated that his senior management did not trust evaluation results provided by an external evaluator, because they did not believe that an external evaluator knew their organization’s characteristics and circumstances better than their own staff.
In addition, evaluation barriers were slightly different between web-based and CBT programmes in the area of context factors due to the unique structure of web-based courses. Based on the interviews with HR/HRD directors, most of their HRD
Table 4. Descriptive statistics on the three factors of evaluation barriers for WBT programmes (N = 40).

                         Included            Excluded           Total
Factor      Mean  SD     N    Per cent   N    Per cent   N    Per cent
Human       3.69  .58    40   54.8       33   45.2       73   100.0
Context     3.17  .74    40   54.8       33   45.2       73   100.0
Evaluation  3.14  .80    40   54.8       33   45.2       73   100.0
Table 5. Descriptive statistics on the three factors of evaluation barriers for CBT programmes (N = 67).

                         Included            Excluded           Total
Factor      Mean  SD     N    Per cent   N    Per cent   N    Per cent
Human       3.81  .55    67   91.8       6    8.2        73   100.0
Context     3.11  .73    67   91.8       6    8.2        73   100.0
Evaluation  3.22  .71    67   91.8       6    8.2        73   100.0
professionals needed to add more questions to the survey for their WBT programmes. Finally, all interviewees pointed out that there was little difference between web-based and CBT programmes in the evaluation factors of evaluation barriers. HRD professionals perceived barriers in current evaluation methods and tools, especially for evaluating trainees’ performance and financial benefits, as applying to both web-based and CBT programmes.
In summary, the survey and interview results both showed that web-based and CBT programmes were not meaningfully different on the three factors of evaluation barriers. The results of both the survey and interviews also showed that the demographic characteristics of HRD professionals were not meaningfully related to the barriers to evaluating web-based and CBT programmes.
Key decision factors for determining evaluation
From interviews with five HR/HRD directors, six primary decision factors determining evaluation for web-based and CBT programmes were identified: senior management’s needs, development of current or new training programmes, type of training programme, political issues for online courses, personnel benefits, and budget for training programmes.
Senior management’s needs
The first decision factor determining evaluation was the needs or requests of senior management for the evaluation of web-based and CBT programmes. All interviewees
Table 6. The differences between web-based and CBT programmes in the three factors of evaluation barriers.

Category            Item                                                        Web-based            Classroom-based
Human factor        1. Lack of professional knowledge and skills                No difference        No difference
                    2. Lack of credibility & responsibility of an
                       external evaluator or vendors                            More barriers
                    3. Lack of HRD professionals’ commitment to evaluation      No difference        No difference
                    4. Lack of stakeholders’ commitment to evaluation           No difference        No difference
Context factor      1. Lack of senior management’s interest in
                       training evaluation                                      No difference        No difference
                    2. Lack of support from organizations
                       (a) Limited budget for evaluation                        No difference        No difference
                       (b) Limited time for evaluation                          No difference        No difference
                    3. The unique structure of web-based courses                Barriers perceived   Not perceived
                    4. Organizational characteristics and circumstances         No difference        No difference
Evaluation factor   1. Lack of evaluation methods and tools
                       (a) Outdated, unreliable survey tools                    No difference        No difference
                       (b) Few reliable evaluation methods and tools
                           available to evaluate trainees’ performance
                           and organizational results                           No difference        No difference
Human Resource Development International 91
said that this factor was the most powerful factor in how HRD professionals evaluated their web-based and CBT programmes. Senior management mostly asked HRD professionals to use surveys to measure trainees’ satisfaction with training and their participation rates in classes.
However, sometimes, members of senior management asked HRD professionals to evaluate trainees’ knowledge and skills, especially when trainees needed to acquire certifications. In addition, they occasionally requested information on trainees’ job performance as a result of training.
Development of current or new training programmes
The second decision factor in training evaluation was whether to improve current training programmes or to develop new ones. The companies interviewed identified and analysed problems with current training programmes based on trainees’ satisfaction rates, gathered primarily through surveys. If training satisfaction was low, they revised current programmes or developed new ones.
Type of training programme
The third decision factor for training evaluation depended on what type of training programme was offered. Generally, WBT was purchased from vendors for awareness or managerial training, whereas CBT was provided from within the organization for technical training. All interviewees said that, for technical training programmes, they generally used knowledge tests, supervisors’ observations, and trainees’ satisfaction surveys, and checked trainees’ attendance rates. For awareness and managerial training programmes, on the other hand, they generally relied on self-assessment or evaluated trainees’ satisfaction.
Political issues for online courses
All interviewees talked about political issues related to their online courses. The politics related to online courses were mainly determined by the relationships between the Korean government and organizations and between organizations and their HRD professionals. First, interviewees discussed the relationship between the government and their organizations. The Korean government has encouraged companies to provide their employees with WBT programmes, reimbursing companies for training costs. Since companies can reduce the cost of training by going online, they endeavour to meet the government’s evaluation criteria for WBT by reporting, for example, the completion rates and satisfaction rates of trainees.
Second, the relationship between an organization and its HRD professionals served to accomplish the intended goals of HRD professionals. For example, one interviewee stated that HRD professionals sometimes needed to purchase new equipment to enable the delivery of multimedia course content, or to replace old equipment or learning management systems with new ones. When they evaluated trainees’ satisfaction with current online courses, they then included evaluation of the IT infrastructure, online servers, or other equipment that they needed to buy or replace, and reported the results to management to achieve their intended goals in delivering training programmes.
Personnel benefits
The fifth decision factor involved how to evaluate employees’ performance related to training. Three interviewees stated that they evaluated their employees’ performance to measure personnel benefits. The interviewees stated that they evaluated HRD professionals regarding how they provided their employees with training programmes and whether they increased the purchase of online courses from outsourcing companies within their organization’s budgets. In addition, the interviewees mentioned that they evaluated trainees regarding what training programmes they selected, how they took their training programmes, and whether they completed the training hours required for their job promotions.
Budget for training programmes
The last decision factor for training evaluation involved setting up the budget for implementing future training programmes. Two interviewees said that they conducted surveys to set up budgets for upcoming web-based and CBT programmes.
Discussion
The premise of the study was that the evaluation of different training approaches, such as classroom-based and WBT, should be planned and performed differently because of their different learning environments (Curtain 2002; Jung and Rha 2000; Rumble 2001; Whalen and Wright 1999). In general, the results of the study did not support this premise, except for the process component of evaluation. The results did show some differences between web-based and CBT programmes in evaluation practices, but these were not judged to be meaningful.
Five points might be considered in explaining the results of the study: (1) lack of management interest in evaluation, (2) uncertain workforce development policy, (3) possible confusion between the input and process evaluation components, (4) lack of knowledge and skills among HRD professionals, and (5) uncertainty about the original premise.
Lack of interest of senior management in evaluation
The first point suggests that senior management may not have expressed sufficient interest in all of the evaluation components. All five interviewees reported that, when evaluating web-based and CBT programmes, senior managers were most interested in trainees’ satisfaction with training and their participation rates in classes. These areas are most often categorized as part of the process component of training evaluation. Managers may have assumed that training is inherently effective and, thus, that the most important goal of training is simply to have employees take classes. They also appeared to believe that trainees’ satisfaction equalled successful training and, thus, that evaluating trainees’ satisfaction was the most important task.
As a result, they in effect asked HRD professionals to focus on the process evaluation component when evaluating web-based and CBT programmes; the differences between these training approaches may therefore have been revealed most clearly in the process evaluation component, an area of evaluation performed directly to satisfy senior management’s goals.
Uncertain workforce development policy
Another possible explanation for the results involves the Korean government’s policies regarding how organizations are reimbursed for the costs of WBT programmes. The interviewees stated that, in order to receive this funding, their organizations must evaluate WBT programmes according to evaluation criteria set by the government and report the results. The required criteria for evaluating WBT programmes primarily comprise the completion rates and satisfaction rates of trainees.
Thus, to receive government support, organizations might request their HRD professionals to frequently evaluate their WBT programmes for the process evaluation component. Accordingly, HRD professionals may be more familiar with how to appropriately evaluate the training approaches for this component than for the other three components and, thus, they might evaluate web-based and CBT programmes differently for this component.
Possible confusion between input and process evaluation components
The third point suggests that there might be some misunderstanding about the input and process evaluation components. The results showed that web-based and CBT programmes were more frequently evaluated for the input evaluation component than for the output and organizational context components of evaluation. However, it can hardly be concluded that HRD professionals deliberately evaluated the input component that frequently.
Based on the interview results, the observed frequency of input evaluation arose because the input component was often considered part of the process evaluation component; that is, it involved evaluating trainees’ satisfaction with training rather than an intentionally conducted, separate input evaluation. Thus, the results suggest that web-based and CBT programmes were basically evaluated for the process component, and that there are some differences between these training approaches in this component.
Lack of HRD professionals’ knowledge and skills
The results might also be explained by a fundamental lack of knowledge and skills in training evaluation among HRD professionals. The results of both the survey and the interviews showed that HRD professionals perceived fewer barriers in the human factors than in the context and evaluation factors. The interview results further explained that, since HRD professionals mostly relied on pre-existing survey forms that had been in use for 20–30 years and were not difficult to administer, they may not be aware of their own lack of knowledge and expertise in the evaluation of both web-based and CBT programmes. Thus, they may not perceive any differences between web-based and CBT programmes in the human factors of evaluation barriers.
In addition, current HRD professionals may have difficulty performing higher-level evaluation with currently available evaluation methods and tools; thus, those in the study may rarely conduct this kind of evaluation for either training approach. Accordingly, they may not be able to recognize any differences between web-based and CBT programmes in evaluation barriers.
Uncertain original premise
Finally, the original premise – that the evaluation of different training approaches should be performed differently – might not be as strong as initially assumed. Many researchers (Jung and Rha 2000; Rumble 2001; Strother 2002; Trentin 2000; Whalen and Wright 1999) have emphasized that different training approaches should be evaluated differently and, following this premise, that the evaluation of web-based and CBT should be conducted differently.
However, other researchers (Clark 2000; Garrison and Vaughan 2008) argue that the differences between training approaches such as web-based and CBT might be due to quality of course design, quality of the instructor, or potential individual differences among instructors rather than the delivery modes themselves. Garrison and Vaughan (2008) stated that sound learning theories such as constructivism and social cognition for learning may be more important than delivery modes for ensuring effective learning. Thus, as they suggested, effective teaching and learning principles should be considered more than the unique elements of delivery mode, when comparing the evaluation of different training approaches.
Contribution to HRD
This study contributes to both the theory and practice of training programme evaluation in the HRD field. Theoretically, by employing a methodology that combines quantitative and qualitative elements, this study adds to a type of empirical research that has rarely been conducted in the HRD field. That is, the study provides a deeper understanding of how organizations actually evaluate web-based and CBT programmes, of the evaluation barriers to the different training approaches, and of how unique programme features and organizational and social circumstances result in different evaluation methods for web-based and CBT programmes.
In addition, this study contributes to our understanding of the dynamic interactions among people who have different roles and responsibilities during an evaluation process, examining how evaluators respond differently when evaluating web-based and CBT programmes.
This study also suggests some directions for further research. First, this study found that the lack of reliable evaluation methods and tools was one of the most prevalent reasons why organizations were not committed to evaluating their training programmes and avoided higher levels of training evaluation. Consequently, more active research is needed to develop evaluation methods and tools that are easy for practitioners to use, and to guide practitioners in applying these tools to evaluate different training approaches appropriately.
Second, contrary to evidence from the existing literature, this study showed that WBT programmes were more frequently evaluated than CBT programmes for the organizational context component of evaluation, owing to the Korean government’s support for online learning. Since this study is limited to seven companies in one country, Korea, more comparative studies across various countries and companies are needed to determine to what degree a particular country’s or organization’s characteristics, culture, and circumstances influence training evaluation approaches.
Practically, the results of this study have three important implications for HRD professionals seeking to improve training evaluation in their organizations. First, this study found that the primary barrier lay in the context factors, owing to the lack of
interest in and commitment to training evaluation on the part of senior management. Thus, senior management must deliberate thoughtfully on how to manage and maintain current training evaluation effectively.
For example, HRD professionals need to revise current personnel policies and benefits related to training evaluation, focusing on ways of improving the quality of training programmes rather than increasing the quantity of online courses. Since organizations evaluate their HRD professionals’ performance based on the number of online courses purchased from vendors at the lowest prices, HRD professionals concentrate more on increasing the quantity of courses than on improving their quality. This approach can hardly guarantee the delivery of good-quality training courses to employees.
Second, this study found that most participants were unaware of the evaluation barriers that stemmed from HRD professionals’ lack of knowledge and skills in training evaluation, since they mostly used and reused existing survey templates to evaluate trainees’ satisfaction for both web-based and CBT programmes. Thus, HRD professionals should improve their job competency in the area of training evaluation in the workplace.
For example, HRD professionals need to reflect on the adequacy of their current evaluation practices. They should take stock of what they currently do in the evaluation of their web-based or CBT programmes and examine whether these practices meet the basic requirements for successfully implementing the training approaches. They also need to identify the different requisite elements of, and barriers to, evaluating web-based and CBT programmes, and implement an evaluation appropriate to each training approach. In addition, they should examine how the interests of various stakeholders correspond to the multiple goals of a specific training programme, to help select the best training approaches for their organizations. Professional development in training evaluation is a great opportunity for HRD professionals to improve these job competencies.
Finally, national evaluation policy makers in Korea should provide organizations with effective guidance and standards for conducting appropriate evaluation of web-based and CBT programmes. In particular, they should be aware of the negative results related to their support for online courses. They should accurately investigate the actual prices of online courses on the market and re-establish the current evaluation criteria, in order to reduce the negative practice of focusing on the quantity of online courses at the lowest prices rather than on their quality. They should also request diverse evaluation reports that go beyond trainees’ satisfaction with training, to help organizations determine the best training approaches for their employees.
References
Alkin, M. 1985. A guide for evaluation decision makers. Thousand Oaks, CA: Sage.
Alliger, G., and E. Janak. 1989. Kirkpatrick’s levels of training criteria: Thirty years later. Personnel Psychology 42, no. 2: 331–42.
Bartley, S., and J. Golek. 2004. Evaluating the cost effectiveness of online and face-to-face instruction. Educational Technology & Society 7, no. 4: 167–75.
Bassi, L., G. Benson, and S. Cheney. 1996. The top ten trends. Training and Development 50, no. 11: 28–42.
Bransford, J., A. Brown, and R. Cocking. 1999. How people learn: Brain, mind, experience and school. Washington, DC: National Academy Press.
Clark, R. 2000. Evaluating distance education: Strategies and cautions. Quarterly Review of Distance Education 1, no. 1: 3–16.
Curtain, R. 2002. Online learning: How cost-effective? In Online delivery in the vocational education and training sector: Improving cost effectiveness, ed. Richard Curtain, 125–43. Leabrook, SA, Australia: Australian National Training Authority, NCVER.
Fitzpatrick, J., J. Sanders, and B. Worthen. 2004. Program evaluation: Alternative approaches and practical guidelines. Boston, MA: Pearson Education, Inc.
Garrison, R., and N. Vaughan. 2008. Part one: Community of inquiry framework. In Blended learning in higher education, ed. R. Garrison and N. Vaughan, 1–68. San Francisco, CA: Jossey-Bass.
Hofstetter, C., and M. Alkin. 2003. Evaluation use revisited. In International Handbook of Educational Evaluation, ed. T. Kellaghan and D.L. Stufflebeam, 197–222. London, Great Britain: Kluwer Academic Publishers.
Holton, E. 1996. The flawed four-level evaluation model. Human Resource Development Quarterly 7, no. 1: 5–21.
Holton, E., and S. Naquin. 2005. A critical analysis of HRD evaluation models from a decision-making perspective. Human Resource Development Quarterly 16, no. 2: 257– 80.
Jacobs, R. 2003. Structured on-the-job training: Unleashing employee expertise in the workplace. San Francisco, CA: Berrett-Koehler.
Johnson, R. 1998. Toward a theoretical model of evaluation utilization. Evaluation and Program Planning 21, no. 1: 93–110.
Jung, I.S., and I. Rha. 2000. Effectiveness and cost-effectiveness of online education: A review of literature. Education Technology 40, no. 4: 57–60.
Kapp, K., and C. McKeague. 2002. Blend two proven training methods to improve results. Chemical Engineering 109, no. 8: 191–4.
Kirkpatrick, D.L. 1998. Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.
Kraiger, K. 2002. Decision-based evaluation. In Creating, implementing, and managing effective training and development systems in organizations: State-of-the-art lessons for practice, ed. Kurt Kraiger, 331–75. San Francisco, CA: Jossey-Bass.
Leviton, L. 2003. Evaluation use: Advances, challenges and applications. American Journal of Evaluation 24, no. 4: 525–35.
Lincoln, Y.S., and E.G. Guba. 1985. Naturalistic Inquiry. Beverly Hills, CA: Sage Publications, Inc.
Maxwell, J. 2005. Qualitative research design: An interactive approach. Thousand Oaks, CA: Sage Publications.
Olson, T., and R. Wisher. 2002. The effectiveness of web-based instruction: An initial inquiry. International Review of Research in Open and Distance Learning 3, no. 2.
Phillips, J. 1997. Handbook of training evaluation and measurement methods. Houston, TX: Gulf Publishing.
Rumble, G. 2001. The costs and costing of networked learning. Journal of Asynchronous Learning Network 5, no. 2: 75–96.
Russ-Eft, D., and H. Preskill. 2011. Evaluation in organizations. Cambridge, MA: Perseus Publishing.
Shulha, L., and J. Cousins. 1997. Evaluation use: Theory, research, and practice since 1986. Evaluation Practice 18, no. 3: 195–208.
Strother, J. 2002. An assessment of the effectiveness of e-learning in corporate training programs. The International Review of Research in Open and Distance Learning 3, no. 10.
Stufflebeam, D., W. Foley, W. Gephart, L. Hammond, H. Merriman, and M. Provus. 1971. Educational evaluation and decision-making in education. Itasca, IL: Peacock.
Taut, S., and M. Alkin. 2003. Program staff perceptions of barriers to evaluation implementation. American Journal of Evaluation 24, no. 2: 213–26.
Trentin, V. 2000. The evaluation of online courses. Journal of Computer Assisted Learning 16, no. 3: 259–70.
Whalen, T., and D. Wright. 1999. Methodology for cost-benefit analysis of web-based tele- learning program: Case study of the bell institute. The American Journal of Distance Education 13, no. 1: 24–44.
Wisher, R.A., and T.M. Olson. 2003. The effectiveness of web-based instruction. Alexandria, VA: United States Army Research Institute for the Behavioral and Social Sciences.
Yanmil, S., and G. McLean. 2001. Theories supporting transfer of training. Human Resource Development Quarterly 12, no. 2: 195–208.