EVALUATING THE COOPERATIVE EDUCATION PROGRAM

BETTY ANNE ARMSTRONG
Newburgh, Ontario
Canada

Introduction

While cooperative education is a relatively new phenomenon in Canadian secondary schools, it is a teaching methodology rooted in the older American methodologies of experiential education and experience-based career education (Gager, 1982; Conrad and Hedin, 1982). Co-op programs began to emerge in Ontario, Canada, in the early 1970's, and in 1979 the Ontario Ministry of Education (OME), responding to the rapid proliferation of these programs, formally recognized cooperative education as a valid method of delivering a course of study (OME, 1979). At that time cooperative education was defined as:
[a course or set of courses which] consists of an in-school component and an out-of-school component so that learning and experience are combined in an educationally beneficial way (OME, 1979, p.18).

(It is important to note that up to two thirds of a student's course-related learning time may be spent at the worksite in such a program.) Because of the relative youth of this type of program in Ontario secondary schools, a review of the literature related to program evaluation was undertaken in order to provide some guidance to co-op practitioners in this province. This article is written with the needs of the Canadian secondary school cooperative education practitioner in mind. However, the similarities between Canadian and American secondary and university level co-op programs are numerous enough to warrant an audience for this paper that extends beyond the secondary school co-op teacher.

The three-fold purpose of this article is to:

  1. review the past fifty years of literature on educational program evaluation;
  2. examine the literature directly related to evaluating cooperative education and experiential learning programs; and
  3. recommend a process for evaluating the cooperative education program.

Fifty Years of Educational Program Evaluation

In the fifty years since R.W. Tyler first developed and applied a systematic approach to educational evaluation, program evaluators seem to have fallen into roughly three camps. Some evaluators have recommended an evaluation-by-objectives approach, which seems to have originated in the business world's management-by-objective approach to evaluation. Other evaluators have recommended an experimental approach to evaluation, which seems to have been rooted in the scientific community's quantitative approach to research. And finally, still other evaluators have recommended a more holistic approach to evaluation, which seems to have arisen out of the failure of the earlier two approaches to provide a well-rounded evaluation of an educational program. This section of the paper will discuss representative authors of each of the three approaches to program evaluation and will attempt to point out the underlying assumptions and biases of each approach for the reader.

Ralph W. Tyler was perhaps the first educator to develop and apply a systematic approach to educational evaluation. He recommended that the following steps be adopted in an evaluation procedure (Stufflebeam and Shinkfield, 1985):

  1. establish the broad goals or objectives of the program;
  2. classify those goals or objectives;
  3. define the objectives in terms of observable student behavior;
  4. identify situations in which achievement of the objectives can be shown;
  5. develop or select appropriate measurement techniques;
  6. collect data on student performance; and
  7. compare the performance data with the behaviorally stated objectives.

However, there are problems with applying a classical Tylerian approach to program evaluation. First of all, most co-op practitioners have not been trained to develop curriculum objectives (Stufflebeam and Shinkfield, 1985). This means that there is a danger that uninformed co-op coordinators will develop trivial behavioral objectives in order to facilitate their easy measurement.

Secondly, because the curriculum objectives are expressed in terms of student behavior, this tends to narrow the focus of program evaluation to student behaviors alone (their acquisition of information and/or skills) rather than broadening the focus to include the other elements which enter into the delivery of a program, which may have affected those student behaviors (curriculum materials and content, teacher effectiveness, institutional support, etc.), and which should also enter into an evaluation of that program. Not only is the focus narrowed to student behaviors alone, but expected student behavior becomes the ultimate criterion for every educational action, a fact which has been criticized by a number of authors (Cronbach, 1980; Eisner, 1985).

Thirdly, the classical Tylerian approach has been criticized because it fails to evaluate the worth of the objectives themselves, and because evaluation tends to become a terminal process carried out at the end of the program, when it is too late to make any changes for the most recent participants.

Tyler's opinions, however, have changed over the years. In the Journal of Cooperative Education (Summer, 1980, p.9) he stated:
... in the fifty years since systematic educational evaluation has been recognized and the development of procedures designed for it was begun, the conception of the evaluation process has become an activity encompassing the total time span of educational program planning, development, implementation, and operation.

In the decades following Tyler's first work in evaluation, other authors - whose orientation was scientifically based (Suchman, 1954; Campbell and Stanley, 1963) - asserted that "evaluation, like all research, must rest upon the logic of the scientific method and should therefore adhere to standards of research methodology" (Stufflebeam and Shinkfield, 1985, p.94). (It is important to note that these researchers were espousing a quantitative research approach, as opposed to the qualitative research approach which has gained acceptance more recently.) Program evaluators were therefore enjoined to conduct their investigations in as scientifically rigorous a manner as possible. However, these authors tended to forget that the classroom was not a laboratory and could not be controlled in the same way. Program evaluation which attempted to adhere to a strictly scientific model opened itself to accusations of lack of control of variables, on the one hand, or of too rigorous control of variables - thereby giving rise to an unrealistic classroom environment - on the other. Furthermore, this approach to evaluation focused on student performance without attempting to examine all the other variables which might have given rise to that performance.

While evaluations utilizing rigorous research design had high "scientific" respectability, they often imposed controls which were unattainable and/or impossible to maintain in the context of the classroom and which interfered with the day-to-day operation of the programs under study. These studies were limited to an investigation of the effects of a small number of variables, tending to ignore other variables from the wider context in which the program took place.

It is important to note that, while early proponents of the scientific method emphasized classical experimental approaches - pre- and post-testing, standardized tests, statistical analysis of data, etcetera - in recent decades behavioral and educational researchers have advocated the inclusion of a field study, or qualitative, approach to inquiry in which the researcher spends a great deal more time actually observing and/or participating in the classroom in order to gain a better understanding of the context in which the students are learning (Owens et al., 1979; Fehrenbacher et al., 1979; Post, 1979; Cronbach, 1980).

Because of Tyler's original focus on student performance and because "scientifically" structured research was dependent on providing easily reproducible results, program evaluation through the 1940's and 1950's tended to utilize standardized, norm-referenced achievement tests to determine both student performance levels and the "worth" of the program itself.

Standardized testing had, and still has, the advantage of being based on psychological theory, of employing standardized technology and procedures for administration and scoring, of being supported by professional standards, and of providing "scientifically" based norms against which to judge student performance. However, standardized testing is limited by the availability of only a certain number of test instruments, which are relatively inflexible due to the enormous costs and amount of time required to produce them. Furthermore, these tests may or may not be appropriate for use in a particular educational setting, since they are based on national (usually American) norms which do not take into account the differences inherent in different locales.

Standardized testing tends to limit itself to knowledge and abilities which can be measured easily by paper and pencil tests. This, in turn, tends to trivialize the worth of those educational experiences which cannot be measured using a paper and pencil test (Eisner, 1985). Ironically, these tests are often only indirect measures of learning and do not measure outcomes which are directly related to a particular program or school (Stufflebeam and Shinkfield, 1985).

Many authors have discussed the limitations of standardized testing at much greater length (Sax, 1980; Eisner, 1985; Cronbach, 1980; Stufflebeam and Shinkfield, 1985) than the brief discussion provided in this paper. In summary, however, a complete reliance on standardized tests to measure student learning and the value of the program is no longer recommended by evaluation researchers.

During the 1960's the previously mentioned limitations of business' management-by-objective approach and of the scientific approach to program evaluation became the topic of widespread discussion within the discipline. It became more and more apparent that the results of educational evaluations were not particularly helpful to practitioners and decision-makers and were - for the most part - being ignored by those people. In 1971, Phi Delta Kappa's National Study Committee on Evaluation formally concluded that the field of educational evaluation needed the development of new theories, new methods of evaluation and new training programs for evaluators. Concurrent with Phi Delta Kappa's examination of the field, new conceptualizations began to emerge.

Contrary to the early Tyler and Campbell writings, Stufflebeam pointed out that evaluation was "not an event but a process" (Stufflebeam and Shinkfield, 1985, p.159). He felt that evaluation should be ongoing and should be concerned with improving a program by examining: the context in which the teaching and learning took place, the quality of the curriculum and the curriculum materials, the processes which occurred as part of the student-teacher interaction, and the outcomes (product) of all of the above. He coined the acronym CIPP (context, input, process, product) to encapsulate the elements of his view of the evaluative process.

Like Tyler and Suchman, Stufflebeam thought that evaluation should be geared to the information requirements of decision-makers, since he reasoned that the administrators would be the ones ultimately responsible for determining new program directions.

Stake (1976), in contrast, envisioned evaluation as being more responsive to the needs of the teachers and/or staff - since these were the people who would use the results of the evaluation - calling this a client-centered approach to evaluation. Accordingly, he stressed the need for a more humanistic approach whereby information was collected from representatives of all the persons involved in a program so that side effects and unintended outcomes - as well as intended outcomes - became apparent. It was Stake who coined the terms responsive and pre-ordinate evaluation. [Responsive evaluation responds to the teacher's need to understand a program's strengths and weaknesses, whereas pre-ordinate evaluation is concerned with the degree to which the pre-established (preordained) goals and objectives have been met (Stake, 1976).] Stake seems to have been one of the first to point out that internal self-evaluation was potentially more useful than an external evaluation, contrary to most other researchers, who were wedded to the notion of outside evaluators conducting the study.

Building upon Tyler's work, Scriven distinguished between formative and summative evaluation - formative evaluation taking place throughout a program's duration and summative evaluation taking place at the end of a program. Due to the influence of the management-by-objectives model, Scriven stressed summative evaluation's importance over that of formative evaluation. He also extended the marketplace approach to education by saying that evaluators should take a consumer-oriented approach to evaluation, engaging in a cost analysis of the program (Scriven, 1983).

Unlike Tyler and Scriven, who tended to concern themselves with the worth and/or value of a program, Cronbach focused on the educational nature of program evaluation, stating that it should concern itself with the enlightenment of the participants - potential improvement, not judgement, of a program should be of primary importance to the evaluator (Cronbach, 1980, p.1). He further recommended that evaluation methodology borrow from both the business and the scientific approach to evaluation because this would provide more relevant information - relevance of information, not the form of inquiry, being of primary importance to the evaluation process.

Along with the debate over the preferred methodology to be used in carrying out program evaluation, scholars also debated the purpose of such an exercise. Eisner, Cronbach and Stake emphasized the informative nature of evaluation. They believed that the information arising from the evaluation of a program should be used to educate and inform the staff and administrators, thus enabling them to make changes which would bring about improvements in the program.

Tyler, Scriven and Stufflebeam asserted that the main purpose of evaluation was to establish a program's worth. Tyler (1980) likened program evaluation to an independent audit, and Scriven (1983) encouraged evaluators to ensure that the costs of a program did not outweigh its benefits.

While there have been many other scholars who have published their thoughts on the evaluation process, I have chosen to discuss the work of Tyler, Campbell, Suchman, Stufflebeam, Stake, Scriven and Cronbach because I believe them to be representative of the three most important traditions in the field of program evaluation: business and industry's management-by-objective approach (Tyler, Scriven), scientific research's experimental approach (Suchman, Campbell), and a holistic approach (Stufflebeam, Stake, Cronbach). For the purposes of evaluating the cooperative education program, I believe these to be the three most influential traditions.

Evaluating the Cooperative Education Program

The previous section of this paper outlines the various approaches to educational program evaluation in order to establish a context within which to understand the literature directly related to evaluating the cooperative education program.

The debate over methodological preference and the purpose of program evaluation continues in the published reports of scholars who have been involved in evaluating cooperative education and experiential learning programs. Furthermore, these scholars have adopted approaches to evaluation which fall within the three previously identified types: business, scientific and holistic.

Little and Landis (1984) employed a business-like approach to evaluation, recommending an evaluation model which examined the program's objectives and processes and engaged in a cost-benefit analysis in order to establish the worth of the program.

Post's article (1979) was a fascinating example of a three-year study which began in the classical research mode (scientific approach) in an attempt to establish the worth of an experimental career education program. At the end of the first year, however, the researchers realized that they were gathering information which was of dubious value to the evaluation process. The article documents the emergence of a more holistic approach to evaluation in which the evaluators spent much more time observing in the field and utilized the staff as field researchers. From an attempt to establish the worth of the program, the evaluation process became one in which the primary concern was to inform and educate program staff and administrators in order to enable them to improve the program.

Fehrenbacher, Owens and Haenn, in two different articles for the Journal of Research and Development in Education (1979), championed a holistic approach to educational evaluation in which the primary concern was to educate and inform staff and administration in order to bring about improvements in the program. However, these authors also recognized that there may be occasions when it is necessary to prove a program's worth. They recommended the adoption of different methods of gathering information depending upon the underlying reason for engaging in the evaluation process.

While these are not the only scholars to publish articles on evaluating the cooperative education program, they are illustrative of the fact that the field of evaluation is still evolving. The traditional approaches to evaluation (business, scientific, holistic) are still with us and the debate over the purpose of evaluation continues unabated.

There have been some efforts at cooperative education program evaluation in Ontario, Canada. The Wellington County Board of Education published an excellent interview schedule accompanied by a hierarchical outline of teacher-coordinator behaviors, against which the program coordinator's interview responses are rated. However, this instrument examines only teacher-coordinator behavior and, as pointed out in the earlier discussion of evaluation literature, it is necessary to examine other aspects of a program during the evaluation process.

The Board of Education for the City of York (Ontario, Canada) has also published a checklist of teacher-coordinator behaviors which are rated on a scale from least to most satisfactory. Again, this instrument is limited because the sole emphasis is on the program coordinator's behavior.

In 1980 the Leeds and Grenville County Board of Education (Ontario, Canada) published the results of an evaluation of the first year of an experimental cooperative education program. The evaluation examined the context, input, process and product of the program but limited itself to questionnaires as the sole instrument for gathering data. Many authors have pointed out that it is necessary to use multiple information-gathering techniques in order to remove concerns about the validity of the information being gathered (Owens et al., 1979; Fehrenbacher et al., 1979; Post, 1979; Cronbach, 1980; Stufflebeam and Shinkfield, 1985).

A number of other researchers have examined the effect that cooperative education programs, and other forms of experiential education, have had on student development.

During the 1978-79 school year, Conrad and Hedin carried out a massive study of 4,000 secondary school students who were enrolled in experiential education programs across the United States. The study attempted to quantitatively measure the impact such programs had on student development and to identify the program variables which promoted such student development.

They defined experiential education as: "educational programs offered as an integral part of the general school curriculum, but taking place outside of the conventional classroom, where students are in new roles featuring significant tasks with real consequences, and where the emphasis is on learning by doing with associated reflection" (Conrad and Hedin, 1982, p.59). Because their definition of experiential education is so similar to the Ontario Ministry of Education's definition of cooperative education, it is appropriate to consider the results of their study to be applicable to co-op as well.

In the 1982 report of the results of their study, Conrad and Hedin pointed out that experiential education had a positive impact on students' social, psychological and intellectual development. Students' scores tended to increase significantly - both in absolute terms and relative to those of students in regular classrooms - on tests of moral reasoning, self-esteem, social and personal responsibility, attitudes toward adults and others, career exploration, and empathy/complexity of thought.

The authors also highlighted the fact that the single element common to all these programs was that they included a weekly reflective learning session, based on student experiences at the worksite, which helped students integrate the work experience with the theoretical in-class study. The authors cited these sessions as "the single strongest factor in explaining positive student change" (Conrad and Hedin, 1982, p.71).

In a smaller Canadian study, Shaughnessy (1985) administered the Personal Skills Map (PSM) as a pre- and post-test to 49 southern Ontario secondary school students enrolled in two cooperative education courses and to 47 students enrolled in two non co-op courses. [The PSM measures intrapersonal (psychological), interpersonal (social) and career/life effectiveness skills, as well as personal wellness skills.] Shaughnessy found increases in all areas in the cooperative education students, as opposed to overall regression in the control students. And, in a later study, Stressman (1986) administered the same test to 15 northern Ontario cooperative education students and to 14 control students, once more finding overall gains in all areas by the co-op students.

Some scholars (Simon, 1983; Moore, 1981; Greenberger and Steinberg, 1986; and Gager, 1982) have described and analyzed the social context in which work placement learning occurred. They examined the processes by which student learning on the job was made possible, or not, by the supervisors and/or other workers. In doing so, they highlighted the importance of the informal and unplanned learning which takes place at the worksite and which impacts on the student's social, psychological and intellectual development.

Echoing Conrad and Hedin, Simon, Moore and Gager recommended that the social context of the worksite form the basis for a critical pedagogy to be used in reflective learning sessions examining the co-op worksite. [Critical pedagogy is a teaching methodology rooted in the "lived" experience of the learner - "a form of education which fosters the questioning of existing social forms and an interest in alternative possibilities" (Simon, 1983, p.18).]

To summarize the research into the impact of cooperative and experiential education:

  1. Cooperative and experiential education programs have been shown to have a positive impact on students' social, psychological and intellectual development.
  2. Regular reflective learning sessions, in which worksite experiences are integrated with in-class study, appear to be the single strongest factor in explaining positive student change.
  3. The social context of the worksite is itself an important source of informal and unplanned learning and should be examined as part of the program.

Recommendations for the Evaluation Process

The previous section of this paper outlines recent research into effective cooperative education programming and program evaluation. In this section, recommendations will be made as to how the process of evaluating a cooperative education program should be accomplished, in light of the literature reviewed.

The program evaluation process for a cooperative education course should proceed through the following steps:

  1. Determine the purpose of the evaluation, whether it is to bring about program improvement or to attempt to prove the worth of the program. Once the purpose has been determined, decide who will receive the report.
  2. Establish whether the evaluation will be carried out internally or externally.
  3. Establish who will carry out the evaluation (teacher-coordinator, administrator, co-op advisory board member(s), a team made up of all of the above, an external evaluator, etcetera).
  4. Establish specific and measurable goals and objectives of the program.
  5. Establish a list of the program elements which will be investigated, such as: student access to the program (which includes information dissemination, entry criteria and entry process, and student placement at the worksite); timetabling of the course; administrative support for the program; course content, materials and delivery; instruction and communication; work placements (which includes supervisor effectiveness and appropriateness of placement); quality of worksite monitoring; achievement of course objectives; student performance both in and out of school; and student, parent and employer evaluations of the program.
  6. Decide on information-gathering techniques which will provide data on anticipated as well as unanticipated program outcomes (for example: examination of documents, interviews, observation, follow-up of graduates).
  7. Design the information-gathering instruments (questionnaires, observation checklists, worksite monitoring forms, interview schedules, student log formats, etcetera).
  8. Gather information on an on-going basis.
  9. Analyze the information/data.
  10. Incorporate the results of the evaluation by making necessary changes to the program.
  11. Record the outcome of the evaluation process. Report outcomes if it is politically expedient to do so.

In conclusion, the above synthesis of the literature lends support to the contention that the evaluation process described in this article is appropriate to the special needs of the cooperative education program.

References

Anderson, Stephen E. (1987). Evaluation manual for community-based training programs. Available from Community Outreach Dept., George Brown College, P.O. Box 1015 Station B, Toronto, Ont. M5T 2T9.

Campbell, D.T. & Stanley, J.C. (1963). Experimental and quasi-experimental designs for research on teaching. In N.L. Gage (Ed.), Handbook of research on teaching, pp. 171-246. Chicago: Rand McNally.

Conrad, D. E. & Hedin, D. (1982). The impact of experiential education on adolescent development. In D. Conrad & D. Hedin (Eds.), Youth participation and experiential education, pp. 57-76. New York: The Haworth Press.

Cronbach, Lee J. et al. (1980). Toward reform of program evaluation. San Francisco: Jossey-Bass.

Eisner, Elliot W. (1985). The art of educational evaluation: A personal view. Philadelphia: The Falmer Press.

Fehrenbacher, H.L.; Owens, T.R. & Haenn, J.F. (1979). Student case studies as part of a comprehensive program evaluation. Journal of Research and Development in Education, 12, pp. 63-70.

Gager, Ron. (1982). Experiential education: Strengthening the learning process. In D. Conrad and D. Hedin (Eds.), Youth participation and experiential education, pp.31-41. New York: The Haworth Press.

Greenberger, E. & Steinberg, L. (1986). When teenagers work: The psychological and social costs of adolescent employment. New York: Basic Books, Inc.

Kleinfeld, Judith. (1983). Practical evaluation for experiential education. Journal of Experiential Education, 6, 45-47.

Leeds & Grenville County Board of Education. (1980). Evaluation report for the Gananoque programme: Co-operative education. Available from the Leeds and Grenville Board of Education, 25 Central Ave., Brockville, Ont.

Little, M.W. & Landis, L.M. (1984). An evaluation model for a comprehensive cooperative education program. The Journal of Cooperative Education, 20, 28-40.

Moore, D.T. (1981). Discovering the pedagogy of experience. Harvard Educational Review, 51, 286-301.

Ontario Ministry of Education. (1979). Secondary school diploma requirements: Circular H.S.1.

Owens, T.R.; Haenn, J.F. & Fehrenbacher, H.L. (1979). The use of multiple strategies in evaluating an experience-based career education program. Journal of Research and Development in Education, 12, 35-49.

Post, John O., Jr. (1979). An evaluation model for exemplary projects. Journal of Research and Development in Education, 12, 15-20.

Sax, Gilbert. (1980). Principles of educational and psychological measurement and evaluation. Belmont, Calif.: Wadsworth, Inc.

Schon, David A. (1983). The reflective practitioner. New York: Basic Books.

Scriven, Michael. (1983). Cost in evaluation: Concept and practice. In M.C. Alkin and L.C. Solmon (Eds.), The costs of evaluation, pp. 27-44. Beverly Hills: Sage Publications.

Shaughnessy, Pat. (1985). Personal skills development of cooperative education students in two secondary schools in the City of York. Toronto: The Board of Education for the City of York.

Simon, Roger I. (1983). But who will let you do it? Counter-hegemonic possibilities for work education. Journal of Education, 165, 235-256.

Stake, R.E. (1976). A theoretical statement of responsive evaluation. Studies in Educational Evaluation, 2, 19-22.

Stressman, H. Elsie. (1986). A study of personal skills of cooperative education students. Unpublished master's thesis, Lakehead University, Thunder Bay, Ont.

Stufflebeam, D.L. & Shinkfield, A.J. (1985). Systematic evaluation. Boston: Kluwer-Nijhoff Publishing.

Suchman, E.A. (1954). The principles of research design. In J.T. Doby et al. (Eds.), An introduction to social research, pp. 254-267. Harrisburg, PA: The Stackpole Co.

The Board of Education for the City of York. An objectives-based evaluation model for cooperative education teachers. Available from The Board of Education for the City of York, 2 Tretheway Dr., Toronto, Ont., M6M 4A8.

Tyler, Ralph W. (1980). A brief overview of program evaluation. Journal of Cooperative Education, 16, 7-15.

Wellington County Board of Education. (1984). Cooperative education: Co-op program implementation evaluation profile. Available from The Wellington County Board of Education, 155 Paisley St., Guelph, Ont. N1H 2P3.

Wentling, Tim L. (1980). Evaluating occupational education and training programs. Boston: Allyn and Bacon, Inc.