INVESTIGATING THE EDUCATIONAL BENEFITS OF COOPERATIVE EDUCATION: A LONGITUDINAL STUDY

GERALDINE VAN GYN
JAMES CUTT
MARK LOKEN
FRANCES RICKS

University of Victoria
Victoria, B.C.
Canada

Traditionally, cooperative education in colleges and universities has been viewed as an effective training strategy that benefits the career development of students (Godfrey, 1989) but is not necessarily regarded as an effective educational strategy. This view is in contrast to the origins of the program, which began with the educational theories of John Dewey. His position was that "the only true education comes through the stimulation of the (learner's) powers by the demands of the social situations in which he finds himself," suggesting that learning occurred through problem solving in an authentic environment (Dewey, 1933). The basis of the cooperative education movement is, most certainly, the education of the student, but over time the focus and interest have shifted to the more tangible, pragmatic employment outcomes associated with this curriculum model. In keeping with this orientation, much of the research on cooperative education outcomes has focused on the career-related benefits of this program model.

It may be argued that the career benefits to students are directly related to the academic benefits of the program. However, there are so many confounding variables, such as the familiarity of students with employers and the opportunity for the work term to act as a career entry opportunity, that a clear link between academic preparation in cooperative education programs and probability of success in employment has not been established in any rigorous manner.

Although Fletcher (1989) suggests that the beneficial effect of co-op education on academic achievement has been carefully documented and replicated, questions have been raised (Wilson, 1987; Rowe, 1989) as to the validity of the findings. The main research efforts (e.g., Smith, 1965; Lindenmeyer, 1967) that have investigated the educational benefits to students in co-op are either descriptive in nature or suspect due to methodological problems related to choice of dependent variable or lack of control of entry variables that may influence academic results at graduation. A cross-sectional study of a large number of co-op and regular students at entry to their respective programs clearly showed academic differences between the two groups, with the co-op sample having the advantage at this point in their program (Van Gyn et al., 1996). The difference was measured by two methods: grade point average upon entry and a standardized test of applied knowledge in a variety of domains designed by the American College Testing Program. Both dependent variables revealed statistical differences between the groups and gave support to Wilson's (1987) concern that academic differences at graduation shown in previous research may not be due to program effects but to entry-level differences.

A strong theoretical argument has been made (Branton et al., 1992) for the educational benefit of cooperative education because of the interaction of the academic and work term experiences in co-op. As well, several articles (Guskin, 1993; Sheckley et al., 1993; Wagner, 1993; Van Gyn, 1994a; Van Gyn, 1996) have described the potential of the cooperative education model in supporting contemporary learning strategies such as problem solving, experiential learning and reflective practice. There is no shortage of theoretical support for the educational benefits of cooperative education, but empirical evidence of this phenomenon is lacking.

In an attempt to validate the educational efficacy of cooperative education, a large cross-disciplinary longitudinal study was implemented. A major problem facing the researchers was the choice of a suitable instrument with which to measure learning. As the research was grounded theoretically in the educational model proposed by Branton et al. (1992), we were concerned that the measures of learning reflect the effect of the interaction of the learner with the various learning environments. Another issue to be considered was the choice of an experimental design that would control for variables such as entry-level academic differences. Over a two-year period, considerable effort was applied to these issues and the longitudinal research was initiated. The main focus of the study was to determine whether participation in a co-op program made any difference to the academic progress of the students in that program. Other facets of the cooperative education model were explored and the results have been reported elsewhere (Branton et al., 1992; Van Gyn, 1994b; Van Gyn et al., 1996), but this paper is concerned only with the investigation of the educational impact on the student participating in cooperative education.

Methodology

Subjects. Subjects (n=999), from both cooperative education (CP) and non-cooperative education (NCP) programs, were recruited from engineering, science and arts faculties at the University of Victoria (UVIC) and the University of British Columbia (UBC) in Canada. Recruitment from two universities was necessary because an insufficient number of regular program students in the appropriate subject disciplines were available at the University of Victoria to ensure the application of the subject sampling technique chosen for this study. The large number of subjects was chosen to permit the creation of sufficient pairs of subjects (CP/NCP) matched on several variables. Please refer to Table 1 for the number of subjects recruited in each program from each university and to the procedures section of this paper for the method of matching subjects.

Table 1
Number of Subjects Recruited in Each Discipline Area in Co-op and Non-Co-op Programs

                 Co-op    Non-Co-op    Total
Arts                84          323      407
Engineering        128          187      315
Science             97          180      277
Total              309          690      999

Only students with at least a second class standing in their previous academic year who were entering either the first or second year of a program were recruited. These criteria were chosen because many cooperative education programs require second class standing for entry and begin in the first year (e.g., engineering) or the second year. This ensured a minimum level of homogeneity among subjects for the initial testing, but it limits the applicability of the results to programs with similar standards.

Instrumentation. The objective form (OT) of the College Outcomes Measure Program (COMP) exam developed by the American College Testing Program (ACT) was chosen to measure the level of knowledge of the subjects in both the initial and second testing phases. Two different versions of the test were used in the pretest (Form 8) and the post-test (Form 10) to control for test familiarity. The two different forms of the test have been statistically equated by ACT.

The OT is based on the same series of fifteen simulation activities as the COMP. Both instruments yield a total score and sub-scores for each of the six outcome areas defined by COMP. The only difference between the two tests is that the OT consists entirely of multiple choice questions and so is less complex to administer and score than the COMP. ACT showed a correlation coefficient of .87 between the two instruments, indicating that the OT is an accurate proxy measure for the COMP (Steele, 1989). ACT reports that with the OT the accuracy of the scores for individuals is reduced, but that the accuracy of the OT "is much higher when used to monitor progress of groups in acquiring general education knowledge and skills" (Steele, 1989, p. 11). ACT also reports that many studies have supported the validity of the OT in measuring level of general education. Reliability of means for groups on the OT was estimated by ACT as .98 for the total score and .97 to .98 for the six sub-scores.

The COMP test was specifically designed by ACT to measure the ability to apply general knowledge and skills to functioning in adult society. ACT suggests that by implication it also indirectly measures applied knowledge. The OT version of the COMP requires approximately three hours to complete. The test contains a series of simulation activities based on realistic stimulus material drawn from the adult public domain. The student responds by answering several multiple choice questions based on the simulations. The OT contains three areas of process knowledge (Communicating [speaking and writing], Solving Problems, and Clarifying Values) and three areas of content knowledge (Functioning within Social Institutions, Using Science and Technology, and Using the Arts). The three content areas and the three skill-related or process areas are interrelated on the test. A sampling plan to test for each of these areas is built into the OT. The test battery includes sixty tasks, each of which elicits two student responses. Each item on the test serves to measure a process dimension in a particular content dimension. The test is scored so that a maximum of 240 points can be earned. The scores on the sub-dimensions are not additive but reflect both a process and a content score. Therefore, the total of the process scores (Communication, Problem Solving and Values Clarification) is equal (within 1 score value) to the Total Score, as is the total of the content scores (Functioning in Social Institutions, Using Science and Technology and Using the Arts). This interrelationship is shown in Figure 1.
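The scoring structure described above can be illustrated with a small sketch. The individual scores below are hypothetical, invented only to show how the two families of sub-scores each reproduce the Total Score; they are not data from this study:

```python
# Hypothetical OT score profile, illustrating the COMP/OT scoring structure:
# the three process scores and the three content scores each sum (within 1
# score value) to the same Total Score, out of a 240-point maximum.

process = {
    "Communication": 58,
    "Problem Solving": 77,
    "Values Clarification": 60,
}
content = {
    "Functioning within Social Institutions": 65,
    "Using Science and Technology": 66,
    "Using the Arts": 64,
}

total_score = 195  # Total Score for this hypothetical student

# Each family of sub-scores reproduces the total within 1 score value,
# and the total cannot exceed the 240-point maximum.
assert abs(sum(process.values()) - total_score) <= 1
assert abs(sum(content.values()) - total_score) <= 1
assert total_score <= 240
```

Because each test item contributes simultaneously to one process and one content dimension, the sub-scores are two alternative decompositions of the same total, not six independent scales.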

The choice of the instrument was based on ACT's extensive testing of the OT and the normative data available against which to compare the results of the samples in this study. Besides its rigorous design and statistical basis, the fact that the instrument addressed both process and content knowledge made it appropriate for this longitudinal study.

Figure 1. The Interrelationships of the Process and Content Areas in the Objective Form (OT) of the College Outcomes Measures Program (COMP)

Procedure. Subjects were recruited from university classes through the cooperation of coordinators and academic advisors of various programs. The recruitment phase began in the fall term of the academic year (September through October). Pretesting of the 999 subjects began in November and concluded in December. Subjects were tested in groups of 25 in a university classroom setting. They were paid $20.00 for the initial testing and were informed that they would be paid an additional $30.00 if they participated in the second phase of testing in approximately 2.5 years.

Besides collecting data from the OT, demographic data were also collected. These consisted of information on gender, age, work experience, location of origin (urban or rural), previous academic experience, academic standing and discipline.

The second contact with the original subjects began 24 months after the initial testing. During that period, a number of subjects had withdrawn from the university, had taken considerable time away from their academic studies or had changed programs (some from co-op to regular or vice versa). These subjects were not included in the second testing phase. The sample number in the second phase of testing was 582.

Because of the cooperative education program format, some manipulation of the time of testing had to occur to ensure that all subjects in the post-test sample had attended university for an equal number of academic terms. Therefore, testing took place between November and May, 26 to 31 months following the initial testing. Subjects were tested in the same location as the initial testing and were guided by the same tester. In both the pretest and post-test situations, the tester impressed upon the subjects how important their performance on the test was to the growth and development of their program within the university and encouraged them to address the test in a very serious manner, attempting to give their best performance. Subjects were given the results of both the pretest and the post-test after all testing was completed.

Following the second testing phase of the study, during which the subjects who participated in the initial testing were retested on the OT (Form 10), a smaller pool of subjects (n=234, or 117 matched pairs) was selected from the larger subject group. This subsample was created by matching pairs of subjects on several variables, including total pretest score (within plus or minus 2 raw score points).

Subjects were not matched on GPA at entry due to the well-known diversity in standards and grading procedures among institutions. It was the analysis of the post-test scores of this smaller, closely matched sample that allowed the researchers to study any differences between the co-op and regular student samples on the OT without the confounding variable of entry-level differences.
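The matched-pair construction described above can be sketched in outline. The full list of matching variables is not reproduced here, so this sketch assumes, for illustration only, exact matches on gender and discipline plus a total pretest score within plus or minus 2 raw points (the tolerance reported in the Results); the data are invented:

```python
# A minimal, illustrative sketch of forming CP/NCP matched pairs.
# Assumed matching variables (gender, discipline) are hypothetical;
# the +/-2 raw-score tolerance on total pretest score is from the paper.

def match_pairs(cp_students, ncp_students, tolerance=2):
    """Greedily pair each CP student with an unused NCP student who matches
    on gender and discipline and whose pretest total is within tolerance."""
    pairs, used = [], set()
    for cp in cp_students:
        for i, ncp in enumerate(ncp_students):
            if i in used:
                continue
            if (cp["gender"] == ncp["gender"]
                    and cp["discipline"] == ncp["discipline"]
                    and abs(cp["pretest"] - ncp["pretest"]) <= tolerance):
                pairs.append((cp, ncp))
                used.add(i)
                break
    return pairs

# Tiny invented data set:
cp = [{"gender": "F", "discipline": "science", "pretest": 196},
      {"gender": "M", "discipline": "arts", "pretest": 188}]
ncp = [{"gender": "M", "discipline": "arts", "pretest": 189},
       {"gender": "F", "discipline": "science", "pretest": 195},
       {"gender": "F", "discipline": "science", "pretest": 180}]

pairs = match_pairs(cp, ncp)
assert len(pairs) == 2
assert all(abs(a["pretest"] - b["pretest"]) <= 2 for a, b in pairs)
```

Pairing on pretest score in this way is what makes later between-group differences on the post-test interpretable, since the groups start statistically equivalent on the dependent variable.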

Data Analysis. A univariate analysis of variance (ANOVA) was applied to the pretest total scores of the larger sample (n=999) and to the pretest total score of all subjects who completed both the pretest and the post-test (n=582). As well, the pretest scores of the matched pairs sample (n=234 or 117 matched pairs) were analyzed in the same manner. This analysis was also applied to the post-test total scores of the larger sample (n=582) and to the post-test total scores of the matched pairs sample.

Because of the nature of the sub-scores, a multivariate analysis of variance (MANOVA) was applied to the six sub-scores on the pretest and on the post-test of both the large group sample and the matched pairs sample.
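For the univariate case, the F statistic underlying the ANOVAs reported below can be sketched for two independent groups (the CP/NCP comparison). This is a plain illustration of the computation, not the analysis software used in the study; in practice a statistics package would also supply the p-value:

```python
# A minimal sketch of a one-way ANOVA F test for two independent groups,
# as used to compare CP and NCP subjects on a single OT score.

def one_way_f(group_a, group_b):
    """Return (F, df_between, df_within) for a two-group one-way ANOVA."""
    all_scores = group_a + group_b
    grand_mean = sum(all_scores) / len(all_scores)
    means = [sum(g) / len(g) for g in (group_a, group_b)]
    # Between-group sum of squares: group sizes times squared deviations
    # of group means from the grand mean.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip((group_a, group_b), means))
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum((x - m) ** 2
                    for g, m in zip((group_a, group_b), means) for x in g)
    df_between = 1                     # two groups
    df_within = len(all_scores) - 2
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

f, df1, df2 = one_way_f([1, 2, 3], [2, 4, 6])
assert abs(f - 2.4) < 1e-9 and (df1, df2) == (1, 4)
```

With two groups this F is equivalent to the square of the independent-samples t statistic, which is why the degrees of freedom reported in the Results take the form F(1, n-2).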

Results

As reported in a previous paper on the entry-level differences of the CP and NCP groups (Van Gyn et al., 1996), the univariate analysis of variance applied to the total pretest scores of the large sample (n=999) showed a significant difference between the CP and NCP subjects, F(1, 997)=11.32, p<.001. The MANOVA on sub-scores showed that the CP sample scored significantly higher on Using Science and Technology, F(1, 263)=47.96, p<.01, and Problem Solving, F(1, 736)=20.23, p<.001. Subsequent analysis of differences between subjects from UBC and UVIC showed an institutional effect, F(1, 997)=4.46, p<.05. However, on further analysis, the effect was due to program (CP or NCP) rather than institution, as most CP students were from UVIC. The analysis of the total pretest scores of those subjects who completed both the pretest and post-test phases (n=582) also revealed a significant difference between groups, F(1, 580)=8.77, p<.001. No differences were found between those subjects who were retained in the study and those who dropped out or were screened out. This reassured the researchers that the subjects who did not complete the study were not systematically different in their capability to achieve on the pretest from those who did remain in the study. Further analysis of the pretest and demographic data from the large sample revealed other differences, but these have been reported in a previous paper on entry-level differences (Van Gyn et al., 1996). The main finding that would have a bearing on the analysis of the longitudinal pretest data was that the "average" student in the total sample scored at the 80th percentile on the mean Total score. This compares with the norm of the 58th percentile for students in similar reference groups at ninety-three four-year institutions in the United States.
This can be explained by the fact that participants in the co-op sample had to have had at least a second class standing to be admitted to their programs, and those in the non-co-op sample were screened into the study on the same basis. The reference group of college students tested by ACT had no such academic requirement. However, with such high scores in the pretest, it was anticipated that there might be a ceiling as to the gain or improvement on the post-test.

The ANOVA applied to the 117 matched pairs of CP and NCP subjects revealed no significant differences between groups on the total pretest scores of Form 8 of the OT, F(1, 232)=.05, p=.83. This was expected, since total pretest score (plus or minus 2 raw scores) was one of the variables on which the pairs were matched, but a statistical test was done for verification. Since the pairs were not matched on sub-scores, due to logistical considerations, an analysis of these six scores was completed to determine any differences between the groups. The MANOVA showed that the two groups were not significantly different on any of the six sub-scores (Table 2).

Mean total post-test scores on the OT (Form 10) and the sub-scores for both groups are presented in Table 3. It should be noted that, in comparing mean pretest scores on Form 8 of the OT to mean post-test scores on Form 10, the two groups showed an average loss rather than the expected gain: .91 points for the CP sample and 4.07 points for the NCP sample.

Table 2
Mean Pretest Scores and Standard Deviations for Matched Pairs of Co-op and Non-Co-op Students (n=232) on Form 8 of the OT

                                         Co-op Students      Non-Co-op Students
                                         Mean      SD        Mean      SD
Total Score                              195.44    11.87     195.11    11.89
Sub-Scores
  Communication                           57.96     5.90      57.39     5.83
  Problem Solving                         78.44     4.91      77.10     5.58
  Values Clarification                    58.89     4.57      59.55     4.53
  Functioning in Social Institutions      64.63     5.01      65.67     5.73
  Using Science & Technology              66.56     4.59      66.67     4.24
  Using the Arts                          63.99     5.97      64.63     5.24

Obviously, this result was surprising both to us and to ACT, who completed the numerical analysis. One would assume that after approximately three years in university, students should improve on a test of this type.

Further investigation into the characteristics of the OT revealed two plausible explanations for the decline in mean score. First, it should be noted that 23.5% of the subjects (n=55) exhibited a decrease in individual total score. As indicated previously, the pretest scores of the larger sample and of the matched pairs sample were much higher than the normative sample data developed by ACT. The average total score of this study's large sample was 194.19 and the mean total score for the matched sample was 195.28. The mean total scores for the reference groups cited by ACT (freshmen and sophomores at four-year institutions) were 173.9 and 178.9 respectively. The same normative score for seniors or graduating students at four-year institutions was 187.8. Therefore the average score of the first- and second-year students in this study was much higher than that of all reference groups cited by ACT, including those with two to three years in university. This fact alone would point to the likelihood of a ceiling effect and resultant nil or small gains on the post-test. ACT has carried out an investigation of a possible ceiling created by high pretest scores. Steele (1989) showed that in a longitudinal study at a large public research university, in the group of entering freshmen who scored very high on the OT (n=51; mean total score=197.8), 11 (22%) of these students showed losses of -1 to -9 points on the post-test. Eighteen (35%) showed above-average gains of 11 to 32 points, resulting in an overall mean gain in the total group of 10.8 points. Although our sample did not exhibit this "balancing" of the mean total score by a subsample of subjects exhibiting gains far above average, the percentage of students who declined from test 1 to test 2 was remarkably similar to the decline rate cited by ACT when subjects score very high on the pretest.
Of course, a simpler explanation may be the statistical phenomenon of regression toward the mean, usually observed when the experimental group produces extreme scores on the pretest dependent variable.
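Regression toward the mean can be demonstrated with a small simulation: when a group is selected for high observed pretest scores, its mean on a retest tends to fall back toward the population mean even when no real change in ability occurs. All distribution parameters below are invented for illustration and are not estimates from this study:

```python
# Simulation of regression toward the mean. Observed scores are modeled as
# true ability plus independent measurement noise; selecting on high pretest
# scores picks up students with lucky noise, which does not recur on retest.
# All numbers here are invented for illustration.
import random

random.seed(42)

N = 2000
true_ability = [random.gauss(175, 10) for _ in range(N)]
pretest = [t + random.gauss(0, 8) for t in true_ability]   # ability + noise
posttest = [t + random.gauss(0, 8) for t in true_ability]  # fresh noise

# Select the top quarter of pretest scorers (an "extreme" group).
cutoff = sorted(pretest)[3 * N // 4]
selected = [i for i in range(N) if pretest[i] >= cutoff]

mean_pre = sum(pretest[i] for i in selected) / len(selected)
mean_post = sum(posttest[i] for i in selected) / len(selected)

# The selected group's retest mean regresses toward the population mean,
# producing an apparent "loss" with no change in underlying ability.
assert mean_post < mean_pre
```

The effect depends only on the pretest-posttest correlation being less than perfect, which is why very high pretest scores, as in this sample, make some individual score declines on retest unsurprising.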

Table 3
Mean Post-Test Scores and Standard Deviations for Matched Pairs of Co-op and Non-Co-op Students (n=232) on Form 10 of the OT

                                         Co-op Students      Non-Co-op Students
                                         Mean      SD        Mean      SD
Total Score                              194.53    12.91     191.04*   12.64
Sub-Scores
  Communication                           58.24     5.47      57.44     5.41
  Problem Solving                         77.15     5.29      75.64*    5.43
  Values Clarification                    60.31     4.61      57.22     4.57
  Functioning in Social Institutions      65.22     5.30      62.70*    4.71
  Using Science & Technology              65.92     4.21      65.34     4.72
  Using the Arts                          63.53     6.59      63.09     6.12

*Between-group difference (p<.05)

A second factor, also described in the ACT literature, may have been responsible for the lack of a higher mean total score in the post-test. Form 10, used for the post-test, was designed by ACT to counteract some cultural bias perceived in previous forms of the OT. Form 10 included more "activities direct(ly) address(ing) issues of fairness and equality on the job for Blacks and Latinos." For the Canadian students, these activities may not be culturally relevant and may have created a situation in which Form 10 was relatively more difficult for these students than Form 8 (the pretest), which did not include this cultural adjustment.

We are confident that these factors were responsible for the lower-than-expected gains and suggest, therefore, that considering differences in gains between groups is not appropriate. We suggest instead that an investigation of the differences in the post-test is meaningful, particularly in light of the rigorous matching of subjects on pretest scores and other influential variables.

Therefore, an ANOVA was applied to the post-test total OT scores for the matched pairs group and a significant difference was found between the CP and NCP groups, F(1, 232)=5.44, p<.05. A MANOVA was applied to the differentiated sub-scores, with a difference found at the p=.056 level. Given the significance of the univariate analysis and the homogeneity of this matched subsample, we took license to investigate the sub-score differences further. Significant differences between groups emerged in the sub-scores of Functioning in Social Institutions, F(1, 232)=5.38, p<.05, and Problem Solving, F(1, 232)=4.61, p<.05. The remaining four sub-scores were not significantly different.

Although entry GPA was not used as a matching variable, an analysis of pre- and post-study GPAs in the two matched-pair groups was completed. Interestingly, the average GPA of the CP group at entry on a 9-point stanine scale was 6.27 (s.d.=1.40) and for the NCP group, 5.03 (s.d.=2.26), even though they scored the same on the OT pretest. Post-test data showed that the CP group had an average GPA of 6.19 (s.d.=1.53), slightly lower than the pretest level, and the NCP group had a mean GPA of 5.25 (s.d.=2.36), an increase over the pretest GPA for this group. From a statistical point of view, at pretest the CP group had a significantly higher GPA than the NCP group, F(1, 232)=8.93, p<.01, and this relationship did not change at post-test. However, it should be noted that, within groups, there was no statistically significant change in GPA. The advantage to the CP group of the higher GPA at entry did not manifest itself in the pretest OT scores. The higher mean post-test total score for the CP group as compared with that of the NCP group cannot, therefore, be attributed to a correlation with a higher mean GPA on the post-test.

The post-test scores of the OT for the large sample (n=582) were also subjected to an ANOVA. A significant difference was found between groups, F(1, 580)=10.51, p<.05, with the CP group scoring higher than the NCP group. It should be recalled, however, that there was a significant difference on the OT between groups using the larger sample (both the original 999 and the sample of 582 available at the post-test stage) at the pretest stage, and therefore any difference, without the control for entry level and other variables that may influence academic progress, cannot legitimately be said to result from the type of program in which the student participated.

Discussion

This study sought to document, using rigorous research methodology, the educational benefit of participation in a cooperative education program to which previous cooperative education literature has alluded.

Educational or academic benefit is a very broad term and, as Sternberg (1993) proposes, it is multifaceted. This view is endorsed by many other educational theorists. Researchers in education have found measuring progress in the educational domain difficult, and one may have to settle for specifying one or two aspects of academic progress in order to say anything at all about the impact of one educational model compared with another. In choosing the OT version of the COMP over other instruments, we were confident that we were measuring some aspects of educational progress that related to the acquisition and application of knowledge and that the sub-scores of the test tapped into those content and process areas valued as outcomes of post-secondary education. As we interpreted the OT as measuring outcomes that would result from the interactive educational model proposed by Branton et al. (1992), it appeared compatible with the theoretical perspective that was the impetus for this research.

A second difficulty faced by education researchers is the choice of a design for their research efforts. With the growing acceptance of qualitative research in many areas previously dominated by the scientific method and its quantitative results, the limitations of rigorously controlled research in revealing the detailed richness of many phenomena related to human behavior are becoming more apparent. The choice of the quantitative matched pairs design in this study was necessitated, in our view, by the need to establish a base for the measurement of educational benefits given the lack of control and methodological problems characteristic of previous research on this topic. Our pretest data on the large group of 999 subjects confirmed that co-op students were different in academic preparedness from the regular students. It was our opinion that controlled quantitative research could give us some "broad stroke" information on the educational phenomenon in cooperative education. This type of research may open the door for alternate methods of examining this phenomenon that may be more sensitive to individual differences and may provide insights into the breadth of educational progress due to participation in co-op. Having stated these a priori choices and the assumptions behind them, we still must deal with the limitations in this research and draw some implications from the process and the outcome.

The results from the OT are equivocal due to the lack of gain over time, but they did show a statistically significantly better performance on the post-test by the CP group as compared with the NCP group. We feel that this is a reasonable indicator of the benefit of participation in cooperative education programs, particularly if the significantly different sub-scores are considered. The benefit to the CP group was in the sub-scores of Problem Solving and Functioning in Social Institutions. In a recent paper, Van Gyn (1994a) outlined the potential advantage of the cooperative education curriculum model for enhancing the process of problem solving. The opportunity to use the practical problems confronted in a work term as a basis for authentic problem solving during an academic term was identified, as it has been by other authors, as an educationally beneficial way of gaining and maintaining knowledge. The results of the present study give some support to this explanation of the benefit of cooperative education.

In a similar vein, the triarchic model of intelligence proposed by Sternberg (1993) suggests that intelligence can be understood as the application of components of information processing to varying levels of experience. One of the foundations of this model of intelligence is tacit knowledge, defined as the knowledge base that enables us to operate in the everyday world. Tacit knowledge is also recognized as providing the basis for the appropriate application of formal knowledge such as a student may gain in a university degree program (Sternberg and Wagner, 1992). Williams and colleagues (1993) have shown that a co-op experience of as little as five months has a demonstrable and measurable impact on the tacit knowledge base of students. This may account for the effect demonstrated by the higher score of the CP group on the post-test of the OT and in the specific sub-scores; that is, the CP group, with greater tacit knowledge, may be able to apply the academic content gained in their program in a more appropriate manner.

Because of the significant but small differences between the two populations in the post-test, and considering the instrumentation anomalies, the results of the study are not strong enough to state with a high degree of confidence that cooperative education is a more effective educational model than the regular program. However, there is sufficient evidence to warrant continued study of the educational efficacy of cooperative education, particularly from the educational perspectives cited in the co-op literature that are in keeping with contemporary thought on learning and curriculum design (Branton et al., 1990; Heinemann, 1983; Sternberg and Wagner, 1992; Van Gyn, 1994a). The significance of the quantitative differences between groups on the post-test scores creates a basis for further qualitative and quantitative research that may reveal the subtle differences between co-op and regular program models in the various aspects of learning.

Implications for Further Research

Significant lessons were learned from this research and, as has often been stated, "hindsight is 20/20." The most relevant issues for future research on educational outcomes in cooperative education have to do with operational definitions, standardized instrumentation and choice of methodology.

Our main interest in pursuing this line of research was to identify the educational benefits associated with cooperative education. The first major problem lies in the definition of these benefits. The question that must be answered is, "What are the results of learning within a cooperative education environment?" In the last ten years, it has become quite clear that learning means much more than "knowing what" and must include knowing why and how. Processes such as critical inquiry and problem solving have been shown to be essential learning outcomes. Learning to learn has become as important as, if not more important than, the mastery of knowledge. The work of Sternberg and his colleagues is typical of the shift in the focus of understanding what the student can learn in an educational setting. Without expanding further on the principal dilemma of research into educational benefits, it is sufficient to say that the broadened view of educational outcomes presents us with the challenge of identifying the various potential outcomes associated with co-op. However, it also presents us with the opportunity to support, in a variety of ways, what John Dewey theorized was educational and important about experiential learning.

The second issue that has implications for future research is the choice of measurement tools. The hard lesson we learned was that population-specific measurement tools may tell you something about one group, but if your target group is in any way different, they may not tell you the same thing. This is a basic research lesson that we already knew but unfortunately had to experience to really believe. Our conclusion is that standardized tests of learning outcomes are not useful given the current diversity of students in cooperative education programs worldwide. If the researcher can, as we have, account for the anomalous outcomes created by the diversity or difference from the normative population on which the test has been standardized, some insight into the benefits of the program can be gained. However, even these fairly superficial insights are "muddied" by statistical issues and unresolved variations from the expected outcome. These research concerns led us to a third major issue in this type of educational research, namely, the issue of methodology. It has become obvious to us, over the course of this research, that there is less need to generalize findings regarding learning benefits and more of a need to examine specific educational settings in order to make conclusions and recommendations that have meaning and relevance. We are not trivializing the research results of our six years of work, as they have emphasized to us the robustness of the educational effects of cooperative education. However, the work has served to make clear that student diversity must be accounted for in documenting and understanding learning outcomes. This is clearly illustrated in the model suggested by Branton et al. (1990). In this very basic model of learning, the state of the learner is a main variable, along with the state of the learning environment and the learning outcomes. All these variables interact, and all must be understood in order to say anything meaningful about learning. We now understand our own model much better.

The two previous issues of definition and instrumentation lead us to the final issue. We would like to make a strong argument for future research to employ alternatives to the traditional quasi-experimental methodology used in most research on educational outcomes. In the traditional approach to research, the attempt is to account and control for the variations in the samples and settings. In doing so, we may "wash out" the very elements that are important to the specific samples and settings in defining and understanding learning outcomes. The necessity of being able to generalize educational research findings, which is part of the foundation of experimental research, has been significantly diminished by the wide variations in program design, educational delivery systems, and student socioeconomic and cultural characteristics. To assess benefits and assign cause within an educational program, the characteristics of that program and its participants must be acknowledged.

As suggested in the discussion, the more in-depth and specific results gained from a variety of qualitative methodologies may be of more value to us in examining our current programs and designing new programs to meet the needs of changing educational environments, changing profiles of students and changing workplaces. Through a comparison of the in-depth profiles and outcomes of a variety of cooperative education programs, we may be able to see the commonalities while also assessing the differences, which are equally important. The status of qualitative research has grown immeasurably in the past decade. This presents us with a viable alternative method of examining the impact of cooperative education. Our experiences in the previously reported research and in other research on cooperative education have strengthened our belief in the potential contributions of alternative paradigms. Building on the direction provided by previous research, the results of the application of alternative research methodologies may give us a clearer and more meaningful view of the impact of the cooperative education model.