Internal Consistency and Factor Analysis of a Work Performance Measurement Instrument

Kettil Cedercreutz, Ph.D.
Associate Provost, Division of Professional Practice, University of Cincinnati
J. Joseph Hoey IV, Ed. D.
Vice President for Institutional Effectiveness, Savannah College of Art and Design
Cheryl Cates, MBA, ABD
Associate Director, Division of Professional Practice, University of Cincinnati
Richard Miller, Ph.D.
Professor, Department of Civil and Environmental Engineering, University of Cincinnati
Catherine Maltbie, Ed. D.
Research Associate, College of Education, Criminal Justice, and Human Services,
University of Cincinnati
Marianne Lewis, Ph.D.
Associate Professor, Department of Management, University of Cincinnati
Anita Todd, BME, M.Ed.
Associate Professor, Division of Professional Practice, University of Cincinnati
Tom Newbold, M.Ed.
Associate Professor, Division of Professional Practice, University of Cincinnati

Abstract

This article describes a section of a research project entitled Development of a Corporate Feedback System for Use in Curricular Reform (CFCR), supported by the US Department of Education's Fund for the Improvement of Postsecondary Education (FIPSE). The research, pursued at the University of Cincinnati (UC) in 2004-2008, aims at implementing an assessment cycle that can be used to test curricular efficacy in the context of work. The publication at hand validates preliminary findings regarding the internal consistency and underlying factors of the main constructs of a generic, faculty-developed instrument applied to the assessment of co-op performance of students enrolled in a Civil and Environmental Engineering program. A secondary example of factor analysis for students enrolled in the College of Business illustrates the importance of factor analysis in a practical feedback situation.

Index Terms - postsecondary, assessment, factor analysis, instrument development, civil and environmental engineering

In 2004 the University of Cincinnati (UC) was awarded a US Department of Education Fund for the Improvement of Postsecondary Education (FIPSE) grant for a proposal titled Development of a Corporate Feedback System for Use in Curricular Reform (CFCR). The duration of the grant is three years. The agenda of the CFCR project was developed by UC faculty together with the Accreditation Council for Cooperative Education (ACCE), which volunteered its accredited schools as a reference and dissemination group for the project. The cooperation with ACCE has resulted in a development process that takes into account the needs of nine institutions engaged in Cooperative Education: Bowling Green State University, Case Western Reserve University, Georgia Institute of Technology, Georgia Southern University, Mississippi State University, North Carolina State University, University of Alabama at Huntsville, University of Central Florida, and University of North Texas. The CFCR project first tests the co-op based curriculum development and reform system at UC before transferring the process for application in reference institutions (Cedercreutz et al., 2005). The budget of the research project is presented in Table 1 (Appendix A).

The research project is set in a strong tradition of cooperative education (Cedercreutz et al., 2002). In 1906, Herman Schneider, the Dean of the Engineering College at the University of Cincinnati, pioneered cooperative education (Park, 1916, 1943). By 2008, the University of Cincinnati had grown into a high-impact research university offering Cooperative Education in four colleges: the College of Engineering (CoE); the College of Design, Architecture, Art and Planning (DAAP); the College of Business (CoB); and the College of Applied Science (CAS). The cooperative education curricula follow an alternating sequence of on-campus courses and co-op work experiences, as presented in Figure 1 (Appendix A).

As students enter the co-op program, they are divided into two sections: Section I and Section II. When Section I is on work assignment, Section II is attending on-campus courses, and vice versa. The alternating co-op schedule allows the 1,500 employers with which the University cooperates to assign students productive and meaningful work assignments. Typically a student is required to complete six quarters of co-op over three years of undergraduate study. Approximately 4,000 students, enrolled in more than 40 programs, participate in the UC co-op program each year. Employer evaluations provide in excess of 200,000 data points annually. Starting in 2004, this data has been entered directly into the Professional Assessment and Learning (PAL) database developed by Division of Professional Practice (PP) faculty.

The Division of Professional Practice had traditionally used a paper-based assessment system before moving to the web-based PAL system. The web-based system was implemented to enhance user friendliness, streamline data collection, and allow flexible production of reports on learning outcomes. The aim of the FIPSE grant is to develop a corporate feedback system for use in curricular reform. The term curricular reform is defined here as a change in the curriculum whose objective is to enhance student learning. The change can be pursued either by altering the material the curriculum covers or by altering the methodology used to cover the material. In the latter case the change is referred to as a pedagogic change.

The feedback system is built around the tradition of having the supervisors of the students assess student performance during each co-op work term. The project strives to elevate this assessment to a novel level by using student co-op work performance data for continuous curriculum improvement. The work is pursued in the tradition of Dr. W. Edwards Deming, whose work laid the groundwork for continuous improvement in contemporary industry worldwide (Deming, 1943, 1948, 1982, 2000). A schematic view of the proposed feedback structure is presented in Figure 2 (Appendix B). Similar structures have been implemented successfully at Georgia Institute of Technology (Hoey & Nault, 2002; Hoey et al., 2002) and Iowa State University (Hanneman et al., 2002). The goal of the work is to develop an effective assessment strategy that is: a) goal-oriented; b) reasonably accurate; c) used; d) valued; e) cost-effective; f) direct; g) blind; and h) contextual, as defined by leading authors on continuous improvement in higher education (Banta, 2002; Ewell, 2002; Suskie, 2002, 2004, 2006).

Further, the research project builds a triangulated feedback loop based on assessment data gathered using three different methodologies. Figure 3 gives an overview of how these methodologies (qualitative instruments, quantitative instruments and focus groups) are linked together to form an organic whole (Appendix B).


Research Objectives

The objective of this paper is to analyze to what extent an instrument developed by faculty to assess student performance on a one-on-one basis can be used to analyze the behavior of a specific population. The research dovetails with the ambition of a number of universities to use assessment data to enhance the efficacy of curricula. The principle of outcomes-based assessment is strongly promoted by a number of accreditation bodies, such as the Accreditation Board for Engineering and Technology (ABET) and the Canadian Association for Co-operative Education (CAFCE) (Accreditation Board for Engineering and Technology, 1995; Conference Board of Canada's Corporate Council on Education, 1992; Canadian Association for Co-operative Education, 1996). Whenever degree-granting departments want to improve their curriculum, and measure the outcomes through the assessment of student work in an industrial context, it is important to understand the characteristics and limitations of the applied assessment instrument. This publication subjects the assessment instrument used in the FIPSE-funded project Development of a Corporate Feedback System for Use in Curricular Reform to both a factor analysis and a measurement of internal consistency. The principle of both methodologies is described in relative detail, in order to give the cooperative education practitioner an understanding of these analysis principles in a cooperative education context. The paper will form a reference for future publications on co-op based work performance measurement.

In a one-on-one situation, a co-op advisor can discuss an evaluation with a student and go much deeper into a specific employment situation. For this project we are analyzing a cohort of students based on a large amount of data. The questions that arise in a cohort situation are: a) whether the instrument properly measures the constructs originally defined by faculty, and b) whether other constructs exist, with higher internal consistency, that the instrument might be better suited to measure. Question a can be answered by calculating a measure of internal consistency using Cronbach's alpha, whereas question b can be answered by pursuing a factor analysis. Only when these two forms of analysis have been completed, and the internal consistency of the redeveloped constructs has been measured, can the researcher effectively engage in a discussion of what the measured results actually mean. A well-performed Cronbach's alpha analysis and factor analysis will help the research team use the collected data to enhance the efficacy of the curriculum.

The paper at hand covers, on a primary level, the validation of Assessment Instrument I in a civil and environmental engineering context, as well as the establishment of a correlation matrix that illuminates those variables that are closely related to co-op student performance. The primary objective of the research is to determine the instrument's consistency in measuring the underlying constructs of: a) communication, b) conceptual and analytical ability, c) learning/theory and practice, d) professional qualities, e) team work, f) leadership, g) technology, h) design and experimental skills, i) work culture, j) organization planning, and k) evaluation of work habits. The secondary objective of the research was to illuminate the interrelationships between the 41 variables of Assessment Instrument I in order to determine a subset of underlying constructs that more thoroughly explain the performance of civil and environmental engineering co-op students. The assumption is that these factor-analysis-developed constructs could become more useful than the original constructs developed through a faculty governance process. On a secondary level, the paper illustrates how the use of factor analysis is able to provide meaning in the data when the results based on the original constructs were confusing to the faculty member seeking pedagogical change in the classroom.

Methodology

The research focuses on mapping the internal consistency and the potential underlying factors of Assessment Instrument I, as presented in Table 2 (Appendix C). Assessment Instrument I measures 41 parameters related to cooperative education work performance.

Assessment Instrument I was developed during the 1990s by the Division of Professional Practice faculty in accordance with the academic governance processes of the unit. Professional Practice is a centralized academic unit of the University of Cincinnati responsible for cooperative education offered by the institution. The work of this faculty took into account the pedagogic ambitions of the Division; the accreditation requirements of regional, national, and professional accrediting bodies; as well as central US Department of Labor publications (Cates & Jones, 1999; Cates & Langford, 2006; Langford & Cates, 1996; Secretary's Commission on Achieving Necessary Skills, 1991). The instrument development work explicitly considered the Curriculum 2000 report of the Accreditation Board for Engineering and Technology (ABET) (Accreditation Board for Engineering and Technology, 1997), the attributes of the Accreditation Council for Cooperative Education (ACCE), the accreditation requirements of the Canadian Association for Co-operative Education (CAFCE) (Conference Board of Canada's Corporate Council on Education, 1992; Canadian Association for Co-operative Education, 1996), and the outcomes-based accreditation requirements of the North Central Accreditation Agency (NCAA) (The Higher Learning Commission, 1997). The five-point Likert scale (where 1=Unsatisfactory; 2=Poor; 3=Satisfactory; 4=Good; and 5=Excellent) was adopted by the Division of Professional Practice faculty, as the scale was commonly used in industry and served the faculty's student assessment needs. Since its inception in the mid-1990s, the instrument has worked to the faculty's satisfaction when grading co-op work term performance of individual students. Anecdotal information suggests that employers use the instrument with integrity and without systematic bias when working to convey their observations and expectations to the student and the co-op faculty advisers.

Relevance of Measured Parameters

The relevance of the measured parameters was tested by the UC Evaluation Services Center using a web-based survey that ran for one quarter in conjunction with the regular co-op assessment cycle (Maltbie et al., 2006). Some 504 employers returned the survey, which corresponds to an approximate return rate of 30%. Table 2 shows the results of the survey for the population at large, as well as for one specific program, Civil and Environmental Engineering (CEE). The CEE data is based on 26 returns. The table shows that 97.5% of the parameters are deemed important by the majority of supervisors at large. The corresponding ratio is 95% for civil and environmental engineering supervisors. Quality of Work Produced, Volume of Work Produced, Attendance, and Punctuality are considered less important by 60% of both populations.

The meaning conveyed by the instrument has been studied using focus groups consisting of four to six employers for each program (Maltbie et al., 2006). The focus groups were conducted by the UC Evaluation Services Center for all programs participating in the CFCR project, namely Architecture, Civil and Environmental Engineering, Accounting, and Information Systems. The focus group findings were consistent with the findings gathered through the web-based survey described above.

Internal Consistency

The Cronbach alpha coefficient (Cronbach, 1943, 1946, 1953, 1988, 1990) is used to quantify the internal consistency of an instrument in a specific environment. The theory of internal consistency aims to quantify how measured parameters relate to underlying hypothetical constructs. Consider an example from Assessment Instrument I: Parameter B, Conceptual and analytical ability, is measured using variables B1) Evaluates situations effectively, B2) Solves problems/makes decisions, B3) Demonstrates original and creative thinking, and B4) Identifies and suggests new ideas. In the context of internal consistency, Parameter B is called a hypothetical construct, as it is not directly observed but assessed using four separately measured variables. The Cronbach alpha algorithm measures, in essence, the balance between the variability of different sections of an instrument. This is done by splitting the data of all measured parameters into two subgroups and comparing the results; the algorithm measures to what extent these different split halves of the same instrument return the same average value (Cronbach, 1943, 1946, 1953, 1988, 1990). The parameters B1, B2, B3, and B4 can be split symmetrically into two groups in three iterations:

- Iteration I: Split Half A: B1 and B2; Split Half B: B3 and B4
- Iteration II: Split Half A: B1 and B3; Split Half B: B2 and B4
- Iteration III: Split Half A: B1 and B4; Split Half B: B2 and B3

The Cronbach alpha algorithm gives a measure of the consistency between all plausible split halves of an instrument. In the case of four parameters the methodology is relatively simple; as the number of measured parameters increases, the number of permutations explodes, which necessitates the use of dedicated software. The value of Cronbach alpha can theoretically vary between 0 (no consistency) and 1 (perfect consistency). Values between 0.7 and 0.9 are typically considered good. Cases have been made challenging Cronbach's alpha, which is largely empirically validated and sample-size dependent. Even though theoretical limitations can be shown in the Cronbach alpha algorithm, the method remains an international benchmark (Vehkalahti, 2000) fully suitable for scrutiny at this particular level.
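
To make the computation concrete, the sketch below implements the standard Cronbach's alpha formula, alpha = k/(k-1) x (1 - (sum of item variances)/(variance of the summed score)), for a single construct. This is a minimal illustration, not the SPSS procedure used in the study; the six rows of Likert ratings on items B1 through B4 are hypothetical.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-5 Likert ratings on items B1-B4 from six employer evaluations.
construct_b = np.array([
    [4, 4, 3, 4],
    [5, 5, 4, 4],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 4, 5, 5],
])
print(f"Cronbach's alpha for construct B: {cronbach_alpha(construct_b):.2f}")
```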

Factor Analysis

Factor analysis aims at developing a limited set of constructs that have high internal consistency. In more formal language, factor analysis can be defined as a methodology that aims at identifying so-called eigenvectors defined in the hyperspace formed by the individual measured dimensions (Cattell, 1946, 1973; Spearman, 1923, 1927, 1931). Factor analysis is in essence the opposite of the internal consistency analysis described in the Cronbach alpha section above. In factor analysis the researcher submits the data to the system, which, after a variety of permutations, returns a set of more or less independent factors. The mathematical analysis is pursued in a space that has as many dimensions as the instrument has measured parameters. Due to the multidimensional nature of the parameter space, the process is exceedingly difficult for the human mind to visualize. The purpose of this analysis, however, is to find the relationships between variables. In factor analysis, meaningful factors are found by rotating the coordinate system set by the different dimensions, using either orthogonal rotations (which return independent factors) or oblique rotations (which return dependent factors). Due to the complexity of the exercise, the use of commercially available rotation algorithms is necessary. The rotation in this paper was pursued using the Quartimax methodology.
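
As a rough sketch of the extraction-and-rotation idea, the code below pulls initial loadings from the correlation matrix and then applies an orthomax rotation, where the parameter gamma = 0 yields a Quartimax rotation and gamma = 1 a Varimax rotation. This is an illustrative approximation under simplified assumptions, not the StatiXL procedure used in the study; the function names are the author's own.

```python
import numpy as np

def extract_loadings(data: np.ndarray, n_factors: int) -> np.ndarray:
    """Principal-component-style extraction of initial factor loadings
    from the correlation matrix of an (n_observations x n_variables) data set."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]                # sort eigenvalues, largest first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

def orthomax(loadings: np.ndarray, gamma: float = 0.0,
             max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Orthogonal rotation of a loading matrix.
    gamma = 0.0 gives Quartimax; gamma = 1.0 gives Varimax."""
    p, k = loadings.shape
    rotation = np.eye(k)
    objective = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the orthomax criterion with respect to the rotation.
        target = rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        if s.sum() < objective * (1 + tol):          # criterion stopped growing
            break
        objective = s.sum()
    return loadings @ rotation
```

In this sketch, a Quartimax solution for, say, four factors would be obtained with orthomax(extract_loadings(data, 4), gamma=0.0).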

Software

The data for the analysis was collected using the PAL database system developed by the Division of Professional Practice. This database allows the Division to collect data from both co-op students and employers, which can then be queried for a host of research projects. The internal consistency analysis was pursued using SPSS 14.0 for Windows. The factor analysis was pursued using StatiXL 14.0®.

Research Findings

The following analysis was based on 390 electronic returns of Assessment Instrument I collected through the PAL system over 10 consecutive co-op quarters (Winter quarter 2004 - Spring quarter 2006). The 390 returned evaluations represent 65.9% of the total population of 592 recorded Civil and Environmental Engineering co-op quarters.

Internal Consistency of Instrument Constructs

The internal consistencies of constructs A through K are presented in Table 4 (see Appendix E).

Factor Analysis

After careful consideration, Assessment Instrument I was subjected to a factor analysis excluding constructs F and H. The exclusion was based on the relatively low rate of data return in these categories. The research focused on finding independent factors; the rotations were therefore limited to orthogonal rotations. In order to find a set of underlying independent factors, the data was subjected to both Varimax and Quartimax rotations, with the analysis set to retain factors that contributed to the total variance with the magnitude of at least one eigenvalue. The objective of the researchers was to break the data set down into constructs that are, as far as possible, independent from each other. The choice between Quartimax and Varimax rotations was based on a pragmatic analysis: Quartimax rotation returned constructs that were relatively easy to name and communicate in the context of cooperative education, while Varimax returned constructs that did not correspond to an obvious dimension of work performance. Table 5 shows the aggregation of variance as a function of factors (see Appendix F).
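
The eigenvalue-based retention rule and the variance aggregation reported in Table 5 can be sketched as follows. This is an illustration of the general technique only; `evaluations` stands in as a hypothetical (n_students x n_items) rating matrix.

```python
import numpy as np

def kaiser_retention(data: np.ndarray) -> int:
    """Report each factor's share of total variance and count the factors
    whose eigenvalue exceeds 1.0 (the eigenvalue-one retention rule)."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # largest first
    explained = eigvals / eigvals.sum()                 # proportion of total variance
    for i, (ev, share) in enumerate(zip(eigvals, explained), start=1):
        print(f"Factor {i:2d}: eigenvalue {ev:5.2f}, {share:6.1%} of variance")
    return int((eigvals > 1.0).sum())

# n_retained = kaiser_retention(evaluations)
```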

The structure presented in Table 5 is dominated by a single factor that contributes 57.2% of measurement variance. The scree plot in Figure 4 shows an almost linear distribution of all factors beyond factor 1 (Appendix F).

The four most dominant individual factors, sorted according to their major underlying contributors, are reported in Table 6 (Appendix G).
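
A per-factor ranking of the kind reported in Table 6 can be produced by sorting the absolute rotated loadings; the helper below is a hypothetical sketch, with `item_labels` standing in for the retained instrument items.

```python
import numpy as np

def top_contributors(loadings: np.ndarray, item_labels: list[str], n_top: int = 5) -> None:
    """For each factor, print the items with the largest absolute loadings."""
    for factor in range(loadings.shape[1]):
        ranked = np.argsort(np.abs(loadings[:, factor]))[::-1][:n_top]
        print(f"Factor {factor + 1}:")
        for idx in ranked:
            print(f"  {item_labels[idx]}: {loadings[idx, factor]:+.2f}")
```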

Parameters such as manages projects and/or other resources effectively; volume of work produced; recognizes political/social implications of actions; shows initiative/is self-motivated; professional attitude toward work assigned; and sets goals and prioritizes are among the top contributors to factor 1. The factor also includes many ethically oriented contributors, such as respects diversity; possesses honesty/integrity/personal ethics; and assumes responsibility/is accountable for actions. The authors of this paper have chosen to call factor 1 Goal-Oriented Professional Contribution. It is by far the most dominant factor, and it is intuitively easy to understand that scoring high on the underlying parameters contributes directly to the added value the employer is set to generate. The parameters identifies and suggests new ideas and demonstrates original and creative thinking form the major contributors to factor 2, labeled by the authors as Creative Thinking. The factor has an inverse relationship to the other main factors, which indicates that scoring high in creative thinking unfortunately has a negative correlation with the score received on factors 1, 3, and 4. In a civil and environmental engineering context it is no surprise that items such as understands the technology of the discipline; uses technology, tools, and information; and understands complex systems & interrelationships form a third distinct factor, labeled Technology & Systems Expertise. Punctuality and attendance form the fourth, very distinct factor, labeled Punctuality and Attendance.

While this information is of particular interest to the Civil Engineering faculty at the University of Cincinnati, it is important to note that factor analysis in and of itself presents constructs for further discussion and interpretation. The authors have proposed several plausible explanations for the various factors, but the significance of the methodology lies less in the definitive answers it provides and more in its potential to lead to further research. In dealing with an issue as complex as cooperative education, as with any social science data regarding workplace performance, there must be many efforts to look for the underlying factors that contribute to performance. The research to date on the FIPSE project has identified factor analysis as a pivotal method for the co-op community to undertake in our efforts to truly understand what co-op performance data really means.

Internal Consistency of Developed Constructs

Table 7 (Appendix H) further presents a calculation of the Cronbach alpha coefficient based on the factors defined by the factor analysis. Table 7 shows a slightly increased level of internal consistency as compared to the constructs presented in Table 6, which indicates that the refinement of the data is on a positive trajectory.
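
As a usage note, the regrouped constructs can be re-checked with the same Cronbach's alpha computation sketched in the Internal Consistency section; the column indices below are hypothetical placeholders for the items that load on a given factor.

```python
# `evaluations` is an (n_students x n_items) matrix of Likert ratings, and
# `cronbach_alpha` is the helper sketched in the Internal Consistency section.
# The column indices are hypothetical placeholders for items loading on factor 1.
factor_1_items = evaluations[:, [0, 5, 11, 17, 23, 30]]
print(f"Cronbach's alpha for the regrouped construct: {cronbach_alpha(factor_1_items):.2f}")
```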

Use of Factor Analysis to Clarify Pedagogic Reform Data

While the research dovetails with the ambition of a number of universities to use assessment data to enhance the efficacy of curricula, such data can only be as effective as the meaning derived from it. Factor analysis is a methodology that can be used to regroup constructs into more meaningful clusters. The tables below give an overview of a puzzling problem solved during the FIPSE project using factor analysis: a before-and-after study of a comparative student cohort showed a perplexing development. Table 8 (Appendix H) shows the analysis before a factor analysis, while Table 9 (Appendix I) reflects the same results after the use of factor analysis.

The analysis based on predetermined constructs shows a mixed bag of results. Even though the internal consistency is consistently above 0.80, the positive and negative parameters are distributed over a number of constructs. Regrouping the parameters using factor analysis brings the internal consistency of all constructs but one above 0.90. All parameters having a negative development are now clustered under one relatively independent construct. The new grouping helps explain the development of the students more effectively. This was critical to the faculty member who had implemented a pedagogic change designed to enhance students' skills in communication, teamwork, and leadership. Once the factor analysis was introduced into the discussion, the meaning in the data became clear, whereas prior to the use of this technique the situation was quite problematic for both the faculty member and the co-op practitioner.

Discussion

When assessing the results of the internal consistency analysis and the factor analysis, one must bear in mind a number of things. The original constructs of: a) communication, b) conceptual and analytical ability, c) learning/theory and practice, d) professional qualities, e) team work, f) leadership, g) technology, h) design and experimental skills, i) work culture, j) organization planning, and k) evaluation of work habits were developed through a faculty governance procedure, with the aim of assessing individual students in conjunction with personalized on-site or off-site advising. The internal consistency of these original constructs has been determined to meet acceptable standards for use through the application of a Cronbach alpha test, which demonstrated Cronbach alpha coefficients of .81 to .94. A Cronbach alpha coefficient of 0.80 or better is typically considered to be a strong indicator of instrument internal consistency (Vehkalahti, 2000). The two constructs that return the highest internal consistency are B) Conceptual and Analytical Ability and J) Organizational Planning. The Cronbach alpha shows that a very high internal consistency exists between the measured parameters within these two groups.

The factor analysis further shows that a substantially higher internal consistency can be reached by redesigning the constructs. Using these constructs allows the researcher to move from faculty-designed constructs to scientifically defined constructs having an even higher internal consistency. In the case of Civil and Environmental Engineering, the majority of the variance can be attributed to a single factor, Goal-Oriented Professional Contribution (Cronbach alpha 0.98). This factor embodies the entire philosophy of Cooperative Education, which works to socialize individuals into contributing members of a working environment. The face value constructs, as well as the constructs generated through factor analysis, can be considered valid in view of the context in which they are used. The faculty-developed constructs are valuable in the advising of individual students, whereas the factor-analysis-generated constructs bring important information with regard to the sensitivity of the instrument as it relates to the underlying eigenvectors. Factor analysis also gives scientists an understanding of complex interrelationships between large numbers of imprecisely measured variables, which is typical of most co-op assessment instruments. Factor analysis gives a picture of the underlying dynamics of this particular co-op assessment instrument. It reveals that a parameter such as Sets Goals and Prioritizes is a high contributor to the most dominant factor, Goal-Oriented Professional Contribution. The analysis further reveals the role of a parameter as a contributor to a specific eigenvector. In a pedagogic context, however, the person responsible for curriculum development is typically more interested in quantifying the changes of underlying parameters. Throughout the FIPSE research project it has become apparent that the resistance of the organization to external assessment requires extensive management. Only when faculty responsible for curriculum development and reform are assured that the data presented is a reliable source of information will they be willing to pursue appropriate revisions to the curriculum. Only when questions regarding the meaning found within the data can be further investigated, using techniques such as factor analysis, can resistance give way to understanding.

Conclusions and Suggestions for Further Research

The research reported here suggests that the faculty-developed assessment instrument has excellent internal consistency when used to measure constructs such as communication, conceptual and analytical ability, learning/theory and practice, professional qualities, team work, leadership, technology, design and experimental skills, work culture, organization planning, and evaluation of work habits, as defined by Professional Practice faculty. The instrument is further an excellent tool for the measurement of Goal-Oriented Professional Contribution in Civil and Environmental Engineering, as specified via the factor analysis. For the latter construct the instrument yields Cronbach alpha coefficients as high as 0.98. The factor analysis further reveals that creativity as a construct is inversely oriented as compared to Goal-Oriented Professional Contribution. The findings of this work verify that the instrument is effective in measuring Goal-Oriented Professional Contribution in the context of cooperative education in civil and environmental engineering.

Further investigation using factor analysis, as demonstrated in this paper, can become critical in helping to understand the impact of curricular reform by focusing on the underlying constructs in each unique discipline or discipline cluster. In order to validate the instrument, it might be beneficial to analyze how the specified constructs hold up in different academic environments. Answering this question would give co-op practitioners an understanding of how effective the instrument is in measuring co-op performance in a multitude of environments and within a multitude of unique curricular reform efforts. Another interesting question awaiting further discussion is how accurate the instrument is in measuring one underlying dimension. This could be addressed by mapping the interrater reliability of the instrument when used in a multitude of environments. One of the most significant areas for further research lies in the application of the presented methods to assessment instruments used by other institutions engaged in cooperative education. The analyses may serve as an important step in creating a dialogue regarding work performance based reform of cooperative education curricula.

In summary, the results of the two statistical methods described in this paper have a lot to offer in the cooperative education based assessment of student work performance for institutions aspiring to use the information for curricular reform. Cronbach's alpha provides a well-established statistical measure of the internal consistency of the instrument being used. The factor analysis gives the reader an understanding of how factors having a high internal consistency were found, and how in one case this technique proved essential in understanding pedagogic reform. The paper validates the internal consistency of the face value constructs, and of the constructs found through factor analysis, giving the reader an opportunity to pursue credible assessment-based curricular reforms in an environment of higher education. Once the internal consistency of specific measured factors has been assessed, a more rewarding dialogue, focusing on the interpretation of the data itself, can be pursued. This paper should be seen as a part of the discussion leading to this dialogue.