Types of evaluation


1941, Encyclopedia of Educational Research, Macmillan, NY

Evaluation is a relatively new technical term, introduced to designate a more comprehensive concept of measurement than is implied in conventional tests and examinations. Monroe has distinguished between measurement and evaluation by indicating that the emphasis in measurement is upon single aspects of subject-matter achievement or specific skills and abilities, but that the emphasis in evaluation is upon broad personality changes and major objectives of an educational program… evaluation involves the identification and formulation of a comprehensive range of the major objectives of a curriculum, their definition in terms of pupil behavior, and the selection or construction of valid, reliable, and practical instruments for appraising the specified phases of pupil behavior (pp. 404-5, 1950 edn).

 


1972, Malcolm Parlett and David Hamilton, ‘Evaluation as illumination: a new approach to the study of innovatory programmes’, reprinted in Hamilton et al., Beyond the Numbers Game, Macmillan, Basingstoke, 1977

… within educational research two distinct paradigms can be discerned. Each has its own strategies, foci and assumptions. Dominant is the ‘classical’ or ‘agricultural-botany’ paradigm, which utilises a hypothetico-deductive methodology derived from experimental and mental-testing traditions in psychology… More recently, a small number of empirical studies have been conceived… and relate instead to social anthropology, psychiatry and participant observation research in sociology… The most common form of agricultural-botany type evaluation is presented as an assessment of the effectiveness of an innovation by examining whether or not it has reached required standards of pre-specified criteria. Students – rather like plant crops – are given pre-tests… and then submitted to different experiences… Subsequently… their attainment (growth or yield) is measured… Studies of this kind are designed to yield data of one particular type, i.e. ‘objective’ numerical data…

illuminative evaluation takes account of the wider contexts in which educational programmes function. Its primary concern is with description and interpretation rather than measurement and prediction… The aims… are to study the innovatory programme: how it operates; how it is influenced by the various school situations in which it is applied; what those directly concerned regard as its advantages and disadvantages; and how students’ intellectual tasks and academic experiences are most affected. It aims to discover and document what it is like to be participating in the scheme, whether as teacher or pupil… Central to an understanding of illuminative evaluation are two concepts: the ‘instructional system’ and the ‘learning milieu’ (pp. 7-10).

 


1989, David Hopkins, Evaluation for School Development, Open University Press, Milton Keynes (reproducing approaches to evaluation described in 1986 by Bob Stake)

Formative-summative. The most pervading distinction is the one between evaluations done during the development of a programme and those done after the programme has been completed… when the cook tastes the soup it is formative evaluation and when the guest tastes the soup it is summative…

Formal-informal. Informal evaluation is a universal and abiding human act… Formal evaluation is more operationalized and open to view… It is needed when the results are to be communicated elsewhere…

Case particular-generalization. A most important distinction is between the study of a programme as a fixed and ultimate target and the study of a programme as a representative of others…

Product-process. … A study of the ‘product’ is expected to indicate the pay-off value; a study of the ‘process’ is expected to indicate the intrinsic values of the programme…

Descriptive-judgemental. Many evaluators coming from a social science background define the evaluation task largely as one of providing information, with an emphasis on objective data and a de-emphasis on subjective data. Those coming from the humanities are likely to reverse the emphasis…

Preordinate-responsive. … Preordinate studies are more oriented to objectives, hypotheses and prior expectations, mediated by the abstractions of language. Preordinate evaluations know what they are looking for and design the study so as to find it. Responsive studies are organized around phenomena encountered – often unexpectedly – as the programme goes along.

Wholistic-analytic. … The more common social science research approach is to concentrate on a small number of key characteristics. A case study is often used to preserve the complexity of the programme as a whole, whereas a multivariate analysis is more likely to indicate the relationship among descriptive variables.

Internal-external. … whether [evaluation studies] will be conducted by personnel of the institution responsible for the programme or by outsiders. They differ as to how formal the agreement to evaluate is, as to how free the evaluators are to raise issues and interpret findings, and as to how changes in plans will be negotiated.

The eight dimensions above do not result in 256 different evaluation designs. Many of the dimensions are correlated, both conceptually and in frequency-of-use (pp. 16-18).
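
[A gloss on the arithmetic, not part of Stake’s text: eight dimensions, each treated as a two-way choice, would yield

$$
\underbrace{2 \times 2 \times \cdots \times 2}_{8\ \text{dimensions}} = 2^{8} = 256
$$

distinct designs if the dimensions were independent; Stake’s point is that, because the dimensions are correlated, far fewer designs occur in practice.]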

 


1997, John Davis, ‘Evaluation Research: Part One: Types of Evaluation Research’, Metropolitan State College of Denver, http://www.naropa.edu/faculty/johndavis/prm2/types6.html

1. Context evaluation: examines the political, social, financial, and other contexts for the program and the evaluation…

2. Needs assessment: may be used to determine the need for the program, justify it, and design it…

3. Process evaluation, program monitoring: determines whether the program was implemented as promised and how it was delivered and received…

4. Formative evaluation: uses information collected during the early stages of the program to modify the later stages.

5. Outcome evaluation, summative evaluation: determines whether the objectives of the program were met. Data to be collected come directly from the program’s objectives.

6. Efficiency evaluation, cost-effectiveness, cost-benefit analysis: compares the costs and benefits of the program [see the gloss after this list].

7. Utilization: evaluates whether the evaluation itself was used. Many good evaluations are not used for reasons unrelated to the evaluation itself (pp. 1-2).
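
[A gloss on item 6, not part of Davis’s text: cost-benefit analysis monetizes both costs and benefits, while cost-effectiveness analysis expresses cost per unit of a non-monetary outcome. Writing $B$ for the monetized benefits and $C$ for the costs of the program, the simplest summary measures are

$$
\text{net benefit} = B - C, \qquad \text{benefit--cost ratio} = \frac{B}{C}, \qquad \text{cost-effectiveness ratio} = \frac{C}{\text{units of outcome}}.
$$

A benefit-cost ratio above 1, or equivalently a positive net benefit, indicates that monetized benefits exceed costs.]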

 


[2000?] The Action Evaluation Research Institute: Helping Groups Define, Promote and Assess Success, http://www.aepro.org/inprint/papers/aedayton.html

[It is worth looking at this site for details of method, projects and success stories.]

Action Evaluation is a new method of evaluation, one that focuses on defining, monitoring, and assessing success. Rather than waiting until a project concludes, Action Evaluation supports project leaders, funders, and participants as they collaboratively define and redefine success until it is achieved. Because it is integrated into each step of a program and becomes part of an organization, Action Evaluation can significantly enhance program design, effectiveness and outcome… [Its key features include:]

Participation: All stakeholders participate in the process from the beginning, articulating and negotiating their goals, their values, and their proposed action plans.

Reflexivity: All participants function as ‘reflective practitioners’ together, reflecting on and examining the interaction of goals, values and activities…

 


[Compare the above ‘action evaluation’ approach with the two that follow:]

 

[1996?] Johns Hopkins Center for Communication Programs, ‘Research and Evaluation’, http://www.jhuccp.org/r&e

 

[?] Evaluation unit of the Center for Urban Studies, Wayne State University, ‘Evaluation Research Overview’, http://www.cus.wayne.edu/capabilities/evaluation.asp

The evaluation staff believe that evaluation and program staff should be partners in both program and evaluation design. The design phase of any evaluation is critical, as it sets the stage for the evaluation implementation. The first step in the evaluation research design process is to clearly articulate the evaluation questions of the intended audience. … evaluation staff attempt to identify all the stakeholders in the program, that is, who cares about this program and the evaluation findings. Stakeholders could include program administrators, staff and participants, board members, funders, and community leaders… Staff of the evaluation unit use up-to-date methods and approaches. Evaluations must be methodologically rigorous and in keeping with state-of-the-art evaluation practice… The evaluation unit strongly believes that evaluation should not be a burden for program staff. Evaluation approaches and designs should accomplish what is needed with as little disruption to the client as possible… Systematic, purposive, comprehensive and well-communicated evaluation is an integral part of program improvement and assessment…

 


2002, William M.K. Trochim, Cornell University, ‘Introduction to Evaluation’, http://www.socialresearchmethods.net/kb/intreval.htm

[Cf. this site for a description of four ‘evaluation strategies’: ‘scientific-experimental models’; ‘management-oriented systems models’; ‘qualitative/anthropological models’; ‘participant-oriented models’. It considers that ‘perhaps the most important basic distinction in evaluation types is that between formative and summative evaluation’.]