

Robert M. Slivka
Management and Evaluation Associates, Inc.

The newest addition to the standards-based educational reform movement is the implementation of school reform models. At the national level, the Comprehensive School Reform Demonstration (CSRD) program provides grants to schools on a competitive basis for implementing models that meet the federal definition of comprehensive school reform. In New Jersey, the adoption of a whole school reform model is mandated in the most needy, or Abbott, districts as a result of the New Jersey Supreme Court's ruling in Abbott v. Burke. Although the CSRD definition differs from New Jersey's, both call for a model that is research based, integrates and aligns school functions, focuses on professional development, and coordinates student and family resources.

Questions, Questions, Questions

This emphasis on school reform models raises a number of important questions: What is the impact of a school reform model when it is mandated, as in New Jersey, rather than selected on a voluntary basis? Are schools allowing adequate time to research and select a school reform model, as strongly suggested by federal guidelines for the CSRD program? Are school reform models aligned to state standards? Is the school budget adequate to fully implement the school reform model? What is the impact of school reform models within urban districts with high student mobility, when schools select different models? Can developers, given the demand for their services, provide the necessary support for effective implementation? Can the management teams at schools that have implemented site-based management give adequate attention to school reform implementation issues, in addition to their other responsibilities?

Mixed Results

The ultimate question, of course, is: do school reform models work? The current research on the impact of school reform models is mixed. A recent article in the Title I Monitor1 regarding an initial report by the U.S. Department of Education on CSRD implementation indicated that " . . . it is widely acknowledged that very few models, no matter how respected, established, or well-known, can point to a body of research demonstrating their effectiveness." A recent study by the American Institutes for Research2 concluded that only three of 24 school reform models have strong evidence to support the claim that they improve student achievement. Among the three models with high ratings was Success for All, the model recommended by the New Jersey Department of Education. However, recent independent evaluations of Success for All conducted by the University of Maryland,3 the University of Delaware,4 and the Miami-Dade County Public Schools5 produced essentially negative results.

An Explanation

What accounts for these differences? The research suggests at least two reasons: how well the model has been implemented, and who conducts the evaluation. Lynn Olson, in an article in Education Week,6 reviewed the literature on school reform model implementation and suggested that the effectiveness of school reform models is a function of the degree to which they have been implemented as envisioned by their developers. She also indicated that implementation is typically problematic and inconsistent, being affected by the ability of school staff, the support received from the central office, the capacity of developers, and other factors.

Regarding the issue of who conducts the evaluation, Herbert Walberg and Rebecca Greenberg, in a recent article in Education Week,7 indicate that most positive studies on school reform models have been conducted by the developers themselves, while, as cited above, studies with negative findings have been conducted by independent parties. The authors termed this effect the Diogenes Factor, after the Greek philosopher who walked through daytime Athens with a lighted lantern, looking for an honest man.

What Should I Do?

What course of action can schools take, given the results of the studies cited above? Clearly, a commitment to sound implementation is a primary step. School management teams, in cooperation with district and state staff and the developer, must make quality implementation a priority. A rigorous, ongoing evaluation of the school reform model, preferably by an independent agency, should be a second priority. In New Jersey, the need for evaluation has been underscored by the Education Law Center,8 which stated that " . . . existing programs must be evaluated on an ongoing basis . . . (using) proven research and evaluation methods . . . (and in the absence of this capability, schools should) seek technical assistance and training from their district, and from expert consultants and/or higher education."

The Program Evaluation Solution

What would be the purpose of program evaluation? The primary purpose of an evaluation of the school reform model would be to provide valid and reliable ongoing feedback on the implementation and impact of the model. Data collected from the evaluation would be reviewed by school staff and used to make program improvements as well as to document program impact. Major elements of a good program evaluation would include: 1) a team approach between the school staff and the evaluator; 2) a focus on the collection of implementation data (including information on contextual and input variables) as well as impact data; 3) the use of valid and reliable multiple measures (both qualitative and quantitative); 4) the use of control or comparison groups; and 5) the use of a longitudinal design.9
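For districts that keep cohort test scores electronically, the comparison-group element above can be illustrated with a small calculation. The sketch below, a minimal example using only the Python standard library, computes the mean score difference and a simple effect size (Cohen's d) between a program school and a comparison school; the scores shown are invented for illustration, not drawn from any study cited here.

```python
import statistics

def effect_size(program_scores, comparison_scores):
    """Return (mean difference, Cohen's d) for two lists of scores."""
    mean_p = statistics.mean(program_scores)
    mean_c = statistics.mean(comparison_scores)
    # Pooled standard deviation across the two groups
    sd_p = statistics.stdev(program_scores)
    sd_c = statistics.stdev(comparison_scores)
    n_p, n_c = len(program_scores), len(comparison_scores)
    pooled_sd = (((n_p - 1) * sd_p**2 + (n_c - 1) * sd_c**2)
                 / (n_p + n_c - 2)) ** 0.5
    d = (mean_p - mean_c) / pooled_sd
    return mean_p - mean_c, d

# Hypothetical third-grade reading scale scores
program = [212, 225, 198, 240, 231, 219, 227, 205]
comparison = [208, 214, 201, 222, 217, 199, 210, 206]

diff, d = effect_size(program, comparison)
print(f"Mean difference: {diff:.1f} points, effect size d = {d:.2f}")
```

A longitudinal design would repeat this comparison for the same cohorts over several years, rather than relying on a single snapshot.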

A potential limitation to a good program evaluation in states with targeted assessment programs (those that test only at selected grades, a practice common in the Northeast) can be the absence of valid and reliable data at the off grades, if districts choose not to test, or test with locally developed, non-standardized instruments. In most states with targeted assessment programs, the use of norm-referenced tests (NRTs), adopted at either the state level (a significant number of states have adopted NRTs as part of their state assessment systems) or the district level, currently serves to meet the need for valid and reliable off-grade testing. This practice provides a practical alternative, which can be improved if districts undertake studies designed to: 1) ensure the alignment of the NRT to their curriculum and to state standards; 2) determine how well student performance on the NRT administered at off grades predicts performance on state tests; and 3) generate customized standards-based reports. In districts that have initiated such studies, the results have helped staff make more valuable use of NRT reports to improve the delivery of instruction and increase the potential for higher performance on state tests.
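The second of the three studies above, determining how well off-grade NRT performance predicts state-test performance, is at heart a correlation study. A minimal sketch follows, assuming hypothetical paired scores for the same students on a fall NRT and a spring state test; all numbers are invented for illustration.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two paired lists of scores."""
    mean_x, mean_y = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical paired scores: fall NRT percentile, spring state-test scale score
nrt = [34, 51, 62, 45, 78, 29, 55, 70]
state = [195, 210, 221, 204, 236, 188, 214, 228]

r = pearson_r(nrt, state)
print(f"Correlation between off-grade NRT and state test: r = {r:.2f}")
```

A strong correlation would give districts some confidence in using off-grade NRT results as an early-warning indicator; a weak one would argue for better alignment between the NRT and state standards.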

Final Words

The importance of good program evaluation cannot be overemphasized. As David Osborne and Ted Gaebler stated in their book on reinventing government:10

What gets measured gets done
If you don't measure results, you can't tell success from failure
If you can't see success, you can't reward it
If you can't reward success, you're probably rewarding failure
If you can't see success, you can't learn from it
If you can't recognize failure, you can't correct it
If you can demonstrate results, you can win public support.

Do school reform models work? At present, the jury is still out. However, the research reviewed here suggests that school reform models, along with other factors, have the potential to produce higher levels of student achievement. With quality implementation and sound program evaluation as a one-two punch, school reform models might just work.

1Despite some growing pains, CSRD is transforming schools (1999, June). Title I Monitor, Vol. 4, No. 6, p. 8.

2Herman, R. (1999). Approaches to schoolwide reform: taking a critical look. Washington, DC: American Institutes for Research.

3Jones, E. M., Gottfredson, G. D. & Gottfredson, D. C. (1998). Success for some: an evaluation of a "Success for All" program. College Park, MD: University of Maryland.

4Venezky, R. L. (in press). An alternative perspective on Success for All. In K. K. Wong (Ed.), Advances in educational policy, Vol. 4. Greenwich, CT: JAI Press.

5Viadero, D. (January 27, 1999). Miami study critiques "Success for All." Education Week on the WEB.

6Olson, L. (April 14, 1999). Following the plan. Education Week on the WEB.

7Walberg, H. J. & Greenberg, R. C. (April 8, 1998). The Diogenes Factor. Education Week on the WEB.

8Planning programs and budgets in Abbott schools (December 10, 1998). Abbott Opinion #5. Newark, NJ: The Education Law Center, p. 4.

9Slivka, R. M. (1993). Impact Evaluation Model. Hightstown, NJ: Management and Evaluation Associates, Inc.

10Osborne, D. & Gaebler, T. (1993). Reinventing government: how the entrepreneurial spirit is transforming the public sector. New York, NY: Plume Books, pp. 146-155.

Robert M. Slivka is President of Management and Evaluation Associates, Inc. (M and E) of Hightstown, New Jersey and Oro Valley, Arizona. M and E specializes in assessment and program evaluation services to school districts and organizations.

To be published in Connections, by Harcourt Educational Measurement, San Antonio Texas.



