State board adopts new evaluation regulations
All Massachusetts school districts will have to adopt new evaluation systems based on a state framework that was approved by the Board of Elementary and Secondary Education on June 28.
The new system will be phased in over three years, beginning with Level 4 schools – those designated “underperforming” by the state – in the 2011-12 school year.
“This framework incorporates many of MTA’s recommendations and, if properly implemented, will lead to better evaluations and improved teaching, learning and leadership in our schools,” said MTA President Paul Toner. “School committees and local associations are going to have to work out the details of the new systems in bargaining to make sure they are workable, fair and effective. The MTA will provide local associations with guidance and support during this process.”
The MTA played a central role in shaping the new regulations, starting with last December’s release of a report titled Reinventing Educator Evaluation that was produced by MTA’s Center for Education Policy and Practice.
“Our goal has always been to have a system that is transparent and fair and that helps teachers and administrators improve their practice,” said Toner. He added that the MTA’s participation was critical to making sure that multiple measures of student learning are used, not just MCAS scores; that high-stakes decisions are not made based on student learning measures alone; and that critical decisions about the new district-based evaluation systems remain in collective bargaining.
Until the new system is phased in, districts will continue operating under the existing state framework, which was mandated by the Education Reform Act of 1993. That system establishes principles of effective teaching and administration and specifies the minimum number of times teachers must be evaluated. Beyond that, however, most details are left to collective bargaining.
The new system establishes streamlined standards for teachers and administrators and will require the following steps:
- The educator does a self-assessment, meets with the evaluator to develop goals and an initial plan and begins implementing the plan.
- Partway through the evaluation cycle, the evaluator conducts a formative assessment through classroom observation and examination of the educator’s work products to help guide educator practice.
- At the end of the cycle, the evaluator conducts a summative evaluation to give the educator one of four ratings: Exemplary, Proficient, Needs Improvement or Unsatisfactory.
- The evaluator compares that rating with multiple measures of student learning, growth and achievement. There must be at least two measures for each educator: either two district-based measures, or one district-based assessment and one measure of trends either in MCAS Student Growth Percentile scores (for the 17 percent of teachers for whom those scores are available) or in scores on the Massachusetts English Proficiency Assessment. A trend is defined as at least two years of scores; MTA will advocate that local associations bargain a three-year minimum at the local level. The educator’s impact on student growth will be deemed to be low, moderate or high.
- An educator growth or improvement plan will be required depending on the relationship between the rating of educator practice and the student learning measures. That plan will serve as the basis for the next cycle of evaluation.
The most serious consequences are for those rated Unsatisfactory. These educators will be put on a one-year Improvement Plan and, if they fail to improve in that year, they may be dismissed or demoted. In a May 11-17 MTA poll, 71 percent of MTA members favored this provision.
Educators with an overall rating of Needs Improvement will be placed on a Directed Growth Plan for one year or less. At the end of that period, they must be rated either Proficient or Unsatisfactory. If they are rated Unsatisfactory, the one-year Improvement Plan process is implemented.
Educators with a Proficient or Exemplary rating and new teachers will be on different plans, with the least restrictive plans for those with a high rating and a moderate or high impact on student learning.
The new regulations also require districts to collect survey data from students in grade six and above about teacher effectiveness starting in the 2013-14 school year. That year they will also have to collect staff feedback about administrators. In the future, parent feedback may also be required.
The Department of Elementary and Secondary Education is developing a “model” evaluation system that districts may adopt or modify. The MTA is already drafting a version of a model plan and will work closely with the DESE on the creation of the state plan.
Although more than two-thirds of the MTA members polled support using multiple measures of student growth as part of the evaluation, some have questions and concerns about what those measures will be and how they will be used.
Education Commissioner Mitchell Chester tried to allay some of their concerns.
“There’s no formulaic approach where the student learning piece trumps the evaluator’s judgment,” Chester said at a public forum in Agawam on June 7. “One question that your district will have to decide – and it’s a district-by-district decision – is what impact do we expect to see on student learning and does that differ for different students? Based on that you’re going to have to make judgments about whether a given teacher or a given school is in fact reaching that expectation or not.”
Toner said that avoiding a formulaic use of scores has been among the MTA’s key objectives in this process.
“We were very clear that these measures should not account for a specific percentage of a teacher’s evaluation. And we were very clear that measures of student performance should not trump the evaluator’s judgment. We won both of those arguments,” he said.
“Everyone knows that teachers are not 100 percent responsible for how well their students perform,” Toner continued. “At the same time, it is common sense that teachers do have an impact on their students’ learning. How well their students are doing in school – based on multiple measures – is relevant to consider in the evaluation process.”
Several BESE members made similar comments at the June 28 meeting before they voted. Education Secretary and BESE member Paul Reville said again, as he has said in the past, that student learning measures should be “informative, not determinative.”
The MTA’s biggest remaining concern is over the training and experience of evaluators. BESE member Harneen Chernow offered an amendment to the regulations, drafted by the MTA, that would have required the DESE to provide all evaluators with training in the new system and would have mandated that teacher evaluators have five years of teaching experience.
Although several board members said they agreed with the sentiment behind the amendment, it was defeated at the recommendation of Commissioner Chester, who expressed confidence in the administrator corps and said he was concerned that the amendment would slow down implementation of the new regulations.
Administrators expressed their own concerns, mainly around where they will find the time and money to develop the new district-based assessments and conduct the more thorough evaluations. Presumably, Race to the Top districts will use some of those funds to implement the new system, and the DESE plans to allocate some of its share of RTTT funding to evaluator training.
While acknowledging that the new system will be time-consuming to implement, Reville said that developing staff is a high priority and must be done. “We are saying that in the Commonwealth, the evaluation of educators is the number one priority of administrators,” he said.
The new system is scheduled to be implemented in the state’s 35 Level 4 schools and in a small number of selected districts in the 2011-12 school year, in Race to the Top districts in 2012-13, and in all districts in 2013-14.
For more information, go to massteacher.org/evaluationregulations.