Teacher Evaluations, But Better (and Cheaper)
I'm "borrowing" a phrase from Josh Weissman's cooking channel
Josh Weissman is famous for his YouTube cooking channel, which features original recipes along with two series, But Better and But Cheaper, where he remakes famous recipes…better and cheaper.
I think we can do the same with teacher evaluation systems, since, as I discussed in my last post, they are both ineffective and expensive.
First, we need to decide the purpose of teacher evaluations. Ideally, they’re designed to help teachers improve their practice. They’ve already been shown to poorly differentiate teachers and fail to efficiently push harmful teachers out of the system, so let’s focus on helping teachers get better.
Less complex. One reason evaluation systems are so expensive and ineffective is that they're too complex, with multiple data inputs, confusing rubrics, and fancy algorithms that spit out ratings teachers don't really understand. Just because we can build a complex evaluation system to capture every aspect of a teacher's work doesn't mean we should — especially if it doesn't actually help teachers get better.
Engagement > evaluation. There's a booming marketplace for "employee engagement software," which basically aims to replace traditional evaluation systems with software that facilitates evaluations while helping employees feel more connected. I'm a true novice regarding this software, but I do know that teachers feel more disconnected and burned out than ever. Some engagement tools ask employees to complete very brief weekly surveys to gauge their feelings toward work, among other sentiments. That's actionable data that can help leaders adjust course mid-year and push for long-term changes.
Upward feedback. Teachers have a lot to say, and it's important they evaluate the principals, coaches, and central office staff who manage their resources and mandates. It can not only help rebuild the trust that's missing in many of our education systems but also invest teachers in systemic change. They're often immediately turned off by any new initiative — years of failed initiatives will do that — and having a formal mechanism for evaluating their superiors can help them feel more connected to promising initiatives.
Eliminate formal observations. As my friend famously said, "Formal observations are a dog and pony show." I've never attended a dog and pony show, but I've done a lot of formal observations. Sometimes they paint an accurate picture of a teacher's performance, but more often they're an illusion.
Promote shorter observations. Kim Marshall (of The Marshall Memo) agrees: shorter, more frequent observations are much more effective at painting a picture of a teacher’s work. It’s like reading the topic sentence of each paragraph of a long essay instead of reading just one paragraph in the middle. It’s still an incomplete picture, but it much better captures the quotidian work that teachers do — the work that determines whether students are learning or not.
Student surveys. Universities routinely solicit student feedback on professors, yet we rarely do it in K-12. Why? The Common Core tasks students with writing persuasive essays at a very young age, and I'm convinced students know good teaching when they experience it — and that teachers would find the feedback insightful and constructive. Research backs this idea up, and Ronald Ferguson has some interesting ideas on how to make student surveys work. The trick here would be to make them teacher-facing only. Weaponizing student surveys as part of an evaluation formula would erode trust and create all kinds of bad incentives.
Better rubrics. Almost a decade ago, Paul Bambrick-Santoyo (of Uncommon fame) wrote that our approach to teacher evaluations, using rubrics as a "scoreboard," wasn't effective. Rubrics so complex that they include every aspect of teaching often don't help teachers improve. The one I currently use has 10 domains with 4 rating levels — each with many additional descriptors of effective teaching. At a recent evaluators' conference, I asked a teacher to rate themselves on it — and they couldn't remember a single domain! Simpler rubrics, like TNTP's Core Rubric, are at least less forgettable than ones with a billion descriptors.
Connect rubrics, curriculum, and development. Recently I've felt like we've lost sight of what good, fundamental teaching is: how we ask questions, collect formative data, deliver concise teaching points, and conduct guided practice. These bread-and-butter components of good teaching have been supplanted by complex literacy and math frameworks and other mandates levied upon teachers. Whichever rubric we use needs to be referenced constantly during curriculum trainings. Teachers should see a clear connection between every workshop they attend and how it helps them improve. If one doesn't exist, then it's not professional development — it's just an information session.
Our return on investment for the $15–20 billion we've spent on complex evaluation systems is dismal. By simplifying our approach, we can spend less on evaluating teachers (but cheaper!) and more on helping them improve (but better!). And maybe we'll even have a few extra dollars to pay folks more.
It’s been a busy few weeks — I’m sorry for missing a few newsletters — and I’ve got some new posts I’m excited about, including one on cigarettes and another on Big Macs. Trying to incorporate more wellness recommendations here since there aren’t enough on TikTok and Instagram. jkjkjk
Thanks for reading — have a great week! (and apologies for any typos — I’ve tried editing a few times, and my eyes are giving out on me!)