Why looking at assessment data doesn’t impact student outcomes

I once joined a professional learning community (PLC) for their bi-monthly afternoon meeting. The team walked into one of the teachers’ classrooms, where they chatted about the day until everyone had arrived. Once the meeting started, one of the teachers pulled up a document with four questions on it to record the minutes.

The four questions were:

  • What do we expect our students to learn?
  • How will we know they are learning?
  • How will we respond when they don’t learn?
  • How will we respond if they already know it?

They collectively began to look at their benchmark data, fill in the answers, and search through the pacing guide to indicate which standards they were supposed to be teaching that week, which assessments they were giving, and how they were going to reteach those who “didn’t get it.”

At the end of the meeting, they had completed the task, but not one teacher kept the information for themselves; it wasn’t seen as useful in their daily classroom practice. Their goal was clearly to complete the minutes, turn them in, and move on. This PLC was driven to complete a task, and although they wanted students to succeed and do well on the desired learning objectives, the meeting and conversations were focused solely on answering the questions and submitting the required minutes.

This common practice is what has led to recent insights from Harvard researchers, which suggest that analyzing student assessment data is a widely practiced activity (and often a mandate) for teachers in the US, yet shows almost no evidence of raising test scores.

Yes, you read that correctly: Analyzing student data, understanding weaknesses, and reteaching has almost no evidence of raising test scores.

The article highlights research from Goertz and colleagues, who “also observed that rather than dig into student misunderstandings, teachers often proposed non-mathematical reasons for students’ failure, then moved on. In other words, the teachers mostly didn’t seem to use student test-score data to deepen their understanding of how students learn, to think about what drives student misconceptions, or to modify instructional techniques.”

How much more useful could that PLC I observed have been had the focus been on using their time to raise questions about the students, understand their needs, and delve into ways to improve their practice? The four questions they asked are great if they are used to guide and investigate practices and to collaboratively explore new and better ways to meet the needs of learners. But simply answering these questions and filling in worksheets won’t change how students learn or how teachers teach.

The authors articulate it this way, “understanding students’ weaknesses is only useful if it changes practice. And, to date, evidence suggests that it does not change practice—or student outcomes. Focusing on the problem has likely distracted us from focusing on the solution.”

In Innovate Inside the Box, George Couros and Katie Novak push back on the idea of being data driven and instead remind us that we should be learner driven. Data-driven processes are intended to help identify areas of need and to “fix” the weakness, and as they point out, “with all the attention turned on students’ weaknesses, no effort is made to proactively remove barriers to learning, nor are their strengths nurtured.”


Many administrators have leveraged resources to ensure that students take regular benchmark assessments and that teachers come together to analyze the data. But research indicates, and many educators will tell you, that while this practice makes it easy to get a snapshot and to sort and rank kids, its value for improving outcomes for students warrants further investigation. As administrators spend money, resources, and time on “data-driven practices,” it is beneficial to examine the utility of this practice and its impact on student outcomes. When our data-driven practices lead us to overemphasize weakness, we miss out on all that learners bring to the table and on opportunities to grow their strengths.

I also want to acknowledge that the finding that analyzing externally designed benchmark data does not improve outcomes is NOT surprising to many teachers. They know there are many other data points critical to understanding their students beyond these isolated testing events, that scores never tell the whole story of a learner, and many get frustrated when their time is focused solely on narrow measures of success that are used to define a child.

To be clear, I am not arguing that looking at data is not important. However, the power of the collaborative time to analyze student outcomes is not in the data alone; it is in the opportunity to network and engage in meaningful conversations to question, learn, and generate new ideas that impact student learning. It is important to know where a learner is in relation to where they are trying to go, and it is best when we can bring the learners into that process. Students and teachers alike are motivated when they have ownership over the work they are doing, when the data is meaningful, and when they have the means and resources to solve problems that matter to them, not restricted to filling in worksheets.

2 Comments

  1. gabrielle

    “Analyzing student data, understanding weaknesses, and reteaching has almost no evidence in raising test scores.” Yep, this doesn’t surprise me and…should raising test scores even be the goal? As I watch the learners I work with interact with the world and each other, I am more and more convinced that we should be trying to increase human happiness and self-regulation “scores”, not test scores. I’m pretty sure high test scores aren’t going to make people better neighbors. And, yes, if teachers used data to change their practice, that would probably make learners feel like learning was more relevant and personalized.

  2. Laura Spencer, Ed.D.

    It’s hard for people to see their own complicity in the data. It’s much easier to put the fault elsewhere: Not enough time; no parent support; student doesn’t want to work, etc. That’s why a lot of these data PLCs fail to see results.
    The four questions are fine, but I’d argue that “respond” is a passive, reactive response that allows them to put the onus elsewhere. I’d be curious to hear what they say when the question is, “What actions am I taking to address potential areas in which students won’t understand?” or “What support systems am I putting in place to identify and address students who aren’t reaching the targets (or far exceed them)?”



