A Better Way to Look at Assessment Data


I have been reading and talking and writing a lot about the shifts we need to make in how we think about our reading comprehension standards and our texts - how we need to move our focus away from the unsubstantiated idea that we can teach kids transferable comprehension "skills" and toward the idea that the text needs to be at the center of instruction. You can read other blog posts I've written here and here, and Tim Shanahan does a fantastic job of tackling this topic in one of my favorite videos here.

These shifts are nuanced and don't seem that big, but they are, and they impact everything we do in the classroom - including the way we consider and respond to assessment data. 

Typically, when we get assessment data in, we analyze how students performed on items related to individual standards, see which ones they did particularly well on and which ones they didn't, and then plan a path forward. A lot of times, that path looks like, "These six kids struggled with standard 2, so we'll work with them in a small group and practice standard 2 some more, but maybe with a graphic organizer," and "As a whole, the class did well with standard 4, so we can move on from that."

In math, and even in ELA foundational skills, a standard-by-standard analysis makes a lot of sense, because those standards represent constrained skills that can be mastered. For example, I can master decoding words with certain phonics patterns or solving algebraic equations. However, in reading comprehension, the standards are not repeatable skills at all. Each text has its own main idea, is structured differently, and has its own types of complexity, and what it takes to make sense of it and answer questions about it differs a lot from text to text. In fact, the 2006 ACT study “Reading Between the Lines” showed that when we analyze reading assessment data to determine how well students comprehend a text, their performance varies little by the type or category of question. There's no pattern that shows us that some students can answer inferential questions but not literal ones, or that some struggle with main idea questions but do just fine with questions about relationships between words. Either students can answer many different questions about a specific text, or they can’t answer many at all. So, we aren't going to learn much of use by analyzing assessment results through a standard-by-standard approach.

However, we still need to know how well students are comprehending so that we know whom to help and how. That's why I like Tim Shanahan’s idea that it’s better to look at how students do with particular types of texts over time.

So, for each assessment I give over a certain period of time, I’d want to record the following (a quick sketch of how this could be logged comes right after the list):

  • Type and Topic: Whether the text on the assessment was literary, informational, or poetry. If it was informational, I'd want to note the topic: science, social studies, connected to module content, etc.
  • Complexity: The Lexile level (and any other notations about complexity)
  • Length: The number of texts and the word count of the text(s)
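
If it helps to picture it, here's a minimal sketch of how those notes could be logged as one row per student per assessment. This is just a hypothetical format written in Python; the field names and values are examples I'm making up, not a required structure - a plain spreadsheet with the same columns works exactly the same way.

```python
# Hypothetical assessment log: one entry per student per assessment.
# Field names and values are illustrative examples only.
assessment_log = [
    {"student": "Student A", "date": "2024-09-15", "text_type": "informational",
     "topic": "science", "lexile": 780, "word_count": 650, "pct_correct": 0.88},
    {"student": "Student A", "date": "2024-10-01", "text_type": "literary",
     "topic": None, "lexile": 820, "word_count": 1100, "pct_correct": 0.55},
    {"student": "Student B", "date": "2024-09-15", "text_type": "informational",
     "topic": "science", "lexile": 780, "word_count": 650, "pct_correct": 0.60},
    {"student": "Student B", "date": "2024-10-01", "text_type": "literary",
     "topic": None, "lexile": 820, "word_count": 1100, "pct_correct": 0.82},
]
```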

Then, when I look at student performance across different types of texts over time, I can see who’s challenged by informational but not narrative texts, who hits a wall at a Lexile of 820, and who tends to struggle with longer texts and needs some work with stamina. When I respond to that data, I can make sure those groups of students get additional practice with the texts that pose the most challenge to them so they learn to work through them - not isolated comprehension standards.
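
To make that kind of pattern-spotting concrete, here's one way it could be done - a sketch in Python with pandas that assumes the hypothetical assessment_log rows above. It's not the only (or even the best) way to do it; a pivot table in a spreadsheet gets you the same view.

```python
import pandas as pd

# Build a table from the hypothetical assessment_log sketched above.
df = pd.DataFrame(assessment_log)

# Average percent correct by student and text type:
# who's challenged by informational texts but not narrative ones?
by_type = df.pivot_table(index="student", columns="text_type",
                         values="pct_correct", aggfunc="mean")

# Average percent correct by student and Lexile band (rounded to the hundreds):
# who hits a wall as text complexity climbs?
df["lexile_band"] = (df["lexile"] // 100) * 100
by_band = df.pivot_table(index="student", columns="lexile_band",
                         values="pct_correct", aggfunc="mean")

print(by_type)
print(by_band)
```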

If you’d like to see how this could look, I highly recommend Tim Shanahan’s piece about ELA comprehension assessment here. Also, “Placing Text at the Center of the ELA Standards-Aligned Classroom” has a great section on how we should respond to reading comprehension data.

I know it’s a very different way of looking at and responding to data, but it’s a change that can make a tremendous difference for our kids. 

Here's to simply teaching well,
