Marking student writing can be challenging. Unlike math, there is rarely one right answer, so some variability in marks is understandable when we assess student writing. But how much variability is acceptable? When does it become unacceptable? Those were some of the questions I had after reading two research papers that explored this issue, and their conclusions were surprising and disconcerting. I was taking one of our district’s Leadership in Assessment courses at the time, so I decided to do one of my projects on what marking consistency looked like in our district. What I found was both disconcerting and challenging.
I sent packages out to several schools asking the Grade 9 L.A. teachers to mark the same five-paragraph essay using Alberta Education’s Grade 9 Language Arts Provincial Achievement Test rubric. I also asked the teachers to comment on it as they would on their own students’ work and to fill in a short questionnaire. The questionnaire asked how long they had been teaching Grade 9 L.A., whether and when they had marked Grade 9 P.A.T.s for Alberta Education, and the socioeconomic status of their school.
Once I had analyzed the marked essays that were returned, I sat back and realized that we are not doing a good job of assessing students’ written work. The teacher-awarded marks ranged from 60% to 85%, from a “C” to an “A”. Not very consistent, and neither were some of the teacher comments. For instance, one teacher wrote “great quote to open up your essay” while another wrote “it’s better to start with your own words”. These results were not what I had hoped for.
But what was most startling about the range of marks and comments was the experience level of the teachers who awarded them. Teachers with ten or more years’ experience teaching L.A. who had not marked Alberta Education’s Grade 9 Language Arts Provincial Achievement Tests were more likely to award marks at the high or low end of the range. The teacher with ten or more years’ experience who had recently marked for Alberta Education assessed the essay closest to the actual mark, as did the teachers with five years’ experience. The socioeconomic status of the schools did not seem to play a role in the awarding of marks.
We need to provide students with reliable, consistent grading and reliable, consistent grading criteria, both within schools and across them. Based on the results of this research, we are not doing this very well. And that is our challenge.
However, we can meet this challenge. Teachers need to start having conversations with students and with each other about what good writing looks like, not just in English Language Arts but in Social Studies position papers and Science lab reports. Students and teachers need to establish the criteria that make for good writing. Teachers need to mark student writing collaboratively and continue the conversation about what good writing looks like. Teachers need a variety of examples of student writing to guide both the writing process and the marking. These are some of the things we can do to help minimize the variability in marks.
Providing valid, reliable, and consistent assessment of student writing and learning starts with teachers actively collaborating on what it is students need to learn and actively collaborating in assessing it. These are best practices that have a powerful effect on the bottom line in our business: increased student achievement.
What might be some other things teachers can do to work towards consistent grading practices?