Before I start my reflection today, I want to begin with Carnegie Mellon University's Eberly Center definition of summative assessment. I'm putting it at the top of this post so anyone can refer back to it and compare it with my thoughts and the Twitter posts of #mschat participants. Why Carnegie Mellon? For the simple reason that it is the very first Google search result that comes up when you type in summative assessment as your query. We are in the Google Age…
The goal of summative assessment is to evaluate student learning at the end of an instructional unit by comparing it against some standard or benchmark.
Summative assessments are often high stakes, which means that they have a high point value. Examples of summative assessments include:
- a midterm exam
- a final project
- a paper
- a senior recital
In the #mschat, @garnet_hillman responded first to the question because she is the moderator.
“Summative assessments ‘sum up’ the learning and allows students to ‘Show me the Learning’”!
It is a fairly accurate and concise definition of what summative assessment is, but I'm left wondering what its purpose is. As the responses started coming in, I noticed that a few participants were not interested in distinguishing between formative and summative assessments. I find this saddening. While I agreed with one sentiment that all students are learners and deserve feedback, I'm concerned about how refusing to distinguish between the two actually helps students learn. If all assessments are summative, then there is nothing but grades in your gradebook and very little feedback on your instruction. If all assessments are formative, then there is nothing but feedback and no benchmarks to achieve. Now, I should clarify that I personally believe all assessments should have some aspect of feedback to them; summative assessments should contain a place for students to reflect on the assessment itself to see if it measured up to what they thought they were going to be tested on. But there needs to be a deep understanding on the teacher's part of how these two types of assessments should be used in concert with each other. It is a cop-out to say that you don't distinguish between them.
O.K., I’m done with my mini-rant… back to the chat.
@nyrangerfan42 mentioned that summative assessments give students an idea of how close their skills currently are to the target. That sounded more like a description of formative assessment to me, and I would like to hear more of his thoughts on this. How does that differ from formative? Maybe I'm reading too much into it.
Our moderator followed up her first tweet by saying that a summative assessment should be the only piece of evidence that contributes to a grade. I want to agree with this statement sooooo much, but in reality I differ on a matter of philosophy. I agree that summative assessments should be among the only pieces of evidence used for a grade, but I believe a separate, subjective grade, drawing on the teacher's expertise and experience in evaluating effort, should also be included to some degree. (But this is a topic for another day… hmmm, #mschat topic: What is the role of effort in grading?) Another participant described the summative assessment as a "comprehensive checkpoint." I really think this hits on the concept I agree with most. Being a "checkpoint" implies that it is not the final assessment of learning. It may be comprehensive and include all sorts of information gathered from multiple points along the way, but it is still a checkpoint. It implies that a student has the opportunity to return to this point to revise, edit, and fill in missing gaps. YAY for @DrSteveRitter!
Most teachers in the chat agreed that most of us (myself included) use summative assessments as our grades. I feel like I should be more inspiring here, but the reality is that most of our districts have policies dictating that we need to have grades. Since that is the case, we have little choice but to use the summative test/assessment/quiz/project/recital/etc. as our grades.
I wish there were a better way. Oh wait, there is. I mentioned during the chat that the University of Virginia Medical School ran an experiment with its first-year class of med students in 2007 and moved to a Pass/Fail model. Their findings were published, and the results were conclusive. Even the Wall Street Journal wrote about it. Not only did students engage in more collaborative learning without grades, but their overall performance did not slide AT ALL. Hmmmm. As of this year, most Ivy League med schools also use a Pass/Fail model. My wife graduated from Dartmouth Med School, and she and I have had many conversations over a glass of wine about why middle schools should seriously consider shifting their paradigm to this model. But once again, that is a conversation for another #mschat.