
Scholarly Teaching: Assessment as Research

I have been thinking lately about assessment design. Most likely because, as I write this, it's finals time and, walking through campus, I hear students agonizing over tests, culminating projects, term papers, and all manner of academic gauntlets. They are under-slept and over-caffeinated, grumbling or speaking a million miles a minute about what they have studied, what they have yet to study, what projects they have completed or are working on, and (this is what I find most remarkable) how surprised or bewildered they were by their tests. I remember this experience quite clearly.

As an undergraduate, there were tests I prepared for by indiscriminately cramming as much of the information presented in the course as I could: names, times, dates, numbers, equations, cities, and measurements, most of which were irretrievable by the end of finals week. The tests themselves felt like minefields in which I was blindsided by questions I couldn't connect to the course as I understood it. My internal dialogue went something like this: "Okay, I remember that one, okay … darn, I could answer that if I could remember that woman's name … okay … okay … WHAT? We were supposed to know that? We never talked about that! That's not in the book! That wasn't in any of the PowerPoint slides! When did we learn THAT?"

Now that I’ve taught courses of my own, I understand what a terrific miscommunication this was. As a teacher, I intended tests and other culminating projects to measure and assess my students’ understanding of, and ability to apply, the concepts we had worked with in class. But as a student I thought that these exercises were meant to measure how much I remembered and how quickly and accurately I could recall what I’d been told. And I reasoned, implicitly at least, that if the test to determine the success of the course was about remembering, that must be the purpose of the class. In other words, I thought that to learn something was to hear it and remember it.

Now that I'm an instructor, I cringe to think this is my students' understanding of what constitutes success in my courses. But if the ability to remember a slew of names, dates, and formulas, or to recall information quickly and under time constraints, is the key to performing well on my tests, then that's exactly the message I'm sending.

We all know about factors that create "noise" in our assessment data set: students' varying abilities to memorize, compute sans calculator, parse the complex syntax of multiple-choice questions, think effectively under time constraints, or stay focused in harshly lit and uncomfortable rooms. There are situations and disciplines in which these kinds of skills are necessary, and are therefore legitimate targets of assessment. For example, it's appropriate to ask medical students to memorize the major organs and their functions, or to be able to diagnose a heart attack in a timely fashion. In other situations, assessment practices that rely on these skills can be inaccurate, unfair, and, if my undergraduate experience is any indication, misleading. Still, instructors, myself included, continue to use these mechanisms to measure student success for various, and sometimes very good, reasons.

You might want to have a leisurely fireside conversation with each of your 400 students to determine whether they have progressed in the course, but unless you are allotted a sizable legion of GTFs, it's difficult to avoid the Scantron entirely. So there are always limitations, always a margin for error.

At the same time, there are compelling arguments for working to reduce this margin of error, just as we might work to reduce the margin of error in our own scholarly research. After all, isn't assessment just a term-long study of the efficacy of your instructional methods and the extent of student engagement? Why shouldn't we approach our teaching with the same methodology we apply to any important issue, question, or problem in our discipline?

I wonder how far I'd have gotten in my academic career if, every time I encountered a writer I didn't understand, I assumed they were a bad writer or hadn't tried hard enough, or if the thesis of my final project had been something like "Derrida really needs to rethink his sentence construction." How far would any physicist get if, whenever their data patterns differed from what they expected, they tossed the results? How effective an experiment would come from a question like "Does neon catalyze reactions?"

But so often, because we are tired, frustrated, overworked, and overwhelmed, we don't have the energy to approach course design with the curiosity and creativity we dedicate to our research. Which is why, despite my best intentions, I have ended up writing comments on student papers like "I don't understand this," or writing a rubric that describes an "A" thesis in language that measures student work against some unspecified ideal. Words like "excellent," "well-written," and "highly developed," without further information about which parts of the text are excellent or what qualities and components make them well-developed, are neither tremendously descriptive nor, as you might guess, useful in justifying the grade should a dispute arise. Compare that with a criterion like "the thesis makes a contestable claim and names the evidence that will support it," which tells a student exactly what to aim for.

This is not to say that everything that happens to an undergraduate is the direct result of our teaching. Sometimes they just stayed up too late playing Halo and didn't start studying until three in the morning. But it might be useful to begin with the assumption that their work contains information about their learning process, and that if we are willing to critically analyze that data, we might discover how best to help them. After all, if we ask our students to be specific, accurate, and critical, aren't we accountable for applying the same standards to our teaching?

Approaching teaching as research is also an excellent way to teach by example. Many of the skills we want students to learn are founded on rigorous critical thought, so shouldn't we apply the same skills and thought processes we are teaching to our methods of assessment? In my own classes, I try my best to do this when I comment on student writing. I wouldn't want students to use slang in their papers because it would be inappropriate and difficult to understand; for the same reasons, I don't write comments heavy with pedagogical jargon. I ask them to make arguments based on common assumptions and understandings, arguments that work within the constraints of informal logic, arguments that use examples and specific references to the texts we've read. And in my responses to their papers, I try to make my comments clear, logical, and specific to the text. I try to make the best argument I can for my evaluation of their argument.

Though this particular example is best suited to the humanities, the principle is applicable to almost any discipline. If you are in the sciences, for example, you can use the scientific method to design your assessment. (Proponents of the scientific method, from what I've heard, are big on clarifying questions and eliminating extraneous variables from measurement mechanisms.) You might approach multiple-choice questions or final projects the same way you might devise the most efficient and accurate way to measure pheromone levels in zebrafish or the radioactivity of a waste material. Consider also that when you talk to your students about your assessment process, you demonstrate how the skills required to practice your discipline (critical thinking, good communication, reading comprehension, experiment design, etc.) can be very useful for answering questions and solving problems outside the confines of the course topic.

If you'd like more information on approaching your own teaching as research, you can:
• Contact TEP
• Go to the Brand Spankin' New "Assessment Design" wing of our website, which will guide you through designing effective and relatively pain-free assessment strategies.
• Sign up for Georgeanne Cooper’s series on “Course Design By Objectives”
• Come to my workshop on “Writing Rubrics for Difficult to Define Skills”
