Messing up the test: the next installment

In the last few years I've been experimenting with the way I test our 3rd year mechanical engineering students in their 'Dynamics and Mechanisms' paper. I've chosen this paper because (a) it has more than a handful of students, and (b) I am in charge of it. When I've suggested to my peers that I do something similar with other papers I teach on (but am not in charge of), a "Don't you dare" tends to ring out rather clearly in response. So Dynamics and Mechanisms it has to be.

I've tried 'tests you can talk in', with mixed success. This year, I tried an oral test. That involved giving every student a personal, fifteen-minute interview. I had the idea from an article I read which talked about the problems with traditional written assessments and discussed other possibilities.* It's not a new article, but then universities have a lot of inertia, so it's no surprise that the troublesome written assessment still seems to stand as the perceived 'gold standard' for assessment at university.

But an oral test was a big risk for several reasons. First, I had to declare what form the test would take on the paper outline (an official summary of what the paper involves) well in advance of it starting. This meant that I didn't know how many students there'd be, and how much work it would involve. I was expecting, based on previous years, something around forty students. I watched in horror in the week before semester as the student enrolments climbed well beyond this. In the event I had 55 interviews to do, last week and early this week. That was a high workload, fitting everything in amongst my other lectures and commitments. That said, preparing and marking a written test is a pretty demanding exercise time-wise as well. But I wouldn't want to repeat the exercise with a bigger class.

Then I had to convince students that this was actually a reasonable thing to do. I spent an entire lecture session discussing how the test would run. It was the best attended of any of the lectures in this paper! If that's not proof that students are motivated by assessment, I'm not sure what is. The feedback I've had so far has been mostly positive, which is reassuring, although there are some things that in hindsight I could have done better.

Then, what if it all fell apart? What if I were sick? (I had no back-up plan here.) Or what if students, for whatever reason, got the wrong idea of what was required? (I had given them a task to do beforehand which we'd talk about as part of the interview.) In the end, there were no such problems, but there could have been.

So how did students do? I've had a number of positive comments (plus some negative ones) about how students felt that the oral test got them better prepared and engaged with their learning beforehand than a written test would have. That was part of the plan!

Also, from my point of view, I got to learn just what it was that the students had learned. The breadth of the learning took me by surprise. My first question to all of them (which they were expecting, because I'd told them) was "tell me about something you've learned in this paper". I had my own preconceived ideas about what they'd all choose, but I was wildly mistaken. Between them the students covered just about the whole paper, sometimes in accurate detail, including bits that I thought I'd glossed over. Aspects that I thought were really difficult were in fact grasped really well. Conversely, when I started to ask some probing questions, some things that I thought were straightforward proved to be misunderstood. That feedback is more useful to me than years' worth of student appraisal questionnaires, and that reason alone is enough for me to view the oral tests as a success.

So what happens if I get 70 students next year?

*Biggs, J. (1999). Teaching for Quality Learning at University (pp. 165–203). Buckingham, UK: SRHE and Open University Press.
