I have left this survey open as uptake has been slow, and I have encouraged the students once again to give their opinion. I think students at UTS are surveyed out. Nevertheless, 25% have responded so far, and during tutorial discussion yesterday, when I engaged them in conversation about flipped and student-centred learning, I got the rest of the feedback I was seeking. I discussed the survey responses with the students. I began with the ticks (self-congratulation): a clear majority of the students are enjoying the subject so far and agree that it is different from most of the study experiences they have had at UTS; there is unanimous opinion on the positive value of online, retrievable lectures and the mix of online and face-to-face; near-unanimous student opinion in favour of designing their own test questions; unanimous opinion on the utility of a WordPress blog site to teach the subject; students also like the easy toolbar links at the top of the WordPress blog site; group work feedback is mostly positive, but more work needs to be done. Crosses or mixed responses: the tutorial dynamic needs more work, more structure, and much more student involvement and control; group work was likewise mixed: a couple of students didn’t care for it.
An in-depth collection of responses is in the attached Word doc: Survey Monkey results as of 26 Oct 14.
I followed up and asked them face to face yesterday what they thought worked best for tutorial dynamics and group work, based on their experience of the numerous subjects they have taken at UTS over the last 3 years. Some responses are trickling in: “Asking students to send questions based on the week’s reading to you before the tutorial is a good idea, but make the activity of how you use them in the tutorial a little more interesting. Some options: every student brings in a question or two, they go into a hat, and then students pick out a question that is not theirs to discuss / respond to, individually or in small groups. Questions that students send should be more like statements that invite an agree / disagree response; then in class we can have mini debates. This will force us to think and come up with a concrete argument to put forward”.
“Completing the individual and group components of the assignment simultaneously can be difficult, because we are not all sure of the conclusions we are going to come to after our research, and if they are not what we expected / planned for with the group work, that throws the group component out. Suggestions to change this would be to: a) make the group component due a week later so that students have time to reconvene AFTER their research is complete, and bring their findings to the table to work with the group then; OR b) make the group components due first; students are given / decide on a more general topic, then after working together on the broader topic they split up and look at different individual aspects”.
“I have enjoyed the content of this subject this semester. I’ve found every week to be very interesting. As one of the girls was saying today, I would like to have more of a class discussion and argument about the readings, whether someone is allocated a week to lead discussion or a question is posed to the class every week and it is discussed. I also think that the idea of a paragraph summary worth two marks each week is a good idea, as I believe more students will be inclined to read the set readings for that week without being ‘forced’ into doing so”.
I threw a hand grenade into the tutorial discussion today to see how students would react and to get them thinking about what the goal of learning actually is or might be: “what do you think of the idea of giving students a flat HD (85/100) at the start of the semester (or some other arbitrary mark above 50%) so we get the messy, partially subjective evaluation of their assignments out of the road? It would of course come at a price: you would have to sign a ‘learning contract’ with me which obliges you to attend every tutorial. If you miss a tutorial for any reason (illness, laziness, other priorities) and do not want to lose 5 marks and thus drop below a HD, you need to make up the lost tutorial somehow and to my satisfaction: for example, arrange to spend an hour with me in my office going over the tutorial information for the missed week and demonstrate what you have learned in absentia; or submit an extra small assignment based on that week’s pre-class learning materials and tutorial discussions. In addition, you need to do the set assignments and demonstrate that you have sufficiently understood and achieved the key learning goals”. Sub-standard work could be critiqued and sent back to students to re-consider and re-work, leading to re-submission. If students do not meet the minimum standards of engagement and demonstrated learning outcomes, their marks could be progressively reduced from a HD. Since UTS claims it does not apply a bell curve (Gaussian distribution) for marking (a predetermined percentage of students obtaining each grade), the experiment would not upset anyone except traditionalists.
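To make the arithmetic of the proposal concrete, here is a minimal sketch of how the learning-contract marking might work. The function name, parameters, and the single-step reduction for sub-standard work are my own simplifications for illustration, not anything settled in class or any UTS policy.

```python
# Illustrative sketch of the "flat HD" learning-contract idea.
# All names and numbers here are my own shorthand.

STARTING_MARK = 85            # flat HD awarded up front
MISSED_TUTORIAL_PENALTY = 5   # marks lost per unredeemed absence


def final_mark(absences: int, redeemed: int, standards_met: bool) -> int:
    """Compute a mark under the hypothetical learning contract.

    absences: tutorials missed for any reason.
    redeemed: absences made up to the teacher's satisfaction
        (an office-hour session or an extra small assignment).
    standards_met: whether the set assignments demonstrated the key
        learning goals (sub-standard work is reworked and resubmitted;
        failure to meet minimum standards reduces the mark).
    """
    unredeemed = max(0, absences - redeemed)
    mark = STARTING_MARK - MISSED_TUTORIAL_PENALTY * unredeemed
    if not standards_met:
        # "Progressively reduced" in the post; simplified to one step here.
        mark -= MISSED_TUTORIAL_PENALTY
    return max(mark, 0)
```

For example, a student who missed two tutorials but made one up, with assignments up to standard, would sit on 80 rather than the flat 85.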
In the end, WHAT ARE WE TRYING TO ACHIEVE? Make students leap through a flaming hoop and then mark them up or down accordingly? Or are we testing to see if growth in self-learning behaviours (self-reflection, self-assessment) has occurred? I realise this idea would probably not translate across from Humanities/Social Science disciplines to other disciplines, especially ones that require exact replication of teaching materials (nursing principles on which people’s lives depend, engineering principles which determine if bridges stay up or fall down, etc), but the idea is to move away from producing parrots and instead allow much more creative, independent learners to blossom. Is the goal to differentiate between students based on marks, or to bring them all to a level of self-understanding and development which enables them for employment and life-long learning? Outstanding students will differentiate themselves, so we don’t need to fetishise marks. When students go for jobs, the interview process focuses on way more than just the grades they got at university (which may only indicate the ability to parrot back and to cram for exams). I admit that competition for marks using a bell curve creates incentives, but not for everyone, and what kind of incentive in the end? Competition does not necessarily foster team work, communication and solidarity with peers, which seem to be part of our ideal graduate attributes. How did students respond to this idea?
The first group was initially shocked, then warmed to the idea, but remained non-committal, though some thought it would be an interesting experiment. Students (as with teachers) have been conditioned their whole lives to work with the individual, competitive, bell curve model. The second tutorial group was also intrigued, though there was one visceral and vociferous negative reaction: “it devalues high grades!”. To which I responded: “But what do high grades really represent? The ability to parrot back and to cram for exams and tests? Is that a satisfactory learning outcome within what UTS is trying to achieve with student-centred learning?” Another student thought it was a good idea, but with a lower starting point: “start at a mid-credit – 70% – then work up or down depending on performance”. Yet another asked: “what about pushing the starting mark up to higher than 85/100 if it is deserved?” “Sure”, I responded. Anyway, just an idea.
I indicated to the students in the subject outline at the start of the semester that they would co-write the test questions. I think they were initially taken aback – unknown territory – but survey results tell me they support the idea and have embraced it. We are currently finalising the test questions. We work-shopped them today. Initially they seemed incapable of generating the questions – they weren’t sure what kind of questions I was angling for. I told them to think of questions that they think are relevant to testing the information they have gathered and the learning that has occurred – questions they think they could answer in the test about aspects of the course learning materials that have sparked their interest or which have “stuck”. So to kick things off I put up on the screen the test questions from last year, and gradually they gained the confidence to re-fashion some, eliminate others, suggest new ones, etc. We will end up with 6 questions. I will choose 4, but they will not know which 4. In the test they will choose 2 questions they think they can best answer. The idea is to remove exam anxiety and to test whether interested learning has occurred – not make them jump through a flaming hoop and punish them if they can’t cram. One student suggested that next year I assign one or two students each week to come up with a question that arises out of that week’s pre-class learning materials. We then workshop that question in the tutorial, and it joins an ongoing bank of questions that would be possible candidates for the end-of-semester test – not a bad idea. This move on my part is consonant with the idea of students co-generating content for greater buy-in, and so there is a more organic, student-centred linking of content, assessment and student motivation.
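The 6 → 4 → 2 selection procedure above can be sketched in a few lines of Python. This is purely illustrative – the function and its parameters are my own invention – but it makes the structure explicit: the teacher secretly draws the paper from the workshopped bank, and each student then answers the subset they feel best prepared for.

```python
import random


def set_test_paper(question_bank, rng, n_on_paper=4):
    """From the ~6 workshopped questions, the teacher picks 4 for the
    paper (students don't know which 4 in advance). In the test itself,
    each student answers the 2 they think they can best answer."""
    return rng.sample(question_bank, n_on_paper)
```

Because the students co-wrote every question in the bank, any 4 drawn from it should feel familiar, which is the point: lower exam anxiety, same test of whether interested learning occurred.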
I now regard my WordPress blog site, from where I teach and engage with my students, as a shared, dynamic, evolving, integrated network hub co-developed by myself and the students. I know that may sound pretentious, but that is how I feel. I realise now that my flipped learning experiment is thus just a variant and practical application of a more over-arching philosophy of student-centred learning, in which students are enabled and encouraged to get involved in the subject design and evaluation. A future challenge is to integrate even more student flexibility and freedom, and thus a more personalised learning experience.
I notice that as the end of semester nears, tutorial attendance and engagement with pre-class learning materials are once again waning, though certainly not as badly as in the pre-flipped-learning days. This is a perennial problem. What to do? In the penultimate post, I said: “One idea we came up with for next year is to require each student to submit to me via email, at least 2 days before class, a question/query on the pre-class learning materials accompanied by a short paragraph explaining what they don’t understand. Students get 2 points for this each week over 10 weeks, which equals 20% of their final mark”. That’s fine for guaranteeing pre-class engagement with materials, but whether it is done via email or a program like Annotate, it does not require tutorial attendance. I asked the students what would best keep them turning up to tutorials: “make it so that students can only submit the question and short paragraph in hard copy, in person, at the beginning of the tutorial”. Good idea. That’s almost equivalent to marking attendance, but without doing so. I need to keep on thinking about this.
I asked students today how they were going at this time of the year juggling two degrees and various assignment deadlines, preparing for In-Country Study next year, working part-time, etc. They said lecturers have a tendency to set assignments at the same time near the end of semester, which makes for stress and sub-standard performance. Since there is little prospect of lecturers changing assignment deadlines just to suit clashes with other subjects (past experience tells us that), I asked students to suggest a solution in the context of flipped learning and students taking co-charge of their own learning. At present the assignment deadlines are:
1. Cultural case study – due end of Week 7
2. Critical literature review – due end of Week 11
3. In-class test – Week 13.
Students suggested moving the lit review deadline to the end of Week 5 and pushing the cultural case study back to Week 9. The lit review also overlaps with the cultural case study in that both require finding and properly referencing scholarly sources. I will go with the students’ suggestion.