In this post I have an argument with myself about whether I will use PeerWise. I'm not sure which side wins...
PeerWise is fast replacing clickers as the educational technology of the moment. Its growing popularity is buoyed by impressive data: high usage by students, and improved examination scores for students who engage heavily with the online quizzing platform. If you are new to the idea, I would recommend the article by Bates and Galloway in Education in Chemistry to find out more.
Despite all the good news about PeerWise, I still haven't tried it out. I've ventured as far as creating an instructor account. On Monday I attended an excellent workshop where I acted as a student creating questions and answering/rating other questions (the latter is addictive). It prompted me to wonder what my hesitations are about using this technology, and so below I have listed out some reservations in the hope that they challenge my own thinking and invite contributions from others who have used the system. I do intend this to be a positive exercise, and in looking at my reservations, I think they probably say more about me and my perspective than they do about the technology!
1. Quality Control:
My first concern is a common one. Reports and presentations about PeerWise show that it is extremely popular with students: even relatively small groups (e.g. 50 - 75 students) can generate a thousand questions with apparent ease. Obviously a lecturer cannot review all of these questions, and therefore it is not clear whether some questions are incorrect, or whether they reinforce common misconceptions.
Bates and Galloway address this issue in their EiC article. Their approach is to invest a lot of time in teaching students how to design suitable questions (and answers/distractors), and they describe their analysis of the taxonomical level of questions written by their students, which were predominantly good and often sophisticated. (Many of their suggestions on how to design MCQs would be valuable for staff development too!) In addition, they remark that students are quick to use the comments section to refine explanations or correct mistakes. This was also observed by Suzanne Fergus in her recent presentation at iVice (summarised here); she demonstrated that the comments section was a place where some useful discussion could take place.
I take these points on board, but to wriggle out of the argument, I add: if the value lies in writing good questions with sensible distractors, and in answering and critiquing other questions, why the focus on large numbers of questions? Why not instead ask students to create a small number of high-quality questions on which they will be graded? Does the emphasis on "gamification" and sheer question volume push the quality down?
2. Plagiarism:
Given the number of questions generated, there is also a difficulty in assigning marks to student work when you don't know whether a question has been generated independently. What if a student copies another student's question, or copies a question from an online answer site? In principle a student could submit lots of plagiarised questions and get quite a good mark.
There appear to be two schools of thought on this. The first is that one can turn a blind eye to it: the assessment mark is low enough not to warrant much attention, and the process of finding a relevant question elsewhere and compiling it into the module account is informative in its own right. I can't say that I, or many others, would agree with this. The second is that the student will more likely be caught in the act by other students, who may flag the question as inappropriate. In fairness to PeerWise, plagiarism is a much wider issue and likely applies to other forms of assessment.
3. Assessment:
Probably the most significant barrier to my own uptake of PeerWise is a vagueness about assessment, which to this very interested observer appears to be a bit of a black art. Students get contribution scores, based on a minimum expected contribution. For example, they might be asked to author, answer, and comment on a particular number of questions. Doing this achieves a pass mark. This is then supplemented by an extra effort mark - the PeerWise mark. I'm not too clear on how this is generated; it appears to be based on the additional number of questions authored and answers completed, and the difficulty of these as judged by the community. There may be grading on a curve (e.g. the top contributor gets 100%) or grading in bands (achieve a particular quota and get a particular grade). Again I wonder about the emphasis on quantity. I think I could consider a system where a student had to submit to the lecturer their top three questions, or examples of where they added clarity to a suggested answer, for the lecturer's consideration. This may also address the plagiarism issue above.
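To pin down the kind of scheme I have in mind, here is a toy sketch of a band-based contribution mark. To be clear, this is entirely my own invention for illustration, not PeerWise's actual (unpublished) scoring algorithm; the quota values and band sizes are arbitrary assumptions.

```python
# Toy sketch of a band-based contribution mark.
# NOT the real PeerWise algorithm - an illustration only.

def contribution_mark(authored, answered, commented,
                      quota=(2, 10, 5), pass_mark=40):
    """Return a mark out of 100.

    quota: minimum (authored, answered, commented) counts that earn
           the pass_mark; extra activity moves the student up through
           fixed bands rather than being graded on a curve.
    """
    min_auth, min_ans, min_com = quota
    # No pass until every element of the minimum quota is met.
    if authored < min_auth or answered < min_ans or commented < min_com:
        return 0
    # Extra effort: total activity beyond the quota, in bands of 10.
    extra = ((authored - min_auth) + (answered - min_ans)
             + (commented - min_com))
    bands = min(extra // 10, 6)          # cap the bonus at 6 bands
    return min(pass_mark + 10 * bands, 100)

print(contribution_mark(2, 10, 5))    # quota exactly met -> 40
print(contribution_mark(1, 50, 50))   # quota not met -> 0
print(contribution_mark(5, 40, 12))   # 40 extra -> 4 bands -> 80
```

One appeal of a banded scheme like this over a curve is that a student's mark depends only on their own activity, not on how prolific the top contributor happens to be.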
Of course there is nothing to stop this happening with the current system, so perhaps this might be my first step to loosening the apron strings on content control, which in honesty is probably at the root of all my fears! Those with more confidence in the system will likely have the last laugh as I trawl through hundreds of questions, trying to assign grades, which will probably not differ very much from the PeerWise assigned grade. Maybe then, I'll let go!