MOOCs, peer marking and reputation – a placeholder post

I’m hastily blogging this ‘placeholder’ idea before I forget about it in the whirlwind that is #et4online.

The EDCMOOC teaching team has been discussing how to make the peer assessment process better. One thing we know we want is for people to be able to give feedback on the feedback they receive from peer markers.

At the same time I’ve been reading Accelerando (Charles Stross) – part of the premise of that book is a future society based on a reputation economy (Cory Doctorow writes about this as well in Down and Out in the Magic Kingdom – and maybe others that I can’t remember at the moment).

So, the bare bones of the idea as it sits at the moment is to:

a) let people gain reputation throughout the MOOC, and display this next to feedback they provide on peer assignments, so that those receiving the feedback would have one way of ‘reading’ that feedback.

b) give people with high reputation scores the ability to vet/filter/’assess’ peer feedback before it is delivered – perhaps returning comments to the feedback provider, or even asking them to expand, or rephrase…
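To make that slightly more concrete, here’s a very rough Python sketch of (a) and (b). Everything in it – the thresholds, the single-number reputation score, the names – is a placeholder assumption on my part, not a design:

```python
from dataclasses import dataclass

# Entirely made-up thresholds – real values would need working out.
RELEASE_THRESHOLD = 20   # markers below this have their feedback vetted first
VETTER_THRESHOLD = 80    # participants above this may act as vetters

@dataclass
class Marker:
    name: str
    reputation: int = 0   # accumulated across MOOC activity

@dataclass
class Feedback:
    marker: Marker
    text: str

def display_with_reputation(fb: Feedback) -> str:
    """Idea (a): show the marker's reputation alongside their feedback,
    so the recipient has one way of 'reading' it."""
    return f"[{fb.marker.name}, reputation {fb.marker.reputation}] {fb.text}"

def can_vet(marker: Marker) -> bool:
    """Idea (b): only high-reputation participants get to vet feedback."""
    return marker.reputation >= VETTER_THRESHOLD

def route_feedback(fb: Feedback, vetting_queue: list) -> bool:
    """Idea (b) continued: hold feedback from low-reputation markers
    for vetting before it reaches the recipient. Returns True if the
    feedback is released immediately."""
    if fb.marker.reputation >= RELEASE_THRESHOLD:
        return True
    vetting_queue.append(fb)   # a vetter may approve it, comment, or return it
    return False
```

The vetting queue is where the interesting pedagogy would sit – a vetter approving feedback, commenting on it, or sending it back for expansion or rephrasing.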

Challenges I can think of immediately include:

– how can all the activity of the MOOC (which for us includes a lot of social media, blogging and Twitter activity) contribute to a reputation score?

– how can a reputation score be meaningful in learning terms? Could people gain reputation on a number of metrics (constructive; challenging; insightful; knowledgeable)? Need to find out more about different approaches to this…

Amy Collier (sitting at the table across from me at this moment!) tells me that Venturelabs is working with reputation in their group-based MOOC platform, so there is a basis for this in MOOC developments.

Ideas or comments very welcome…

6 thoughts on “MOOCs, peer marking and reputation – a placeholder post”

  1. One of the challenges of peer assessment in MOOCs seems to be that students of very different levels get matched by the system, so that a very experienced student receives feedback on his/her work from a much less experienced student. That may be a good experience for the novice student (or not, depending on how well structured the peer assessment activity is), but it is not a particularly useful way to give the more experienced student feedback.

    What if students could be matched up for peer assessment purposes in some adaptive way, perhaps using reputation scores? For instance, you might ensure that if a given student has a high reputation score, at least one of their peer reviewers has a comparable reputation score. That way, the students with more experience / skills / whatever have a better chance of receiving really useful feedback.
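    Something along these lines, perhaps – a rough Python sketch in which the pool structure, tolerance and group size are all invented placeholders:

    ```python
    import random

    def choose_reviewers(submitter_rep, pool, k=3, tolerance=15):
        """Pick k reviewers from `pool` (a list of (name, reputation) pairs),
        guaranteeing at least one whose reputation is within `tolerance`
        points of the submitter's. Falls back to purely random matching
        when no comparable reviewer is available."""
        comparable = [r for r in pool if abs(r[1] - submitter_rep) <= tolerance]
        if not comparable:
            return random.sample(pool, min(k, len(pool)))
        first = random.choice(comparable)
        rest = [r for r in pool if r != first]
        return [first] + random.sample(rest, min(k - 1, len(rest)))
    ```

    So a submitter with a score of 72 would always draw at least one reviewer scoring between 57 and 87, if one exists in the pool.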

  2. Jen, do you know when the MOOC is likely to be offered again? Hope it will be soon – several of us on the staff are interested in taking part. Good that you’re thinking things through so carefully.

  3. thanks for this, Derek. I found this article: http://www.theaustralian.com.au/higher-education/hail-caesar-when-peer-review-meets-crowdsourcing/story-e6frgcjx-1226552249240

    Of particular note:
    “Rather than randomly selecting reviewers for each section of code, Caesar considers a reviewer’s role – the reviewing pool for Miller’s class includes current students, alumni, and graders – and reputation…. [the software] might assign a chunk of code to someone with a particularly high reputation score, so the student who submitted the code receives valuable feedback, and to someone else with a low reputation score, so that person can learn from the comments left by the more skilled reviewer.”

    So adaptive matching could be quite sophisticated. I like the idea of ensuring that everyone gets feedback from at least one person who has a score indicating they have been actively engaged in, and constructively contributing to, the MOOC.
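    Just to think it through, the quoted Caesar approach might look something like this – the cut-offs and data shapes are pure guesswork on my part, not how Caesar actually works:

    ```python
    import random

    def caesar_style_pair(chunk, reviewers, high=80, low=30):
        """Pair a chunk of work with one high-reputation reviewer
        (so the submitter gets valuable feedback) and one low-reputation
        reviewer (so that person can learn from the stronger review).
        `reviewers` is a list of (name, reputation) pairs."""
        strong = [r for r in reviewers if r[1] >= high]
        learners = [r for r in reviewers if r[1] <= low]
        pair = []
        if strong:
            pair.append(random.choice(strong))
        if learners:
            pair.append(random.choice(learners))
        return (chunk, pair)
    ```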

    The other thing I was thinking of yesterday that I forgot to mention was the “distributed proofreaders” project – http://www.pgdp.net/c/ . This had (maybe still has?) a layering mechanism where the work of newer proofreaders was checked and approved by more experienced proofreaders before it went into the database. That process could include feedback to the novice about their use of markup, for example. Could something like this work in a peer marking process?
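    A crude sketch of that kind of layering – all the names and callables here are invented for illustration, not based on how pgdp.net actually works:

    ```python
    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class CheckResult:
        approved: bool
        notes: str = ""

    def layered_review(items, novice_review, experienced_check, send_feedback):
        """Two-layer flow, loosely modelled on Distributed Proofreaders:
        a novice works on each item first, then an experienced participant
        checks it before it is accepted. Rejected work cycles back with
        the checker's notes, so the novice learns from the check.
        (A real system would cap how many times an item can go round.)"""
        accepted = []
        queue = deque(items)
        while queue:
            item = queue.popleft()
            draft = novice_review(item)
            result = experienced_check(draft)
            if result.approved:
                accepted.append(draft)
            else:
                send_feedback(draft, result.notes)
                queue.append(item)
        return accepted
    ```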

  4. hi M-H – we think November, but will be confirming as soon as possible! It’ll be great to have your colleagues on board.

  5. Hi Prof Jen

    I am one of the “alumni” students from the EDCmooc course. I thoroughly enjoyed it, but, alas, I was sad to read about some of my peers’ horrible experiences with peer reviews of their assignments.

    I am currently doing The Beginner’s Guide to Irrational Behaviour by Prof Dan Ariely (Duke University) and, having now been in 3 MOOCs (4 with EDCmooc), I have noticed consistent problems with peer reviews: the quality of comments, the ratings, the inability to respond to or clarify feedback, etc.

    Inspired by my own experiences and those of fellow students, I am writing a short paper (it’s my assignment for Irrational Behaviour) on the problematic behaviour of peer reviews and a possible solution, using the principles of Behavioural Economics.

    Thank you for your blog post and the link to the article. I hope I can write something useful for future MOOCs to consider.

    Yogita

  6. hi Yogita – thanks so much for your comment. I am aware of some of the bad experiences that people had with the peer review process. It’s an interesting phenomenon, because I have also noticed that where people have shared their work publicly, the response has been constructive – perhaps the anonymity of the peer marking process, and the lack of a mechanism for the reviewed person to respond, foster an uncollegial environment.

    Please do come back and leave a link to your paper when it is complete!
