
I’m sure we can all recall a moment in our lives when we have been “assessed” and a judgement made on our learning in some way. Whether it’s a spelling test in the early years of your education, a driving test (theory or practical), or an exam or essay at school or university, it’s fair to say there’s quite a lot of assessing going on in our lives (formally and informally).

In the context of (UK) Higher Education the majority of this is an ‘output’ measure; by this I mean the assessment usually comes as a result of some learning that has taken place and as such is an output of that learning, a way to demonstrate what has been learnt (or in some cases just what knowledge has been retained). This in turn means that almost all methods of assessment focus entirely on the knowledge gained as an output measure, with little consideration for the ‘how’. It also means that contact time between learner and tutor is directed more or less solely at “what” needs to be learnt and less at how to go about learning it.

What if we designed assessments to measure more of the ‘input’ rather than the ‘output’?

NOTE: This blog post has been in my draft folder since November 12th 2022. Everything above this block was written on that date; everything below was written on December 6th 2022.

The reason I have separated this blog into two parts is to do with ChatGPT. When I started writing this post I hadn’t come across the AI-driven chat system, but since then it has exploded onto the scene. One area of concern that has been raised is its ability to write or structure essays.

Personally I’m not a huge fan of essays as an assessment (especially where the essay is often the only form of assessment on a course), as its function beyond academic assessment is limited (it’s rare that you’ll use the essay as a form of output beyond academia). However, it is widely used (mainly in humanities subjects) as a critical form of assessment, making up a significant number of marks towards a degree programme, so an AI platform that can write essays for students has obviously scared the heebie-jeebies (feel free to look that up) out of some academics who rely on the essay as a form of assessment.

For me though, it’s a wake-up call and an opportunity to rethink our approach to assessment. In the original blog post I was proposing that we’d be better off assessing the process (input) rather than the output, and ChatGPT just adds weight to this argument. For too long we have been measuring the output of learning (or in some cases the regurgitation of knowledge), but what matters most are the steps taken towards the output, the approaches taken and the journey experienced.

I welcome ChatGPT and its potential to reshape our understanding of what it is to be human. If an AI system can produce the same outputs as a human student, that just goes to show that what we are asking our students to do holds limited value for their future endeavours. Now is a good moment to ask two things:

  1. How can we make ChatGPT and other AI tools part of our teaching, learning and assessment tasks?
  2. How can we redesign our assessments to make them human-purposeful, that is, design them in such a way as to value the uniqueness of human capabilities?

23 Responses

  1. The way a surgeon learns their skills is through the process, not the output. The reason they can do the job is through practising the process. In a medical assessment scenario the process is as important as the outcome (it’s continuous).

  2. There are many assessments that are about process but are badly designed, and while there were previously no easily accessible means to shortcut them, that is no longer always the case. (Blog post coming on Means, Motivation and Opportunity – and including a composite narrative)


