Is computer-marked assessment the answer to higher education’s challenges?

Image: Shutterstock/Kunst Bilder

At the most recent IOP Higher Education Network meeting, I led a session alongside Professor Chris Sangwin of the University of Edinburgh on computer-marked assessment.

It brought together directors of teaching and learning from higher education institutions across the UK, with the aim of discussing and sharing good practice for both traditional and computer-marked assessment.

To understand how computer-marked assessment can be implemented and used to its full potential, it is best first to consider the challenges that the HE community faces with regard to assessment.

These include a drive for standardisation of assessment across university subjects, the difficulty of providing high-quality yet prompt feedback, and the challenge of achieving homogeneity of marking when using a team of markers. As universities drive to standardise assessment, the flexibility in the types of assessment that lecturers can use, and therefore the breadth of learning that can be assessed, declines.

Providing high-quality, prompt feedback can also prove challenging. When students are surveyed, they regularly cite feedback as a high priority in supporting their learning. However, lecturers tend to find that if marks are released before feedback, many students never collect that feedback, which in turn discourages lecturers from providing personalised comments. To cope with the short timescales expected when marking large amounts of work, departments often use teams of markers; a common example is the marking of lab reports.

So can computer-marked assessment help us tackle these challenges? Computers can deal with large volumes of marking while also being viewed as objectively fair. You can also expect a fast response from a computer, and some systems can even provide tailored feedback. Additionally, computer-marked assessment creates opportunities for research on assessment, such as gathering data on whether certain questions differentiate by gender or race.
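To make that last point concrete, here is a minimal sketch, in Python, of how the response data such a system collects might be used to compare a question’s facility (the proportion of correct answers) across demographic groups. The records and their layout are invented for illustration; a real analysis would draw on an export from the assessment system itself.

```python
from collections import defaultdict

# Hypothetical response records: (question id, demographic group, answered correctly?)
responses = [
    ("Q1", "female", True), ("Q1", "male", False),
    ("Q1", "female", True), ("Q1", "male", True),
    # ...in practice, thousands of records exported from the assessment system
]

# Tally correct answers and attempts per (question, group) pair
tallies = defaultdict(lambda: [0, 0])
for question, group, correct in responses:
    tallies[(question, group)][0] += int(correct)
    tallies[(question, group)][1] += 1

# Facility index: the proportion of attempts that were correct
for (question, group), (n_correct, n_attempts) in sorted(tallies.items()):
    print(f"{question} ({group}): facility = {n_correct / n_attempts:.2f}")
```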

So why aren’t HE institutions switching over to computer-marked assessment en masse? Well, you are limited to certain question types. However, with developments in computer-marked assessment in recent years, these question types now include free text for numbers, letters, words and sentences, as well as the expected multiple-choice, multiple-response and drag-and-drop questions. It can also be a lot of work to set questions up and to give a computer system enough information to provide students with feedback. The workload decreases as lecturers reuse iterated versions of the same questions year on year, but the initial investment can be large.
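To give a feel for that setup work, here is a minimal sketch of the kind of information an author must supply so that a system can mark a free-text numeric answer and respond usefully. The question, tolerances and feedback messages are invented for illustration and are not taken from any particular assessment system.

```python
def mark_numeric(answer_text: str) -> tuple[bool, str]:
    """Mark a free-text numeric answer to 'What is g, in m/s^2?' (expecting 9.8)."""
    try:
        value = float(answer_text.strip())
    except ValueError:
        return False, "Your answer should be a number, e.g. 9.8."
    if abs(value - 9.8) <= 0.1:   # within tolerance: full marks
        return True, "Correct: g is approximately 9.8 m/s^2."
    if abs(value - 98) <= 1.0:    # anticipate a common slip for tailored feedback
        return False, "Check your powers of ten: you are out by a factor of 10."
    return False, "Not quite. Think about the acceleration of an object in free fall."

print(mark_numeric("9.81"))  # (True, 'Correct: g is approximately 9.8 m/s^2.')
```

Even this toy example shows where the effort goes: anticipating likely wrong answers and writing feedback for each of them.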

To help decrease this workload, we spoke about starting a national physics computer-marked assessment question bank, to which lecturers can contribute and from which they can draw questions. To begin with, questions will be written for the open-source Moodle platform, which supports question types such as Pattern Match, for free-text answers, and STACK, which can recognise mathematical expressions.
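STACK itself is built on the Maxima computer algebra system, so the following is not its actual machinery, just an illustration of the underlying idea in Python using the SymPy library: accept any student expression that is algebraically equivalent to the intended answer, however it is written.

```python
import sympy as sp

x = sp.symbols("x")
# Intended answer: the derivative of x^2*sin(x), i.e. 2x*sin(x) + x^2*cos(x)
teacher_answer = sp.diff(x**2 * sp.sin(x), x)

def mark_expression(student_text: str) -> bool:
    """Return True if the student's expression is algebraically
    equivalent to the teacher's, regardless of how it is written."""
    try:
        student_answer = sp.sympify(student_text)
    except sp.SympifyError:
        return False  # not a parseable mathematical expression
    return sp.simplify(student_answer - teacher_answer) == 0

print(mark_expression("2*x*sin(x) + x**2*cos(x)"))  # True
print(mark_expression("x*(2*sin(x) + x*cos(x))"))   # True: same answer, factored
print(mark_expression("2*x*sin(x)"))                # False: a term is missing
```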

If you are a director of teaching and learning, an admissions tutor, or simply interested in finding out more about the possible national physics question bank, please get in touch or let us know in the comments below. And if you are interested in other ways the IOP supports physics education research in universities, you can find more information on the HE pages of the main IOP website.

Sally Jordan

Sally Jordan is professor of physics education and head of the Open University’s School of Physical Sciences. She has longstanding interests in mathematical skills development for science students and in assessment, focusing on student engagement and automatic marking and feedback using sophisticated types of computer-marked assessment. She is also interested in conceptual understanding in physics and in demographic differences in attainment.