Press "Enter" to skip to content

Tag: Assessment

AI and Assessment Workshop

Perplexity AI User Interface
Screenshot of Perplexity search options

Today I attended one of our own AI and Assessment Workshops to see what advice and guidance we are giving to academics, and what their feelings and needs are around this topic. This is a new run of sessions which we have just started, organised by one of our academics working in this area alongside a member of my team.

Despite having published staff and student guidance documents and a dedicated SharePoint space to collate resources and our response, I found from conversing with staff at this event that there is still a prevailing feeling of a lack of steer and direction. People told me they don’t know which tools are safe to use, or what students should be told to avoid. We also had a lot of people from the Library Service today, which is perhaps further evidence of the need for firmer student guidance.

I was pleased to note that some good practice is filtering through too, such as a quiz-based declaration of use which students have to complete before unlocking their assignment submission link. At one academic’s suggestion, we talked about adding this to our Canvas module template for next academic year. On the other hand, I found people still talking in terms of ChatGPT ‘knowing’ things, which is troubling for the implication that these systems are more than they actually are.

While much of the session took the form of a guided dialogue, my colleague also provided a hands-on demo of various systems, including Perplexity. People liked that it provides links out to the sources it has used (sometimes, not always) and that answers can be restricted to data from specific sources, such as ‘academic’, but they noted a very US-centric bias in the results, a consequence of the training data which has gone into these models. I was quite impressed when I tried to ‘break’ the model with leading prompts and it didn’t indulge me.

A new tool to me was Visual Electric, an image generation site aimed at producing high quality photo-like images. I have thoughts on some of their marketing… But I’m going to try and be more positive when writing about this topic, as I find it very easy to go into a rant! So instead of doing that, I have added a short disclaimer to the bottom of this post, which I’m also going to add to future posts which I write about AI.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing real people.

UK HE’s Thoughtful Response to Robot Writing

Screenshot of three Borg drones from Star Trek
Am I implying an equivalence between The Borg and ChatGPT?

There’s no escaping the robots; resistance is futile. ChatGPT has been a gathering storm since the back end of last year, and Sunderland cannot escape the pull. However, we need to learn more about this and related technology in order to provide a thoughtful and measured response to it for our staff and students. To that end, I signed up for this session drawing together senior academics from across UK HE to share thoughts and experience. I have a few more such sessions coming in the next few weeks, so I’ll wait and share my thoughts in a dedicated post when I have the time and space to synthesise them.


ALT NE User Group: November 2022

Photo of the Owl microphone and camera in action
Stock photo of the Owl mic

And lo! November 2022 did bring forth the first proper ALT North East User Group since The Before Times – though we did have a catch-up meeting in January to check in and talk about how the pandemic had affected us all.

I was unable to make any of the management meetings to help organise and set the agenda, and so was duly punished by being put up first to give my now almost-routine talk about how our pilot year with Studiosity has gone.

Next up was Newcastle University and how they have rolled out digital assessment. Interestingly, they decided not to implement any kind of online proctoring software over the pandemic, a decision I very much support. They have been using, and are now scaling up, Inspera for in-person exams. This was chosen over alternatives for its ability to save local copies of exams – which it does every 6 seconds – as a contingency against network outage; in extreme cases the copy can be retrieved from the computer as an encrypted file and uploaded on the student’s behalf. They are using a bring-your-own-device model, with power supplies available for around 10% of exam room capacity and a laptop loan scheme covering 5%, which has been sufficient. For improved convenience, they are now looking at providing portable power banks rather than running extension cables around the room.
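Out of curiosity, here’s a minimal sketch of what that kind of periodic, encrypted local autosave might look like. This is purely my own illustration – Inspera’s client is closed source, so only the 6-second interval comes from the talk; the Fernet encryption, JSON answer store, and file name are all my assumptions:

```python
import json
import threading

from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical sketch only: not based on Inspera's actual implementation.
# Answers are encrypted before touching disk, so a recovered file is
# useless without the key held by the exam service.
key = Fernet.generate_key()  # in reality the key would be managed server-side
cipher = Fernet(key)

answers = {"q1": "B", "q2": "Erosion is the gradual wearing away of..."}

def autosave() -> None:
    """Encrypt the current answers, write them to local disk, and
    reschedule, so a copy survives any network outage."""
    payload = json.dumps(answers).encode("utf-8")
    with open("exam_autosave.bin", "wb") as f:
        f.write(cipher.encrypt(payload))
    # Repeat every 6 seconds, matching the interval described above;
    # this runs until the process exits.
    threading.Timer(6, autosave).start()

autosave()
```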

Next, my old muckers from Northumbria talked about their digital literacy scheme which sees TEL colleagues mentoring staff on digital technologies, and an expanded IT Place which now features TEL as well as IT staff, supported by a range of asynchronous content with certificates for staff who complete set courses. They are looking at digital badges to replace or complement this moving forward.

After lunch, Durham talked about their experience of dual-mode teaching, including the use of Owl telepresence devices, as featured in the pic above which I gratuitously pinched from their website (please don’t sue, I have no money). It was an interesting, if mixed, experience. A conclusion from the learning technologies team was that the Owls were great for meetings and small rooms, but the mics and cameras weren’t up to the job in larger teaching spaces. That didn’t stop their IT department from purchasing them en masse and kitting out every room though! Ah, classic IT.

Finally, we ended with a roundtable discussion on the use of student data. Again, I feel Newcastle are ahead of the curve here, having banned the use of predictive analytics outright. Durham talked about their experience of the Blackboard feature which allows automated messages to be sent to students based on performance – they turned it off. They felt it was problematic for student motivation, as the messages didn’t provide sufficient (any?) contextual information for students.


Moodle Munch: Nov. 2021

Learner Analytics Data Sources
An overview of the various data sources going into DCU’s learner analytics system

They’re experimenting with the format a little, as this week saw three 5-minute talks from different departments on the subject of peer assessment. One theme that came out of all the sessions was the need to use anonymised marking to give students confidence in the fairness of the process. I was particularly interested in Robert Gillanders’ experience of using negative marking as a motivator – for every 3% that students deviated from the mean in their peer marking, their own grade was reduced by 1%. I’m very curious about how this worked in practice and how ethical considerations were handled; Gillanders has published a paper on this which I’m going to have to read.
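For concreteness, here’s a quick sketch of how I read that penalty scheme. The 3%-to-1% ratio is from the talk; the function name, rounding behaviour, and example figures are my own assumptions:

```python
def peer_marking_penalty(peer_mark_given: float, mean_peer_mark: float) -> float:
    """Grade reduction under the scheme as I understood it: 1 percentage
    point off the marker's own grade for every 3 percentage points they
    deviate from the mean of the peer marks."""
    return abs(peer_mark_given - mean_peer_mark) / 3

# A student who marks a peer at 70% when the mean peer mark is 79%
# deviates by 9 points, so loses 3 points from their own grade.
print(peer_marking_penalty(70, 79))  # 3.0
```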

The second session, on learner analytics, came from Cormac Quigley, who talked about how they have taken data from multiple sources, only one of which was Moodle, and combined them in Microsoft Power BI to produce a comprehensive learning analytics system, with the data and reports made available to staff via Teams. However, he also talked about the basic reporting functionality of Moodle, and how you can combine gradebook functionality with progress bars to create effective reports for staff and students.
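As a toy illustration of the underlying idea – joining per-student data from several systems into one view – here’s a sketch in pandas. The sources and column names are entirely invented, and of course DCU’s actual pipeline uses Power BI rather than Python:

```python
import pandas as pd

# Stand-in data sources for illustration only.
moodle = pd.DataFrame({"student_id": [1, 2, 3],
                       "vle_logins": [42, 7, 19]})
attendance = pd.DataFrame({"student_id": [1, 2, 3],
                           "sessions_attended": [10, 4, 9]})
grades = pd.DataFrame({"student_id": [1, 2, 3],
                       "mean_grade": [68.0, 51.5, 72.3]})

# Join everything on the common student identifier to get one
# per-student view - the essence of a combined analytics dashboard.
report = moodle.merge(attendance, on="student_id").merge(grades, on="student_id")
print(report)
```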

The full Moodle Munch archive is available online here.


Teaching, Learning and Assessment in a Digital World

100 years of learning theories showing the learner as the active agent
The learner must be the active agent in the learning process

This was Bob Harrison’s inaugural lecture as a Visiting Professor at the University of Wolverhampton. Bob has been in education for over 50 years, and I have known his name in Ed Tech circles for a long time.

His talk was on the dangers of over-emphasising the power of technology as a solution to the problem of online and distance education, and the need to continually relearn the lesson that successful learning, whatever physical distance may be involved, must be driven by the learner as the active agent in the learning process, supported by well-designed content delivered by caring and competent teachers. And if I’ve mangled Bob’s thesis in this summary, you can read it more eloquently in his own words in this article, Why there is nothing remote about online learning, published last year. And for an example of how you can’t magically improve online learning just by throwing money and technology at the issue, Wired’s article on the ‘LA iPad debacle’ is a good read.

I thoroughly enjoyed Bob’s lecture: his dismantling of technological solutionism and of neoliberalism in education, and his barely checked scorn for the Department for Education and its fixation on remote teaching.

The screenshot which I grabbed to illustrate this post shows a continuation of the theme of learners as the active agents of learning in the most influential learning theories spanning the past century.


Moodle Munch: Jan. 2021


The January Moodle Munch recording

Welcome to 2021, folks! Let’s hope it’s going to be a better year for all.

Today marked the return of Moodle Munch, with two presentations as always. Mark Glynn from Dublin City University began by discussing some tools and techniques they are using to add gamification to online modules and improve student engagement. First came the use of formative and summative quizzes – not just the quizzes themselves, but the pedagogy around their use, emphasising the opt-in nature of the summative component and giving students the ‘freedom to fail’. Mark presented some interesting research showing both strong positive student feedback and improved pass marks on the formative assessment component among the students who had engaged with the summative quizzes. As it should be, but it’s nice to see such strong evidence! Mark then showed us the Level Up Plus plugin for Moodle, which can be used to add gamification elements to module spaces, such as progress bars and leaderboards.

The second presentation was from Nic Earle at the University of Gloucestershire, who demonstrated the custom electronic marking and assessment system they have developed for managing student assessments, and how it integrates with their Moodle and student records systems. They switched over to this system wholesale in 2017, and again Nic was able to show very positive results demonstrating increased use of the VLE (even before the pandemic) and improved NSS scores.

As always, the presentations were recorded, and I have embedded the YouTube recording at the top of this post.


Speedwell OSCE Training

Some further on-site training from Speedwell today, this time on how the tool can be used to deliver OSCE and MMI testing – that’s Objective Structured Clinical Examinations (observed assessments of clinical practice) and the Multiple Mini Interviews we use to interview potential medical students. Training covered both configuration and live marking, including how to manage breaks and how to keep a spare iPad on hand so that a non-configured marker can step in.

We also learned about some new features coming to Speedwell which sound pretty good – the ability for multiple markers to moderate and agree a final mark to record in the system, and ‘killer questions’, which require students to pass the specified question as well as the exam or interview as a whole.
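Just to pin down my reading of that rule, here’s a tiny sketch – the function and names are mine, not Speedwell’s:

```python
def candidate_passes(overall_pass: bool, killer_question_passes: list[bool]) -> bool:
    """A 'killer question' acts as a hard gate: failing any one of them
    fails the candidate even if the exam or interview as a whole was passed."""
    return overall_pass and all(killer_question_passes)

print(candidate_passes(True, [True, False]))  # False: one killer question failed
```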


Speedwell Train the Trainer – Advanced

MCQ Exams Meme

The team and I had follow-up webinar training from Speedwell today, recapping some of the basic functionality now that we’ve been using it for a few months, and looking at some of the more advanced features which are currently available, plus some which will become available to us next week when we upgrade to the latest version of the web app. This upgrade will relocate much of the functionality of the admin system, such as checking student performance and running reports, to the system which end users (academics) access through the browser.


Speedwell Training

Well, that took a while, but we are finally ditching our antiquated EDPAC forms for high-stakes MCQ-style exams, and we went with Speedwell rather than Examsoft or Respondus.

This was our main training session on the system where we had a trainer from Speedwell onsite for the day to run through all aspects of the system with us, from initial configuration to creating questions and exams. We will also be deploying the Safe Exam Browser as part of this project.


LTA Workshop: Running up That Hill


Listening to this while reading this post will have no therapeutic value

This was the second Learning and Teaching Academy workshop I attended this semester, to give it its subtitle: Engaging Students to Learning Through Teaching and Assessment. (The big boss, who put the programme together, has a penchant for naming events with a musical theme.)

The workshop was given by Prof. Terry Young, Emeritus Professor of Healthcare Systems at Brunel University, who talked about his experience in academia over the past 15 or so years, having come from industry where he spent a similar length of time, which gives him an interesting take on the academic world and conventional practice. On assessment, he asked us to consider the actual tangible benefits of, for example, spending time deciding whether a given piece of work is worth 82% or 85%, and whether that time could be better spent elsewhere. Terry argues that there is little value in time spent this way, instead advocating for threshold-based marking: first decide whether a piece of work is a pass or a fail; then, if it’s a fail, ask whether it’s a good fail from which the student can be guided to a pass in future assignments, and if it’s a pass, whether it’s just a pass or a good pass.
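To make that two-step decision concrete, here’s a toy sketch – the band names and feedback strings are my own invention, not Terry’s:

```python
def threshold_mark(passed: bool, strong: bool) -> tuple[str, str]:
    """Two-step threshold marking: decide pass/fail first, then only the
    quality band within it - no agonising over 82% versus 85%."""
    if not passed:
        if strong:
            return ("good fail", "Close to the threshold; guide towards a pass next time.")
        return ("fail", "Substantial rework needed before the next assignment.")
    if strong:
        return ("good pass", "Strong work; set stretch goals.")
    return ("pass", "Threshold met; focus feedback on consolidation.")

band, feedback = threshold_mark(passed=True, strong=False)
print(band, "-", feedback)  # pass - Threshold met; focus feedback on consolidation.
```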

Terry also reflected on the nature of work academics are going to need to do over the next 20-30 years in the face of changes driven by automation and artificial intelligence. He concluded that there are three key tasks which will continue to be necessary: writing and specifying the requirements for programmes and assessments, curating and filtering content, and working closely with students to ensure their development and wellbeing.
