
Tag: HeLF

AI-Augmented Marking

[Chart: Accuracy of KEATH.ai Grading vs. Human Markers]

This was a HeLF webinar facilitated by Christopher Trace at the Surrey Institute of Education, providing us with an introduction to KEATH.ai, a new generative-AI-powered feedback and marking service which Surrey have been piloting.

It looked very interesting. The service was described as a small language model, meaning that it is trained on very specific data which you – the academic end user – feed into it. You provide some sample marked assignments and the rubric they were marked against, and the model can then grade new assignments with a high level of concurrence with human markers, as shown in the chart above of Surrey’s analysis of the pilot. Feedback and grading of a 3-5,000 word essay-style assignment takes less than a minute, and even with that being moderated by the academic for quality, which was highly recommended, it is easy to see how the system could save a great deal of time.

In our breakout rooms, questions arose around what the institution would do with this ‘extra time’, whether it would even be willing to pay the new upfront cost of such a service when the cost of marking and feedback work is already embedded into the contracts of academic and teaching staff, and how students would react to their work being AI-graded. Someone in the chat shared this post by the University of Sydney discussing some of these questions.


To Infinity and B-yond!

This webinar was presented as part of the ongoing HeLF development series, and this time around we had Stephanie DeMarco and Alex Rey from Birmingham City University leading a discussion on the Office for Students Conditions of Registration, specifically the ‘B’ metrics on quality, standards, and outcomes.

Even more specifically, we were looking at B3 which is about delivering positive outcomes for students, and is the metric most directly under our sphere of influence as learning technologists and academic developers.

B3 has three measures underneath it, relating to continuation, completion, and progression, the last meaning that students have gone into graduate-level employment. These measures are not open to any kind of interpretation, and HEIs must meet the set targets of 80% continuation, 75% completion, and 60% progression.

B3 also contains within it four aims, which are open to some level of interpretation and debate. These are participation, experience, outcomes, and value for money, the last being particularly contentious in the climate surrounding HE in the United Kingdom of late. (Has my undergraduate degree in philosophy provided value for money? Absolutely.)

Stephanie and Alex then presented a case study of activity which they had undertaken to help academics better meet these outcomes, concentrating on areas such as authentic assessment, project-based learning, how to write programme validation documentation, etc.

And finally, there was a shared Padlet board in which we could all share thoughts and best practice. From this I have picked up the Curriculum Scan model, developed by Alexandra Mihai, which can be used for auditing modules. This reminded me of the storyboarding process done as part of instructional design before a module goes live, but applied to auditing and checking a module which is ongoing.


Exploring Modality in the Context of Blended and Hybrid Education

It’s come to my attention, because I’ve just been writing about this for my CMALT portfolio review, that I don’t always record HeLF webinars on my CPD record, so here I am, doing just that. The ‘Heads of eLearning Working in UK HE’ forum facilitates regular CPD webinars for its members, and this one was exploring different kinds of attendance in a post-pandemic context.

Simon Thomson, of the University of Manchester, began with a discussion on how they have previously used the TPACK Framework in academic development, but found that people often got too caught up in the technology aspect to the exclusion of other factors. Simon has therefore adapted this model, replacing ‘technology’ with ‘modality’ to create the ‘Subject, Pedagogy & Modality’ Framework, or SPAM, instead. The models are captured in the first screenshot taken from the presentation, above. This led into a discussion on the rationale and value of specific modalities, and confusion over terminology. From the second screenshot, the idea of student choice resonated with me. I think it is very much the wrong tack when institutions, or worse, the government, dictate how students should be learning for non-pedagogical reasons. (Like checking visa compliance for example…!)

Sue Buckingham, from Sheffield Hallam, picked up on the confusing terminology in their part of the presentation. How many students would be able to confidently define ‘HyFlex’ learning, for example, or explain the difference between blended, hybrid, and HyFlex? Could you? Could I!? HyFlex is exactly what I’ll be doing when my own module starts up again next week. It has all been planned and designed to be in person, but I’m also going to stick a laptop at the front, pointed at me and the board, and run a concurrent Teams session too. Students in 2024 have rich, complex lives: jobs, school runs, caring commitments. So give them a choice, as a reasonable accommodation and an act of compassion.
