Press "Enter" to skip to content

Tag: HeLF

Should I Be Researching?

An excellent question, posed by the HeLF folks, to which the only possible answer is a resounding ‘yes’. But that would make for a very short webinar, so we discussed the issues around it too. This was obviously a very interesting session for me: I have been trying to push my career in this direction over the past few years, as you can probably tell, and the work I’ve been doing on Studiosity has afforded me an excellent opportunity to do so.

We had a good discussion on the nature of research and the differences between research and evaluation. The latter is generally done for internal purposes and audiences only, while research is likely to be of wider interest, and so there is value in sharing it via relevant publications. Within our community, however, there may be barriers which prevent, or at least make it difficult for, professional services staff to publish. One colleague mentioned a publication, not named to protect the guilty, which charged for publication but gave steep discounts to staff on academic contracts, and none if you happened to have ‘professional services’ on yours.

We also talked a lot about ethics committees, which again can be hard to access. One colleague reported that they weren’t even allowed to submit something to an ethics panel, while at another institution professional services staff were kicked out of their ethics board because their membership was felt to be having a negative impact on the institution’s REF submission.

That all sounds rather bleak, but there are solutions to these problems. Some people reported having nominal 0.2 academic contracts to get around institutional barriers, while others are running their own internal ethics boards. It was a very good discussion this morning, and it is going to become a series, so I will be learning and writing more on this.

Related reading: Defining the Scholarship of Teaching and Learning, by Ann M. Gansemer-Topf, Laila I. McCloud, and John M. Braxton.


Supporting Staff and Students in Moving from AI Scepticism to AI Exploration

How could I, as an avowed AI sceptic, miss the latest HeLF staff development session? Today Alice May and Shivani Wilson-Rochford from Birmingham City University talked about their approach to responding to the emergence of generative AI. As can be seen on the ‘roadmap’ above, this has included an AI working group, collaboration with staff and students on producing guidelines on use, sharing those via staff and student workshops, and collating resources on a SharePoint site. All things which mirror our approach at Sunderland.

Something they are doing which I liked was providing template text which academic staff can copy and paste into their assignment briefs, setting out what kind of AI use is permitted at four different levels, from fully unrestricted to fully prohibited. They are also working on an assessment redesign project which takes the risks of GAI into account, based on work from the University of Sydney which analysed all of their different assessment types and sorted them into two lanes based on how secure they are against GAI plagiarism. It’s Table 2 on the page I’ve linked to; it’s a very good table. I like it a lot.

Briefly mentioned was the fact that Birmingham are one of the few institutions in the UK to have enabled Turnitin’s AI detection tool, and I would have liked to have learned more about this. In their student survey on GAI (the second screenshot above), concerns about the accuracy of AI detection were one of the biggest issues students raised.

Alice and Shivani left us with their plans going forward: building a six-pillar framework on the different aspects of GAI’s impact on HE (third screenshot). Pillar 5 is ‘Ethical AI and Academic Integrity’. This one stood out as, once again, the ethical issues of environmental impact and copyright were raised. Briefly. And then we moved on. It consistently bothers me, and I don’t have any brilliant answers, but I will reiterate the very basic one of simply choosing not to use these services unless they are solving a genuine problem.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Institutional Experiences of Microsoft Copilot

Diagram of Microsoft Copilot architecture

In November 2023 I wrote a rambling post for the ALT Blog about my thoughts on generative AI and where it was going to go. I made a prediction there that someone was going to buy a site licence for ChatGPT, and lo! This HeLF discussion was about exactly that. Sort of. It’s Microsoft’s Copilot tool that the majority of people are going for, because we are all, or mostly, existing Microsoft customers and they are baking it into their Office 365 offering, though there are a couple of institutions looking at ChatGPT as an alternative.

Costs and practicality were a big issue under discussion. Microsoft are only giving us the very basic service for free, and if you want full Copilot Premium it’s an additional cost of around £30 a month per individual. Pricey, but it gets worse. They have tiers upon tiers, and if you want to do more advanced things, like having your own Copilot chatbot available in your VLE for example, then you’re into another level of premium which runs to hundreds a month.
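To put that in perspective, here is a back-of-the-envelope sketch. The only real input is the ~£30 a month figure quoted above; the seat counts are my own invented examples, not figures from the discussion.

```python
# Rough annual cost of per-seat Copilot licensing. The per-seat price is the
# ~£30/month figure quoted above; the seat counts are invented examples.
MONTHLY_PER_SEAT = 30  # GBP

for seats in (100, 1_000, 5_000):
    annual = seats * MONTHLY_PER_SEAT * 12
    print(f"{seats:>5} licences: £{annual:,} per year")
```

Even a modest 1,000-licence rollout lands at £360,000 a year before anyone touches the higher tiers.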

We also discussed concerns about privacy and data security. If Copilot is given access to your OneDrive and SharePoint files, for example, then you need to make sure that everything has the correct data labels, or else you run the risk of the chatbot surfacing confidential information to users.
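As a minimal sketch of what that kind of pre-flight audit might look like, here is some illustrative Python. The label names and file inventory are entirely invented, and a real check would go through Microsoft’s own admin tooling rather than anything like this.

```python
# Hypothetical pre-flight audit before giving a chatbot broad file access.
# Label names and the file inventory are invented for illustration only.
documents = [
    {"path": "finance/salaries.xlsx", "label": "Confidential"},
    {"path": "teaching/module-guide.docx", "label": "Public"},
    {"path": "hr/grievance-notes.docx", "label": None},  # never labelled
]

SAFE_LABELS = {"Public", "Internal"}

for doc in documents:
    if doc["label"] is None:
        print(f"BLOCK: {doc['path']} has no sensitivity label")
    elif doc["label"] not in SAFE_LABELS:
        print(f"EXCLUDE: {doc['path']} is labelled {doc['label']}")
```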

At Sunderland we have no plans for any premium generative AI tools at present; the costs are just prohibitive. And it’s not just at this level: the entire field of generative AI is hugely expensive and completely unsustainable. So I’ll end as I began, with prognostications. OpenAI is haemorrhaging money; they lost over half a billion dollars last year. They are living on investment capital, and unless the finance bods start seeing a serious return, they are going to pull the plug. Sooner rather than later, I reckon. I don’t think OpenAI will go under exactly, but I do think they are going to get eaten by one of the big players, Microsoft most likely. A lot of headlines were made last year about Microsoft’s $10 billion investment, but people haven’t read the fine print: that $10 billion was in the form of server credits, so Microsoft is going to get it back one way or another. I’m going to give the AI bubble another six to eighteen months.

What will come after that? Generative AI isn’t going to go away of course; it’s a great technological achievement, but I think we will see a shift towards smaller models being run locally on our personal devices. It will be interesting to see how Apple Intelligence pans out, as they aren’t putting all of their eggs into the ChatGPT basket. And as for the tech and finance industries? They’ll just move on to the next bubble. Quantum computing, anyone?


AI-Augmented Marking

Chart: accuracy of KEATH.ai grading vs. human markers

This was a HeLF webinar facilitated by Christopher Trace at the Surrey Institute of Education, providing us with an introduction to KEATH.ai, a new generative-AI-powered feedback and marking service which Surrey have been piloting.

It looked very interesting. The service was described as a small language model, meaning that it is trained on very specific data which you, the academic end user, feed into it. You provide some sample marked assignments and the rubric they were marked against, and the model can then grade new assignments with a high level of concurrence with human markers, as shown in the chart above from Surrey’s analysis of the pilot. Feedback and grading of a 3,000-5,000 word essay-style assignment takes less than a minute, and even with the output being moderated by the academic for quality, which was highly recommended, it is easy to see how the system could save a great deal of time.
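To make that calibrate-then-grade workflow concrete, here is a minimal sketch. Every name in it is hypothetical, as KEATH.ai’s actual interface wasn’t shown, and the toy grading logic is a stand-in for whatever the vendor’s model really does.

```python
# Hypothetical illustration of the workflow described above; nothing here
# comes from KEATH.ai itself.
from dataclasses import dataclass

@dataclass
class MarkedSample:
    essay: str
    grade: int      # e.g. a percentage mark
    feedback: str   # the human marker's comments

class RubricGrader:
    """Calibrated on a rubric plus human-marked sample assignments."""

    def __init__(self, rubric: str):
        self.rubric = rubric
        self.samples: list[MarkedSample] = []

    def calibrate(self, samples: list[MarkedSample]) -> None:
        # The academic feeds in marked examples for the model to learn from.
        self.samples.extend(samples)

    def grade(self, essay: str) -> tuple[int, str]:
        # Toy stand-in: echo the most word-similar sample's mark and feedback.
        # The real service presumably does something far more sophisticated,
        # and its output should still be moderated by a human marker.
        def overlap(a: str, b: str) -> int:
            return len(set(a.lower().split()) & set(b.lower().split()))
        best = max(self.samples, key=lambda s: overlap(s.essay, essay))
        return best.grade, best.feedback

grader = RubricGrader(rubric="Clarity, argument, use of evidence...")
grader.calibrate([MarkedSample("Sample essay text...", 68, "Good argument, thin evidence.")])
print(grader.grade("New essay text to be marked..."))
```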

In our breakout rooms, questions arose around what the institution would do with this ‘extra time’, whether institutions would even be willing to pay the new upfront cost of such a service when the cost of marking and feedback work is already embedded in the contracts of academic and teaching staff, and how students would react to their work being AI-graded. Someone in the chat shared this post by the University of Sydney discussing some of these questions.


To Infinity and B-yound!

This webinar was presented as part of the ongoing HeLF development series, and this time around we had Stephanie DeMarco and Alex Rey from Birmingham City University leading a discussion on the Office for Students Conditions of Registration, specifically the ‘B’ metrics on quality, standards, and outcomes.

Even more specifically, we were looking at B3, which is about delivering positive outcomes for students, and is the metric most directly within our sphere of influence as learning technologists and academic developers.

B3 has three measures underneath it, relating to continuation, completion, and progression, the last meaning that students have gone on to graduate-level employment. These measures are not open to any kind of interpretation: HEIs must meet the set targets of 80% continuation, 75% completion, and 60% progression.
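The thresholds really are that mechanical, as this trivial illustration shows; the example course figures are invented for the sake of the demonstration.

```python
# OfS B3 thresholds as quoted above; the example course figures are invented.
B3_TARGETS = {"continuation": 0.80, "completion": 0.75, "progression": 0.60}
course = {"continuation": 0.83, "completion": 0.77, "progression": 0.58}

for measure, target in B3_TARGETS.items():
    verdict = "meets target" if course[measure] >= target else "BELOW TARGET"
    print(f"{measure}: {course[measure]:.0%} against {target:.0%} -> {verdict}")
```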

B3 also contains within it four aims, which are open to some level of interpretation and debate. These are participation, experience, outcomes, and value for money, the last being particularly contentious in the climate surrounding HE in the United Kingdom of late. (Has my undergraduate degree in philosophy provided value for money? Absolutely.)

Stephanie and Alex then presented a case study of work they had undertaken to help academics better meet these outcomes, concentrating on areas such as authentic assessment, project-based learning, and writing programme validation documentation.

And finally, there was a shared Padlet board on which we could all share thoughts and best practice. From this I have picked up the Curriculum Scan model, developed by Alexandra Mihai, which can be used for auditing modules. It reminded me of the storyboarding process done as part of instructional design before a module goes live, but applied to auditing and checking a module which is already running.


Exploring Modality in the Context of Blended and Hybrid Education

It’s come to my attention, because I’ve just been writing about this for my CMALT portfolio review, that I don’t always record HeLF webinars on my CPD record, so here I am, doing just that. The Heads of eLearning Forum, for those working in UK HE, facilitates regular CPD webinars for its members, and this one explored different kinds of attendance in a post-pandemic context.

Simon Thomson, of the University of Manchester, began with a discussion on how they have previously used the TPACK Framework in academic development, but found that people often got too caught up in the technology aspect to the exclusion of other factors. Simon has therefore adapted this model, replacing ‘technology’ with ‘modality’ to create the ‘Subject, Pedagogy & Modality’ Framework, or SPAM, instead. The models are captured in the first screenshot taken from the presentation, above. This led into a discussion on the rationale and value of specific modalities, and confusion over terminology. From the second screenshot, the idea of student choice resonated with me. I think it is very much the wrong tack when institutions, or worse, the government, dictate how students should be learning for non-pedagogical reasons. (Like checking visa compliance for example…!)

Sue Buckingham, from Sheffield Hallam, picked up on the confusing terminology in their part of the presentation. How many students would be able to confidently define ‘HyFlex’ learning, for example, or explain the difference between blended, hybrid, and HyFlex? Could you? Could I!? HyFlex is exactly what I’ll be doing when my own module starts up again next week. It’s all been planned and designed to be in person, but I’m also going to stick a laptop at the front, pointed at me and the board, and have a concurrent Teams session running too. Students in 2024 have rich, complex lives: jobs, school runs, caring commitments. So give them a choice, as a reasonable accommodation and an act of compassion.
