This month’s big team meeting was given over to the University’s Security Manager for a session on personal safety, which also touched upon conflict management. More security than safety then, but ‘safety’ is a friendlier term. When I think of personal safety I tend to think more along the lines of the great Colin Furze and his shenanigans.
It was unexpected training, and pretty useful. We learned about de-escalating situations through a number of problem-based learning scenarios, about institutional and personal responsibilities with regard to duty of care and health and safety, and about what the University is doing to keep us all safe at work. This includes, however you feel about it, the network of 400 security cameras on the Sunderland campus and the relatively new campus-wide card-controlled building access, and the Estates team are pushing for us to get a system called SafeZone, an app-based panic button. We are, apparently, the only university in the North East not already using it.
Part two of Kent’s Digitally Enhanced Education series looking at how generative AI is affecting critical thinking skills. This week we had stand-out presentations from:
Professor Jess Gregory, of Southern Connecticut State University (nice to see the reach of the network, well, reaching), who presented on the problem of mastering difficult conversations for teachers in training. These students will often find themselves thrust into difficult situations upon graduation, having to deal with stubborn colleagues, angry parents, etc., and Jess has developed a method of preparing them by using generative AI systems with speech capabilities to simulate difficult conversations. This can be, and has been, done by humans of course, but that is time consuming, can be expensive, and doesn’t offer the same kind of safe space for students to practise freely.
David Bedford, from Canterbury Christ Church University, presented on how the challenges of critical analysis are not new, and argued that anything produced by generative AI needs to be evaluated in just the same way as we would the results of an internet search, a Wikipedia article, or books and journals. He presented us with the ‘BREAD’ model, first produced in 2016, for analysis (see first screenshot for detail). This asks us to consider Bias, Relevance, Evidence, Author, and Date.
Nicki Clarkson, University of Southampton, talked about co-producing generative AI resources with students, and noted how good the students were at paring content down to the most relevant parts, and how the final videos were improved by having a student voiceover rather than that of a member of staff.
Dr Sideeq Mohammed, from the University of Kent, presented on his experience of running a session on identifying misleading information, using a combination of true and convincingly false articles, and reported that students always left the sessions far more sceptical and wanting to check the validity of information. My second screenshot is from this presentation, showing three example articles. Peter Kyle is in fact a completely made-up government minister. Or is he?
Finally, Anders Reagan, from the University of Oxford, compared generative AI tools to the Norse trickster god, Loki. As per my third screenshot, both are powerful, seemingly magic, persuasive and charismatic, and capable of transformation. Anders noted, correctly, that now this technology is available, we must support it. If we don’t, students and academics are going to use it on their own initiative anyway, the allure being too powerful, so it is better for us as learning technology experts to provide support and guidance. In doing so we can encourage criticality, warn of the dangers, and point people towards more specialised, research-based generative AI tools such as Elicit and Consensus.
From the University of Kent’s Digitally Enhanced Education series, a two-parter on the theme of how generative AI is affecting students’ critical thinking skills, with the second part coming next week. We’ve been living with generative AI for a while now, and I am finding diminishing returns from the various webinars and training sessions I have been attending. Nevertheless, there are always new things to learn and nuggets of wisdom to be found in these events. The Kent webinar series has such a wide reach now that the general chat, as much as the presentations, is a fantastic resource. Phil has done a magnificent job with this initiative, and is a real credit to the TEL community.
Dr Mary Jacob, from Aberystwyth University, presented an overview of their new AI guidance for staff and students, highlighting for students that they shouldn’t rely on AI; for staff, the need to understand what it can and can’t do, along with the legal and ethical implications of the technology; and for everyone, the need to be critical of the output – is it true? Complete? Unbiased?
Professor Earle Abrahamson, from the University of Hertfordshire, presented on the importance of using good and relevant prompts to build critical analysis skills. The first screenshot above is from Earle’s presentation, showing the different perceptions of generative AI held by students and staff. There were some good comments in the chat during Earle’s presentation on how everything we’d discussed comes back to information literacy.
Dr Sian Lindsay, from the University of Reading, talked about the risks AI poses to critical thinking, namely that students may be exposed to a narrower range of ideas due to the biases inherent in all existing generative AI systems and the limited range of data they have access to and are trained upon. The second screenshot is from Sian’s presentation, highlighting some of the research in this area.
I can’t remember who shared this, or whether it came from one of the presentations or the chat, but it was a great article on Inside Higher Ed about the option to opt out of using generative AI at all. Yes! Very good, I enjoyed this very much. I don’t agree with all of it. But most of it! My own take in short: there is no ethical use of generative artificial intelligence, and we should only use it when it serves a genuine need.
The University launched a new Centre for Inclusive Learning in March to help us meet our goals of widening participation and providing an inclusive educational experience for all students. CELT are of course working with them on many objectives, and at this, the Centre’s launch event, we were there to present on how we can help academics with instructional design and universal design for learning.
I was also able to attend many of the other sessions throughout the day, and learned a lot about some great work being done across the institution. For example, in our Faculty of Health, Science and Wellbeing, I learned that our bank of PCPIs (Patient, Carer and Public Involvement representatives), who are consulted on the delivery of medical and health modules, now includes a considerable contingent with experience of healthcare systems outside the UK, who are providing valuable insights and perspectives.
In another talk, on decolonising the curriculum using a trauma-informed approach, there was a great discussion about problematic language. ‘Deadline’ or ‘fire me an email’, for example, but also using ‘due date’ when talking about assessments, which could be painful for people with experience of miscarriage. I feel like this is an area where we are making good progress societally. I’ve been very pleased to watch the technology sector jettison the language of ‘master/slave’ over the past few years, and more and more systems now include options for pronouns and preferred names.
But of course, my main purpose on the day was to facilitate our team’s discussion around UDL. I felt that it was important for CELT to be contributing to the conference in some capacity, and I was also able to use the event to give some of my team experience of presenting at a conference. It’ll be good for them! If that’s the direction they want to take their careers in, of course. So I did introductions and a little bit of context setting, and then handed over to two of my team to tag-team the bulk of our presentation.
In November 2023 I wrote a rambling post for the ALT Blog about my thoughts on generative AI and where it was going. I made a prediction there that someone was going to buy a site licence for ChatGPT, and lo! This HeLF discussion was about exactly that. Sort of. It’s Microsoft’s Copilot tool that the majority of people are going for, because we are all, or mostly, existing Microsoft customers and they are baking it into their Office 365 offering, though there are a couple of institutions looking at ChatGPT as an alternative.
Costs and practicality were big issues under discussion. Microsoft are only giving us the very basic service for free, and if you want full Copilot Premium it’s an additional cost of around £30 a month per person. Pricey, but it gets worse. They have tiers upon tiers, and if you want to do more advanced things, like having your own Copilot chatbot available in your VLE for example, then you’re into another level of premium which goes up to hundreds of pounds a month.
We also discussed concerns about privacy and data security. If Copilot is given access to your OneDrive and SharePoint files, for example, then you need to make sure that everything has the correct data labels, or else you run the risk of the chatbot surfacing confidential information to users.
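To make that risk concrete, here is a minimal sketch of the principle in Python. The label names and the toy document structure are hypothetical, not Microsoft’s actual labelling API: the idea is simply that an assistant’s index is restricted to files carrying an explicitly safe sensitivity label, so anything confidential or unlabelled never reaches the chatbot.

```python
# Toy illustration of label-based filtering: only documents carrying an
# explicitly 'safe' sensitivity label are allowed into the chatbot's index.
# Label names and the Document class are hypothetical; in practice labels
# would be applied through a tool such as Microsoft Purview.

from dataclasses import dataclass

@dataclass
class Document:
    path: str
    label: str  # sensitivity label; empty string if never labelled

# Labels we are happy for the assistant to surface to any user.
INDEXABLE_LABELS = {"Public", "General"}

def indexable(docs: list[Document]) -> list[Document]:
    """Keep only documents whose label permits AI indexing."""
    return [d for d in docs if d.label in INDEXABLE_LABELS]

docs = [
    Document("StaffHandbook.docx", "Public"),
    Document("SalaryReview.xlsx", "Confidential"),
    Document("OldMinutes.docx", ""),  # unlabelled, so excluded by default
]

print([d.path for d in indexable(docs)])  # ['StaffHandbook.docx']
```

The important design choice is the default: an unlabelled file is treated as unsafe, which is exactly why a labelling exercise has to happen before a tool like Copilot is switched on.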
At Sunderland we have no plans for any premium generative AI tools at present; the costs are just prohibitive. And it’s not just at this level: the entire field of generative AI is hugely expensive and completely unsustainable. So I’ll end as I began, with prognostications. OpenAI is haemorrhaging money; they lost over half a billion dollars last year. They are living on investment capital, and unless the finance bods start seeing a serious return, they are going to pull the plug. Sooner rather than later, I reckon. I don’t think OpenAI will go under exactly, but I do think they are going to get eaten by one of the big players, Microsoft most likely. A lot of headlines were made last year about Microsoft’s $10 billion investment, but people haven’t read the fine print – that $10 billion was in the form of server credits, so Microsoft is going to get it back one way or another. I’m going to give the AI bubble another six to eighteen months.
What will come after that? Generative AI isn’t going to go away of course, it’s a great technological achievement, but I think we will see a shift towards smaller models being run locally on our personal devices. It will be interesting to see how Apple Intelligence pans out; they aren’t putting all of their eggs into the ChatGPT basket. And as for the tech and finance industries? They’ll just move on to the next bubble. Quantum computing anyone?
It’s that time of year again, and I was once more down at our London Campus, in the glorious sunshine, for block teaching of my module on designing learning and assessment. First, though, I attended the London Campus’s first conference, on ‘International Education for Sustainable Development’. The conference was booked in before my teaching, and most of the students wanted to attend, so we scheduled teaching around it and I went along too.
The first keynote was very interesting, delivered by our interim Pro-VC for Learning and Teaching, who comes from a background in evolutionary psychology. From this perspective she talked about our tendency towards ‘future discounting’: sating our present needs over taking action on things that aren’t going to impact us for some time. She also talked about how we can overcome this, by framing climate action as something which will benefit our families ahead of ourselves – ‘kin selection’.
Many of the presentations during the day focused on the value of the UN’s Sustainable Development Goals and how they can be embedded into teaching and learning, and the morning ended with a panel discussion on how to fit this into the curriculum and the University’s goals. We are aiming to be ‘net zero’ across all campuses by 2040, and fully net zero by 2050, covering all aspects of University work, including commuting.
In the afternoon there was a good talk about the balance of globalisation versus localisation, and how, for example, home 3D printing has a high energy consumption cost, but savings are made by cutting out transportation costs. There was a great slide in this talk, poorly photographed above, showing global energy consumption by source over the past 200 years, and how reliant we still are on hydrocarbons despite the gains made by renewables in the past couple of decades.
The conference ended with a second panel discussion, considering the impact of generative AI on our efforts to meet the sustainable development goals, and the relationship between the SDGs and equality, diversity and inclusion. I was pleased to note the panel’s acknowledgement of the negative environmental impact of the new generative AI data centres, but dismayed that they are all still going to plough on using these tools anyway, bringing us right back to the problem of future discounting.
To avoid ending this post on a negative note, as my thoughts on AI inevitably seem to do, I will paraphrase the most sane man on the panel – ‘the only way out of this mess is to reduce consumption!’ – said as he pointed to his ten-year-old suit. This is the way. Keep wearing the old clothes that are in perfect condition; don’t upgrade your phone every few years; and maybe don’t use generative AI unless you are genuinely using it to solve a problem in the most efficient way, and not just because it’s there and it’s easy and it’s convenient.
This was a HeLF webinar facilitated by Christopher Trace at the Surrey Institute of Education, to provide us with an introduction to KEATH.ai, a new generative-AI-powered feedback and marking service which Surrey have been piloting.
It looked very interesting. The service was described as a small language model, meaning that it is trained on very specific data which you – the academic end user – feed into it. You provide some sample marked assignments and the rubric they were marked against, and the model can then grade new assignments with a high level of concurrence with human markers, as shown in the chart above from Surrey’s analysis of the pilot. Feedback and grading of a 3,000-5,000 word essay-style assignment takes less than a minute, and even with the output being moderated by the academic for quality, which was highly recommended, it is easy to see how the system could save a great deal of time.
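As a thought experiment, the general pattern described here – exemplar marked essays plus a rubric in, a grade and feedback out – can be sketched in a few lines of Python. To be clear, this is my own hypothetical illustration of few-shot grading, not KEATH.ai’s implementation, and the model call is a stub.

```python
# A hypothetical few-shot grading sketch: rubric and sample marked essays
# are assembled into one prompt, and a (stubbed) model returns a grade.
# This illustrates the general pattern only, not KEATH.ai's internals.

from dataclasses import dataclass

@dataclass
class MarkedSample:
    essay: str
    grade: str
    feedback: str

def build_prompt(rubric: str, samples: list[MarkedSample], new_essay: str) -> str:
    """Assemble the rubric and exemplars into a single grading prompt."""
    parts = [f"Mark the final essay against this rubric:\n{rubric}\n"]
    for i, s in enumerate(samples, 1):
        parts.append(f"Example {i}\nEssay: {s.essay}\nGrade: {s.grade}\nFeedback: {s.feedback}\n")
    parts.append(f"Essay to mark:\n{new_essay}\nGrade and feedback:")
    return "\n".join(parts)

def grade(prompt: str) -> str:
    # Stub standing in for whatever model a real service would call.
    return "B+ | Clear argument; referencing needs attention."

samples = [MarkedSample("(exemplar essay text)", "A", "Strong thesis, well evidenced.")]
print(grade(build_prompt("Criteria: argument, evidence, style.", samples, "(new essay text)")))
```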
In our breakout rooms, questions arose around what the institution would do with this ‘extra time’; whether institutions would even be willing to pay the new upfront cost of such a service when the cost of marking and feedback work is already embedded into the contracts of academic and teaching staff; and how students would react to their work being AI-graded. Someone in the chat shared this post by the University of Sydney discussing some of these questions.
Northumbria’s turn to do hosting honours this time around. It’s been a while since I was on my old campus, and I was shocked to see that the Library refurb ran out of money to finish the ceiling. I did like the ceiling-mounted power extensions that look like Mario coin blocks though. They solve the problem of tripping over, or having to access, floor-panel extensions, but introduce new problems for the vertically challenged. Julie said she couldn’t reach them to pull them down, while I, on the other end of the spectrum, had to duck and weave to avoid bonking my head on them at times. I wouldn’t mind if they actually dispensed gold coins, but no such luck.
Anyway, that’s enough shade thrown at my previous employer, time to be serious. Generative AI once again dominated our morning discussions, starting with a presentation by Tadhg, an academic at Northumbria, who has revamped their Business module with content related to generative AI, teaching students how to use it to help write research proposals. This was followed by Ralph from their learning technologies team, who has been using D-ID and ElevenLabs to create animated videos to supplement written case studies for students in Nursing. Dawn from Northumbria’s Library service then gave us a talk on their experience of Adobe Creative Campus, reporting a much more positive experience than Teesside had.
After lunch we had some open discussions on digital exams. Newcastle are using Inspera to facilitate a proportion of their exams, and have mixed feelings about it. I was pleased to note that they have strongly pushed back on using online proctoring, on ethical grounds. Emma from Teesside led a discussion on WCAG changes, which prompted us to discuss getting the balance right between supporting all students along the principles of UDL and being practical, working within the technical and cultural limits of the systems we have to use and the processes we have to follow – student record systems only allowing one assignment per module, for example.
Finally, Craig from Northumbria gave us a demo of some interactive 360-degree content they have created, including surgical simulations, nursing scenarios, and examining crime scenes. They are producing this content so that the scenarios can be accessed via any web browser, at the expense of immersion, but it is also exported into a format that can be used with their bank of Vive VR headsets for students to get the full experience.
This webinar was presented as part of the ongoing HeLF development series, and this time around we had Stephanie DeMarco and Alex Rey from Birmingham City University leading a discussion on the Office for Students Conditions of Registration, specifically the ‘B’ metrics on quality, standards, and outcomes.
Even more specifically, we were looking at B3, which is about delivering positive outcomes for students and is the metric most directly within our sphere of influence as learning technologists and academic developers.
B3 has three measures underneath it, relating to continuation, completion and progression, the last of which here means that students have gone into graduate-level employment. These measures are not open to any kind of interpretation, and HEIs must meet the set targets of 80% continuation, 75% completion and 60% progression.
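Just to make those three numbers concrete, here is a toy check in Python. The thresholds are the ones quoted in the session; the example provider rates are entirely made up.

```python
# The three B3 numerical baselines quoted in the session, as a toy check.
# The example provider rates passed in below are invented for illustration.

B3_THRESHOLDS = {"continuation": 0.80, "completion": 0.75, "progression": 0.60}

def meets_b3(rates: dict[str, float]) -> dict[str, bool]:
    """Compare a provider's rates against each B3 baseline."""
    return {measure: rates[measure] >= floor
            for measure, floor in B3_THRESHOLDS.items()}

print(meets_b3({"continuation": 0.86, "completion": 0.78, "progression": 0.57}))
# {'continuation': True, 'completion': True, 'progression': False}
```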
B3 also contains within it four aims, which are open to some level of interpretation and debate. These are participation, experience, outcomes, and value for money – the last being particularly contentious in the climate surrounding HE in the United Kingdom of late. (Has my undergraduate degree in philosophy provided value for money? Absolutely.)
Stephanie and Alex then presented a case study of activity which they had undertaken to help academics better meet these outcomes, concentrating on areas such as authentic assessment, project-based learning, how to write programme validation documentation, etc.
And finally, there was a shared Padlet board on which we could all share thoughts and best practice. From this I have picked up the Curriculum Scan model, developed by Alexandra Mihai, which can be used for auditing modules. It reminded me of the storyboarding process done as part of instructional design before a module goes live, but applied to auditing and checking a module which is already running.
I attended my third Studiosity Partner Forum today, which kind of began last night with a dinner and discussion about generative artificial intelligence led by Henry Ajder. Generative AI, and Studiosity’s new GAI-powered Writing Feedback+ service, was of course the main topic of conversation throughout the event. Writing Feedback+ launched in February, and they report that uptake is around 40% of eligible students, compared with 15-20% for the classic Writing Feedback service. The model has been built and trained internally, using only writing feedback provided by Studiosity’s subject specialists and no student data. The output of WF+ is being closely quality assured by those specialists, and they estimate that its quality is around 95-97% as good as human-provided feedback.
David Pike, from the University of Bedfordshire, presented on their experience with the service in the afternoon. They made it available to all of their students, around 20,000, in February, and usage has already exceeded that of the classic Writing Feedback service since September last year. The average return time from WF+ is around one and a half minutes, and student feedback on the service is very positive, at 88.5%. However, he did also note that a number of students who have used both versions of the service said that they preferred the human-provided feedback.
On the flip side of AI, last year Studiosity were exploring a tool to detect submissions which had been written by generative AI. That’s gone. Nothing came of it, as they found that the reliability wasn’t good enough to roll out, especially for students who have English as a second language. No surprises for me there; detection is a lie.
The keynote address was delivered by Nick Hillman from the Higher Education Policy Institute (HEPI), who talked about their most recent report on the benefits and costs associated with the graduate visa route. It’s overwhelmingly positive for us as a country, and it would be madness to limit this.
Other things I picked up included learning more about Crossref, a service for checking the validity of academic references; a recommendation for a course on Generative AI in Higher Education from FutureLearn; and Integrity Matters, a new course developed by the University of Greenwich and Bloom to teach new students about academic integrity.
Finally, I was there presenting myself, doing my Studiosity talk about our implementation at Sunderland and the data we now have showing a strong positive correlation between engagement with Studiosity and student outcomes and continuation.