Press "Enter" to skip to content


Helping Students Develop Critical Thinking Skills When Using Generative AI (Part 1)

From the University of Kent’s Digitally Enhanced Education series, a two-parter on the theme of how generative AI is affecting students’ critical thinking skills, with the second part coming next week. We’ve been living with generative AI for a while now, and I am finding diminishing returns from the various webinars and training sessions I have been attending. Nevertheless, there are always new things to learn and nuggets of wisdom to be found in these events. The Kent webinar series has such a wide reach now that the general chat, as much as the presentations, is a fantastic resource. Phil has done a magnificent job with this initiative, and is a real credit to the TEL community.

Dr Mary Jacob, from Aberystwyth University, presented an overview of their new AI guidance for staff and students, highlighting that students shouldn’t rely on AI; that staff should understand what it can and can’t do, along with the legal and ethical implications of the technology; and that everyone should be critical of the output – is it true? Complete? Unbiased?

Professor Earle Abrahamson, from the University of Hertfordshire, presented on the importance of using good, relevant prompts to build critical analysis skills. The first screenshot above is from Earle’s presentation, showing the differing perceptions of generative AI held by students and staff. There were some good comments in the chat during Earle’s presentation on how everything we discussed today comes back to information literacy.

Dr Sian Lindsay, from the University of Reading, talked about the risks AI poses to critical thinking, namely that students may be exposed to a narrower range of ideas due to the biases inherent in all existing generative AI systems and the limited range of data they have access to and are trained on. The second screenshot is from Sian’s presentation, highlighting some of the research in this area.

I can’t remember who shared this, or whether it came from one of the presentations or the chat, but someone posted a great article from Inside Higher Ed on the option of opting out of using generative AI altogether. Yes! I enjoyed this very much. I don’t agree with all of it, but most of it! My own take, in short: there is no ethical use of generative artificial intelligence, and we should only use it when it serves a genuine need.

As always, recordings of all presentations are available on the @digitallyenhancededucation554 YouTube channel.


AI-Augmented Marking

Chart showing correlation of human and KEATH.ai grading
Accuracy of KEATH.ai Grading vs. Human Markers

This was a HeLF webinar facilitated by Christopher Trace at the Surrey Institute of Education, providing an introduction to KEATH.ai, a new generative-AI-powered feedback and marking service which Surrey have been piloting.

It looked very interesting. The service was described as a small language model, meaning that it is trained on very specific data which you – the academic end user – feed into it. You provide some sample marked assignments and the rubric they were marked against, and the model can then grade new assignments with a high level of concurrence with human markers, as shown in the chart above from Surrey’s analysis of the pilot. Feedback and grading of a 3-5,000 word essay-style assignment takes less than a minute, and even with the output being moderated by the academic for quality, which was highly recommended, it is easy to see how the system could save a great deal of time.
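KEATH.ai’s internals weren’t shown beyond this description, but the workflow – sample marked work plus a rubric in, moderated marks and feedback out – maps onto a familiar few-shot grading pattern. Here is a minimal sketch of that general idea, using the OpenAI Python client purely as a stand-in; the model name, rubric, and essays are all placeholder assumptions, not anything KEATH.ai actually uses:

```python
# A minimal sketch of rubric-anchored, few-shot AI marking.
# Everything here is a placeholder assumption: KEATH.ai's actual
# internals were not shown, and the OpenAI client is used only as
# a generic stand-in.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = """Criteria: argument (40%), evidence (40%), style (20%).
Bands: 70+ excellent, 60-69 good, 50-59 adequate, below 50 weak."""

# Sample assignments already marked by human markers (placeholders).
EXAMPLES = [
    {"essay": "...first sample essay text...",
     "mark": 68,
     "feedback": "Clear argument, but the evidence is thin in places."},
]

def grade(essay: str) -> str:
    """Ask the model for a mark and feedback, anchored to the rubric
    and to human-marked examples supplied as few-shot context."""
    messages = [{
        "role": "system",
        "content": f"You are an assignment marker. Grade strictly "
                   f"against this rubric:\n{RUBRIC}",
    }]
    for ex in EXAMPLES:
        messages.append({"role": "user", "content": ex["essay"]})
        messages.append({"role": "assistant",
                         "content": f"Mark: {ex['mark']}\nFeedback: {ex['feedback']}"})
    messages.append({"role": "user", "content": essay})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content

# The output is a draft for the academic to moderate, not a final grade.
print(grade("...new unmarked essay text..."))
```

The key design point, as stressed in the webinar, is the human-in-the-loop step: whatever the model produces is a draft for the academic to moderate.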

In our breakout rooms, questions arose around what the institution would do with this ‘extra time’, whether institutions would even be willing to pay the new upfront cost of such a service when the cost of marking and feedback work is already embedded into the contracts of academic and teaching staff, and how students would react to their work being AI-graded. Someone in the chat shared this post from the University of Sydney discussing some of these questions.


ALT NE User Group: June 2024

Northumbria Uni library ceiling with power ‘blocks’ hanging from the ceiling, and a humorous 8-bit Mario hitting one of them
I’m not the only one who sees this, right?

Northumbria’s turn to do the hosting honours this time around. It’s been a while since I was on my old campus, and I was shocked to see that the Library refurb ran out of money to finish the ceiling. I did like the ceiling-mounted power extensions that look like Mario coin blocks though. They solve the problem of tripping over or accessing floor panel extensions, but introduce new problems for the vertically challenged. Julie said she couldn’t reach them to pull them down, while I, on the other end of the spectrum, had to duck and weave to avoid bonking my head on them at times. I wouldn’t mind if they actually dispensed gold coins, but no such luck.

Anyway, that’s enough shade thrown at my previous employer; time to be serious. Generative AI once again dominated our morning discussions, starting with a presentation by Tadhg, an academic at Northumbria, who has revamped their Business module with content related to generative AI, teaching students how to use it to help write research proposals. This was followed by Ralph from their learning technologies team, who has been using D-ID and ElevenLabs to create animated videos to supplement written case studies for students in Nursing. Dawn from Northumbria’s Library service then gave us a talk on their experience of Adobe Creative Campus, reporting a much more positive experience than Teesside’s.

After lunch we had some open discussions on digital exams. Newcastle are using Inspera to facilitate a proportion of their exams, and have mixed feelings about it. I was pleased to note that they have strongly pushed back on using online proctoring on ethical grounds. Emma from Teesside led a discussion on the WCAG changes, which prompted us to discuss getting the balance right between supporting all students in line with the principles of UDL, and being practical about the technical and cultural limits of the systems we have to use and the processes we have to follow – student record systems only allowing one assignment per module, for example.

Finally, Craig from Northumbria gave us a demo of some interactive 360-degree content they have created, including surgical simulations, nursing scenarios, and crime scene examinations. They are producing this content so that the scenarios can be accessed via any web browser, at the expense of immersion, but the scenarios can also be exported into a format that works with their bank of Vive VR headsets for students to get the full experience.


ALT NE User Group: March 2024

GIF of Johnny 5 reading a book really fast
Now this is the kind of AI I was promised as a kid

The latest ALT North East User Group was hosted at Middlesbrough College, and had a very generative-AI-heavy agenda. But first, Tamara at Middlesbrough presented on ‘EdTech and Pedagogy’, which was quite similar to a TEL and pedagogy session I do on our PG Cert, and I picked up a few points that I can integrate into future presentations – including the argument that it is really Gen Z who are the first true digital natives, which will be useful as I still use Prensky’s original talk to explore the idea that different generations approach technology differently.

Next we had a round robin session on how we are approaching AI at our respective institutions. I talked about the in-year changes we made to student regulations in response to the release of ChatGPT, something Middlesbrough College have also done, while Northumbria are using a cover sheet template for student assignments, on which students declare if and how they have used AI to help with their work. Quite a few of us are pressing forward with Microsoft Copilot now that it is available.

Ross from Durham then presented on an AI chatbot they have created using Cody AI to assist students on a large module where, for various reasons, information is located in different places, including Blackboard and SharePoint. Cody looks interesting. It uses various models under the hood – I’m sure Ross said models from multiple providers were available, but I only saw OpenAI-based ones in their demo. You train the chatbot on your own data, which you upload to Cody, and sharing that data and model usage back with OpenAI is allegedly opt-in. (Perhaps I’m being overly cynical, but I wouldn’t trust OpenAI on this.)
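Cody’s actual API wasn’t part of the demo, but tools in this space generally follow the retrieval-augmented generation (RAG) pattern: index the documents you upload, retrieve the chunks most relevant to each question, and hand them to a model as context. A rough sketch of that pattern, with the OpenAI client, model names, and documents as stand-in assumptions rather than anything Cody actually exposes:

```python
# A rough sketch of the retrieval-augmented generation (RAG) pattern:
# embed your own documents, retrieve the chunk closest to a question,
# and pass it to the model as context. A generic illustration only;
# not Cody's actual API, and model names are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Stand-ins for module information scattered across Blackboard
# and SharePoint.
docs = [
    "Assignment 2 submission instructions are on Blackboard under Assessment.",
    "Placement timetables are published on the module SharePoint site.",
]

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(
        model="text-embedding-3-small",  # placeholder model name
        input=texts,
    )
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(docs)  # built once, when the documents are uploaded

def answer(question: str) -> str:
    query = embed([question])[0]
    # Cosine similarity between the question and every indexed chunk.
    scores = doc_vectors @ query / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query))
    context = docs[int(scores.argmax())]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("Where do I find the placement timetable?"))
```

Grounding answers in retrieved documents like this is also what lets such tools cite their sources, though it doesn’t remove the data-sharing concerns above.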

Finally, after lunch, I presented on something not AI, but EDI – the Equality, Diversity and Inclusion Portal which I have created at Sunderland in partnership with our EDI team in an effort to widen access to our various EDI educational resources.


UoS Learning and Teaching Conference 2023

Learning and Teaching Conference, 2023
The big boss up on stage, doing the introductions

Another out-of-this-world conference this year, but alas, nobody who was one degree of separation from walking on the moon this time, as our attention instead turned to… yes, you guessed it, generative artificial intelligence.

The morning keynote was given by Thomas Lancaster of Imperial College London, who has done a lot of research over the years on contract cheating, and who has now turned his attention to the new AI tools which have appeared over the past year. Interestingly, he commented that essay mill sites are being pushed to students as hard as they ever have been, though I suspect that these agencies are now themselves using generative AI tools to displace the already low-paid workers in the developing world who were previously responsible for writing assignments on demand for Western students.

The first breakout session I attended was ‘Ontogeny: Mentoring students to succeed in a world of AI’ by Dr Thomas Butts and Alice Roberts, who discussed how medical students are using GAI and the issues this is causing in terms of accuracy, as these models often present wrong information as truth, which has particularly serious consequences in medicine. There was an interesting observation on culture and social skills: students now seem to prefer searching the internet for help and information rather than simply asking their teachers and peers.

The second session was ‘Enhancing the TNE student experience through international collaborative discussions and networking opportunities’ by Dr Jane Carr-Wilkinson and Dr Helen Driscoll, who discussed the Office for Students’ plans to regulate TNE (trans-national education), though no-one quite seems to know how they are going to do this – including the OfS. This was an interesting discussion which explored the extent of our TNE provision (I don’t think I had appreciated the scale before: over 7,000 students across 20 partners) and the issues involved in ensuring quality across the board.

There was also a student panel discussion in which students were asked about their use of GAI and their understanding of the various issues surrounding plagiarism. They demonstrated quite a robust level of knowledge, with many of them saying that they are using ChatGPT as a study assistant to generate ideas, but I did groan to hear one person talk about the "plagiarism score" in Turnitin and how "20% plagiarism is a normal amount" that they don’t worry about until it gets higher. (Turnitin’s score measures text similarity, not plagiarism.) The myths penetrate deep.

The final afternoon keynote was given by Dr Irene Glendinning of Coventry University, who talked about her research on the factors which lead to plagiarism and cheating. This included a dense slide of factors such as having the opportunity, thinking they won’t be detected, and so on, but nowhere on it were cultural factors identified, nor the way that higher education in the UK has been marketised over the recent past. I’ve certainly come across comments along the lines of: if students are paying £9,000 a year for tuition, why not just pay a few hundred more to make assessment easier or guarantee better results? But I’m noticing more and more that people no longer seem willing or able to challenge the underlying political decisions.


AI in Education: Unleashing Creativity and Collaboration

Word cloud showing positivity towards AI
Word cloud showing some positivity towards AI

This was the University of Kent’s third Digitally Enhanced Education webinar on the topic of AI in education, this time with a focus on how AI can be used positively to support creativity and collaboration. An open poll on our thoughts ran throughout the afternoon, and as you can see from the screenshot above, the group was far more optimistic about it all than us doom-saying learning technologists at ALT North East. All of the presentations were recorded and are available on their YouTube channel.

A few themes stood out for me. On how GAI is impacting students, Dr Sam Lau of Hong Kong Baptist University talked about a student survey they have run, in which students described how they are starting to use GAI tools as a new, ‘better’ type of search engine and teaching assistant. Cate Bateson, Hannah Blair and Clodagh O’Dowd, students at Queen’s University Belfast, reported that students want clarity and guidance from their institutions on where and how they are allowed to use AI tools. This was echoed by Liss Chard-Hall, a study skills tutor, who said that students have reported to her a new reluctance to use tools which were already using AI before ChatGPT, such as Grammarly, because they aren’t sure if it’s allowed by their institution. One person in the chat even commented that they knew of a student who was scared to use the spelling and grammar checker in Word lest they break new university rules about using AI in assessment.

Also from the chat, there was a discussion about which areas of university life are going to be most disrupted. Literature reviews were a big one: what benefit is there in conducting a complex, time-consuming literature search when you can ask an AI model to do it for you? To that end, I learned about a new tool that claims to be able to do just this: Elicit. Another useful discovery from this session was This Person Does Not Exist, which generates photo-realistic images of people.
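On the Elicit point: I don’t know how it works under the hood, but the core idea behind AI-assisted literature search – ranking papers by how closely they match a research question – can be sketched in a few lines. A toy illustration using a TF-IDF baseline with scikit-learn (the papers below are invented placeholders; real tools use far richer embeddings and full-text sources):

```python
# A toy illustration of ranking papers against a research question,
# the core idea behind AI literature search tools. This is a simple
# TF-IDF baseline with scikit-learn; the papers are invented, and
# real tools use far richer models and full-text sources.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "Hypothetical paper A": "Effects of generative AI tutoring on undergraduate writing skills.",
    "Hypothetical paper B": "Bias in large language models trained on web-scraped text.",
    "Hypothetical paper C": "Soil microbiomes in upland peat bog restoration.",
}

question = "How does generative AI affect student writing?"

# Vectorise the abstracts and the question in one shared vocabulary.
vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform(list(abstracts.values()) + [question])

# Similarity of the question (last row) to each abstract.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Print the papers ranked by relevance to the question.
for title, score in sorted(zip(abstracts, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {title}")
```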

On impacts in the wider world, Dr Ioannis Glinavos of the University of Westminster made the case that jobs in many areas will become more about verifying information and accuracy, as has happened with translation over the past couple of decades. While it is still possible to make a living as a translator, machine translation does the bulk of the work now, with human translators post-editing and checking for contextual and cultural relevance.

Finally, Anna Mills from the College of Marin in the US brought ethical questions back to the fore. First, she reminded us that these new GAI models are designed to output plausible-sounding responses, not truth – they don’t care about truth – hence we all need to verify any information sourced from GAI tools. Anna then talked about two facets of “AI colonialism”: first, that GAI models are primarily trained on source texts written in the West (and as we know from the stats about who writes Wikipedia articles and Reddit posts, for example, we can infer a predominance of certain genders and skin colours too – biases feeding biases…), and second, that content moderation is being outsourced to low-paid workers in the developing world, an inconvenient truth that isn’t getting enough attention. Anna’s presentation is available under a CC licence and is well worth reading in full.
