

UoS Learning and Teaching Conference 2023

The big boss up on stage, doing the introductions

Another out-of-this-world conference this year, but alas nobody who was one degree of separation from walking on the moon this time, as our attention instead turned to… yes, you guessed it, generative artificial intelligence.

The morning keynote was given by Thomas Lancaster of Imperial College London, who has done a lot of research over the years on contract cheating and has now turned his attention to the new AI tools which have appeared over the past year. Interestingly, he commented that essay mill sites are being pushed to students as hard as they ever have been, but I suspect that these agencies are now themselves using generative AI tools to displace the already low-paid workers in the developing world who were previously responsible for writing assignments on demand for Western students.

The first breakout session I attended was ‘Ontogeny: Mentoring students to succeed in a world of AI’ by Dr Thomas Butts and Alice Roberts, who discussed how medical students are using GAI and the issues this is causing in terms of accuracy, as these models often present wrong information as truth, which has particularly serious consequences in medicine. There was an interesting observation on culture and social skills: students now seem to prefer accessing the internet for help and information rather than simply asking their teachers and peers.

The second session was ‘Enhancing the TNE student experience through international collaborative discussions and networking opportunities’ by Dr Jane Carr-Wilkinson and Dr Helen Driscoll, who discussed the Office for Students’ plans to regulate TNE (trans-national education), though no-one quite seems to know how they are going to do this. Including the OfS. This was an interesting discussion which explored the extent of our TNE provision (I don’t think I had appreciated the scale before: over 7,000 students across 20 partners) and the issues involved in ensuring quality across the board.

There was also a student panel discussion in which students were asked about their use of GAI and their understanding of the various issues surrounding plagiarism. They demonstrated quite a robust level of knowledge, with many of them saying that they are using ChatGPT as a study assistant to generate ideas, but I did groan to hear one person talk about the "plagiarism score" in Turnitin and how "20% plagiarism is a normal amount", and that they don’t worry until it gets higher. The myths penetrate deep.

The final afternoon keynote was given by Dr Irene Glendinning of Coventry University, who talked about her research on the factors which lead to plagiarism and cheating. This included a dense slide on various factors such as having the opportunity, thinking they won’t be detected, etc., but nowhere on there were cultural factors identified, nor the way that higher education in the UK has been marketized over recent years. I’ve certainly come across comments along the lines of: if students are paying £9,000 a year in tuition, why not just pay a few hundred more to make assessment easier or guarantee better results? But I’m noticing more and more that people don’t seem to be willing or able to challenge the underlying political decisions anymore.


AI in Education: Unleashing Creativity and Collaboration

Word cloud showing some positivity towards AI

This was the University of Kent’s third Digitally Enhanced Education webinar on the topic of AI in education, this time with a focus on how AI can be used positively to support creativity and collaboration. An open poll on our thoughts ran throughout the afternoon, and as you can see from the screenshot above, the group was far more optimistic about it all than us doom-saying learning technologists at ALT North East. All of the presentations were recorded and are available on their YouTube channel.

A few themes stood out for me. On how GAI is impacting students, Dr Sam Lau of Hong Kong Baptist University talked about a student survey they have done, in which students said they are starting to use GAI tools as a new, ‘better’ type of search engine and teaching assistant. Cate Bateson, Hannah Blair and Clodagh O’Dowd, students at Queen’s University Belfast, reported that students want clarity and guidance from their institutions on where and how they are allowed to use AI tools. This was echoed by Liss Chard-Hall, a study skills tutor, who said that students have reported to her a new reluctance to use tools which were already using AI before ChatGPT, such as Grammarly, because they aren’t sure if it’s allowed by their institution. One person in the chat even commented that they knew of a student who was scared to use the spelling and grammar checker in Word lest they break new university rules about using AI in assessment.

Also from the chat, there was a discussion about which areas of university life are going to be most disrupted. Literature reviews were a big one: what benefit is there in conducting a complex, time-consuming search of the literature when you can ask an AI model to do it for you? To which end, I learned about a new tool that claims to be able to do just this: Elicit. Another useful discovery from this session is This Person Does Not Exist, which generates photo-realistic images of people.

On impacts in the wider world, Dr Ioannis Glinavos of the University of Westminster made the case that jobs in many areas will become more about verifying information and accuracy, as has happened with translators in the past couple of decades. While it is still a necessary skill, and possible to make a living as a translator, machine translation does the bulk of the work now, with human translators doing post-editing and checking for contextual and cultural relevance.

Finally, Anna Mills from the College of Marin in the US brought ethical questions back to the fore. First, she reminded us that these new GAI models are designed to output plausible-sounding responses, not truth – they don’t care about truth – hence we all need to be mindful to verify any information sourced from GAI tools. Anna then talked about two facets of “AI colonialism”. First, that GAI models are primarily trained on source texts written in the West (and as we know from stats about who writes Wikipedia articles and Reddit posts, for example, we can infer a predominance of certain genders and skin colours too – biases feeding biases…), and second, that content moderation is being outsourced to low-paid workers in the developing world, an inconvenient truth that isn’t getting enough attention. Anna’s presentation is available under a CC licence and is well worth reading in full.
