[Image: word cloud showing some positivity towards AI]
This was the University of Kent’s third Digitally Enhanced Education webinar on the topic of AI in education, this time with a focus on how AI can be used positively to support creativity and collaboration. An open poll on our views ran throughout the afternoon, and as you can see from the screenshot above, the group was far more optimistic about it all than us doom-saying learning technologists at ALT North East. All of the presentations were recorded and are available on the university’s YouTube channel.
A few themes stood out for me. On how GAI is impacting students, Dr Sam Lau of Hong Kong Baptist University talked about a student survey his institution has run, in which students described how they are starting to use GAI tools as a new, ‘better’ type of search engine and teaching assistant. Cate Bateson, Hannah Blair and Clodagh O’Dowd, students at Queen’s University Belfast, reported that students want clarity and guidance from their institutions on where and how they are allowed to use AI tools. This was echoed by Liss Chard-Hall, a study skills tutor, who said that students have told her of a new reluctance to use tools that incorporated AI long before ChatGPT, such as Grammarly, because they aren’t sure whether their institution allows it. One person in the chat even commented that they knew of a student who was scared to use the spelling and grammar checker in Word lest they break new university rules about using AI in assessment.
Also from the chat, there was a discussion about which areas of university life are going to be most disrupted. Literature reviews were a big one: what benefit is there in conducting a complex, time-consuming search of the literature when you can ask an AI model to do it for you? To that end, I learned about a new tool that claims to be able to do just this: Elicit. Another useful discovery from this session was This Person Does Not Exist, which generates photo-realistic images of people.
On impacts in the wider world, Dr Ioannis Glinavos of the University of Westminster made the case that jobs in many areas will become more about verifying information and accuracy, as has happened with translators over the past couple of decades. While translation is still a necessary skill, and it is still possible to make a living as a translator, machine translation now does the bulk of the work, with human translators post-editing and checking for contextual and cultural relevance.
Finally, Anna Mills from the College of Marin in the US brought ethical questions back to the fore. She began by reminding us that these new GAI models are designed to output plausible-sounding responses, not truth (they don’t care about truth), so we all need to be mindful to verify any information sourced from GAI tools. Anna then talked about two facets of “AI colonialism”: first, that GAI models are primarily trained on source texts written in the West (and, as we know from stats about who writes Wikipedia articles and Reddit posts, for example, we can infer a predominance of certain genders and skin colours too, biases feeding biases); and second, that content moderation is being outsourced to low-paid workers in the developing world, an inconvenient truth that isn’t getting enough attention. Anna’s presentation is available under a CC license and is well worth reading in full.