Press "Enter" to skip to content

Tag: AI

Navigating the Future: Innovation and Integrity in the Era of AI

I was at St James’ Park today. I believe the local football fans are rather fond of the place, but I was there for Turnitin’s first roundtable discussion since before the pandemic. To start this post with something that isn’t AI: we had a look at Turnitin’s product roadmap, which is all about the new Feedback Studio. The new version has been redesigned from the ground up to be screen reader accessible, a common complaint about the old version, and to be fully responsive, rather than Turnitin developing mobile apps for the platform. The rubric manager has also been rewritten to improve managing and archiving rubrics, and to add the ability to import rubrics from common file formats like Excel, rather than the proprietary format they previously used. It goes live on July 15th, but institutions can opt out, and they are expecting a long period of transition. Alas, we are switching to the Canvas framework integration, so our staff won’t benefit from this.

And that’s about it for ‘not AI’. In the opening remarks Turnitin presented the outcomes of a global staff and student survey on perceptions of generative artificial intelligence. Overall, 78% of respondents were positive about the potential of AI, while at the same time 95% believed that AI was being misused. Among students only, 59% were concerned that an over-reliance on AI would result in reduced critical thinking skills (I have thoughts on this that I’ll circle back to later). In the slightly blurry photo above (I was sat at the back) you can see the survey results broken down by region, showing that in the UK and Ireland we are the least optimistic about AI having a positive impact on education, at only 65%, while India has the most positive outlook at 93%. All regions report being overwhelmed by the availability and volume of AI, which is unsurprising when every application and website is adding spurious AI tools to their services in a desperate attempt to be The One that sticks and ends up making a profit. (Side note to remind everyone that no-one is making any money out of actual AI systems in the current boom: these large language models are horrifically expensive to train and run, and the whole thing is being sustained by investment capital in a huge gamble on future returns. What could possibly go wrong!?)

The keynote address was delivered by Stephen Gow, Leverhulme Research Fellow at Edinburgh Napier University, who discussed the StudentXGenAI research project, and the ELM tool at the University of Edinburgh, an institutionally provided front-end for accessing various language models which has safeguards built in to prevent misuse. Stephen reported on the mixed success of this. While it seems like a good idea, and the kind of thing I believe universities should be providing to ensure equitable access for all students, uptake has been poor, and students report that they don’t like using the tool because they feel it’s ‘spying on them’, and would rather use AI models directly – highlighting issues of trust and autonomy. Stephen pointed us to C. Thi Nguyen’s paper ‘Trust as an Unquestioning Attitude‘ for a more detailed discussion of trust as it pertains to complex IT systems, and how trust should be viewed not as a binary, but as a delicate and negotiated balance.

During our breakout roundtable discussions, my group discussed how AI is a divisive issue: people either love it or hate it, with few in the middle ground. There is some correlation along generational lines here, with younger staff and students being more positive, but it isn’t an exact mapping. One of my table colleagues reported having an intern, a young, recent graduate, who refuses to use any Gen AI systems on environmental and ethical grounds, while another colleague won’t use it because they fear offloading their thinking skills to it. That was the second time such a sentiment had been expressed today, and it made me think of the parallels with the damage that social media has done to attention spans. While that concept took a long time to enter the public consciousness (and we are barely starting to deal with the ramifications), there seem to be more voices raising the problem of AI’s impact on cognitive ability, and it’s happening sooner in the cycle, which gives me some limited optimism. A colleague at my table also introduced me to the concept of ‘AI shaming‘, from a paper by Louie Giray.

Finally, we were given a hands-on experience of Clarity, Turnitin’s new product which provides students with a web interface for written assessments with a built-in AI chat assistant. The idea is to provide students with an AI system that they can use safely, and which gives confidence to both them and their tutors that there has been no abuse of Gen AI to write the essay. I like the idea of this, and I have advocated for Sunderland to provide clear guidance to students on what they can and can’t use, and to provide something legitimate for students which would have safeguards of some kind to prevent misuse. Why, therefore, when presented with just such a solution, was I so sceptical and disappointed, unable to see anything but its flaws? Maybe the idea just doesn’t work in practice.

I was hoping to see and learn more about Clarity today, so I was very pleased that we were given this opportunity. Of course I immediately started to try and break it. I went straight in with the strawberry test, but the system just kept telling me it wouldn’t help with spelling, and directed me to write something addressing the essay question instead. I did get it to break though: first by inserting the word into my essay and asking it to check my spelling and grammar, and then, once I had something written in the input window, I found that it would answer the question directly, reporting that ‘strawberries’ is actually spelled with one r and two b’s. Fail. When I overheard a colleague at another table reporting that it seemed to be directing them to use US English spelling, I decided to experiment by translating my Copilot-produced ‘essay’ into Spanish with Google Translate. Clarity then informed me that the assignment required the essay to be in English, a straight-up hallucination as there was no such instruction. What there was, as Turnitin told us, was that the system has been built on US English and can’t yet properly handle other variations and languages. They were also quite transparent about the underlying technology, which is based on Anthropic’s Claude model. I appreciated this, as I have found other companies offering AI tools to be evasive, insisting that they have developed their own models based on their own training data only, something I’m highly sceptical about given the resource requirements.

Fun as it may be to try and break AI models with spelling challenges, it’s not what they are built for, and there is an old-fashioned spell checker built into the text entry box. However, that doesn’t mean that when presented with an AI chatbot in a setting like this, students aren’t going to ask it questions about spelling and grammar. This seems like a perfectly legitimate use case, and the reason I suspect that Turnitin have installed a ‘guard rail’ here is that they are well aware that large language models are no good for this kind of question, just as they are no good for mathematical operations. Or, for that matter, for providing straight facts. The trend of people using these models as if they were search engines should frighten everyone. Our table chuckled when one of us reported that ChatGPT was confidently telling them that Nigel Farage was the Prime Minister (did I say chuckle? I meant shudder), but more subtle errors can be far harder to spot, and could have terrible ramifications in the fractured, post-truth world we’ve built. I’m sure I’ve said something like this before on here, and I probably will again, but calling these systems ‘intelligent’ has been a huge mistake. There is no intelligence to be found here. There is no understanding. Only very sophisticated prediction systems for what comes next after a given input.
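As an aside on why the strawberry test trips these systems up: language models don’t see words as letters at all, they see them as tokens. Here’s a minimal sketch of the idea in Python, assuming OpenAI’s tiktoken library purely for illustration (this says nothing about how Clarity or Claude are actually built):

```python
# Rough illustration: an LLM never sees "strawberries" letter by letter,
# only as one or more opaque token IDs, which is why letter-counting and
# spelling questions are a poor fit for next-token prediction.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokeniser used by several OpenAI models

word = "strawberries"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(f"'{word}' is seen as {len(token_ids)} token(s): {pieces}")
print(f"Actual count of the letter 'r': {word.count('r')}")  # 3, regardless of what a model guesses
```

The model only ever predicts which token chunk comes next, so anything that depends on the characters inside those chunks is guesswork dressed up as an answer.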

I’m most doubtful about the assumption that students will want to use Clarity in the first place. Am I giving myself away as old when I say that I would never even contemplate writing something as important as a multi-thousand word essay in an online web interface that requires a stable, constant internet connection? Clarity has no ability for students to upload their written work, and though you can copy and paste text into it, this would be immediately flagged by Clarity as an issue for investigation. There’s no versioning, no ability to export and save offline, limited formatting options and fonts, no ability to use plugins for reference management, etc. I also can’t imagine any circumstances in which I would recommend students use Clarity. It is not an infrequent problem that academics come to us reporting that they have spent hours writing student feedback in Turnitin’s Feedback Studio, only to find out later that their comments haven’t saved properly and just aren’t there. It is such a big problem that we routinely train our staff to write all of their feedback offline first, and then copy and paste it into Feedback Studio. Colleagues in the room challenged Turnitin about this, and the response was that in their evaluation students reported being very happy with the system.

Nevertheless, Turnitin believe that some kind of process validation is going to be necessary to ensure the academic integrity of written work going forwards, and I do think they have a point. But the only way I can see Clarity, or something like it, working is if academics mandate its use for assessment, with students having to do everything in the browser, in which case, unless they are teaching a module on how to alienate your students and make them hate you, it isn’t going to go down well. As much as Turnitin would like it to be so, I don’t think there’s a technological solution to this problem. I increasingly think that in order to validate student knowledge and understanding we are going to have to use some level of dialogic assessment, which doesn’t scale in the highly marketised higher education system we now find ourselves in.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Innovation and Integrity in the Age of AI

I don’t usually attend these Turnitin product updates, not out of a lack of interest, but because it’s something that lies more with the other half of the team here at Sunderland, so I leave them to it and to cascade what’s important to the rest of us when required. This one piqued my interest though, after seeing a preview of the new user interface at NELE last week. You can see some of the planned changes to the Feedback Studio and the Similarity Report view above. I asked a question about the lack of audio feedback following NELE, and was told that this, along with new video feedback capabilities, is on the roadmap and coming soon.

I was also interested in their new Clarity tool, which will allow students to submit or write their work through a web interface and get immediate feedback, with help on how to improve their writing from Turnitin’s AI chatbot. It’s very similar to how Studiosity’s Writing Feedback+ service works, so it will be interesting to see how that develops.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

NELE: June 2025

The final NELE meeting of the year took place at Newcastle University, and we began by looking at Northumbria and Durham’s experience of piloting Turnitin’s new user interface for Feedback Studio. It ‘looks’ good, modern and fresh, and there are some good additional features, such as the ability to link Quick Marks to parts of a rubric, but there are also missing features such as peer marking and audio feedback; features not working quite as they should, such as anonymous marking being rendered somewhat moot by the ‘Reveal Identity’ button; and perennial issues which remain unresolved, such as the infinitely nested scroll bars in the rubric view (first photo).

Next up, the team at Newcastle talked about their ongoing experience of using Inspera to manage digital exams. They shared some good practice of using videos within exams, giving the example of showing health students an ultrasound recording to watch and then asking questions about it. They are also still holding the line on proctoring, citing their testing experience of being able to easily trigger far too many false flags. Good for them.

Rounding off the morning, Adam and I from Sunderland, and Dan from Newcastle, led a discussion on VLE standards. I liked the work Newcastle have done on a specimen ‘perfect’ module that meets every standard, to show academics how it’s done, while our ‘MOT’ service, monitoring processes, and friendly interventions with academics on how they can improve their modules complete the circle.

After lunch, and some unscheduled physical activity for me (don’t ask), Newcastle presented on their learning analytics system, NULA, which has been developed in collaboration with Jisc. They had very good things to say about Jisc on this one, that they’ve been very supportive and responsive on building ways of monitoring and reporting on the measures which Newcastle wanted to set.

Next, it was Dan from Newcastle again, who talked about their experience of working with students to develop their new module template which has been designed to be mobile friendly first (second photo). Something many of us claim to do, but which actually seems to be quite rare.

Finally, we were joined by Emily from Newcastle’s library team who presented on the things which keep a librarian up at night. It’s AI. It’s always AI. Specifically, every publisher is experimenting with their own generative AI tools to help people find and analyse the resources in their databases. The problems are many. First, these features are coming and changing at the whim of the publisher, without warning or any ability to test and evaluate. One particularly egregious example Emily mentioned was a journal that would provide temporary access to their AI search tool to academics who had attended specific training events, or happened upon specific buttons and options on their website. Secondly, Emily was deeply concerned about AI literacy and who is responsible for teaching it. It seems to be falling on interested parties in different departments in different places, when it is really something that needs direction, dedicated roles and senior staff sponsorship. Finally, there are the hidden costs. While publishers are marketing these services as free improvements to their search tools, in reality they are raising subscription costs on the back end, at a time when the sector is struggling and almost every institution is closing courses and laying off staff.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Supporting Staff and Students in Moving from AI Scepticism to AI Exploration

How could I miss the latest HelF staff development session, as an avowed AI sceptic? Today Alice May and Shivani Wilson-Rochford from Birmingham City University talked about their approach to responding to the emergence of generative AI. As can be seen on the ‘roadmap’ above, this has included an AI working group, collaboration with staff and students on producing guidelines on use, sharing those via staff and student workshops, and collating resources on a SharePoint site. All things which mirror our approach at Sunderland.

Something they are doing which I liked was providing template text which academic staff can copy and paste into their assignment briefs on what kind of AI students are permitted to use, at four different levels from fully unrestricted to fully prohibited. They are also working on an assessment redesign project which takes the risks of GAI into account, based on work from the University of Sydney which analysed all of the different types of assessment they have and put them into two lanes based on how secure they are against GAI plagiarism. It’s Table 2 on the page I’ve linked to; it’s a very good table. I like it a lot.

Briefly mentioned was the fact that Birmingham are one of the few institutions in the UK who have enabled Turnitin’s AI detection tool, and I would have liked to have learned more about this. In a student survey on GAI (the second screenshot above), concerns about the accuracy of AI detection were one of the big things students raised.

Alice and Shivani left us with their plan for going forwards, which is to build a six-pillar framework on the different aspects of GAI’s impact on HE (third screenshot). Pillar 5 is ‘Ethical AI and Academic Integrity’. This one stood out as, once again, the ethical issues of the environmental impact and copyright were raised. Briefly. And then we moved on. It consistently bothers me, and I don’t have any brilliant answers, but I will reiterate the very basic one of simply choosing not to use these services unless they are solving a genuine problem.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Studiosity Partner Forum 2025

Today saw me visiting London once again for Studiosity’s fourth annual UK Partner Forum. In the keynote service update from CEO Mike Larson, it was all AI, all the time. Their pivot to AI-powered feedback continues at a rapid pace, and the messaging has changed from personalised feedback provided by actual human beings a few years ago to the line that this isn’t fast enough for students, who often work in a ‘just in time’ frame and therefore need feedback in minutes, not hours. They seem to be doing alright from it, as a substantial number of partners have now switched to Studiosity+, and they are working on a new tool for academics to help with course content creation. Previously announced human-powered services, like Study Assist, are still in development, but didn’t warrant a mention in the slides; someone had to ask what was happening with them.

Rebecca Mace, an independent researcher, presented on their work reviewing early real-world usage of Studiosity+, which our pilot on Study Online Canvas has contributed to (I have a write-up on this forthcoming). Next, Andy Jaffrey from Ulster University presented on their experience of winning the Times Higher University of the Year Award. This was largely tangential, but there was some discussion about values and their emphasis on human-to-human contact, which is why, like Sunderland, they are staying with the Studiosity Classic service.

After lunch we had Sharon Perera and Nathaniel Pickering from the University of Greenwich presenting on their ‘Write With Confidence’ initiative, inspired by our Write it Right. That’s going very well for them, with enough data now to show improved continuation and progression rates, and a 20% uptake across the university. All very similar to our findings. One difference is that they have gone for the AI powered service.

Finally, Nick Hillman from the Higher Education Policy Institute (HEPI) gave the afternoon keynote on the state of UK Higher Education. I feel like Studiosity always has someone offering this kind of perspective at these events, and I always find them fascinating. Some highlights I noted: 90% of students report using generative AI, but believe that if they used it for direct cheating they would be caught by institutional policies and technology. As shown in the third photo above, HEPI surveyed students on the possibility of institutions going bust and found that 31% were quite or very worried about it. Finally, and related to this, Nick offered a prediction that there would be mergers of HEIs in the next few years to prevent worst-case scenarios, but that, like the crisis in FE a few years ago, the sector would leave it too late and wait for a precipitating event to happen instead of getting ahead of the situation. I don’t think there was anything in his analysis that I would disagree with.

Slides from the day and other supporting documentation are available on Studiosity’s website, so you don’t have to squint at the scant few photos I took.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Generative AI: A Problematic Illustration

Screenshot of a slide from the presentation, showing some delicious pancakes
Mmm… pancakes…

To give the workshop its full title, Generative AI: A Problematic Illustration of the Intersections of Racialized Gender, Race, and Ethnicity. Facilitated by Nayiri Keshishi from the University of Surrey and Dustin Hosseini from the University of Glasgow, and based on Dustin’s blog post. Hands down, the best session on generative AI I’ve attended over the past two years. It was so good I’m going to rework the timetable of our PG Cert to include a version of this for the cohort I’m currently teaching.

Why was it so good? Because it took some of the ethical issues over the use of generative AI and turned them into an interactive session where we, as participants, could interrogate the problems for ourselves. This was done via the medium of a seemingly innocuous prompt which was put into an image generating AI system: ‘Create an image of a sweet, old X grandmother making pancakes’, where X was a given nationality, e.g. Russian or American. We were then asked to analyse the generated results using a framework which asked us to consider atmosphere, decor and clothing, and expressions and ethnicity.

Discussions about what we can do about this included cascading the learning and knowledge more widely, which is why all of the slides and resources to deliver the session have been published under a Creative Commons licence on ALDinHE’s website. Another suggestion was to document the issues we encounter when using these technologies and share them on relevant forums and social spaces. Finally, what I think is the best and most useful thing we can do as educators is to embed AI literacy in the curriculum.

The only note I had coming out of the session was that there was a statement, an assumption, that all of these new AI companies are making huge amounts of money. There is certainly a lot of money moving around in the space, but it’s all speculative investment on presumed future returns. In actuality, OpenAI lost $5 billion last year, and they’re on track to lose another $10 billion this year.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Copyright and Artificial Intelligence Consultation

Funny meme showing DeepSeek as a cat, stealing OpenAI's fish, which is stolen data
A gratuitously stolen meme from Reddit. Oh, the irony! The hypocrisy!

The UK government are currently running an open consultation on copyright and artificial intelligence, and have outlined their preferred solution to “include a mechanism for right holders to reserve their rights, enabling them to license and be paid for the use of their work in AI training” and to introduce “an exception [into copyright law] to support use at scale of a wide range of material by AI developers where rights have not been reserved.”

The main issue I have with this proposal is that it does nothing to respond to the wholesale copyright theft which the tech industry has already conducted. Additionally, it firmly places the emphasis on individual creators to protect their copyright, when the bleak reality is that individuals already have no practical means of redress against multinational mega-corporations like Meta, OpenAI and DeepSeek*, who openly admit to copyright theft to train their large language models. I would much prefer that the government spent its efforts on enforcing existing laws in order to protect the livelihoods of artists, authors and creators, rather than appeasing the tech industry.

But that’s just my opinion. If you have your own thoughts on the matter, you can read the full proposal on the gov.uk website and complete the consultation online. Like every government consultation I’ve ever engaged with, it’s dense, complicated and time-consuming, almost like it was designed to be off-putting and to lead to a foregone conclusion. I was guided in my submission by the work of the Authors’ Licensing and Collecting Society.

As well as seeking individual responses, organisations are also invited to respond to the consultation as collective bodies. ALT are doing so on behalf of the learning technology community, and are asking for feedback to them by the 18th of February, with the consultation closing a week later on the 25th.

* My compliments to DeepSeek on training their AI model on OpenAI’s AI model, then releasing it as open AI, which OpenAI is not, something which has irked them greatly, and for that alone they are worthy of praise.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

AI and Assessment Workshop

Screenshot of the Perplexity AI user interface, showing search options

Today I attended one of our own AI and Assessment Workshops to see what advice and guidance we are giving to academics and what their feelings and needs are around this topic. This is a new run of sessions which we have just started, and has been organised by one of our academics working on the topic alongside a member of my team.

Despite having published staff and student guidance documents and a dedicated SharePoint space to collate resources and our response, I found from conversing with staff at this event that there is still a prevailing feeling of a lack of steer and direction. People were telling me they don’t know which tools are safe to use, or what students should be told to avoid. We also had a lot of people from the Library Service today, which is perhaps also indicative of the need for firmer student guidance.

I was pleased to note that there is some good practice filtering through too, such as using a quiz-based declaration of use which students have to complete before unlocking their assignment submission link. We talked about adding this to our Canvas module template for next academic year; that’s something one of the academics suggested to us. On the other hand, I found people were still talking in terms of ChatGPT ‘knowing’ things, which is troubling because of the implication that these systems are more than they actually are.

While much of the session took the form of a guided dialogue, my colleague was also providing a hands-on demo of various systems, including Perplexity, which people liked for providing links out to the sources it had used (sometimes, not always) and for the ability to restrict answers to data from specific source types, such as ‘academic’, though they noted a very US bias in the results, a consequence of the training data which has gone into these models. I was quite impressed when I tried to ‘break’ the model with leading prompts and it didn’t indulge me.

A new tool to me was Visual Electric, an image generation site aimed at producing high quality photo-like images. I have thoughts on some of their marketing… But I’m going to try and be more positive when writing about this topic, as I find it very easy to go into a rant! So instead of doing that, I have added a short disclaimer to the bottom of this post, which I’m also going to add to future posts which I write about AI.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Helping Students Develop Critical Thinking Skills When Using Generative AI (Part 2)

Part two of Kent’s Digitally Enhanced Education series looking at how generative AI is affecting critical thinking skills. This week we had stand-out presentations from:

Professor Jess Gregory, of Southern Connecticut State University (nice to see the reach of the network, well, reaching out), who presented on the problem of mastering difficult conversations for teachers in training. These students will often find themselves thrust into difficult situations upon graduation, having to deal with stubborn colleagues, angry parents, etc., and Jess has developed a method of preparing them by using generative AI systems with speech capabilities to simulate difficult conversations. This can, and has, been done by humans of course, but that is time-consuming, could be expensive, and doesn’t offer the same kind of safe space for students to practise freely.

David Bedford, from Canterbury Christ Church University, presented on how the challenges of critical analysis are not new, and that anything produced as a result of generative AI needs to be evaluated in just the same way as we would the results of an internet search, or a Wikipedia article, or from books and journals. He presented us with the ‘BREAD’ model, first produced in 2016, for analysis (see first screenshot for detail). This asks us to consider Bias, Relevance, Evidence, Author, and Date.

Nicki Clarkson, University of Southampton, talked about co-producing resources about generative AI with students, and noted how they were very good at paring content down to the most relevant parts, and that the final videos were improved by having a student voiceover on them, rather than that of staff.

Dr Sideeq Mohammed, from the University of Kent, presented about his experience of running a session on identifying misleading information, using a combination of true and convincingly false articles and information, and said of the results that students always left far more sceptical and wanting to check the validity of information at the end of sessions. My second screenshot is from this presentation, showing three example articles. Peter Kyle is in fact a completely made-up government minister. Or is he?

Finally, Anders Reagan, from the University of Oxford, compared generative AI tools to the Norse trickster god, Loki. As per my third screenshot, both are powerful, seemingly magic, persuasive and charismatic, and capable of transformation. Anders noted, correctly, that now that this technology is available, we must support it. If we don’t, students and academics are still going to be using it on their own initiative, the allure being too powerful, so it is better for us as learning technology experts to provide support and guidance. In so doing we can encourage criticality, warn of the dangers, and encourage more specialised, research-focused generative AI tools such as Elicit and Consensus.

You can find recordings of all of the sessions on the @digitallyenhancededucation554 YouTube channel.


Helping Students Develop Critical Thinking Skills When Using Generative AI (Part 1)

From the University of Kent’s Digitally Enhanced Education series, a two-parter on the theme of how generative AI is affecting students’ critical thinking skills, with the second part coming next week. We’ve been living with generative AI for a while now, and I am finding diminishing returns from the various webinars and training I have been attending. Nevertheless, there are always new things to learn and nuggets of wisdom to be found in these events. The Kent webinar series has such a wide reach now that the general chat, as much as the presentations, is a fantastic resource. Phil has done a magnificent job with this initiative, and is a real credit to the TEL community.

Dr Mary Jacob, from Aberystwyth University, presented an overview of their new AI guidance for staff and students, highlighting for students that they shouldn’t rely on AI; for staff to understand what it can and can’t do, and the legal and ethical implications of the technology; and for everyone to be critical of the output – is it true? Complete? Unbiased?

Professor Earle Abrahamson, from the University of Hertfordshire, presented on the importance of using good and relevant prompts to build critical analysis skills. The first screenshot above is from Earle’s presentation, showing different perceptions of generative AI from students and staff. There were some good comments in the chat during Earle’s presentation on how everything we’ve discussed today comes back to information literacy.

Dr Sian Lindsay, from the University of Reading, talked about the risks of AI on critical thinking, namely that students may be exposed to a narrower range of ideas due to the biases inherent in all existing generative AI systems and the limited ranges of data they have access to, and are trained upon. The second screenshot is from Sian’s presentation, highlighting some of the research in this area.

I can’t remember who shared this, if it came from one of the presentations or the chat, but someone shared a great article on Inside Higher Ed on the option to opt out of using generative AI at all. Yes! Very good, I enjoyed this very much. I don’t agree with all of it. But most of it! My own take in short: there is no ethical use of generative artificial intelligence, and we should only use it when it serves a genuine need or use.

As always, recordings of all presentations are available on the @digitallyenhancededucation554 YouTube channel.
