Press "Enter" to skip to content

Sonya's Blog Posts

Copyright and Artificial Intelligence Consultation

Funny meme showing DeepSeek as a cat, stealing OpenAI's fish, which is stolen data
A gratuitously stolen meme from Reddit. Oh, the irony! The hypocrisy!

The UK government are currently running an open consultation on copyright and artificial intelligence, and have outlined their preferred solution to “include a mechanism for right holders to reserve their rights, enabling them to license and be paid for the use of their work in AI training” and to introduce “an exception [into copyright law] to support use at scale of a wide range of material by AI developers where rights have not been reserved.”

The main issue I have with this proposal is that it does nothing to respond to the wholesale copyright theft which the tech industry has already conducted. Additionally, it firmly places the emphasis on individual creators to protect their copyright, when the bleak reality is that individuals already have no practical means of redress against multinational megacorporations like Meta, OpenAI and DeepSeek*, who openly admit to copyright theft to train their large language models. I would much prefer that the government spent its efforts on enforcing existing laws to protect the livelihoods of artists, authors and creators, rather than on appeasing the tech industry.

But that’s just my opinion. If you have your own thoughts on the matter, you can read the full proposal on the gov.uk website and complete the consultation online. Like every government consultation I’ve ever engaged with, it’s dense, complicated, and time consuming. Almost like it was designed to be off-putting and to lead to a foregone conclusion. I was guided in my submission by the work of the Authors’ Licensing and Collecting Society.

As well as seeking individual responses, organisations are also invited to respond to the consultation as collective bodies. ALT are doing so on behalf of the learning technology community, and are asking for feedback to be sent to them by the 18th of February, with the consultation closing a week later on the 25th.

* My compliments to DeepSeek on training their AI model on OpenAI’s AI model, then releasing it as open AI, which OpenAI is not, something which has irked OpenAI greatly, and for that alone DeepSeek are worthy of praise.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing real people.

AI and Assessment Workshop

Perplexity AI User Interface
Screenshot of Perplexity search options

Today I attended one of our own AI and Assessment Workshops to see what advice and guidance we are giving to academics, and what their feelings and needs are around this topic. This is a new run of sessions which we have just started, organised by one of our academics working on the topic alongside a member of my team.

Despite the staff and student guidance documents we’ve published, and a dedicated SharePoint space to collate resources and our response, I found from conversing with staff at this event that there is still a prevailing feeling of a lack of steer and direction. People were telling me they don’t know which tools are safe to use, or what students should be told to avoid. We also had a lot of people from the Library Service today, which is perhaps also indicative of the need for firmer student guidance.

I was pleased to note that there is some good practice filtering through too, such as using a quiz-based declaration of use which students have to complete before unlocking their assignment submission link. We talked about adding this to our Canvas module template for next academic year, something one of the academics suggested to us. On the other hand, I found people were still talking in terms of ChatGPT ‘knowing’ things, which is troubling because of the implication that these systems are more than they actually are.
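As an aside, for anyone curious how that unlock works under the hood: Canvas module requirements and prerequisites can be set through its REST API as well as through the UI. Here’s a minimal sketch, assuming a hypothetical Canvas instance, API token, and course/module/item IDs; the declaration quiz sits in one module with a completion requirement, and the submission module lists that module as a prerequisite.

```python
import requests

# All IDs, the domain, and the token below are hypothetical placeholders.
BASE = "https://canvas.example.ac.uk/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}
COURSE = 1234
DECLARATION_MODULE = 111     # module containing the declaration quiz
DECLARATION_QUIZ_ITEM = 555  # the quiz as a module item
SUBMISSION_MODULE = 222      # module containing the assignment submission link

# Require a minimum score on the declaration quiz to complete its module.
requests.put(
    f"{BASE}/courses/{COURSE}/modules/{DECLARATION_MODULE}/items/{DECLARATION_QUIZ_ITEM}",
    headers=HEADERS,
    json={"module_item": {"completion_requirement": {"type": "min_score", "min_score": 1}}},
).raise_for_status()

# Lock the submission module behind completion of the declaration module.
requests.put(
    f"{BASE}/courses/{COURSE}/modules/{SUBMISSION_MODULE}",
    headers=HEADERS,
    json={"module": {"prerequisite_module_ids": [DECLARATION_MODULE]}},
).raise_for_status()
```

Baking those two settings (via the API or the module requirements UI) into the module template would give every course the declaration gate by default.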

While much of the session took the form of a guided dialogue, my colleague was also providing a hands-on demo of various systems, including Perplexity, which people liked for providing links out to the sources it had used (sometimes, not always) and for the ability to restrict answers to data from specific types of source, such as ‘academic’, though they noted a very US-centric bias in the results, a consequence of the training data which has gone into these models. I was quite impressed when I tried to ‘break’ the model with leading prompts and it didn’t indulge me.

A new tool to me was Visual Electric, an image generation site aimed at producing high quality photo-like images. I have thoughts on some of their marketing… But I’m going to try and be more positive when writing about this topic, as I find it very easy to go into a rant! So instead of doing that, I have added a short disclaimer to the bottom of this post, which I’m also going to add to future posts which I write about AI.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing real people.

Personal Safety Training


It’s okay, he has his safety tie on

This month’s big team meeting was given over to the University’s Security Manager for a session on personal safety, which also touched upon conflict management. More security than safety then, but ‘safety’ is a friendlier term. When I think of personal safety I tend to think more along the lines of the great Colin Furze and his shenanigans.

It was unexpected training, and pretty useful. We learned about de-escalating situations through a number of problem-based learning scenarios, the institutional and personal responsibilities with regard to duty of care and health and safety, and what the University is doing to keep us all safe at work. This includes, however you feel about it, the network of 400 security cameras on the Sunderland campus and the relatively new card-controlled access to buildings across the wider campus. The Estates team are also pushing for us to get a system called Safe Zone, an app-based panic button; we are, apparently, the only university in the North East not already using it.


Helping Students Develop Critical Thinking Skills When Using Generative AI (Part 2)

Part two of Kent’s Digitally Enhanced Education series looking at how generative AI is affecting critical thinking skills. This week we had stand-out presentations from:

Professor Jess Gregory, of Southern Connecticut State University (nice to see the reach of the network, well, reaching out), who presented on the problem of mastering difficult conversations for teachers in training. These students will often find themselves thrust into difficult situations upon graduation, having to deal with stubborn colleagues, angry parents, etc., and Jess has developed a method of preparing them by using generative AI systems with speech capabilities to simulate difficult conversations. This can be, and has been, done by humans of course, but that is time consuming, can be expensive, and doesn’t offer the same kind of safe space for students to practise freely.

David Bedford, from Canterbury Christ Church University, presented on how the challenges of critical analysis are not new, and that anything produced as a result of generative AI needs to be evaluated in just the same way as we would the results of an internet search, or a Wikipedia article, or from books and journals. He presented us with the ‘BREAD’ model, first produced in 2016, for analysis (see first screenshot for detail). This asks us to consider Bias, Relevance, Evidence, Author, and Date.

Nicki Clarkson, University of Southampton, talked about co-producing resources about generative AI with students, and noted how they were very good at paring content down to the most relevant parts, and that the final videos were improved by having a student voiceover on them, rather than that of staff.

Dr Sideeq Mohammed, from the University of Kent, presented about his experience of running a session on identifying misleading information, using a combination of true and convincingly false articles and information. He said that students always left the sessions far more sceptical and wanting to check the validity of information. My second screenshot is from this presentation, showing three example articles. Peter Kyle is in fact a completely made-up government minister. Or is he?

Finally, Anders Reagan, from the University of Oxford, compared generative AI tools to the Norse trickster god, Loki. As per my third screenshot, both are powerful, seemingly magic, persuasive and charismatic, and capable of transformation. Anders noted, correctly, that now that this technology is available, we must support it. If we don’t, students and academics are still going to use it on their own initiative, the allure being too powerful, so it is better for us as learning technology experts to provide support and guidance. In so doing we can encourage criticality, warn of the dangers, and encourage the use of more specialised, research-focused generative AI tools such as Elicit and Consensus.

You can find recordings of all of the sessions on the @digitallyenhancededucation554 YouTube channel.


Nothin’ But Blue Skies


Smiling at You

I’m on Bluesky now. Of course I am, it’s having a moment.

Long have I been a fan of Mastodon; its open-source and decentralised model closely aligns with my personal values, but it’s never taken off. There is a technical barrier to entry with Mastodon, and though it has seen slow and steady growth over the years, the network effect has never kicked in for it. Bluesky is built on a similar decentralised model, but removes the friction of entry. Paired with the social media site formerly known as Twitter reaching new heights in its villain arc, Bluesky adoption is skyrocketing.

I’m also on Threads. But who cares? Bolted on to Instagram overnight like a bad school project, Threads was garbage from the outset. I think I took one look at it and checked out.

Starting over again on yet another social media site can be scary and intimidating, but I found a great Firefox plugin called Sky Follower Bridge which will scan your followers on that old site, and look for matches on Bluesky. I found 69! And I thought I was an early adopter.


Helping Students Develop Critical Thinking Skills When Using Generative AI (Part 1)

From the University of Kent’s Digitally Enhanced Education series, a two-parter on the theme of how generative AI is affecting students’ critical thinking skills, with the second part coming next week. We’ve been living with generative AI for a while now, and I am finding diminishing returns from the various webinars and training sessions I have been attending. Nevertheless, there are always new things to learn and nuggets of wisdom to be found in these events. The Kent webinar series has such a wide reach now that the general chat, as much as the presentations, is a fantastic resource. Phil has done a magnificent job with this initiative, and is a real credit to the TEL community.

Dr Mary Jacob, from Aberystwyth University, presented an overview of their new AI guidance for staff and students, highlighting for students that they shouldn’t rely on AI; for staff to understand what it can and can’t do, and the legal and ethical implications of the technology; and for everyone to be critical of the output – is it true? Complete? Unbiased?

Professor Earle Abrahamson, from the University of Hertfordshire, presented on the importance of using good and relevant prompts to build critical analysis skills. The first screenshot above is from Earle’s presentation, showing different perceptions of generative AI from students and staff. There were some good comments in the chat during Earle’s presentation on how everything we discussed comes back to information literacy.

Dr Sian Lindsay, from the University of Reading, talked about the risks of AI on critical thinking, namely that students may be exposed to a narrower range of ideas due to the biases inherent in all existing generative AI systems and the limited ranges of data they have access to, and are trained upon. The second screenshot is from Sian’s presentation, highlighting some of the research in this area.

I can’t remember who shared this, whether it came from one of the presentations or the chat, but someone shared a great article on Inside Higher Ed on the option to opt out of using generative AI at all. Yes! Very good, I enjoyed this very much. I don’t agree with all of it. But most of it! My own take in short: there is no ethical use of generative artificial intelligence, and we should only use it when it serves a genuine need.

As always, recordings of all presentations are available on the @digitallyenhancededucation554 YouTube channel.


Prospects Looking Bright

Ceramic pots with faces on them, and cacti growing out of them. Surrounded by lush greenery.
Some prickly boys from my holiday. Who’s good at GeoGuessr then?

I went away on holiday in October, and when I came back my team no longer existed! CELT, the Centre for Enhancement of Learning and Teaching at the University of Sunderland, is no more. Our individual teams, my Learning Design Team, along with the TEL Team and the Academic Development Team, have been merged into the Centre for Graduate Prospects where, for the time being at least, we continue to operate pretty much as we were.

The University has also undergone wider changes, moving from five Faculties to three. I think anyone reading this with some knowledge of the context of UK Higher Education will be able to infer the reasons for this!

The Centre for Graduate Prospects has been around for a couple of years now, bringing together various teams and resources to provide a more holistic experience for students and help them successfully transition from study to employment. We’ve already been doing good work together, co-creating resources for the Canvas module template and providing dedicated sessions on the PG Cert, so the restructure will provide more opportunities and possibilities. The goal is to make the new Centre a model of good practice and innovation, a trailblazer for other institutions, much as the first CELTs were a decade ago.


Inclusive Learning Festival

Slide showing diversity of student body at Sunderland
Some stats on the diversity of UoS students

The University launched a new Centre for Inclusive Learning in March to help us meet our goals of widening participation and providing an inclusive educational experience for all students. CELT are of course working with them on many objectives, and at this, the Centre’s launch event, we presented on how we can help academics with instructional design and universal design for learning.

I was also able to attend many of the other sessions throughout the day, and learned a lot about some great work being done across the institution. For example, I learned that in our Faculty of Health, Science and Wellbeing, the bank of PCPIs (Patient, Carer and Public Involvement representatives), who are consulted on the delivery of medical and health modules, now includes a considerable contingent with experience of health care systems outside of the UK, who are providing valuable insight and perspectives.

In another talk, on decolonising the curriculum using a trauma-informed approach, there was a great discussion about problematic language: ‘deadline’ and ‘fire me an email’, for example, while using ‘due date’ when talking about assessments could be problematic for people with experience of miscarriage. I feel like this is an area where we are making good progress societally. I’ve been very pleased to watch the technology sector jettison the language of ‘master/slave’ over the past few years, and more and more systems now include options for pronouns and preferred names.

But of course, my main purpose on the day was to facilitate our team’s discussion around UDL. I felt that it was important for CELT to be contributing to the conference in some capacity, and I was also able to use the event to give some of my team experience in presenting at a conference. It’ll be good for them! If that’s the direction they want to take their careers of course. So I did introductions and a little bit of context setting, and then handed over to two of my team to tag-team the bulk of our presentation.


ALT Cancels Twitter, Bravo

Crow sitting on a grave marker
The only crow you need to see this month

Just a quick one to say kudos to ALT for suspending their Twitter accounts and activity. To quote:

“Following recent events that conflict with our values and in consultation with our Trustees, staff, and members of the community, we will cease all activity on X from 30 August 2024.

To safeguard our identity, we will retain our @A_L_T and @OERconf accounts. Individual members who are still active on X may continue to post about ALT’s activities. We will no longer post, respond or retweet as an organisation.”

Good stuff! We need to see more big organisations removing themselves from the platform. I feel a twinge of guilt at not having deleted my account entirely but, like ALT, I have a unique name and feel I need to own it on such spaces, so I too have merely mothballed it.

ALT can now be found on LinkedIn, Bluesky, and Mastodon.


Institutional Experiences of Microsoft Copilot

Diagram of MS Copilot architecture
Diagram of Microsoft Copilot Architecture

In November 2023, I wrote a rambling post for the ALT Blog about my thoughts on generative AI and where it was going to go. I made a prediction there that someone was going to buy a site licence for ChatGPT, and lo! This HeLF discussion was about exactly that. Sort of. It’s Microsoft’s Copilot tool that the majority of people are going for, because we are all, or mostly, existing Microsoft customers and they are baking it into their Office 365 offering. Though there are a couple of institutions looking at ChatGPT as an alternative.

Costs and practicality were a big issue under discussion. Microsoft are only giving us the very basic service for free, and if you want full Copilot Premium then it’s an additional cost of around £30 a month per individual. Pricey, but it gets worse. They have tiers upon tiers, and if you want to do more advanced things, like having your own Copilot chatbot available in your VLE for example, then you’re into another level of premium which runs to hundreds of pounds a month.

We also discussed concerns about privacy and data security. If Copilot is given access to your OneDrive and SharePoint files, for example, then you need to make sure that everything has the correct sensitivity labels, or else you run the risk of the chatbot surfacing confidential information to users.
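To illustrate the kind of audit that implies, here’s a rough sketch using Microsoft Graph to flag unlabelled files in a SharePoint document library. The drive ID and token are placeholders, and the extractSensitivityLabels action is, as I understand it, one of Microsoft’s metered premium APIs, so treat this as an assumption to verify rather than a recipe.

```python
import requests

# Hypothetical placeholders: a Graph access token and a SharePoint drive ID.
TOKEN = "YOUR_GRAPH_TOKEN"
DRIVE_ID = "b!exampleDriveId"
GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# List the top level of the drive and flag any file with no sensitivity label.
items = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/root/children", headers=HEADERS).json()
for item in items.get("value", []):
    if "file" not in item:
        continue  # skip folders; a real audit would recurse into them
    resp = requests.post(
        f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/extractSensitivityLabels",
        headers=HEADERS,
    )
    labels = resp.json().get("labels", []) if resp.ok else []
    if not labels:
        print(f"Unlabelled: {item['name']}")
```

Anything this prints is exactly the sort of file Copilot could cheerfully surface to the wrong audience.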

At Sunderland we have no plans for any premium generative AI tools at present; the costs are just prohibitive. And it’s not just at this level: the entire field of generative AI is hugely expensive and completely unsustainable. So I’ll end as I began, with prognostications. OpenAI is haemorrhaging money; they lost over half a billion dollars last year. They are living on investment capital, and unless the finance bods start seeing a serious return, they are going to pull the plug. Sooner rather than later, I reckon. I don’t think OpenAI will go under exactly, but I do think they are going to get eaten by one of the big players, Microsoft most likely. A lot of headlines were made last year about Microsoft’s $10 billion investment, but people haven’t read the fine print: that $10 billion was in the form of server credits, so Microsoft is going to get it back one way or another. I’m going to give the AI bubble another six to eighteen months.

What will come after that? Generative AI isn’t going to go away, of course; it’s a great technological achievement. But I think we will see a shift towards smaller models being run locally on our personal devices. It will be interesting to see how Apple Intelligence pans out; they aren’t putting all of their eggs into the ChatGPT basket. And as for the tech and finance industries? They’ll just move on to the next bubble. Quantum computing, anyone?
