Press "Enter" to skip to content

Sonya's Blog Posts

Book

Photos of a book, where I have been published!
My name in print. Immortality = assured.

I wrote a thing. But this time, for the first time, it’s been published in a book! My case study on the first year of Sunderland’s partnership with Studiosity has been published as ‘Case Study 3.6: Enabling Students to Evaluate Their Academic Writing’ in ‘Teaching and Learning with Innovative Technologies in Higher Education: Real-World Case Studies’, edited by Gelareh Roushan, Martyn Polkinghorne and Uma Patel.

That’s a big life goal achieved, but I guess I’m going to have to get my name on the front of a book next!


NELE: February 2025

It’s the glorious return of NELE, the North East Learning Environments user group! Except, not really, because the first meeting of the reformed NELE was in November, and I got a cold the day before and couldn’t go. NELE is the same group of folks as ALT North East, but we have mutinied and left the umbrella of ALT to become NELE once again, as we were before we partnered with ALT. Times change, organisations change, logos change, but the people and the purpose remain.

Today we talked a lot about ePortfolio systems, sharing our experiences of Mahara, both hosted and self-administered, PebblePad, and bespoke solutions such as OneNote and the NEE Pad system for healthcare professionals in the region. The one area of consensus was that whatever solution you choose, take-up rates remain modest. At Sunderland, we’re looking at potentially replacing Mahara following disappointment with the Canvas integration, but after hearing what folks are paying for PebblePad, I don’t think we’ll be going down that route. Nor did OneNote come out of the discussion well, with people noting major problems with syncing and complications caused by multiple versions of the app, all looking and working a little differently.

The second topic of conversation was around digital accessibility and VLE threshold standards, and how to maintain a minimum level of quality. It’s interesting to see that a few institutions are picking up on the idea of baseline standards of some kind, a topic we’ll come back to in a future meeting, as we’re doing some good work on this at Sunderland. I learned about Global Accessibility Awareness Day, which falls on May 15th. That presents a good opportunity to do some awareness-raising work in the lead-up.

After lunch we had a tour of Teesside’s new BIOS building for health sciences, which is where all of today’s photos come from. They have an interactive room powered by Gener8, which is a lot like our Immersive Interactive room, but the technology has come along a bit: the projectors are low profile and flush to the ceiling, very inconspicuous, and infrared sensors along the top of the walls make the ‘touchscreen’ functionality work. We also had a look in their simulation suite, which includes a whole surgery room, and the microscopy lab was very impressive, with microscopes able to output to screens for everyone to see.

Finally, we had a discussion about note taking applications and approaches, commenting on how these kinds of skills are not taught as part of coursework, but only, at best, in optional study skills sessions usually run by the library. Some of the software we looked at included Obsidian and Notion.


CSET 2025

Photo of a phone with a thinking emoji on screen
Photo by Markus Winkler on Unsplash

CSET 2025, Critical Studies of Education and Technology, is a global research project organised by Neil Selwyn, Professor of Education at Monash University, Australia. It brings together academics and educators with an interest in digital technology to discuss the issues we are facing in small groups, to feed back to the central project, and to build local communities. I was pleased to find that Durham University had picked up the initiative in the North East, and a number of representatives from Sunderland were able to attend the event. Rather than recapping the discussions at the event itself, I’ve decided to give my individual written response to each of the four research questions below, informed by those conversations.

1. What are the pressing issues, concerns, tensions and problems that surround EdTech in our locality? What questions do we need to ask, and what approaches will help us research these questions?

I think it’s increasingly difficult to separate ‘EdTech’ from ‘technology’ in general. My first thought about the impact of technology on the ‘issues, concerns, tensions and problems’ facing people in the North East of England, and Sunderland in particular, one of the country’s most deprived cities, is how social media has, over the past 10-15 years, destroyed the idea of a common truth.

This is a concern which should be at the heart of universities as places of learning, but instead I feel that our time and efforts are increasingly spent at the whim of whatever tech craze is current, struggling to stay ahead with little criticality. Just in my time as a learning technologist, the hype bubbles I’ve seen come and go include virtual reality, the blockchain, MOOCs, machine learning, the metaverse (VR again), and now generative AI (sparkling machine learning). Big Tech has sold every one of these innovations as the next big thing, driving us to adopt virtual and augmented reality headsets, or convert our modules to fully self-directed online courses, only for the benefits to prove rather niche. Meanwhile, the Canvas.net modules I helped develop have been quietly abandoned and then deleted, and the Meta Quest sits atop our lockers gathering dust.

I will grant that generative AI feels a little different, as the pressure there seems to be coming from the bottom up – from students’ use and misuse of these tools, to which we have to respond to uphold the integrity of our degrees and awards. AI literacy is something that we really need to get on top of.

2. What social harms are we seeing associated with digital technology and education in our locality?

There is a lack of ownership when it comes to technology. The big, central VLE is a university-owned and controlled space, with students as consumers of content, and when we provide spaces which try to flip the pedagogy and make them student-owned, like an ePortfolio, I find that use is limited. Instead, students develop their own personal learning environments on platforms like WhatsApp and WeChat. It was perhaps ever thus: going back to my own university student experience, the Facebook groups which popped up for each module were invaluable sources of information, and for sharing things that perhaps our teachers and the institution wouldn’t want us sharing, old exam papers for example. But these informal spaces can be problematic too, from inequalities of access, to bullying and harassment which is hidden away.

There is also an increasing problem of rentier capitalism, as technology has shifted from a model of buying software once and owning it, to recurring subscriptions where you lose your access and data if you can’t pay. Many of these services are also tiered, with better-off students able to pay higher subscriptions for more or better features, which exacerbates poverty and contributes to wealth inequality, the everything bagel behind pretty much every social and political problem of our age.

3. What does the political economy of EdTech look like in our region? What do local EdTech markets look like? How are global Big Tech corporations manifest in local education systems? What does EdTech policy look like, and which actors are driving policymaking? What do we find if we ‘follow the money’?

Follow the money, and you’re going to end up in the USA. Maybe Australia. Australia has quite a nice little pocket industry of learning technology, e.g. Studiosity, but whichever side of the world you end up on, EdTech is dominated by a handful of tech giants like Blackboard, Instructure, and Turnitin. This means that we are often working around design and teaching conventions from a US market that don’t work in the UK. At Sunderland, our Canvas modules use a repurposed ‘syllabus’ page for our module template, despite the syllabus not being a concept in UK HE. Secure and private data storage is always an issue, and I don’t have a lot of faith in the integrity of the various ad-hoc data sharing agreements between the US and the UK / EU which have cropped up since GDPR and EU privacy legislation came into effect.

The UK has traditionally had quite a strong open source contingent, e.g. the Moodle and Mahara communities, but I feel like that’s fallen away a little in the past few years. The problem with open source solutions is that the software may be ‘free’, but it isn’t free to run, and HEIs taking this approach need a team of learning technologists and developers to look after it, a team which I fear can be seen as a cost saving when moving to hosted solutions with SLAs. But the more consolidated the sector becomes, the less power we have to drive change in the direction we want. I am glad that we still have organisations like Jisc and ALT that can advocate for us, are indeed formed of us, and can negotiate and innovate from a more powerful position. More of that in my answer to the next question.

Vendor lock-in is another issue with the big EdTech companies. There is EU regulation on data sharing and ownership, but proprietary features and functionality render it next to useless in my experience. When I ditched Spotify and started buying music again, I was able to export a huge spreadsheet of my library, which is lovely, but I can’t do anything with it! I feel like EdTech is even worse. When Sunderland migrated from Pearson LearningStudio (don’t ask…) to Canvas, we had to start again from a blank canvas, if you’ll pardon the pun. I’ve also attempted migrating my ePortfolio from PebblePad to Mahara using the Leap2a standard, which technically worked, but with very poor results.
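If you’re curious what a Leap2a export actually is under the hood, it’s essentially an Atom-based XML feed zipped up together with any attached files, which is why a migration can ‘work’ while losing most of the structure. Here’s a minimal Python sketch of peeking inside one; the feed’s filename varies by exporter, so this just grabs the first XML file it finds, and ‘portfolio-export.zip’ is a hypothetical filename:

```python
import zipfile
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# A Leap2a export is a zip bundling an Atom-based XML feed with attachments.
# The feed's filename varies by exporter, so take the first .xml we find.
with zipfile.ZipFile("portfolio-export.zip") as z:  # hypothetical filename
    feed_name = next(n for n in z.namelist() if n.endswith(".xml"))
    root = ET.fromstring(z.read(feed_name))

# List each portfolio entry's title, to see what survived the export.
for entry in root.iter(f"{ATOM}entry"):
    print(entry.findtext(f"{ATOM}title") or "(untitled)")
```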

4. What grounds for hope are there? Can we point to local instances of digital technology leading to genuine social benefits and empowerment? What local push-back and resistance against egregious forms of EdTech is evident? What alternate imaginaries are being circulated about education and digital futures?

I worry that I’m becoming increasingly grouchy about technology as I get older, and my youthful optimism in general has been taking a battering since 2016. Yes, very specifically 2016. But there are reasons to be hopeful! There are events like this which bring like-minded people together to share our experience and, if nothing else, afford us the opportunity to really pin down the issues we are dealing with.

Then there are the industry bodies and communities like Jisc, ALT, Advance HE, and even our wee North East Learning Environments group, which has sprung back to life like an elephant-shaped phoenix, that are leading a collective response to emerging challenges and finding innovative solutions. A good recent case is Turnitin who, having captured pretty much the entire UK HE sector with their originality checking tool, tried to do the same again with their AI detector by offering it to everyone free for a limited time, only for a collective response to emerge from the community saying ‘no’, we want the ability to turn this off and make the decisions that are best for us as individual institutions. That option was then added.

Modern EdTech, for all its problems, has also created huge opportunities to expand education to people for whom a tertiary education would have been unobtainable even a generation ago. I am myself an Open University graduate who was unable to follow the conventional post-18 university route for a number of reasons. Many of the tools and systems also bring big quality of life improvements to all of us, genuinely making our work as educators easier. Last week, for example, I received an automated email from Canvas alerting me to a number of broken links in the module I’m currently teaching, which I was then able to easily find and fix.
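That kind of link checking is also easy to approximate yourself against the Canvas REST API, if you ever want to run it on your own schedule. The sketch below is illustrative rather than how Canvas does it internally: the instance URL, token and course ID are placeholders, pagination is ignored beyond the first 100 pages, and the HTML parsing is deliberately naive.

```python
import re
import requests

CANVAS_URL = "https://canvas.example.ac.uk"  # hypothetical instance URL
COURSE_ID = 1234                             # hypothetical course id
HEADERS = {"Authorization": "Bearer your-api-token"}

# Fetch the course's wiki pages, then the full body of each one.
pages_url = f"{CANVAS_URL}/api/v1/courses/{COURSE_ID}/pages"
pages = requests.get(pages_url, headers=HEADERS, params={"per_page": 100}).json()

for page in pages:  # note: ignores pagination beyond the first 100 pages
    full = requests.get(f"{pages_url}/{page['url']}", headers=HEADERS).json()
    body = full.get("body") or ""
    # Naive href extraction; a real checker would parse the HTML properly.
    for link in re.findall(r'href="(https?://[^"]+)"', body):
        try:
            status = requests.head(link, timeout=10, allow_redirects=True).status_code
        except requests.RequestException:
            status = 0
        if status == 0 or status >= 400:
            print(f"[{page['url']}] possibly broken: {link} ({status})")
```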

Finally, there are still great tools and solutions being created by smaller teams, often shared as open source or under a Creative Commons licence. A great example from our region in this space is Numbas, Newcastle University’s bespoke solution for online maths testing.


Generative AI: A Problematic Illustration

Screenshot of a slide from the presentation, showing some delicious pancakes
Mmm… pancakes…

To give the workshop its full title: Generative AI: A Problematic Illustration of the Intersections of Racialized Gender, Race, and Ethnicity. It was facilitated by Nayiri Keshishi from the University of Surrey and Dustin Hosseini from the University of Glasgow, and based on Dustin’s blog post. Hands down, the best session on generative AI I’ve attended over the past two years. It was so good I’m going to rework the timetable of our PG Cert to include a version of this for the cohort I’m currently teaching.

Why was it so good? Because it took some of the ethical issues around the use of generative AI and turned them into an interactive session where we, as participants, could interrogate the problems for ourselves. This was done via the medium of a seemingly innocuous prompt which was put into an image-generating AI system: ‘Create an image of a sweet, old X grandmother making pancakes’, where X was a given nationality, e.g. Russian or American. We were then asked to analyse the generated results using a framework which asked us to consider atmosphere, decor and clothing, and expressions and ethnicity.
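If you want to recreate the exercise yourself, all it takes is the same template with the nationality swapped out, plus the analysis framework as a checklist. A trivial Python sketch; the extra nationalities here are just my own example values:

```python
# Recreate the workshop's prompt set: one template, nationality swapped out.
nationalities = ["Russian", "American", "Japanese", "Nigerian"]  # example values
template = "Create an image of a sweet, old {} grandmother making pancakes"

# The analysis framework used in the session.
framework = ["atmosphere", "decor and clothing", "expressions and ethnicity"]

for nationality in nationalities:
    print(template.format(nationality))
print("For each image, consider:", ", ".join(framework))
```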

Discussions about what we can do about this included cascading the learning and knowledge more widely, which is why all of the slides and resources needed to deliver the session have been published under a Creative Commons licence on ALDinHE’s website. Another suggestion was to document the issues we encounter when using these technologies and share them on relevant forums and social spaces. Finally, what I think is the best and most useful thing we can do as educators is to embed AI literacy in the curriculum.

The only note I had coming out of the session concerned a statement, an assumption, that all of these new AI companies are making huge amounts of money. There is certainly a lot of money moving around in the space, but it’s all speculative investment on presumed future returns. In actuality, OpenAI lost $5 billion last year, and they’re on track to lose another $10 billion this year.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Copyright and Artificial Intelligence Consultation

Funny meme showing DeepSeek as a cat, stealing OpenAI's fish, which is stolen data
A gratuitously stolen meme from Reddit. Oh, the irony! The hypocrisy!

The UK government are currently running an open consultation on copyright and artificial intelligence, and have outlined their preferred solution to “include a mechanism for right holders to reserve their rights, enabling them to license and be paid for the use of their work in AI training” and to introduce “an exception [into copyright law] to support use at scale of a wide range of material by AI developers where rights have not been reserved.”

The main issue I have with this proposal is that it does nothing to respond to the wholesale copyright theft which the tech industry has already conducted. Additionally, it firmly places the burden of protecting copyright on individual creators, when the bleak reality is that individuals already have no practical means of redress against multinational mega-corporations like Meta, OpenAI and DeepSeek*, who openly admit to copyright theft to train their large language models. I would much prefer that the government spent its efforts on enforcing existing laws to protect the livelihoods of artists, authors and creators, rather than on appeasing the tech industry.

But that’s just my opinion. If you have your own thoughts on the matter, you can read the full proposal on the gov.uk website and complete the consultation online. Like every government consultation I’ve ever engaged with, it’s dense, complicated, and time consuming. Almost as if it was designed to be off-putting and to lead to a foregone conclusion. I was guided in my submission by the work of the Authors’ Licensing and Collecting Society.

As well as seeking individual responses, organisations are invited to respond to the consultation as collective bodies. ALT are doing so on behalf of the learning technology community, and are asking for feedback by the 18th of February, with the consultation closing a week later on the 25th.

* My compliments to DeepSeek for training their AI model on OpenAI’s AI model, then releasing it as open AI, which OpenAI is not, something which has irked OpenAI greatly. For that alone, they are worthy of praise.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

AI and Assessment Workshop

Perplexity AI User Interface
Screenshot of Perplexity search options

Today I attended one of our own AI and Assessment Workshops to see what advice and guidance we are giving to academics, and what their feelings and needs are around this topic. This is a new run of sessions which we have just started, organised by one of our academics working on the topic alongside a member of my team.

Despite our having published staff and student guidance documents and a dedicated SharePoint space to collate resources and our response, I found from conversing with staff at this event that there is still a prevailing feeling of a lack of steer and direction. People were telling me they don’t know which tools are safe to use, or what students should be told to avoid. We also had a lot of people from the Library Service today, which is perhaps also indicative of the need for firmer student guidance.

I was pleased to note that there is some good practice filtering through too, such as using a quiz-based declaration of use which students have to complete before unlocking their assignment submission link. We talked about adding this to our Canvas module template for next academic year, something one of the academics suggested to us. On the other hand, I found people were still talking in terms of ChatGPT ‘knowing’ things, which is troubling because of the implication that these systems are more than they actually are.
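For the quiz-based declaration mentioned above, one way the gating can be wired up via the Canvas REST API is to put the declaration quiz and the assignment in the same module, require sequential progress, and mark the quiz as requiring a submission. A hedged sketch rather than an official recipe; the instance URL, token and IDs below are all placeholders:

```python
import requests

CANVAS_URL = "https://canvas.example.ac.uk"  # hypothetical instance URL
HEADERS = {"Authorization": "Bearer your-api-token"}
COURSE, MODULE, QUIZ_ITEM = 1234, 56, 78     # hypothetical ids

# Force the module to be completed in order, so the submission link
# stays locked until the items before it are done.
requests.put(
    f"{CANVAS_URL}/api/v1/courses/{COURSE}/modules/{MODULE}",
    headers=HEADERS,
    data={"module[require_sequential_progress]": "true"},
)

# Require a submission on the declaration quiz before the student can
# progress to the assignment item that follows it in the module.
requests.put(
    f"{CANVAS_URL}/api/v1/courses/{COURSE}/modules/{MODULE}/items/{QUIZ_ITEM}",
    headers=HEADERS,
    data={"module_item[completion_requirement][type]": "must_submit"},
)
```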

While much of the session took the form of a guided dialogue, my colleague was also providing a hands-on demo of various systems, including Perplexity. People liked Perplexity for providing links out to the sources it had used (sometimes, not always) and for the ability to restrict answers to data from specific sources, such as ‘academic’, but noted a very US bias in the results, a consequence of the training data which has gone into these models. I was quite impressed when I tried to ‘break’ the model with leading prompts and it didn’t indulge me.

A new tool to me was Visual Electric, an image generation site aimed at producing high quality photo-like images. I have thoughts on some of their marketing… But I’m going to try to be more positive when writing about this topic, as I find it very easy to go into a rant! So instead of doing that, I have added a short disclaimer to the bottom of this post, which I’m also going to add to future posts I write about AI.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Personal Safety Training


It’s okay, he has his safety tie on

This month’s big team meeting was given over to the University’s Security Manager for a session on personal safety, and touched upon conflict management. More security than safety then, but ‘safety’ is a friendlier term. When I think of personal safety I tend to think more along the lines of the great Colin Furze and his shenanigans.

It was unexpected training, and pretty useful. We learned about de-escalating situations through a number of problem-based learning scenarios, the institutional and personal responsibilities with regards to duty of care and health and safety, and what the University is doing to keep us all safe at work. This includes, however you feel about it, the network of 400 security cameras on the Sunderland campus, the relatively new card-controlled building access across the wider campus, and a push from the Estates team for a system called Safe Zone, an app-based panic button. We are, apparently, the only university in the North East not already using this.


Helping Students Develop Critical Thinking Skills When Using Generative AI (Part 2)

Part two of Kent’s Digitally Enhanced Education series looking at how generative AI is affecting critical thinking skills. This week we had standout presentations from:

Professor Jess Gregory, of Southern Connecticut State University (nice to see the reach of the network, well, reaching out), presented on the problem of mastering difficult conversations for teachers in training. These students will often find themselves thrust into difficult situations upon graduation, having to deal with stubborn colleagues, angry parents, etc., and Jess has developed a method of preparing them by using generative AI systems with speech capabilities to simulate difficult conversations. This can be, and has been, done by humans of course, but that is time consuming, can be expensive, and doesn’t offer the same kind of safe space for students to practise freely.

David Bedford, from Canterbury Christ Church University, presented on how the challenges of critical analysis are not new, and how anything produced by generative AI needs to be evaluated in just the same way as we would the results of an internet search, a Wikipedia article, or books and journals. He presented us with the ‘BREAD’ model, first produced in 2016 (see first screenshot for detail), which asks us to consider Bias, Relevance, Evidence, Author, and Date.

Nicki Clarkson, University of Southampton, talked about co-producing resources about generative AI with students, and noted how they were very good at paring content down to the most relevant parts, and that the final videos were improved by having a student voiceover on them, rather than that of staff.

Dr Sideeq Mohammed, from the University of Kent, presented on his experience of running a session on identifying misleading information, using a combination of true and convincingly false articles, and said that students always left the sessions far more sceptical and wanting to check the validity of information. My second screenshot is from this presentation, showing three example articles. Peter Kyle is in fact a completely made-up government minister. Or is he?

Finally, Anders Reagan, from the University of Oxford, compared generative AI tools to the Norse trickster god, Loki. As per my third screenshot, both are powerful, seemingly magic, persuasive and charismatic, and capable of transformation. Anders noted, correctly, that now this technology is available, we must support it. If we don’t, students and academics are still going to use it on their own initiative, the allure being too powerful, so it is better for us as learning technology experts to provide support and guidance. In so doing we can encourage criticality, warn of the dangers, and point people towards more specialised, research-based generative AI tools such as Elicit and Consensus.

You can find recordings of all of the sessions on the @digitallyenhancededucation554 YouTube channel.


Nothin’ But Blue Skies


Smiling at You

I’m on Bluesky now. Of course I am, it’s having a moment.

Long have I been a fan of Mastodon; its open source and decentralised model closely aligns with my personal values, but it has never taken off. There is a technical barrier to entry with Mastodon, and though it has seen slow and steady growth over the years, the network effect has never kicked in for it. Bluesky is built on a similar decentralised model, but removes the friction of entry. Paired with the social media site formerly known as Twitter reaching new heights in its villain arc, Bluesky adoption is skyrocketing.

I’m also on Threads. But who cares? Bolted on to Instagram overnight like a bad school project, Threads was garbage from the outset. I think I took one look at it and checked out.

Starting over again on yet another social media site can be scary and intimidating, but I found a great Firefox plugin called Sky Follower Bridge which will scan your followers on that old site, and look for matches on Bluesky. I found 69! And I thought I was an early adopter.
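The plugin’s core trick, matching names from your old site against Bluesky’s directory, is something you can approximate yourself against Bluesky’s public search endpoint. A quick Python sketch, on the assumption that the public AppView accepts unauthenticated actor searches (rate limits apply, and ‘Jane Doe’ is a placeholder name):

```python
import requests

# Bluesky's public AppView exposes actor search over XRPC.
ENDPOINT = "https://public.api.bsky.app/xrpc/app.bsky.actor.searchActors"

def find_on_bluesky(name, limit=5):
    """Return (handle, display name) pairs matching a name or handle."""
    resp = requests.get(ENDPOINT, params={"q": name, "limit": limit})
    resp.raise_for_status()
    return [(a["handle"], a.get("displayName", "")) for a in resp.json()["actors"]]

# e.g. names pulled from your old follow list
for handle, display in find_on_bluesky("Jane Doe"):
    print(f"{handle}  -  {display}")
```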


Helping Students Develop Critical Thinking Skills When Using Generative AI (Part 1)

From the University of Kent’s Digitally Enhanced Education series, a two-parter on the theme of how generative AI is affecting students’ critical thinking skills, with the second part coming next week. We’ve been living with generative AI for a while now, and I am finding diminishing returns from the various webinars and training sessions I have been attending. Nevertheless, there are always new things to learn and nuggets of wisdom to be found at these events. The Kent webinar series has such a wide reach now that the general chat, as much as the presentations, is a fantastic resource. Phil has done a magnificent job with this initiative, and is a real credit to the TEL community.

Dr Mary Jacob, from Aberystwyth University, presented an overview of their new AI guidance for staff and students, highlighting for students that they shouldn’t rely on AI; for staff to understand what it can and can’t do, and the legal and ethical implications of the technology; and for everyone to be critical of the output – is it true? Complete? Unbiased?

Professor Earle Abrahamson, from the University of Hertfordshire, presented on the importance of using good and relevant prompts to build critical analysis skills. The first screenshot above is from Earle’s presentation, showing different perceptions of generative AI from students and staff. There were some good comments in the chat during Earle’s presentation about how everything we discussed today comes back to information literacy.

Dr Sian Lindsay, from the University of Reading, talked about the risks AI poses to critical thinking, namely that students may be exposed to a narrower range of ideas due to the biases inherent in all existing generative AI systems and the limited ranges of data they have access to and are trained upon. The second screenshot is from Sian’s presentation, highlighting some of the research in this area.

I can’t remember who shared this, whether it came from one of the presentations or the chat, but someone shared a great article on Inside Higher Ed about the option to opt out of using generative AI at all. Yes! Very good, I enjoyed this very much. I don’t agree with all of it. But most of it! My own take in short: there is no ethical use of generative artificial intelligence, and we should only use it when it serves a genuine need.

As always, recordings of all presentations are available on the @digitallyenhancededucation554 YouTube channel.
