We’ve had a rebrand. Our new Deputy Vice Chancellor has changed the title of the role to DVC Education, so our annual Teaching and Learning Conference has been renamed the Education Conference to match.
The day began with welcome messages from the DVC, and our VC, David Bell, who talked about the challenges of identifying truths and falsehoods in our increasingly siloed and partisan culture, and the importance of ensuring students develop critical thinking skills to cope in both education and employment.
The keynote talk was delivered by James Coe, Associate Editor (Research & Innovation) at WonkHE, and a local lad from the North East. The key message of his talk was the challenges and pressures students now face as a result of cultural changes since his own time at university, back in ye olden days of 2011, before the start of £9k student fees and a time when he received a £4k bursary. Now that students face far harsher financial challenges and graduate into a stagnant labour market, James talked about how pressures have flipped, with students now having to fit lectures and study in around work, rather than the other way round as it was in the recent past and, I would argue, how it should be. This leaves them with little space and time to study and benefit from the formative experience of being a student.
The name of the day may have changed, but something that stayed the same was the always excellent student panel discussion, which I have always found very useful and insightful. This year a lot of the discussion was about the real-world use of generative AI tools, as might be expected. The panel talked about how they are using these tools to help structure their work and adjust their writing voice, and were well aware of the dangers of overuse and of offloading their thinking to these tools; specifically, they voiced a fear that doing so would erode their writing skills. I was also pleased, if that’s the right word, that the panel echoed my concern that the University does not provide sufficient and clear guidance for students on exactly what tools they are and aren’t allowed to use, and how they are allowed to use them.
I joined this UCISA webinar at almost the last minute, when I found out that they were going to be talking about Cadmus via the HeLF email list. Cadmus is something I need to learn a lot more about in connection with a project I’m involved with this year.
It opened with Julie Voce, of City St George’s, University of London, delving into some of the challenges the sector is facing in relation to generative AI. She talked about human-based detection of AI plagiarism by looking for hallmarks of LLM content, such as the use of words like ‘delve’ and the em dash (for an explanation of why this is problematic, watch Etymology Nerd’s short on the feedback loops, which I’ve embedded above). Julie also talked about a practice she has observed among staff, finding that some people will dock students a few percentage points when marking if they suspect LLMs have been used, but can’t prove it.
This led on to Tom Hey’s case study on their use of Turnitin’s AI detection tool at Leeds Beckett. They have been a long-time user of Turnitin’s authorship tool, which launched in 2019 to help detect contract cheating, and adopted the AI detection tool when that launched because it was already part of their licence, academics wanted it, and they didn’t want staff using unauthorised tools at their own discretion. Tom reported good success with this, but noted that it had to be framed by their ‘Academic Honesty Policy’ for staff and students, which emphasises that these tools are a backstop to help academics, and are not foolproof detectors of plagiarism. In the chat, someone posted a link to a Jisc paper on the validity of AI detection systems which makes for interesting / depressing reading (delete as appropriate).
Finally, Chie Adachi from Queen Mary University presented on their experience of using Cadmus to support assessment during a pilot which ran over the previous year. Unfortunately I didn’t get to see the tool itself or learn much about it, but the results of the pilot were very positive, with 82% of students reporting a positive experience, a 7% increase in average grade, and a 38% decrease in first-time failure rate.
Another couple of useful links from the discussion which I thought worth sharing: A Harvard Business Review article on how ‘workslop’ is harming office productivity, unfortunately behind a paywall, but you may have access through your Library, and a report from MIT (PDF), on the impact of Generative AI on business to date – “95% of organizations are getting zero return”.
AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.
Today I had the pleasure of attending the first RIDL:HE conference at Northumbria University, chaired by Nic Whitton and Alex Mosley, with an online Discord presence facilitated by Malcolm Murray from Durham. Rather than my usual boring recap of the sessions I attended and what I learned, when reviewing my notes this time I found some themes which no single session set out to address, but which I identified spanning different sessions and discussions throughout the day. So let’s try looking at it through that lens:
Playfulness: The Research in Digital Learning conference opened by articulating a mission: to make conferences fun and relevant again by injecting playfulness (or mischief, as identified in one of the pillars) and criticality. This was in response to a feeling that academic conferences have become too focused on selling services or solutions to problems. “If you enjoy what you’re doing, you’ll do more of it”, as someone said. I was also very pleased when they said that, in accordance with the principle of ‘integrity’, the catering for the conference was entirely vegetarian / vegan, as catering has the biggest environmental impact of hosting a conference.
The only way I get to attend conferences of this nature is by submitting something, which makes my boss happy (and is good for me), so I was there to talk about the work we’ve done on a pilot of Studiosity’s Gen AI powered version of Writing Feedback+ on our Sunderland Online version of Canvas. Presenters were briefed to make their sessions fun, interactive, and engaging, so during my talk I press-ganged everyone into joining my new venture, Sonyaosity, to give them a taste of providing feedback on a sample piece of academic writing. The idea was to demonstrate how difficult and time-consuming this can be, and then to compare and contrast this with how quickly Studiosity’s AI can do the same job. It did work, but not as well as I would have liked. I used a piece of my own writing on ethical grounds, but Malcolm said to me afterwards that I shouldn’t have told people this in advance, as it may have made them reluctant to criticise me as much as I wanted them to.
Polycrisis: I was fortunate that, due to a shuffle of the schedule, I was able to attend two sessions run by the team behind the University of Banford: Lawrie Phipps, Peter Bryant, and Donna Lanclos. Banford is a hyperreal institution designed to explore the issues facing higher education in an exaggerated, playful manner. See, for example, The Department Most Likely to be Shut Down in Austerity, in the Faculty of Old Things. In the first session we were tasked to imagine ourselves as academics at Banford, being pulled in different directions as the institution pivots around teaching online / in person, or being against / pro AI, at the whims of the senior leadership team.
This introduced us to the concept of the ‘polycrisis’, the constant state of crisis afflicting HE, and the never-ending technological hype cycle which those of us in learning technology are especially burdened with: ‘Meta decides to hype VR again, so teaching has to utilise VR headsets now’; ‘Large language models get good and are rebranded as AI, so now everything needs to have an AI chatbot’, etc. Another aspect of polycrisis came from a session on ‘Becoming a Digital Scholar’ by Dr Jane Secker, who reflected on the whiplash of Covid: pivoting from making all teaching online at the height of the pandemic, to government decrees that all teaching had to go back to in person over concerns about student fees. This particularly affected disabled students who, during the pandemic, finally got the teaching and support they had been asking for for decades, only to have it whipped away again.
In the second Banford session the team deconstructed the exercise to explore some of the concepts. This started with an exercise asking us how we feel about the end of learning design – ‘sad’, ‘anarchy’, ‘dystopia’ – before talking about the role of learning designers and how we are particularly exposed as the “first responders” to whatever new thing or policy lands on our heads. There was a good discussion on how austerity and neoliberalism rob us of the time to reflect and understand. After all, there is no time to question what we’re doing if we constantly have to be responding to the latest crisis.
Open Educational Practice: Finally, I was introduced to the term ‘Open Educational Practice’ by Dr Secker, a collective term which encompasses and expands on OER (Open Educational Resources) to include open access publishing, and technologies and pedagogies which encourage collaborative and flexible approaches to teaching and learning. Joining the themes together, I think a good argument could be made for adopting OEP in response to some of the crises afflicting HE, such as austerity, marketisation, and the growth of authoritarianism.
I was at St James’ Park today – I believe the local football fans are rather fond of the place – but I was there for Turnitin’s first roundtable discussion since before the pandemic. To start this post with ‘not AI’, we had a look at Turnitin’s product roadmap, which is all about the new Feedback Studio. The new version has been redesigned from the ground up to be screen reader accessible, a common complaint about the old version, and to be fully responsive, rather than Turnitin developing mobile apps for the platform. The rubric manager has also been rewritten to improve managing and archiving rubrics, and to add the ability to import rubrics from common file formats like Excel, rather than the proprietary format they previously used. It goes live on July 15th, but institutions can opt out, and they are expecting a long period of transition. Alas, we are switching to the Canvas framework integration, so our staff won’t benefit from this.
And that’s about it for ‘not AI’. In the opening remarks Turnitin presented on the outcomes of a global staff and student survey on perceptions of generative artificial intelligence. Overall, 78% of respondents were positive about the potential of AI, while at the same time 95% believed that AI was being misused. Among students only, 59% were concerned that an over-reliance on AI would result in reduced critical thinking skills (I have thoughts on this that I’ll circle back to later). In the slightly blurry photo above (I was sat at the back) you can see the survey results broken down by region, showing that in the UK and Ireland we are the least optimistic about AI having a positive impact on education, at only 65%, while India has the most positive outlook at 93%. All regions report being overwhelmed by the availability and volume of AI, which is unsurprising when every application and website is adding spurious AI tools to their services in a desperate attempt to be The One that sticks and ends up making a profit. (Side note to remind everyone that no-one is making any money out of actual AI systems in the current boom: these large language models are horrifically expensive to train and run, and the whole thing is being sustained by investment capital in a huge gamble on future returns. What could possibly go wrong!?)
The keynote address was delivered by Stephen Gow, Leverhulme Research Fellow at Edinburgh Napier University, who discussed the StudentXGenAI research project, and the ELM tool at the University of Edinburgh, which is an institutionally provided front-end for accessing various language models but which has safeguards built in to prevent misuse. Stephen reported on the mixed success of this. While it seems like a good idea, and the kind of thing I believe universities should be providing to ensure equitable access for all students, uptake has been poor, and students report that they don’t like using the tool because they feel it’s ‘spying on them’, and would rather use AI models directly – highlighting issues of trust and autonomy. Stephen pointed us to C. Thi Nguyen’s paper ‘Trust as an Unquestioning Attitude‘ for a more detailed discussion of trust as it pertains to complex IT systems, and how trust should be viewed not as a binary, but as a delicate and negotiated balance.
During our breakout roundtable discussions, my group discussed how AI is a divisive issue: people either love it or hate it, with few in the middle ground. There is some correlation along generational lines here, with younger staff and students being more positive, but it isn’t an exact mapping. One of my table colleagues reported having an intern, a young, recent graduate, who refuses to use any Gen AI systems on environmental ethical grounds, while another colleague won’t use it because they fear offloading their thinking skills to it. That was the second time such a sentiment had been expressed today, and it made me think of the parallels with the damage that social media has done to attention spans. But while that concept took a long time to enter the public consciousness (and we are barely starting to deal with the ramifications), there seem to be more voices raising the problem of AI’s impact on cognitive ability, and it’s happening sooner in the cycle, which gives me some limited optimism. Another colleague at my table also introduced me to the concept of ‘AI shaming‘, from a paper by Louie Giray.
Finally, we were given a hands-on experience of Clarity, Turnitin’s new product which provides students with a web interface for written assessments with a built-in AI chat assistant. The idea is to provide students with an AI system that they can use safely, and which gives confidence to both them and their tutors that there has been no abuse of Gen AI to write the essay. I like the idea of this, and I have advocated for Sunderland to provide clear guidance to students on what they can and can’t use, and that we should be providing something legitimate for students which would have safeguards of some kind to prevent misuse. Why, therefore, when presented with just such a solution, was I so sceptical and disappointed, unable to see anything but its flaws? Maybe the idea just doesn’t work in practice.
I was hoping to see and learn more about Clarity today, so I was very pleased that we were given this opportunity. Of course I immediately started to try and break it. I went straight in with the strawberry test, but the system just kept telling me it wouldn’t help with spelling, and directed me to write something addressing the essay question instead. I did get it to break though: first by inserting the word into my essay and asking it to check my spelling and grammar; then, once I had something written in the input window, I found that it would actually answer the question directly, reporting that ‘strawberries’ is actually spelled with one r and two b’s. Fail. When I overheard a colleague at another table reporting that it seemed to be directing them to use US English spelling, I decided to experiment by translating my Copilot-produced ‘essay’ into Spanish with Google Translate. Clarity then informed me that the assignment required the essay to be in English, a straight-up hallucination as there was no such instruction. What there was, as Turnitin told us, was a system that has been built on US English and can’t yet properly handle other variations and languages. They were also quite transparent on the underlying technology, which is based on Anthropic’s Claude model. I appreciated this, as I have found other companies offering AI tools to be evasive, insisting that they have developed their own models based on their own training data only, which I’m highly sceptical about given the resource requirements.
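As an aside, part of the reason spelling questions trip these systems up at all is tokenisation: the model never sees individual letters, only opaque token IDs. Here’s a minimal sketch of the idea using OpenAI’s open source tiktoken library (nothing to do with Clarity’s internals, and the exact token split is indicative rather than guaranteed):

```python
# Why letter-counting is hard for LLMs: the model operates on token
# IDs, not letters, so it never directly 'sees' the r's in strawberry.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokeniser used by several OpenAI models

tokens = enc.encode("strawberry")
print(tokens)  # a short list of integer IDs, not ten letters

# Show which chunk of text each ID stands for, e.g. b'str', b'aw', b'berry'.
for t in tokens:
    print(t, enc.decode_single_token_bytes(t))

# Counting the r's is trivial for ordinary code, of course:
print("strawberry".count("r"))  # 3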
Fun as it may be to try and break AI models with spelling challenges, it’s not what they are built for, and there is an old-fashioned spell checker built into the text entry box. However, that doesn’t mean that when presented with an AI chatbot in a setting like this, students aren’t going to ask it questions about spelling and grammar. This seems like a perfectly legitimate use case, and the reason I suspect Turnitin have installed a ‘guard rail’ here is that they are well aware that large language models are no good for this kind of question, just as they are no good for mathematical operations. Or, for that matter, providing straight facts. The trend of people using these models as if they were search engines should frighten everyone. Our table chuckled when one of us reported that ChatGPT was confidently telling them that Nigel Farage was the Prime Minister (did I say chuckle? I meant shudder.), but more subtle errors can be far harder to spot, and could have terrible ramifications in the fractured, post-truth world we’ve built. I’m sure I’ve said something like this before on here, and I probably will again, but calling these systems ‘intelligent’ has been a huge mistake. There is no intelligence to be found here. There is no understanding. Only very sophisticated prediction systems guessing what comes next after a given input.
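To make that last point concrete, here is a minimal sketch of what ‘predicting what comes next’ looks like in practice, using the small public gpt2 model via Hugging Face’s transformers library. This is purely illustrative (and has nothing to do with Clarity or Claude specifically), but the loop is the heart of every one of these systems:

```python
# Generation is just repeated next-token prediction: score every
# possible next token, pick one, append it, repeat. Fluency is the
# goal; truth never enters into it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The Prime Minister of the United Kingdom is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits        # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()  # greedily take the single most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))  # fluent-looking text, with no guarantee of truth
```

Swap in a bigger model and cleverer sampling and you get ChatGPT-style output, but the mechanism is the same: there is no fact-checking step anywhere in that loop.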
I’m most doubtful about the assumption that students will want to use Clarity in the first place. Am I giving myself away as old when I say that I would never even contemplate writing something as important as a multi-thousand-word essay in an online web interface that requires a stable, constant internet connection? Clarity has no ability for students to upload their written work, and though you can copy and paste text into it, this would be immediately flagged by Clarity as an issue for investigation. There’s no ability for versioning, no ability to export and save offline, limited formatting options and fonts, no ability to use plugins for reference management, etc. I also can’t imagine any circumstances in which I would recommend students use Clarity. It is not an infrequent problem that academics come to us reporting that they have spent hours writing student feedback in Turnitin’s Writing Feedback tool, only to find out later that their comments haven’t saved properly and just aren’t there. It is such a big problem that we routinely train our staff to write all of their feedback offline first, and then copy and paste it into Feedback Studio. Colleagues in the room challenged Turnitin about this, and the response was that in their evaluation students reported being very happy with the system.
Nevertheless, Turnitin believe that some kind of process validation is going to be necessary to ensure the academic integrity of written work going forwards, and I do think they have a point. But the only way I can see Clarity, or something like it, working is if academics mandate its use for assessment, with students having to do everything in the browser, in which case, unless they are teaching a module on how to alienate your students and make them hate you, it isn’t going to go down well. As much as Turnitin would like it to be so, I don’t think there’s a technological solution to this problem. I increasingly think that in order to validate student knowledge and understanding we are going to have to use some level of dialogic assessment, which doesn’t scale in the highly marketised higher education system we now find ourselves in.
AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.
I don’t usually attend these Turnitin product updates, not out of a lack of interest, just because it’s something that lies more with the other half of the team here at Sunderland, so I leave them to it and to cascade what’s important to the rest of us when required. This one piqued my interest though, after seeing a preview of the new user interface at NELE last week. You can see some of the planned changes to the Feedback Studio and the Similarity Report view above. I asked a question about the lack of audio feedback following NELE, and was told that this, along with new video feedback capabilities, is on the roadmap and coming soon.
I was also interested in their new Clarity tool, which will allow students to submit or write their work through a web interface and get immediate feedback, with help on how to improve their writing from Turnitin’s AI chatbot. It works very similarly to Studiosity’s Writing Feedback+ service, so it’s going to be very interesting for me to see how it develops.
AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.
The final NELE meeting of the year took place at Newcastle University, and we began by looking at Northumbria and Durham’s experience of piloting Turnitin’s new user interface for Feedback Studio. It ‘looks’ good, modern and fresh, and there are some good additional features, such as the ability to link Quick Marks to parts of a rubric, but there are also missing features such as peer marking and audio feedback; features not working quite as they should, such as anonymous marking being rendered somewhat moot by the ‘Reveal Identity’ button; and perennial issues which remain unresolved, such as the infinitely nested scroll bars in the rubric view (first photo).
Next up, the team at Newcastle talked about their ongoing experience of using Inspera to manage digital exams. They shared some good practice of using videos within exams, using an example of giving health students an ultrasound recording to watch and then asking questions about it. They are also still holding the line on proctoring, citing their testing experience of being able to easily trigger far too many false flags. Good for them.
Rounding off the morning, Adam and I from Sunderland, and Dan from Newcastle, led a discussion on VLE standards. I liked the work Newcastle have done on a specimen ‘perfect’ module, which meets every standard to show academics how it’s done, while our ‘MOT’ service, monitoring processes, and friendly interventions with academics on how they can improve their modules complete the circle.
After lunch, and some unscheduled physical activity for me (don’t ask), Newcastle presented on their learning analytics system, NULA, which has been developed in collaboration with Jisc. They had very good things to say about Jisc on this one, that they’ve been very supportive and responsive on building ways of monitoring and reporting on the measures which Newcastle wanted to set.
Next, it was Dan from Newcastle again, who talked about their experience of working with students to develop their new module template, which has been designed to be mobile friendly first (second photo) – something many of us claim to do, but which actually seems to be quite rare.
Finally, we were joined by Emily from Newcastle’s library team, who presented on the things which keep a librarian up at night. It’s AI. It’s always AI. Specifically, every publisher is experimenting with their own generative AI tools to help people find and analyse the resources in their databases. The problems are many. First, these features are coming and changing at the whim of the publisher, without warning or any ability to test and evaluate. One particularly egregious example Emily mentioned was a journal that would provide temporary access to its AI search tool to academics who had attended specific training events, or who happened upon specific buttons and options on its website. Secondly, Emily was deeply concerned about AI literacy and who is responsible for teaching it. It seems to be falling on interested parties in different departments in different places, when it is really something that needs direction, dedicated roles, and senior staff sponsorship. Finally, there are the hidden costs. While publishers are marketing these services as free improvements to their search tools, in reality they are raising subscription costs on the back end, at a time when the sector is struggling and almost every institution is closing courses and laying off staff.
AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.
I don’t know how I found out about this event, but it was very good! It was a talk by Louise Drumm (Edinburgh Napier), facilitated by John Brindle (Lancaster), examining some of the issues that arise at the intersection between digital technology and educational research. Among the points discussed was how the former is a fast-moving, external source of pressure and change, swimming in venture capital cash, while the latter is often slow, ponderous, and impoverished. Louise talked about agency, and how we, as learning technologists and educators, are expected to be users, often knowledgeable and enthusiastic ones, of technologies and practices regardless of how we may feel about them personally.
Louise created a Miro board for the session, on which she had built a timeline of digital technology innovations and events which have emerged throughout her career, grouped into different phases. She opened the board up to the group for us to collaboratively edit, move, change, and add new items, which was chaos, but good, creative chaos! That was a theme of her talk and of her research practice. Creativity, that is, not chaos. Just to be clear.
No sooner had the Old Boss become the New Boss once again, than she pulled one of her classic Boss moves and sent me away to the other side of the country!
Jisc’s Framework for Digital Transformation in Higher Education has been around for a few years now, with the aim of helping HEIs to transform and improve their digital infrastructure and services. Bath Spa University has been one of the 24 universities piloting this with Jisc, and this event was an opportunity for them to share their experiences with other interested institutions. For Bath, this has included a major VLE upgrade, a transition from Google for Education to Microsoft, and the implementation of an AI chatbot powered by LearnWise.
The day began with a keynote session delivered by senior staff at Bath Spa talking about what they have done and learned, and ended with a panel discussion which included staff from Jisc, who fielded all of our questions. A key takeaway, reiterated by Jisc at several points, was that the technology matters less to success than the people, culture, and processes in an institution: people over technology.
In between we had three breakout sessions: one covering how digital assessments had been implemented in an arts department with great success; an interactive session exploring the competencies of ‘digital fluency’; and a third on the challenges of developing a collaborative culture within your institution. In this one we explored an alternative to stakeholder mapping, using a Venn diagram to reframe the groups as collaborators who can help us to achieve the goal, instead of more passive people who need to be managed. In my group (photo of our chart above) we used the example of building up a satellite campus, as we were all involved in such a project, at different stages.
Written case studies from 12 of the partners Jisc have been working with, including Bath, are available on their Report and Case Studies webpage.
We are now the Centre for Teaching Excellence, as of May 1st. Squirrel unrelated. The upper echelons of the University have had a jiggle with the departure of our Deputy Vice Chancellor Academic, so we have been split off from the Centre for Graduate Prospects to become our own service again, bringing us into alignment with the new structure. It’s taken a little while to agree on the new name, hence the delayed announcement, but I’ve just updated the CPD page and realised I should note the change.
An excellent question, posed by the HeLF folks, to which the only possible answer is a resounding ‘yes’. But that would make for a very short webinar, so we discussed the issues around this too. Obviously a very interesting session for me, as I have been trying to push my career in this direction over the past few years, as you can probably tell, and the work I’ve been doing on Studiosity has afforded me an excellent opportunity to do so.
We had a good discussion on the nature of research and the differences between research and evaluation. The latter is generally done for internal purposes and audiences only, while research is likely to be of wider interest, and so there is value in sharing it via relevant publications. Within our community, however, there may be barriers which prevent, or make it difficult for, professional services staff to publish. One colleague mentioned a publication, not named to protect the guilty, which charged for publication but gave steep discounts to staff on academic contracts, and none if you happened to have ‘professional services’ on yours.
We also talked a lot about ethics committees, which again can be hard to access, with another colleague reporting that they weren’t even allowed to submit something to an ethics panel, while at another institution professional services staff were kicked out of their ethics board because it was felt to be having a negative impact on the REF submission.
That all sounds rather bleak, but there are solutions to these problems. Some people reported having nominal 0.2 academic contracts to get over institutional barriers, while others are running their own internal ethics boards. It was a very good discussion this morning, and something which is going to become a series, so I will be learning and writing more on this.