Press "Enter" to skip to content

Tag: Turnitin

Navigating the Future: Innovation and Integrity in the Era of AI

I was at St James’ Park today. I believe the local football fans are rather fond of the place, but I was there for Turnitin’s first roundtable discussion since before the pandemic. To start this post with something that’s ‘not AI’: we had a look at Turnitin’s product roadmap, which is all about the new Feedback Studio. The new version has been redesigned from the ground up to be screen reader accessible, a common complaint about the old version, and to be fully responsive, rather than Turnitin developing mobile apps for the platform. The rubric manager has also been rewritten to improve the managing and archiving of rubrics, and to add the ability to import rubrics from common file formats like Excel, rather than the proprietary format they used previously. It goes live on July 15th, but institutions can opt out, and they are expecting a long period of transition. Alas, we are switching to the Canvas framework integration, so our staff won’t benefit from this.

And that’s about it for ‘not AI’. In the opening remarks Turnitin presented the outcomes of a global staff and student survey on perceptions of generative artificial intelligence. Overall, 78% of respondents were positive about the potential of AI, while at the same time 95% believed that AI was being misused. Among students only, 59% were concerned that an over-reliance on AI would result in reduced critical thinking skills (I have thoughts on this that I’ll circle back to later). In the slightly blurry photo above (I was sat at the back) you can see the survey results broken down by region, showing that in the UK and Ireland we are the least optimistic about AI having a positive impact on education, at only 65%, while India has the most positive outlook at 93%. All regions reported being overwhelmed by the availability and volume of AI, which is unsurprising when every application and website is adding spurious AI tools to its services in a desperate attempt to be The One that sticks and ends up making a profit. (Side note to remind everyone that no-one is making any money out of actual AI systems in the current boom; these large language models are horrifically expensive to train and run, and the whole thing is being sustained by investment capital in a huge gamble on future returns. What could possibly go wrong!?)

The keynote address was delivered by Stephen Gow, Leverhulme Research Fellow at Edinburgh Napier University, who discussed the StudentXGenAI research project, and the ELM tool at the University of Edinburgh, an institutionally provided front-end for accessing various language models with safeguards built in to prevent misuse. Stephen reported on the mixed success of this. While it seems like a good idea, and the kind of thing I believe universities should be providing to ensure equitable access for all students, uptake has been poor, and students report that they don’t like using the tool because they feel it’s ‘spying on them’, and would rather use AI models directly – highlighting issues of trust and autonomy. Stephen pointed us to C. Thi Nguyen’s paper ‘Trust as an Unquestioning Attitude’ for a more detailed discussion of trust as it pertains to complex IT systems, and how trust should be viewed not as a binary, but as a delicate and negotiated balance.

During our breakout roundtable discussions, my group discussed how AI is a divisive issue: people either love it or hate it, with few in the middle ground. There is some correlation along generational lines here, with younger staff and students being more positive, but it isn’t an exact mapping. One of my table colleagues reported having an intern, a young, recent graduate, who refuses to use any Gen AI systems on environmental and ethical grounds, while another colleague won’t use it because they fear offloading their thinking skills to it. That was the second time such a sentiment had been expressed today, and it made me think of the parallels with the damage that social media has done to attention spans. While that concept took a long time to enter the public consciousness (and we are barely starting to deal with the ramifications), there seem to be more voices raising the problem of AI’s impact on cognitive ability, and it’s happening sooner in the cycle, which gives me some limited optimism. Another colleague at my table also introduced me to the concept of ‘AI shaming’, from a paper by Louie Giray.

Finally, we were given a hands-on experience of Clarity, Turnitin’s new product which provides students with a web interface for written assessments with a built-in AI chat assistant. The idea is to provide students with an AI system that they can use safely, and which gives confidence to both them and their tutors that there has been no abuse of Gen AI in writing the essay. I like the idea of this, and I have advocated for Sunderland to provide clear guidance to students on what they can and can’t use, and for us to provide something legitimate for students with safeguards of some kind to prevent misuse. Why, therefore, when presented with just such a solution, was I so sceptical and disappointed, unable to see anything but its flaws? Maybe the idea just doesn’t work in practice.

I was hoping to see and learn more about Clarity today, so I was very pleased that we were given this opportunity. Of course I immediately started to try and break it. I went straight in with the strawberry test, but the system just kept telling me it wouldn’t help with spelling, and directed me to write something addressing the essay question instead. I did get it to break though: first, by inserting the word into my essay and asking it to check my spelling and grammar; then, once I had something written in the input window, I found that it would actually answer the question directly, reporting that ‘strawberries’ is actually spelled with one r and two b’s. Fail. When I overheard a colleague at another table reporting that it seemed to be directing them to use US English spelling, I decided to experiment by translating my Copilot-produced ‘essay’ into Spanish with Google Translate. Clarity then informed me that the assignment required the essay to be in English, a straight-up hallucination as there was no such instruction. What there was, as Turnitin told us, was the fact that the system has been built on US English and can’t yet properly handle other variations and languages. They were also quite transparent about the underlying technology, which is based on Anthropic’s Claude model. I appreciated this, as I have found other companies offering AI tools to be evasive, insisting that they have developed their own models based on their own training data only, which I’m highly sceptical about given the resource requirements.

Fun as it may be to try and break AI models with spelling challenges, it’s not what they are built for, and there is an old-fashioned spell checker built into the text entry box. However, that doesn’t mean that when presented with an AI chatbot in a setting like this, students aren’t going to ask it questions about spelling and grammar. This seems like a perfectly legitimate use case, and the reason I suspect Turnitin have installed a ‘guard rail’ here is that they are well aware that large language models are no good for this kind of question, just as they are no good for mathematical operations. Or, for that matter, providing straight facts. The trend of people using these models as if they were search engines should frighten everyone. Our table chuckled when one of us reported that ChatGPT was confidently telling them that Nigel Farage was the Prime Minister (did I say chuckle? I meant shudder.), but more subtle errors can be far harder to spot, and could have terrible ramifications in the fractured, post-truth world we’ve built. I’m sure I’ve said something like this before on here, and I probably will again, but calling these systems ‘intelligent’ has been a huge mistake. There is no intelligence to be found here. There is no understanding. Only very sophisticated prediction systems for what comes next after a given input.
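To make that last point concrete, here is a toy sketch in Python of next-token prediction: a bigram model built from raw word counts. Real large language models learn billions of parameters from the bulk of the written web rather than counting pairs of words, but the underlying task is the same: given the input so far, produce something that plausibly comes next, with no model of truth anywhere in the loop.

import random
from collections import defaultdict

# A tiny toy corpus standing in for training data.
corpus = (
    "the prime minister spoke today the prime minister said "
    "that the spokesperson said today that the minister spoke"
).split()

# Count which words follow which word.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def predict_next(word):
    # Sample a plausible continuation: fluent-sounding, never fact-checked.
    candidates = following.get(word)
    return random.choice(candidates) if candidates else "the"

word = "the"
for _ in range(8):
    print(word, end=" ")
    word = predict_next(word)

The output reads like plausible English because plausibility is all the model optimises for; whether the minister actually spoke today is simply not a question the system can ask.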

I’m most doubtful about the assumption that students will want to use Clarity in the first place. Am I giving myself away as old when I say that I would never even contemplate writing something as important as a multi-thousand word essay in an online web interface that requires a stable, constant internet connection? Clarity has no ability for students to upload their written work, and though you can copy and paste text into it, this would be immediately flagged by Clarity as an issue for investigation. There’s no versioning, no ability to export and save offline, limited formatting options and fonts, no ability to use plugins for reference management, etc. I also can’t imagine any circumstances in which I would recommend students use Clarity. It is not an infrequent problem that academics come to us reporting that they have spent hours writing student feedback in Turnitin, only to find out later that their comments haven’t saved properly and just aren’t there. It is such a big problem that we routinely train our staff to write all of their feedback offline first, and then copy and paste it into Feedback Studio. Colleagues in the room challenged Turnitin about this, and the response was that in their evaluation students reported being very happy with the system.

Nevertheless, Turnitin believe that some kind of process validation is going to be necessary to ensure the academic integrity of written work going forwards, and I do think they have a point. But the only way I can see Clarity, or something like it, working is if academics mandate its use for assessment, with students having to do everything in the browser, in which case, unless they are teaching a module on how to alienate your students and make them hate you, it isn’t going to go down well. As much as Turnitin would like it to be so, I don’t think there’s a technological solution to this problem. I increasingly think that in order to validate student knowledge and understanding we are going to have to use some level of dialogic assessment, which doesn’t scale in the highly marketised higher education system we now find ourselves in.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Innovation and Integrity in the Age of AI

I don’t usually attend these Turnitin product updates, not out of a lack of interest, but because they lie more with the other half of the team here at Sunderland, so I leave them to it and to cascade what’s important to the rest of us when required. This one piqued my interest though, after seeing a preview of the new user interface at NELE last week. You can see some of the planned changes to the Feedback Studio and the Similarity Report view above. I asked a question about the lack of audio feedback following NELE, and was told that this, along with new video feedback capabilities, is on the roadmap and coming soon.

I was also interested in their new Clarity tool, which will allow students to submit or write their work through a web interface and get immediate feedback, with help on how to improve their writing from Turnitin’s AI chatbot. It’s very similar to how Studiosity’s Writing Feedback+ service works, so it will be interesting for me to see how it develops.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

NELE: June 2025

The final NELE meeting of the year took place at Newcastle University, and we began by looking at Northumbria and Durham’s experience of piloting Turnitin’s new user interface for Feedback Studio. It ‘looks’ good, modern and fresh, and there are some good additional features, such as the ability to link Quick Marks to parts of a rubric, but there are also missing features such as peer marking and audio feedback; features not working quite as they should, such as anonymous marking being rendered somewhat moot by the ‘Reveal Identity’ button; and perennial issues which remain unresolved, such as the infinitely nested scroll bars in the rubric view (first photo).

Next up, the team at Newcastle talked about their ongoing experience of using Inspera to manage digital exams. They shared some good practice of using videos within exams, with an example of giving health students an ultrasound recording to watch and then asking questions about it. They are also still holding the line on proctoring, citing their testing experience of being able to easily trigger far too many false flags. Good for them.

Rounding off the morning, Adam and I from Sunderland, and Dan from Newcastle, led a discussion on VLE standards. I liked the work Newcastle have done on a specimen ‘perfect’ module that meets every standard, to show academics how it’s done, while our ‘MOT’ service, monitoring processes, and friendly interventions with academics on how they can improve their modules complete the circle.

After lunch, and some unscheduled physical activity for me (don’t ask), Newcastle presented on their learning analytics system, NULA, which has been developed in collaboration with Jisc. They had very good things to say about Jisc on this one: they’ve been very supportive and responsive in building ways of monitoring and reporting on the measures which Newcastle wanted to set.

Next, it was Dan from Newcastle again, who talked about their experience of working with students to develop their new module template, which has been designed mobile-first (second photo). Something many of us claim to do, but which actually seems to be quite rare.

Finally, we were joined by Emily from Newcastle’s library team, who presented on the things which keep a librarian up at night. It’s AI. It’s always AI. Specifically, every publisher is experimenting with their own generative AI tools to help people find and analyse the resources in their databases. The problems are many. First, these features are coming and changing at the whim of the publisher, without warning or any ability to test and evaluate. One particularly egregious example Emily mentioned was a journal that would provide temporary access to their AI search tool to academics who had attended specific training events, or happened upon specific buttons and options on their website. Second, Emily was deeply concerned about AI literacy and who is responsible for teaching it. It seems to be falling on interested parties in different departments in different places, when it is really something that needs direction, dedicated roles, and senior staff sponsorship. Finally, there are the hidden costs. While publishers are marketing these services as free improvements to their search tools, in reality they are raising subscription costs on the back end, at a time when the sector is struggling and almost every institution is closing courses and laying off staff.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

Supporting Staff and Students in Moving from AI Scepticism to AI Exploration

How could I miss the latest HelF staff development session, as an avowed AI sceptic? Today Alice May and Shivani Wilson-Rochford from Birmingham City University talked about their approach to responding to the emergence of generative AI. As can be seen on the ‘roadmap’ above, this has included an AI working group, collaboration with staff and students on producing guidelines on use, sharing those via staff and student workshops, and collating resources on a SharePoint site. All things which mirror our approach at Sunderland.

Something they are doing which I liked was providing template text which academic staff can copy and paste into their assignment briefs, setting out what kind of AI students are permitted to use at four different levels, from fully unrestricted to fully prohibited. They are also working on an assessment redesign project which takes the risks of GAI into account, based on work from the University of Sydney which analysed all of the different types of assessment they have and put them into two lanes based on how secure they are against GAI plagiarism. It’s Table 2 on the page I’ve linked to; it’s a very good table. I like it a lot.

Briefly mentioned was the fact that Birmingham are one of the few institutions in the UK who have enabled Turnitin’s AI detection tool, and I would have liked to have learned more about this. In a student survey on GAI, the second screenshot above, concerns about the accuracy of AI detection were one of the big things students raised.

Alice and Shivani left us with plans for going forwards, which is to build a six-pillar framework on the different aspects of GAI’s impact on HE (third screenshot). Pillar 5 is ‘Ethical AI and Academic Integrity’. This one stood out as, once again, the ethical issues of the environmental impact and copyright were raised. Briefly. And then we moved on. It consistently bothers me, and I don’t have any brilliant answers, but I will reiterate the very basic one of simply choosing not to use these services unless they are solving a genuine problem.

AI Disclaimer: There is no ethical use of generative artificial intelligence. The environmental cost is devastating and the technology is built on plagiarised content and stolen art, for the purpose of deskilling, disempowering and replacing the work of real people.

ALT NE User Group: June 2023

Durham University’s Lightboard, a very cool (but smudgy) piece of tech

Hosted by my lovely colleagues at Durham, this ALT North East meeting began with a discussion of the practice of video assessment. I talked through what we do at Sunderland using Canvas and Panopto, covering our best practice advice and talking through the things which can go wrong. The problem of a VLE having multiple tools for recording / storing video was one such headache shared by all of us, no matter what systems we are using.

We then moved on to a discussion about Turnitin, ChatGPT and AI detection, pretty much a standing item now. Dan shared with us a new tool he has come across, which I’m not going to name or share, which uses AI to autocomplete MCQs. A new front has emerged. Some bravery from Northumbria who must be one of the few HEIs to have opted in to Turnitin’s beta checker, and New College Durham are going all in on the benefits of generative writing to help staff manage their workload by, for example, creating lesson plans for them. A couple of interesting experiments to keep an eye on there.

After lunch we had demonstrations of various tools and toys in Durham’s Digital Playground Lab. This included a Lightboard. This is a really cool and simple piece of tech that lets presenters write on a transparent board between them and the camera using UV pens. I came across this a few years ago, before the pandemic I think, but it’s a strange beast. It’s not a commercial system, but open hardware, so anyone can build one for themselves at little cost. Unfortunately at Sunderland, and I suspect at many bureaucracies, this actually makes it a lot harder to get one than just being able to go to a supplier. So it never happened, but at least today I got to see one live.

Another bespoke system demonstrated was a strip of LED lights around the whiteboard, controlled through a web app, which allows students to discreetly indicate their level of comprehension. We had a short tour of the Playground’s media recording room, watched some video recordings of content created in VR to, for example, show the interaction of the magnetic fields of objects, saw a demonstration of Visual PDE, an open source web tool for demonstrating differential equations, and Kaptivo, a system for capturing the content of a whiteboard but not the presenter. You can see the Kaptivo camera in the background of my photo, behind the Lightboard.


ALT North East User Group: March 2023

A screenshot from Padlet showing our thoughts on generative AI. It’s a tad negative.

We’re getting back into a stride now, with the second meeting of the academic year at Teesside. After introductions and updates from each of the core university groups, Malcolm from Durham kicked us off with a conversation about Turnitin and how we all feel about it. From a survey of the room, most of us seem to be using it rather apathetically, or begrudgingly, with a few haters who would love to be able to do away with it, and no-one saying they actively like the service. Very revealing. So why do we all keep on using it? Because we all keep on using it. Turnitin’s database of student papers pulls like a black hole, and it will take a brave institution to quit the service now. Of note was that no-one really objected to the technology itself, especially originality reporting, but rather their corporate disposition and hegemonic business model.

Emma from Teesside then talked about their experience of being an Adobe Creative Campus, which involves making Adobe software available to all staff and students, and embedding it into the curriculum. Unfortunately, Emma and other Teesside colleagues noted the steep learning curve which was a barrier to use, and the fact that content had to sit on Adobe servers and was therefore under their control.

Next up was my partner in crime, Dan, reporting on Sunderland’s various efforts over the years to effectively gather student module feedback. This was a short presentation to stimulate a discussion and share practice. At Newcastle they have stopped all module evaluation, citing research on, for example, how female academics are rated lower than male colleagues. This has been replaced with an ‘informal check’ by lecturers asking students how the module is going, are you happy, etc. They are being pushed to bring a formal system back due to NSS pressures, but are so far resisting. At Durham they are almost doing the opposite, with a dedicated team in their academic office who administer the process, check impact, and make sure that feedback is followed up on.

Finally after lunch, we had a big chat about that hot-button issue that has taken over our lives, the AI revolution! It was interesting for me to learn how Turnitin became so dominant back in the day (making it available to everyone as a trial, and getting us hooked…), and the parallels which can be drawn with their plans to roll out AI detection in the near future. Unlike their originality product which allows us to see the matches and present this to students as evidence of alleged plagiarism, we were concerned that their AI detection tool would be a black box, leaving wide open the possibility of false accusations of cheating with students having no recourse or defence. I don’t think I can share where I saw this exactly, but apparently Turnitin are saying that the tool has a false positive rate of around 1 in 100. That’s shocking, unbelievable.

No-one in the North East seems to be looking at trying to do silly things like ‘ban’ it, but some people at Durham, a somewhat conservative institution, are using it as a lever to regress to in-person, closed-book examination. Newcastle are implementing declarations in the form of cover sheets, asking students to self-certify if / how they have used AI writing.

There were good observations from colleagues that a) students are consistently way ahead of us, and are already sharing ways of avoiding possible detection on TikTok; and b) that whatever we do in higher education will ultimately be redundant, for as soon as students enter the real world they will use whatever tools are available in industry. Better that we teach students how to use such tools effectively and ethically in a safe environment. As you can see from the Padlet screenshot above, our sentiments on AI and ChatGPT were a tad negative.


Authorship Investigate Demo

Had another demonstration of Turnitin’s new Authorship Investigate tool today. This time they came to visit us for the benefit of our head of service.

Further to what I’ve written about this before, the new features and things I learned today include the fact that this isn’t integrated into either the VLE or Turnitin’s Feedback Studio which we currently use, but rather is a standalone application that only nominated individuals would have access to. These would typically be people working in academic misconduct departments who could use Authorship Investigate as a tool to help their investigations. Turnitin are, however, working on a kind of early warning system that could be used to identify papers which have potentially been procured through contract cheating / essay mill services, similar to the existing similarity report. Academics could then ask for those papers to be investigated further. This is, however, still some way off.

Some new things Authorship Investigate can use in checking papers include citation styles, font and text styles, and the language of the document, e.g. UK / US English, and whether or not these have been changed or don’t match previously submitted papers by the student in question.


ALT North East User Group – 2019

In a first, I didn’t just attend the meeting this time round, I hosted it at one of the University’s nicer enterprise suites at Hope Street Xchange. Working with Graeme and Julie who are the North East’s key contacts with ALT, I took care of the practicalities – venue, IT, parking, lunch – while they organised the agenda and speakers.

In the morning we had presentations from our regional Turnitin account manager who presented on their new Authorship Investigate tool which is designed to help detect instances of contract cheating, followed by a presentation and discussion from Jisc on changes to the EU’s Accessibility Regulations which we as an institution will need to respond to over the next year.

In the afternoon representatives from each institution attending gave a short presentation or talk about what interesting projects we have going on. I talked about using Trello with the team to better organise our workload, and the rollout of Panopto across the University which is now in full swing.

I’m pleased to be able to say it all went very well, with only one minor lunch hiccup which was quickly resolved. Hopefully this will be something we can do on a regular basis going forward.


Turnitin Academic Integrity Summit

Attended Turnitin’s annual conference, which this year was largely devoted to the issue of contract cheating: students paying other people to write essays on their behalf. A problem which has been growing for some time, but which came to the fore in 2014 with the MyMaster scandal in Australia. They also had demonstrations of an imminent anonymous and moderated marking tool, which looked great, and a new Code Similarity project, a development of MOSS for checking computer code for similarity.

The new product they have to help with contract cheating is called Authorship Investigation and aims to detect cheating by comparing work submitted by a given student over a period of time, analysing such things as word and punctuation usage, richness of vocabulary, and document metadata – looking for obvious things such as an unusual author or editing time. The hands-on demonstration was quite good, especially for software still in beta and not due for release until next year. A number of us at the demonstration raised the same kinds of concerns though. For example, when I’m writing work I create a new document for every draft, and therefore the final file that I actually submit would show a same-day creation date and very little editing time, both things that would be flagged up by Authorship Investigation as suspicious.
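For a sense of what ‘word and punctuation usage’ and ‘richness of vocabulary’ might mean in practice, here is a minimal sketch in Python. To be clear, this is not Turnitin’s actual algorithm, just a toy illustration of the kind of stylometric signals described: build a simple style profile for each piece of writing, then measure how far apart two profiles are.

import re
from string import punctuation

def style_profile(text):
    # Extract words and count punctuation marks.
    words = re.findall(r"[a-zA-Z']+", text.lower())
    punct_count = sum(text.count(p) for p in punctuation)
    return {
        # Type-token ratio: distinct words / total words,
        # a crude measure of vocabulary richness.
        "vocab_richness": len(set(words)) / len(words),
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "punct_per_word": punct_count / len(words),
    }

def profile_distance(a, b):
    # Crude distance between two style profiles. A large value might
    # prompt a human investigator to look closer -- nothing more.
    return sum(abs(a[key] - b[key]) for key in a)

earlier_work = style_profile("I think the results was good and we done the test ok.")
submitted_work = style_profile(
    "The empirical findings demonstrate a statistically significant "
    "improvement; consequently, the hypothesis is corroborated."
)
print(profile_distance(earlier_work, submitted_work))

A real system would weight and normalise such features far more carefully; the point is only that a sudden jump in style between a student’s earlier and current submissions is something you can measure, not just sense.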

Also demonstrated was just how easy it is to get assignments from essay mills, and how predatory they are. A funny anecdote was about someone who was researching contract cheating. They started an online chat with someone from an essay mill site, who then proceeded to offer their services to write the paper for them!

This is a hard problem Turnitin are trying to solve, much harder than identifying blocks of text which have been copied and pasted from elsewhere, and most of us at the demonstration were a little sceptical about their approach. Of course, Turnitin is a technology company and they have devised a technological solution (to sell), when a better solution is arguably a pedagogic one: designing out the ability for students to outsource assessment work by moving away from essays and using approaches such as face-to-face presentations. Knowing your students and their work personally is also likely to be better than relying on algorithms, but of course this is much easier with smaller cohorts.

There was also very little discussion about the context of this, and what has caused the issue to arise. In most of the West we have commodified tertiary education, turning it into just another product that’s available to anyone who can afford it, so is it any wonder that those with the means take the next step? Nevertheless, this is the world we find ourselves in and essay mills aren’t going to go away. Calls to legislate against them, as worthy as that may be, will face the same problems as trying to prohibit any online content, in that legislation can only apply to UK-based companies, and while technological solutions may help in the short term, they are no panacea, as methods to circumvent them will soon appear in what is an ever-escalating arms race.


Turnitin UK User Summit


Attended the afternoon sessions of Turnitin’s UK user summit, which focused on customer experience, with talks from colleagues at the University of Edinburgh, the University of East London, Newcastle University and the University of Huddersfield. It’s always cathartic to hear colleagues sharing tales of woe and horror which are so familiar from your own work, like the academics who insist on treating the originality score as sacrosanct when making a plagiarism decision, but more productively there were some really good ideas and pieces of best practice shared. One colleague was using Blackboard’s adaptive release function to hide the Turnitin assignment submission link until students had completed a ‘quiz’, which simply made them acknowledge in writing that the work they were about to submit was all their own. A couple of people presented their research findings on what students wanted from feedback, such as in the attached photo which shows a clear preference for electronic feedback. Someone made a product development suggestion: splitting the release of the grade and feedback in Turnitin so that students have to engage with their feedback before they get their grade. But I think my personal highlight from the day was the very diplomatic description of difficult customers as those who have ‘higher than average expectations’.

Though I missed out on the morning session due to another commitment, I was able to get the gist from networking with colleagues in between sessions. Improvements are coming to the Feedback Studio, including the ability to embed links and multiple file upload, along with a new user portal which will show the most recent cases raised by people at your institution, and the development I found most interesting: the ability to identify ghost-written assignments. This is still quite a way from being ready, but it’s an increasing problem and one Turnitin has in their sights. They couldn’t reveal too much about how this will work, for obvious reasons, but the gist is that they will attempt to build up a profile of the writing style of individuals so that they can flag papers which seem to be written differently.

The Twitter conversation from the summit is available from the TurnitinUKSummit hashtag, where you will see I won the Top Tweet! Yay me, but alas there were no prizes.
