Press "Enter" to skip to content

Month: March 2023

Studiosity Research Outcomes

Screenshot showing improved student attainment where Studiosity was used

In this presentation, Professor Liz Thomas, who has previously done impact analysis for Studiosity, presented her latest research on the experience of UK institutions using the service since it launched here in 2016/17; the study now covers 22 UK HEIs.

The screenshot I’ve included above shows improved attainment rates for students who used Studiosity versus those who did not, and looks very similar to the charts we produced here after our pilot year. Caveats abound, of course. I’ve said “correlation ≠ causation” more times than I can count of late, and it is perfectly possible that the students who engage with Studiosity would have been high achievers in any case, or would have engaged with other interventions to improve their work. But it certainly seems like there is something there, and the research also showed that among students who engaged with Studiosity, the attainment gap between white and BME students was reduced, and at one institution completely eliminated.
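
If you want to reproduce this kind of comparison, the analysis itself is simple. Here is a minimal Python sketch, assuming a hypothetical extract with per-student used_studiosity and passed flags (the real institutional data obviously isn't shareable):

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical extract: one row per student, with a flag for
    # Studiosity engagement and a flag for passing their modules.
    df = pd.read_csv("student_outcomes.csv")  # columns: used_studiosity, passed

    # Attainment rate within each group.
    print(df.groupby("used_studiosity")["passed"].mean())

    # A chi-square test says whether the gap is unlikely under chance,
    # not whether Studiosity caused it: correlation != causation.
    table = pd.crosstab(df["used_studiosity"], df["passed"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")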

Other findings from the research included that 54% of usage takes place outside conventional office hours, that usage peaks in April (and on Wednesdays), and that both professional and academic staff valued being able to refer students to a specialist service, freeing up their own time to concentrate on other areas.
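
Stats like these fall out of the session logs fairly easily. A rough sketch, assuming a hypothetical export with one timestamped row per session (and defining office hours as 9am to 5pm on weekdays):

    import pandas as pd

    # Hypothetical export: one row per Studiosity session.
    sessions = pd.read_csv("sessions.csv", parse_dates=["started_at"])

    hour = sessions["started_at"].dt.hour
    weekday = sessions["started_at"].dt.dayofweek  # Monday = 0

    # Share of usage outside 9am-5pm weekday office hours.
    out_of_hours = (hour < 9) | (hour >= 17) | (weekday >= 5)
    print(f"Outside office hours: {out_of_hours.mean():.0%}")

    # Busiest month and busiest day of the week.
    print(sessions["started_at"].dt.month_name().value_counts().idxmax())
    print(sessions["started_at"].dt.day_name().value_counts().idxmax())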

One point of discussion was around low engagement and how this can be improved. It was noted that students need enough time before a deadline to submit a draft to Studiosity, and it was suggested that use of the service be built into assessments to allow for this. This very much echoes the findings of my colleague in our Faculty of Health, Science and Wellbeing, Jon Rees, who wrote about his experience on the University’s Practice Hub.


ALT North East User Group: March 2023

A screenshot from Padlet showing our thoughts on generative AI. It’s a tad negative.

We’re getting back into a stride now, with the second meeting of the academic year at Teesside. After introductions and updates from each of the core university groups, Malcolm from Durham kicked us off with a conversation about Turnitin and how we all feel about it. From a survey of the room, most of us seem to be using it rather apathetically, or begrudgingly, with a few haters who would love to do away with it, and no-one saying they actively like the service. Very revealing. So why do we all keep on using it? Because we all keep on using it. Turnitin’s database of student papers pulls like a black hole, and it will take a brave institution to quit the service now. Of note was that no-one really objected to the technology itself, especially originality reporting, but rather to the company’s corporate disposition and hegemonic business model.

Emma from Teesside then talked about their experience of being an Adobe Creative Campus, which involves making Adobe software available to all staff and students, and embedding it into the curriculum. Unfortunately, Emma and other Teesside colleagues noted the steep learning curve, which was a barrier to uptake, and the fact that content had to sit on Adobe’s servers and was therefore under the company’s control.

Next up was my partner in crime, Dan, reporting on Sunderland’s various efforts over the years to effectively gather student module feedback. This was a short presentation to stimulate discussion and share practice. At Newcastle they have stopped all module evaluation, citing research showing, for example, that female academics are rated lower than male colleagues. This has been replaced with an ‘informal check’, with lecturers asking students how the module is going, whether they are happy, and so on. They are being pushed to bring a formal system back due to NSS pressures, but are so far resisting. At Durham they are doing almost the opposite, with a dedicated team in their academic office who administer the process, check impact, and make sure that feedback is followed up on.

Finally, after lunch, we had a big chat about that hot-button issue that has taken over our lives, the AI revolution! It was interesting for me to learn how Turnitin became so dominant back in the day (making it available to everyone as a trial, and getting us hooked…), and the parallels which can be drawn with their plans to roll out AI detection in the near future. We were concerned that, unlike their originality product, which allows us to see the matches and present them to students as evidence of alleged plagiarism, the AI detection tool would be a black box, leaving wide open the possibility of false accusations of cheating with students having no recourse or defence. I don’t think I can share where I saw this exactly, but apparently Turnitin are saying that the tool has a false positive rate of around 1 in 100. That’s shocking, unbelievable. A rate that sounds small per essay becomes a large number of wrongly accused students once it is applied to every submission an institution handles.
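
The arithmetic is worth spelling out. A back-of-the-envelope sketch, with a made-up submission volume rather than anyone’s real figure:

    # A 1-in-100 false positive rate sounds small per essay, but the
    # detector would run against every submission.
    false_positive_rate = 1 / 100
    submissions_per_year = 50_000  # assumed volume for a mid-sized institution

    expected_false_flags = false_positive_rate * submissions_per_year
    print(f"Expected wrongful flags per year: {expected_false_flags:.0f}")
    # -> 500 students accused, with no evidence they can contest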

No-one in the North East seems to be looking at trying to do silly things like ‘ban’ it, but some people at Durham, a somewhat conservative institution, are using it as a lever to regress to in-person, closed-book examination. Newcastle are implementing declarations in the form of cover sheets, asking students to self-certify if and how they have used AI writing.

There were good observations from colleagues: a) students are consistently way ahead of us, and are already sharing ways of avoiding possible detection on TikTok; and b) whatever we do in higher education will ultimately be redundant, for as soon as students enter the real world they will use whatever tools are available in industry. Better that we teach students how to use such tools effectively and ethically in a safe environment. As you can see from the Padlet screenshot above, our sentiments on AI and ChatGPT were a tad negative.


Teaching with ChatGPT: Examples of Practice

Screenshot from one of the presentations outlining what ChatGPT is and is not: it is a large language model; it is not human, not sentient, and not reliable!

This session on the robot uprising was facilitated by the University of Kent, and in a welcome contrast to some of the other sessions I have been to on AI recently, this was much more positive, focusing on early examples of using ChatGPT to enhance and support teaching and the student experience.

One highlight was Maha Bali from the American University in Cairo, who argued that we need cultural transparency around this technology, as people are going to use it regardless of whatever regulations are put in place. This was echoed by other presenters, who noted that after graduation, when students enter industry, they will use, and be expected to use, any and all relevant technologies. Someone in the chat also noted that if you ban AI writing at university, then one outcome is going to be that students will only use it for cheating. So good luck, Cambridge. On transparent, ethical use, Laura Dumin from the University of Central Oklahoma talked about a new process they have implemented which asks students to declare if they have used AI tools to help with writing, and to highlight which text was AI generated so that academics can clearly see it.

Some presenters had suggestions around re-focusing assessments on what ChatGPT can’t do, but humans can. Some of these feel like short-term solutions. One person, for example, talked about how ChatGPT is generally better at shorter pieces of writing, so they have changed their assessments from three 800-word pieces throughout the year to a single 2,000-word one. Debbie Kemp at Kent suggested asking students to include infographics. I think these suggestions will work for now, but not in the long term. And the long term here isn’t even very long, given the pace of technological development. By the time you could get changes to assessment through a programme board and in place for students, the technology may well have rendered them moot.

I think a better idea is to include more critical reflection from students. Margaret Bearman from Deakin University in Australia made the point that AI is not good at providing complex, context-sensitive value judgements, and that, I think, is going to be a harder barrier for AI to overcome. Neil McGregor at the University of Manchester talked about this in a slightly different form: instead of having students write critical reflections, they are now generating those with ChatGPT and asking students to analyse and critique them, identifying which parts of the AI text they agree with and where the weaknesses in the arguments lie.
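
As an illustration of how this flipped exercise might be set up (the prompt, model choice, and wording here are my assumptions, not Manchester’s actual practice), using the same chat API that powers ChatGPT:

    import openai  # pip install openai

    openai.api_key = "YOUR_API_KEY"

    # Generate a deliberately unvetted critical reflection for students
    # to analyse and critique, rather than write themselves.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Write a 300-word critical reflection on a first "
                       "work placement in a hospital pharmacy.",
        }],
    )
    print(response["choices"][0]["message"]["content"])

    # Students then annotate the output: which claims do they agree
    # with, and where are the weaknesses in the argument?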

All of these sessions were recorded and are available on YouTube.


Studiosity: It Works!

Photo of the entrance to our London Campus

It sometimes feels like Studiosity has taken over my life over the past couple of years, but it works! And I have the data to prove it. Analysis recently completed after our first full year of usage showed a clear correlation between student success, as measured by progression and outcomes, and engagement with the Studiosity service. I can’t share the University’s data, of course, but I was recently interviewed by Studiosity about this work and a news article has now been published on their website about it.
