Press "Enter" to skip to content


Block Teaching Experience at UoSiL

Scoreboard showing team scores from gamification session
“What’s on the board, Miss Ford?”

Last week I was down at our London Campus for block teaching of my module, Designing Learning and Assessment in Higher Education. Last year our students at London were part of the main cohort, but this year, due to numbers, we arranged to deliver the PG Cert as a ‘block’ over a couple of weeks. To share the workload, teaching was split between myself, my counterpart on the other module who travelled down for a few days, and teaching staff at London with relevant experience. It was interesting for me to see different perspectives as a result of London staff picking up some of these sessions. In one of the sessions, ‘Academic Identity and Everyday Writing in the Workplace’, I learned about the concept of teaching journals, a reflective exercise to capture “observations, reflections, and other thoughts about teaching” (Richards and Farrell, 2010). Interestingly, on reflection I find that I have been doing this all along without realising it – for every run of every module I have taught, I have kept a running list of things I have learned, reflections on what worked particularly well (or didn’t), and ideas for changes to improve the module for future cohorts. However, in the spirit of the concept, I am attempting to put this into more formal practice with this post.

In addition to discovering this concept and getting to see some of my London colleagues in action, I also learned about ClassVR, a virtual reality system they have bought. The headsets are a little basic, but the key concept is that a managed service can push content to all of the headsets in the class. It’s a great idea and I really liked it. Unfortunately their experience with it has been more miss than hit, with headsets often failing to connect to the server and requiring a reset. Indeed, for our demo all three of the headsets they brought along failed to connect.

Of the sessions I taught myself, ‘Gamification and Game Based Learning’ went well. I’ve run this for a number of years now as part of different modules, and I feel like it’s well polished; we always get good feedback about this one. The screenshot above is the final scoreboard from Keep the Score, one of the supplementary tools I recommend. The session on assessment and modern forms of academic misconduct (including generative AI) also ran well and provoked some interesting and lively discussion. Finally, ‘The Biscuit One’. Adapted from the work of Sambell, Brown and Race (2012), this was a highly impactful activity for me when I was a student on the PG Cert in 2017, and one I pushed to include when this module was revamped and I took over as module leader. The central idea is to teach people about creating rubrics and to explore some of the difficulties in marking, such as grade boundaries, using the metaphor of ‘what is a biscuit?’ The academic who used to run this at Sunderland left us last year, so for the past two iterations of the PG Cert I’ve run the session myself. It’s been okay, but I don’t think I do it as well as they used to. On both occasions I feel I’ve been rather unlucky, with the group coming up with a definition of a biscuit so broad and encompassing that virtually all of the biscuits provided were included. I haven’t worked out how to deal with that yet, but I’ll need to think of something for February.


AI in Education: Unleashing Creativity and Collaboration

Word cloud showing some positivity towards AI

This was the University of Kent’s third Digitally Enhanced Education webinar on the topic of AI in education, this time with a focus on how AI can be used positively to support creativity and collaboration. An open poll on our thoughts ran throughout the afternoon, and as you can see from the screenshot above the group was far more optimistic about it all than us doom-saying learning technologists at ALT North East. All of the presentations were recorded and are available on their YouTube channel.

A few themes stood out for me. On how GAI is impacting students, Dr Sam Lau of Hong Kong Baptist University talked about a student survey they have run, in which students described how they are starting to use GAI tools as a new, ‘better’ type of search engine and teaching assistant. Cate Bateson, Hannah Blair and Clodagh O’Dowd, students at Queen’s University Belfast, reported that students want clarity and guidance from their institutions on where and how they are allowed to use AI tools. This was echoed by Liss Chard-Hall, a study skills tutor, who said that students have reported to her a new reluctance to use tools which were already using AI before ChatGPT, such as Grammarly, because they aren’t sure whether their institution allows it. One person in the chat even commented that they knew of a student who was scared to use the spelling and grammar checker in Word lest they break new university rules about using AI in assessment.

Also from the chat, there was a discussion about which areas of university life are going to be most disrupted. Literature reviews were a big one: what benefit is there in conducting a complex, time-consuming search of the literature when you can ask an AI model to do it for you? To that end, I learned about a new tool that claims to be able to do just this: Elicit. Another useful discovery from this session was This Person Does Not Exist, which generates photo-realistic images of people.

On impacts in the wider world, Dr Ioannis Glinavos of the University of Westminster made the case that jobs in many areas will become more about verifying information and accuracy, as has happened with translators in the past couple of decades. While it is still a necessary skill, and possible to make a living as a translator, machine translation does the bulk of the work now, with human translators doing post-editing and checking for contextual and cultural relevance.

Finally, Anna Mills from the College of Marin in the US brought ethical questions back to the fore. She began by reminding us that these new GAI models are designed to output plausible-sounding responses, not truth – they don’t care about truth – hence we all need to be mindful to verify any information sourced from GAI tools. Anna then talked about two facets of “AI colonialism”: first, that GAI models are primarily trained on source texts written in the West (and, as we know from stats about who writes Wikipedia articles and Reddit posts, for example, we can infer a predominance of certain genders and skin colours too – biases feeding biases…); and second, that content moderation is being outsourced to low-paid workers in the developing world, an inconvenient truth that isn’t getting enough attention. Anna’s presentation is available under a CC licence and is well worth reading in full.


ALT NE User Group: June 2023

A photo of Durham's lightboard in action
Durham University’s Lightboard, a very cool (but smudgy) piece of tech

Hosted by my lovely colleagues at Durham, this ALT North East meeting began with a discussion of the practice of video assessment. I talked through what we do at Sunderland using Canvas and Panopto, covering our best practice advice and talking through the things which can go wrong. The problem of a VLE having multiple tools for recording / storing video was one such headache shared by all of us, no matter what systems we are using.

We then moved on to a discussion about Turnitin, ChatGPT and AI detection, pretty much a standing item now. Dan shared a new tool he has come across, which I’m not going to name or share, that uses AI to autocomplete MCQs. A new front has emerged. There was some bravery from Northumbria, who must be one of the few HEIs to have opted in to Turnitin’s beta checker, while New College Durham are going all in on the benefits of generative writing to help staff manage their workload by, for example, creating lesson plans for them. A couple of interesting experiments to keep an eye on there.

After lunch we had demonstrations of various tools and toys in Durham’s Digital Playground Lab. This included a Lightboard, a really cool and simple piece of tech that lets presenters write on a transparent board between themselves and the camera using UV pens. I came across this a few years ago, before the pandemic I think, but it’s a strange beast. It’s not a commercial system but open hardware, so anyone can build one for themselves at little cost. Unfortunately at Sunderland, and I suspect in many bureaucracies, this actually makes it a lot harder to get one than simply being able to go to a supplier. So it never happened, but at least today I got to see one live.

Another bespoke system demonstrated was a strip of LED lights around the whiteboard, controlled through a web app, which allows students to discreetly indicate their level of comprehension. We also had a short tour of the Playground’s media recording room, watched some recordings of content created in VR to show, for example, the interaction of the magnetic fields of objects, and saw demonstrations of Visual PDE, an open-source web tool for simulating differential equations, and Kaptivo, a system for capturing the content of a whiteboard but not the presenter. You can see the Kaptivo camera in the background of my photo, behind the Lightboard.


Studiosity Partner Forum 2023

Photo of Sheffield Hallam’s Studiosity usage dashboard

Attended the second Studiosity Partner Forum in London today, which had representatives from 14 of the now 23 UK HEIs who are Studiosity users. The opening keynote was delivered by Rebecca Bunting, Vice Chancellor at the University of Bedfordshire, who talked about current issues in HE, with a focus on access and participation. She made good points about the constraints on students going to university, which include not only things like entry requirements and location, but also what people are able to study once there and how the cost of living crisis is affecting choice. She talked about how this can impact student retention, which HEIs are held accountable for, even though there are often very good reasons why students may have to leave their studies. Finally, she talked about the concept of the “sticky campus” – keeping students on campus – which is something else universities are often held accountable for as a desirable thing, but which doesn’t work for students in their 30s or those with full-time jobs, families, and so on. Those students want, and need, to come to campus, do what they need to for their studies, and then get away again as soon as possible. At Bedfordshire, the majority of their students are over 30.

Next was a product update session from Isabelle Bristow, Studiosity’s Managing Director for the UK and Europe. The peer support service which was in early development last year will be available in July as ‘Student Connect’, in which third-year students can mentor and guide first-year students after training from Studiosity and the university. These mentors are paid at a rate set by the university, and all chat and calls are managed through Studiosity to ensure privacy and confidentiality. Unfortunately this isn’t something we will be able to explore at Sunderland, as we are continuing to keep Studiosity focused on IFY students and new undergraduates. Isabelle also talked about a new Writing Feedback feature which will help students to identify where they have used higher-order thinking skills – designed, at least in part, to counter and mitigate the use of generative AI writing.

Simon Reade and Matthew Hare from Sheffield Hallam University then presented their data dashboard, which pulls data from the Studiosity API and other sources and outputs to Tableau. One such chart, showing usage changes over a number of years, is shown (badly) in the photo above. This was a very interesting session for me, as we have just done the same thing ourselves using Power BI. Some of their findings and experience felt very familiar – high usage in Health subjects, low in their Business, Technology and Engineering College (strange bedfellows, but our Business folks can also be hard to engage with new technology and interventions). Another observation they made was that Studiosity seems to reach more demographic groups than those which traditionally access support services, which is a good thing.
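For anyone curious about the plumbing behind these dashboards, the gist is simply aggregating usage records before loading them into Tableau or Power BI. Here is a minimal Python sketch of that step; the file name and column names are hypothetical illustrations, not the actual Studiosity API schema or our Power BI model.

# Minimal sketch: aggregate a hypothetical Studiosity usage export by month and faculty,
# ready for import into a dashboard tool such as Tableau or Power BI.
# The file name and column names below are assumptions for illustration only.
import pandas as pd

usage = pd.read_csv("studiosity_usage_export.csv", parse_dates=["submitted_at"])

monthly = (
    usage
    .assign(month=usage["submitted_at"].dt.to_period("M").astype(str))
    .groupby(["month", "faculty"], as_index=False)
    .size()
    .rename(columns={"size": "submissions"})
)

monthly.to_csv("studiosity_monthly_usage.csv", index=False)  # feed this into the dashboard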

After lunch, Dr Andy Gould from SOAS talked about how they are responding to AI, which led into an open discussion. Andy referenced Jisc, who in their response said that a crisis could be used as a driver for change, similar to what I and others have said about the pandemic response. The problem is that the sector seems to be in perma-crisis. They have co-created a student guide containing a list of ‘dos’ and ‘don’ts’ as best practice. Andy also talked about the idea of academics using ChatGPT to write student feedback, something students were very much against, unsurprisingly, and finally noted that some students have reported using a paraphrasing service I won’t name to try and ‘launder’ AI-produced writing.

Other random points and observations made throughout the day in discussions with colleagues included a note from one institution that has seen Studiosity seemingly widen their participation gap, possibly as a result of higher-achieving students engaging with the service to a greater extent. Much of our discussion was about students wanting a personal connection when seeking support, something Studiosity delivers well, and which may point to strong use of the new Student Connect service when it goes live. Referencing was noted as by far the most in-demand area for support, and again something that may draw students to peer support. Finally, there was a comment about how in some subject areas, such as engineering, students may not get any conventional written assignment until their third year, with first- and second-year assignments focusing on group work. This is an important point for me, and Sunderland, to be aware of, as it may help to explain weak uptake in certain areas.


Using AI in Education: A Student Voice

Screenshot showing different results for weather in Egypt on Google and ChatGPT

A second session from the University of Kent on AI / ChatGPT, this time student-led. It was good to hear the student voice on these developments, and I found it reassuring that they are identifying the same issues and raising the same concerns as staff. More than one of the student presentations talked about how ChatGPT is already displacing Google and other search engines as the first place they go to find answers. The screenshot above shows the difference in results when searching for the temperature in Egypt: Google, on the left, gives you a list of links to follow, while ChatGPT, on the right, provides a far more detailed answer in written form. The problem, as identified in one of the presentations but not the other, is that there is no way to verify the information which ChatGPT presents as truth. With the Google results you can evaluate the sources and verify against others; ChatGPT is a black box.

There was another good presentation from a student at Northumbria who has done some early research with students who have used ChatGPT, to find out what they are using it for and why. The results suggest that they are mostly using it to check their own knowledge (problematic if you can’t trust ChatGPT to give you true answers) and to generate ideas. They are also using it outside of education to, for example, help write CVs and job applications. This makes me feel like we are always going to be on the defensive – reactive, as one student said. While the education sector grapples with the technology, debating outright bans on one hand and thoughtful, ethical use – while protecting academic integrity – on the other, other areas of society plough on regardless.

In another presentation a student talked about experimenting with ChatGPT to produce a response to one of their assignments and, echoing Margaret Bearman’s point from the teacher-led session last month, found that the result lacked critical analysis and believed it would have been a clear fail.

I was pleased to note that ethical issues were being raised by students, though largely in the context of equitable access. With GPT-4 and priority access going behind a paywall, the students who can afford to use it will, and those who can’t will find they have another new and innovative way of being disadvantaged. How long before we see the first university or college purchasing a licence for all of their staff and students?

Once again all of the presentations were recorded and are available as a YouTube playlist.


On the Death of Twitter

Simpsons Meme - Already Dead
To be clear, it was already dead before Musk, he’s just been killing it more

I’m done. I’m out. Twitter descended into a miserable hate-filled hellsite long before Musk took over, but good gosh, every time you think it can’t get any worse he finds a way. I’ve barely used it over the past few years, but last week I received a lovely, friendly email telling me they were revoking my API access for the bot which auto-Tweets posts I make here, which is, perhaps bizarrely, the final straw. I am mothballing rather than deleting my account outright as I wish to cyber-squat my name, but the WordPress app is gone, my apps are gone, and the account will now sit dormant save for maybe once a year when I log in on a browser to make sure I still have access. Just like LinkedIn.

It’s lamentable. It didn’t have to be this way, and I’m old enough to remember when Twitter was fresh and exciting and a great place for building communities, but those days are long gone. Killed by a profit incentive that prioritises outrage and antagonism. Just like Facebook. Imagine a world where it was run as a public service, with rules and etiquette derived from consensus and putting wellbeing first. It’s been done. Once. That glorious shining light of the internet: Wikipedia.

And something like that is happening with Mastodon, which is not a centralised profit driven service, but a collection of communities that share a technical standard that allows people to talk to each other across those communities freely. Wired has a good, recent article all about it and how to get started, and if you’re reading this you probably work in higher education, so a good community for you to join would be the one I’m in: scholar.social.


UKAT Annual Conference 2023

The University of Sunderland has done a lot of work over the past few years to standardise and professionalise our personal tutoring provision and align it with UKAT, the United Kingdom Advising and Tutoring association. It was from this that the Studiosity project began, and when our Pro VC for Learning and Teaching put out a call towards the end of last year for us to attend and present at this year’s conference en masse, I submitted a proposal for my Studiosity pilot year presentation – now in what I hope is its final form, including impact on attainment and progression for the pilot cohort.

Conference sessions I was able to attend as a participant were:

  • UKAT Curriculum Taster Session, by Karen Kenny of the University of Exeter, which allowed me to complete an introductory module.
  • Empowering Under-Represented Voices, by Rachael O’Connor from the University of Leeds, which included a discussion on how tutors can reach and support under-represented students.
  • Considerations Around Academic Misconduct, by Luke Jefferies from the University of East Anglia – a really good session that deconstructed notions of ‘cheating’ and discussed some of the unspoken and unacknowledged factors which feed into academic misconduct. I had not considered, for example, that in some cultures it is a sign of respect to directly quote others, rather than being taken as plagiarism as it is in UK HE.
  • Lightning Talks on the significance of graduate attributes, an evaluation of the impact of mindset interventions, and the impact of specialist academic tutors, from my colleagues at our London Campus.
  • Technology in Advising SIG, by Pete Fitch of UCL who led a discussion on what technologies we are using to support tutoring, including learning management systems, ePortfolios, and bespoke timetabling and appointment booking systems.
  • Understanding Student Finances, by Charmaine Valente from the Student Loans Company who talked about the current loans system in England and Wales which is helpful for PATs to know about to be able to inform students.
  • Academic Coaching at the University of Wolverhampton, by James Jennings who talked about the dedicated academic coaches they are employing at Wolverhampton to provide dedicated support and pastoral care for students.
  • Critical Thinking and Tutoring, by George Steele of Ohio State University, which was an interesting session for seeing some of the differences in perspective in the US system, such as students choosing an institution first without knowing what they want to major in. Part of the role of tutors there is to guide students and help them make that decision.
  • Active Listening for Effective Personal Tutoring, by Angela Newton from the University of Leeds who led an interactive session exploring and evaluating our listening skills.

Attached photos are from the opening keynote speech, George Steele from Ohio State talking about reflective thought, and examples of the Welsh language not trying very hard (I feel like I can get away with this joke by being Scottish).


Studiosity Research Outcomes

Screenshot showing improved student attainment where Studiosity was used

In this presentation Professor Liz Thomas, who has previously done impact analysis for Studiosity, presented her latest research on the experience of UK institutions using the service since it launched here in 2016/17; the research now covers 22 UK HEIs.

The screenshot I’ve included above shows improved attainment rates for students who used Studiosity versus those who did not, and looks very similar to the charts we produced here after our pilot year. Caveats abound, of course. I’ve said “correlation ≠ causation” more times than I can count of late, and it is perfectly possible that the students who engage with Studiosity would have been high achievers in any case, or would have engaged with other interventions to improve their work. But it certainly seems like there is something there, and the research also showed that among students who engaged with Studiosity, the attainment gap between white and BME students was reduced, and for one institution completely eliminated.

Other findings from the research included that 54% of usage takes place outside of conventional office hours, that usage peaks in April (and on Wednesdays), and that both professional and academic staff reported that being able to refer students to a specialist service freed up time for them to concentrate on other areas.

One point of discussion was around low engagement and how this can be improved. It was noted that students need the opportunity to send a draft to Studiosity in good time before submission, and it was suggested that use of Studiosity be built into assessments to allow for this. This very much echoes the findings of my colleague in our Faculty of Health, Science and Wellbeing, Jon Rees, who wrote about his experience on the University’s Practice Hub.


ALT North East User Group: March 2023

A screenshot from Padlet showing our thoughts on generative AI. It’s a tad negative.

We’re getting back into our stride now, with the second meeting of the academic year held at Teesside. After introductions and updates from each of the core university groups, Malcolm from Durham kicked us off with a conversation about Turnitin and how we all feel about it. From a survey of the room, most of us seem to be using it rather apathetically, or begrudgingly, with a few haters who would love to be able to do away with it, and no-one saying they actively like the service. Very revealing. So why do we all keep on using it? Because we all keep on using it. Turnitin’s database of student papers pulls like a black hole, and it will take a brave institution to quit the service now. Of note was that no-one really objected to the technology itself, especially the originality reporting, but rather to Turnitin’s corporate disposition and hegemonic business model.

Emma from Teesside then talked about their experience of being an Adobe Creative Campus, which involves making Adobe software available to all staff and students, and embedding it into the curriculum. Unfortunately, Emma and other Teesside colleagues noted the steep learning curve which was a barrier to use, and the fact that content had to sit on Adobe servers and was therefore under their control.

Next up was my partner in crime, Dan, reporting on Sunderland’s various efforts over the years to effectively gather student module feedback. This was a short presentation to stimulate a discussion and share practice. At Newcastle they have stopped all module evaluation, citing research on, for example, how female academics are rated lower than their male colleagues. This has been replaced with an ‘informal check’ by lecturers asking students how the module is going, whether they are happy, and so on. They are being pushed to bring a formal system back due to NSS pressures, but are so far resisting. At Durham they are almost doing the opposite, with a dedicated team in their academic office who administer the process, check impact, and make sure that feedback is followed up on.

Finally, after lunch, we had a big chat about that hot-button issue that has taken over our lives, the AI revolution! It was interesting for me to learn how Turnitin became so dominant back in the day (making it available to everyone as a trial, and getting us hooked…), and the parallels which can be drawn with their plans to roll out AI detection in the near future. Unlike their originality product, which allows us to see the matches and present them to students as evidence of alleged plagiarism, we were concerned that their AI detection tool would be a black box, leaving wide open the possibility of false accusations of cheating with students having no recourse or defence. I don’t think I can share where I saw this exactly, but apparently Turnitin are saying that the tool has a false positive rate of around 1 in 100. That’s shocking, unbelievable.
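To put that 1-in-100 figure in context, here is a rough back-of-the-envelope sketch in Python; the submission count is a made-up illustration, not a real institutional figure.

# Rough illustration of what a 1-in-100 false positive rate means at scale.
# The number of submissions is a hypothetical example, not real data.
false_positive_rate = 1 / 100
genuine_submissions_per_year = 50_000  # assumed annual human-written submissions at a mid-sized university

expected_false_flags = false_positive_rate * genuine_submissions_per_year
print(f"Genuine pieces of work wrongly flagged per year: {expected_false_flags:.0f}")
# => around 500 students a year potentially facing an accusation they cannot contest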

No-one in the North East seems to be looking at trying to do silly things like ‘ban’ it, but some people at Durham, a somewhat conservative institution, are using it as a lever to regress to in-person, closed-book examinations. Newcastle are implementing declarations in the form of cover sheets, asking students to self-certify if and how they have used AI writing.

There were good observations from colleagues that a) students are consistently way ahead of us, and are already sharing ways of avoiding possible detection on TikTok; and b) that whatever we do in higher education will ultimately be redundant, for as soon as students enter the real world they will use whatever tools are available in industry. Better that we teach students how to use such tools effectively and ethically in a safe environment. As you can see from the Padlet screenshot above, our sentiments on AI and ChatGPT were a tad negative.


Teaching with ChatGPT: Examples of Practice

Screenshot from one of the presentations outlining what ChatGPT is and is not: it is a large language model; it is not human, not sentient, and not reliable!

This session on the robot uprising was facilitated by the University of Kent, and in a welcome contrast to some of the other sessions I have been to on AI recently, this was much more positive, focusing on early examples of using ChatGPT to enhance and support teaching and the student experience.

One highlight was Maha Bali from the American University in Cairo, who argued that we need cultural transparency around this technology, as people are going to use it regardless of whatever regulations are put in place. This was echoed by some of the other presenters, who noted that after graduation, when students enter industry, they will use, and be expected to use, any and all available relevant technologies. Someone else in the chat also noted that if you ban AI writing at university, then one outcome is going to be that students will only use it for cheating. So good luck, Cambridge. On transparent, ethical use, Laura Dumin from the University of Central Oklahoma talked about a new process they have implemented which asks students to declare if they have used AI tools to help with writing, and to highlight which text has been AI generated so academics can clearly see this.

Some presenters had suggestions around re-focusing assessments on what ChatGPT can’t do, but which humans can. Some of these, I feel, are short-term solutions. One person, for example, talked about how ChatGPT is generally better at shorter pieces of writing, so they have changed their assessments from three 800-word pieces throughout the year to a single 2,000-word one. Debbie Kemp at Kent suggested asking students to include infographics. I think these suggestions are going to work for now, but not in the long term. And the long term here isn’t even very long, given the pace of technological development. By the time you could get changes to assessment through a programme board and in place for students, the technology may well have rendered your changes moot.

I think a better idea is to include more critical reflection from students. Margaret Bearman from Deakin University in Australia made the point that AI is not good at providing complex, context-sensitive value judgements, and that, I think, is going to be a harder barrier for AI to overcome. Neil McGregor at the University of Manchester talked about this in a slightly different form. Instead of having students write critical reflections, they are now generating those with ChatGPT and asking the students to analyse and critique them – identifying which parts of the AI text they agree with, and where the weaknesses are in the arguments presented.

All of these sessions were recorded and are available on YouTube.
