Press "Enter" to skip to content

Tag: ChatGPT

Institutional Experiences of Microsoft Copilot

Diagram of Microsoft Copilot architecture

In November 2023, I wrote a rambling post for the ALT Blog about my thoughts on generative AI and where it was going. I made a prediction there that someone was going to buy a site license for ChatGPT, and lo! This HeLF discussion was about exactly that. Sort of. It’s Microsoft’s Copilot tool that the majority of people are going for, because most of us are existing Microsoft customers and Microsoft are baking it into their Office 365 offering, though there are a couple of institutions looking at ChatGPT as an alternative.

Costs and practicality were a big issue under discussion. Microsoft are only giving us the very basic service for free; if you want full Copilot Premium, it’s an additional cost of around £30 a month per individual. Pricey, but it gets worse. They have tiers upon tiers, and if you want to do more advanced things, like having your own Copilot chatbot available in your VLE for example, then you’re into another level of premium which runs to hundreds of pounds a month.
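
To put that per-seat price into perspective, here’s a quick back-of-the-envelope calculation. The £30 a month figure comes from the discussion above; the headcounts are purely illustrative assumptions, not real numbers from any institution.

    # Rough cost sketch for Copilot Premium licensing.
    # The £30/month per-seat price is from the discussion above;
    # the headcounts below are illustrative assumptions.
    PRICE_PER_SEAT_PER_MONTH = 30  # GBP, approximate

    def annual_cost(seats: int) -> int:
        """Annual licence cost in GBP for a given number of seats."""
        return seats * PRICE_PER_SEAT_PER_MONTH * 12

    for label, seats in [("Staff only", 2_000), ("Staff and students", 20_000)]:
        print(f"{label}: {seats:,} seats -> £{annual_cost(seats):,} per year")

    # Staff only: 2,000 seats -> £720,000 per year
    # Staff and students: 20,000 seats -> £7,200,000 per year

Little wonder the costs dominated the discussion.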

We also discussed concerns about privacy and data security. If Copilot is given access to your OneDrive and SharePoint files, for example, then you need to make sure that everything has the correct data labels, or else you run the risk of the chatbot surfacing confidential information to users.
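
To illustrate the kind of housekeeping that implies, here is a minimal sketch that flags unlabelled or restricted files in an exported document inventory. The CSV layout and label names are hypothetical assumptions; a real audit would lean on Microsoft Purview’s own reporting tools.

    # Minimal sketch: flag files that could leak through a chatbot rollout.
    # Assumes a hypothetical CSV export of a document inventory with
    # 'path' and 'sensitivity_label' columns; a real audit would use
    # Microsoft Purview's reporting instead.
    import csv

    UNLABELLED = {"", "none"}  # unlabelled files are the main risk
    RESTRICTED = {"confidential", "highly confidential"}

    with open("file_inventory.csv", newline="") as f:
        for row in csv.DictReader(f):
            label = row["sensitivity_label"].strip().lower()
            if label in UNLABELLED:
                print(f"UNLABELLED: {row['path']} - label before enabling Copilot")
            elif label in RESTRICTED:
                print(f"RESTRICTED: {row['path']} - check who can query this")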

At Sunderland we have no plans for any premium generative AI tools at present; the costs are just prohibitive. And it’s not just at this level: the entire field of generative AI is hugely expensive and completely unsustainable.

So I’ll end as I began, with prognostications. OpenAI is haemorrhaging money; they lost over half a billion dollars last year. They are living on investment capital, and unless the finance bods start seeing a serious return, they are going to pull the plug. Sooner rather than later, I reckon. I don’t think OpenAI will go under exactly, but I do think they are going to get eaten by one of the big players, Microsoft most likely. A lot of headlines were made last year about Microsoft’s $10 billion investment, but people haven’t read the fine print: that $10 billion was in the form of server credits, so Microsoft is going to get it back one way or another. I’m going to give the AI bubble another six to eighteen months.

What will come after that? Generative AI isn’t going to go away, of course; it’s a great technological achievement, but I think we will see a shift towards smaller models run locally on our personal devices. It will be interesting to see how Apple Intelligence pans out, as they aren’t putting all of their eggs into the ChatGPT basket. And as for the tech and finance industries? They’ll just move on to the next bubble. Quantum computing, anyone?


AI in Education: Unleashing Creativity and Collaboration

Word cloud showing some positivity towards AI

This was the University of Kent’s third Digitally Enhanced Education webinar on the topic of AI in education, this time with a focus on how AI can be used positively to support creativity and collaboration. An open poll on our thoughts ran throughout the afternoon, and as you can see from the screenshot above, the group was far more optimistic about it all than us doom-saying learning technologists at ALT North East. All of the presentations were recorded and are available on their YouTube channel.

A few themes stood out for me. On how GAI is impacting students, Dr Sam Lau of Hong Kong Baptist University talked about a student survey they have run, in which students described starting to use GAI tools as a new, ‘better’ type of search engine and teaching assistant. Cate Bateson, Hannah Blair and Clodagh O’Dowd, students at Queen’s University Belfast, reported that students want clarity and guidance from their institutions on where and how they are allowed to use AI tools. This was echoed by Liss Chard-Hall, a study skills tutor, who said that students have reported to her a new reluctance to use tools which were already using AI before ChatGPT, such as Grammarly, because they aren’t sure if it’s allowed by their institution. One person in the chat even commented that they knew of a student who was scared to use the spelling and grammar checker in Word lest they break new university rules about using AI in assessment.

Also from the chat, there was a discussion about which areas of university life are going to be most disrupted. Literature reviews were a big one: what benefit is there in conducting a complex, time-consuming search of the literature when you can ask an AI model to do it for you? To which end, I learned about a new tool that claims to be able to do just this: Elicit. Another useful discovery from this session is This Person Does Not Exist, which generates photo-realistic images of people.

On impacts in the wider world, Dr Ioannis Glinavos of the University of Westminster made the case that jobs in many areas will become more about verifying information and accuracy, as has happened with translators over the past couple of decades. While translation is still a necessary skill, and it is still possible to make a living as a translator, machine translation does the bulk of the work now, with human translators doing post-editing and checking for contextual and cultural relevance.

Finally, Anna Mills from the College of Marin in the US brought ethical questions back to the fore, first reminding us that these new GAI models are designed to output plausible-sounding responses, not truth – they don’t care about truth – hence we all need to be mindful to verify any information sourced from GAI tools. Anna then talked about two facets of “AI colonialism”: first, that GAI models are primarily trained on source texts written in the West (and as we know from stats about who writes Wikipedia articles and Reddit posts, for example, we can infer a predominance of certain genders and skin colours too – biases feeding biases…), and second, that content moderation is being outsourced to low-paid workers in the developing world, an inconvenient truth that isn’t getting enough attention. Anna’s presentation is available under a CC license and is well worth reading in full.


ALT NE User Group: June 2023

Durham University’s Lightboard in action, a very cool (but smudgy) piece of tech

Hosted by my lovely colleagues at Durham, this ALT North East meeting began with a discussion of the practice of video assessment. I talked through what we do at Sunderland using Canvas and Panopto, covering our best practice advice and talking through the things which can go wrong. The problem of a VLE having multiple tools for recording / storing video was one such headache shared by all of us, no matter what systems we are using.

We then moved on to a discussion about Turnitin, ChatGPT and AI detection, pretty much a standing item now. Dan shared a new tool he has come across, which I’m not going to name or share, that uses AI to autocomplete MCQs. A new front has emerged. Some bravery from Northumbria, who must be one of the few HEIs to have opted in to Turnitin’s beta checker, while New College Durham are going all in on the benefits of generative writing to help staff manage their workload by, for example, creating lesson plans for them. A couple of interesting experiments to keep an eye on there.

After lunch we had demonstrations of various tools and toys in Durham’s Digital Playground Lab. This included a Lightboard, a really cool and simple piece of tech that lets presenters write on a transparent board between themselves and the camera using UV pens. I came across this a few years ago, before the pandemic I think, but it’s a strange beast: it’s not a commercial system but open hardware, so anyone can build one for themselves at little cost. Unfortunately at Sunderland, and I suspect at many bureaucracies, this actually makes it a lot harder to get one than if we could simply go to a supplier. So it never happened, but at least today I got to see one live.

Another bespoke system demonstrated was a strip of LED lights around the whiteboard, controlled through a web app, which allows students to discreetly indicate their level of comprehension. We also had a short tour of the Playground’s media recording room; watched some video content created in VR to show, for example, the interaction of the magnetic fields of objects; saw a demonstration of Visual PDE, an open source web tool for visualising differential equations; and saw Kaptivo, a system for capturing the content of a whiteboard but not the presenter. You can see the Kaptivo camera in the background of my photo, behind the Lightboard.
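
For a flavour of what a tool like Visual PDE animates, here’s a minimal sketch, emphatically not Visual PDE’s own code, of the classic 1D heat equation solved by finite differences. The grid size, diffusion constant and boundary handling are arbitrary choices for illustration.

    # Minimal finite-difference sketch of the 1D heat equation,
    # du/dt = D * d2u/dx2 -- the kind of simulation Visual PDE
    # animates in the browser. All parameters are arbitrary.
    import numpy as np

    D, dx, dt = 1.0, 0.1, 0.004  # dt < dx**2 / (2 * D) for stability
    u = np.zeros(100)
    u[45:55] = 1.0               # initial hot spot in the middle

    for _ in range(500):
        # second spatial derivative via central differences
        # (np.roll gives periodic boundaries, fine for a demo)
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        u = u + dt * D * lap

    print(f"peak temperature after diffusion: {u.max():.3f}")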


Studiosity Partner Forum 2023

Photo of Sheffield Hallam’s Studiosity usage dashboard

Attended the second Studiosity Partner Forum in London today, which had representatives from 14 of the now 23 UK HEIs who are Studiosity users. The opening keynote was delivered by Rebecca Bunting, Vice Chancellor at the University of Bedfordshire, who talked about current issues in HE, with a focus on access and participation. She made good points on the limitations on students going to university, which include not only things like entry requirements and location, but also what people are able to study once there and how the cost of living crisis is impacting choice. She talked about how this can affect student retention, which HEIs are held accountable for, though there are often very good reasons why students may have to leave their studies.

Finally, she talked about the concept of the “sticky campus” – keeping students on campus – which is something else universities are often held accountable for as a desirable thing, but which doesn’t work for students in their 30s or those with full-time jobs, families, etc. Those students want, and need, to come onto campus, do what their studies require, and then get away again as soon as possible. At Bedfordshire, the majority of students are over 30.

Next was a product update session from Isabelle Bristow, Studiosity’s Managing Director for the UK and Europe. The peer support service which was in early development last year will be available in July as ‘Student Connect’, in which third-year students can mentor and guide first-year students after training from Studiosity and the university. These mentors are paid at a rate set by the university, and all chat and calls are managed through Studiosity to ensure privacy and confidentiality. Unfortunately this isn’t something we will be able to explore at Sunderland, as we are continuing to keep Studiosity focused on IFY students and new undergraduates. Isabelle also talked about a new Writing Feedback feature which will help students to identify where they have used higher-order thinking skills – designed, at least in part, to counter and mitigate the use of generative AI writing.

Simon Reade and Matthew Hare from Sheffield Hallam University then presented on their data dashboard, which pulls data from the Studiosity API and other sources and outputs to Tableau. One such chart, showing usage changes over a number of years, is shown (badly) in the photo above. This was a very interesting session for me, as we have just done the same ourselves using Power BI. Some of their findings and experience felt very familiar – high usage in Health subjects, low in their Business, Technology and Engineering College (strange bedfellows, but our Business folks can also be hard to engage with new technology and interventions). Another observation they made was that Studiosity seems to reach more demographic groups than those which traditionally access support services, which is a good thing.
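
The shape of that work is much the same whichever BI tool you use: pull usage records from the API and aggregate them for the dashboard. Here’s a minimal sketch; the endpoint URL, token and field names are hypothetical stand-ins rather than the real Studiosity API, so treat it as an outline only.

    # Minimal sketch of feeding a BI dashboard from a usage API.
    # The endpoint URL, auth token and JSON field names below are
    # hypothetical stand-ins -- consult the real Studiosity API docs.
    from collections import Counter
    import requests

    resp = requests.get(
        "https://api.example.com/v1/sessions",  # hypothetical endpoint
        headers={"Authorization": "Bearer YOUR_TOKEN"},
        params={"from": "2022-09-01", "to": "2023-06-30"},
        timeout=30,
    )
    resp.raise_for_status()

    # Aggregate sessions per faculty for the dashboard's usage chart.
    usage = Counter(s["faculty"] for s in resp.json()["sessions"])
    for faculty, count in usage.most_common():
        print(f"{faculty}: {count} sessions")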

After lunch, Dr Andy Gould from SOAS talked about how they are responding to AI, which led into an open discussion. Andy referenced Jisc, who said in their response that a crisis could be used as a driver for change, similar to what I and others have said about the pandemic response; the problem is that the sector seems to be in perma-crisis. SOAS have co-created a student guide containing a list of ‘dos’ and ‘don’ts’ as best practice. Andy also talked about the idea of academics using ChatGPT to write student feedback, something students were very much against, unsurprisingly, and finally noted that some students have reported using a paraphrasing service I won’t name to try and ‘launder’ AI-produced writing.

Other random points and observations made throughout the day in discussions with colleagues included a note from one institution that has seen Studiosity seemingly widen their participation gap, possibly as a result of higher-achieving students engaging with the service to a greater extent. Much of our discussion was about students wanting a personal connection when seeking support, something Studiosity delivers well, and which may point to strong use of the new Student Connect service when it goes live. Referencing was noted as by far the most in-demand area for support, and again something that may draw students to peer support. Finally, there was a comment about how in some subject areas, such as engineering, students may not get any conventional written assignment until their third year, with first and second year assignments focusing on group work. This is an important point for me, and Sunderland, to be aware of, as it may help to explain weak uptake in certain areas.


Using AI in Education: A Student Voice

Screenshot showing the different results for a weather query about Egypt on Google and ChatGPT

A second session from the University of Kent on AI / ChatGPT, this time student-led. It was good to hear the student voice on these developments, and I found it reassuring that they are identifying the same issues and raising the same concerns as staff. More than one of the student presentations talked about how ChatGPT is already displacing Google and other search engines as the first place they go to find answers. Take the screenshot above, which shows the difference in results when searching for the temperature in Egypt: Google, on the left, gives a list of links to follow through, while ChatGPT, on the right, provides a far more detailed answer in written form. The problem, as identified in one of the presentations but not the other, is that there is no way to verify the information which ChatGPT presents as truth. With the Google results you can evaluate the sources and verify against others; ChatGPT is a black box.

There was another good presentation from a student at Northumbria who has done some early research with students who have used ChatGPT, to find out what they are using it for and why. The results: they are mostly using it to check their own knowledge (problematic if you can’t trust ChatGPT to give you true answers) and to generate ideas. They are also using it outside of education to, for example, help write CVs and job applications. This makes me feel like we are always going to be on the defensive – reactive, as one student said. While the education sector tries to grapple with the technology, debating banning it on one hand and thoughtful, ethical use – while protecting academic integrity – on the other, other areas of society plough on regardless.

In another presentation a student talked about experimenting with ChatGPT to produce a response to one of their assignments and, echoing Margaret Bearman’s point from the teacher-led session last month, found that the result lacked critical analysis and believed it would have been a clear fail.

I was pleased to note that ethical issues were being raised by students, though largely in the context of equitable access. With GPT-4 and priority access going behind a paywall, the students who can afford to use it will, and those who can’t will find they have another new and innovative way of being disadvantaged. How long before we see the first university or college purchasing a license for all of their staff and students?

Once again all of the presentations were recorded and are available as a YouTube playlist.


ALT North East User Group: March 2023

A screenshot from Padlet showing our thoughts on generative AI. It’s a tad negative.

We’re getting back into our stride now, with the second meeting of the academic year at Teesside. After introductions and updates from each of the core university groups, Malcolm from Durham kicked us off with a conversation about Turnitin and how we all feel about it. From a survey of the room, most of us seem to be using it rather apathetically, or begrudgingly, with a few haters who would love to be able to do away with it, and no-one saying they actively like the service. Very revealing. So why do we all keep on using it? Because we all keep on using it. Turnitin’s database of student papers pulls like a black hole, and it will take a brave institution to quit the service now. Of note was that no-one really objected to the technology itself, especially originality reporting, but rather to the company’s corporate disposition and hegemonic business model.

Emma from Teesside then talked about their experience of being an Adobe Creative Campus, which involves making Adobe software available to all staff and students and embedding it into the curriculum. Unfortunately, Emma and other Teesside colleagues noted the steep learning curve, which was a barrier to use, and the fact that content had to sit on Adobe’s servers and was therefore under Adobe’s control.

Next up was my partner in crime, Dan, reporting on Sunderland’s various efforts over the years to effectively gather student module feedback. This was a short presentation to stimulate a discussion and share practice. At Newcastle they have stopped all module evaluation, citing research on, for example, how female academics are rated lower than male colleagues. This has been replaced with an ‘informal check’, with lecturers asking students how the module is going, whether they are happy, etc. They are being pushed to bring a formal system back due to NSS pressures, but are so far resisting. At Durham they are almost doing the opposite, with a dedicated team in their academic office who administer the process, check impact, and make sure that feedback is followed up on.

Finally, after lunch, we had a big chat about that hot-button issue that has taken over our lives, the AI revolution! It was interesting for me to learn how Turnitin became so dominant back in the day (making it available to everyone as a trial, and getting us hooked…), and the parallels which can be drawn with their plans to roll out AI detection in the near future. Unlike their originality product, which allows us to see the matches and present them to students as evidence of alleged plagiarism, we were concerned that their AI detection tool would be a black box, leaving wide open the possibility of false accusations of cheating, with students having no recourse or defence. I don’t think I can share where I saw this exactly, but apparently Turnitin are saying that the tool has a false positive rate of around 1 in 100. That may sound low, but at the scale institutions mark at, it is shocking.
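
To make that concrete, here’s a quick back-of-the-envelope calculation; the 1-in-100 rate is the figure reported above, while the submission counts are illustrative assumptions.

    # What a 1-in-100 false positive rate means at institutional scale.
    # Both counts below are illustrative assumptions.
    fp_rate = 0.01
    submissions_per_year = 50_000  # assumed: a mid-sized institution
    assignments_per_student = 10   # assumed: one academic year

    print(f"False flags per year: {fp_rate * submissions_per_year:.0f}")

    # Chance an honest student is falsely flagged at least once in a year:
    p_any = 1 - (1 - fp_rate) ** assignments_per_student
    print(f"Per-student risk over {assignments_per_student} assignments: {p_any:.1%}")

    # -> 500 false flags a year, and roughly a 1 in 10 chance that any
    #    given honest student is flagged at least once

Hundreds of students a year facing accusations they cannot rebut against a black box: that is the concern in a nutshell.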

No-one in the North East seems to be looking at trying to do silly things like ‘ban’ it, but some people at Durham, a somewhat conservative institution, are using it as a lever to regress to in-person, closed-book examination. Newcastle are implementing declarations in the form of cover sheets, asking students to self-certify if / how they have used AI writing.

There were good observations from colleagues that a) students are consistently way ahead of us, and are already sharing ways of avoiding possible detection on TikTok; and b) whatever we do in higher education will ultimately be redundant, for as soon as students enter the real world they will use whatever tools are available in industry. Better, then, that we teach students how to use such tools effectively and ethically in a safe environment. As you can see from the Padlet screenshot above, our sentiments on AI and ChatGPT were a tad negative.


Teaching with ChatGPT: Examples of Practice

Screenshot from one of the presentations outlining what ChatGPT is and is not: it is a large language model; it is not human, not sentient, and not reliable!

This session on the robot uprising was facilitated by the University of Kent and, in welcome contrast to some of the other sessions on AI I have been to recently, was much more positive, focusing on early examples of using ChatGPT to enhance and support teaching and the student experience.

Some highlights: Maha Bali from the American University in Cairo argued that we need cultural transparency around this technology, as people are going to use it regardless of whatever regulations are put in place. This was echoed by some of the other presenters, who noted that after graduation, when students enter industry, they will use, and be expected to use, any and all available relevant technologies. Someone else in the chat also noted that if you ban AI writing at university, then one outcome is going to be that students will only use it for cheating. So good luck, Cambridge. On transparent, ethical use, Laura Dumin from the University of Central Oklahoma talked about a new process they have implemented which asks students to declare if they have used AI tools to help with writing, and to highlight which text has been AI generated so academics can clearly see it.

Some presenters had suggestions around re-focusing assessments along the lines of what ChatGPT can’t do but humans can. Some of these, I feel, are short-term solutions. One person, for example, talked about how ChatGPT is generally better at shorter pieces of writing, so they have changed their assessments from three 800-word pieces throughout the year to a single 2,000-word one. Debbie Kemp at Kent suggested asking students to include infographics. I think these suggestions are going to work for now, but not in the long term. And the long term here isn’t even very long, given the pace of technological development. By the time you could get changes to assessment through a programme board and in place for students, the technology may well have rendered your changes moot.

I think a better idea is to include more critical reflection from students. Margaret Bearman from Deakin University in Australia made the point that AI is not good at providing complex, context-sensitive value judgements, and that, I think, is going to be a harder barrier for AI to overcome. Neil McGregor at the University of Manchester talked about this in a slightly different form: instead of having students write critical reflections, they are now generating those with ChatGPT and asking the students to analyse and critique them – identifying which parts of the AI text they agree with, and where the weaknesses in the arguments lie.

All of these sessions were recorded and are available on YouTube.


UK HE’s Thoughtful Response to Robot Writing

Screenshot of three Borg drones from Star Trek. Am I implying an equivalence between The Borg and ChatGPT?

There’s no escaping the robots; resistance is futile. ChatGPT has been a gathering storm since the back end of last year, and Sunderland cannot escape the pull. However, we need to learn more about this and related technology in order to be able to provide a thoughtful and measured response for our staff and students. To which end, I signed up for this session drawing together senior academics from across UK HE to share thoughts and experience. I have a few more such sessions coming up in the next few weeks, so I’ll wait and share my thoughts in a dedicated post when I have the time and space to synthesise them.
