Part two of Kent’s Digitally Enhanced Education series looking at how generative AI is affecting critical thinking skills. This week we had standout presentations from:
Professor Jess Gregory, of Southern Connecticut State University (nice to see the network’s reach, well, reaching out), who presented on the problem of mastering difficult conversations for teachers in training. These students will often find themselves thrust into difficult situations upon graduation, having to deal with stubborn colleagues, angry parents, and the like, and Jess has developed a method of preparing them by using generative AI systems with speech capabilities to simulate difficult conversations. This can be, and has been, done by humans, of course, but that is time-consuming, can be expensive, and doesn’t offer the same kind of safe space for students to practise freely.
David Bedford, from Canterbury Christ Church University, presented on how the challenges of critical analysis are not new: anything produced by generative AI needs to be evaluated in just the same way as we would evaluate the results of an internet search, a Wikipedia article, or material from books and journals. He presented the ‘BREAD’ model, first produced in 2016, for this kind of analysis (see the first screenshot for detail). It asks us to consider Bias, Relevance, Evidence, Author, and Date.
Nicki Clarkson, University of Southampton, talked about co-producing generative AI resources with students, noting that the students were very good at paring content down to the most relevant parts, and that the final videos were improved by having a student voiceover rather than a staff one.
Dr Sideeq Mohammed, from the University of Kent, presented on his experience of running a session on identifying misleading information, using a mixture of true articles and convincingly false ones, and reported that students always left far more sceptical and wanting to check the validity of information. My second screenshot is from this presentation, showing three example articles. Peter Kyle is in fact a completely made-up government minister. Or is he?
Finally, Anders Reagan, from the University of Oxford, compared generative AI tools to the Norse trickster god, Loki. As per my third screenshot, both are powerful, seemingly magical, persuasive and charismatic, and capable of transformation. Anders noted, correctly, that now that this technology is available, we must support its use. If we don’t, students and academics will use it on their own initiative anyway, the allure being too powerful, so it is better for us as learning technology experts to provide support and guidance. In so doing we can encourage criticality, warn of the dangers, and point people towards more specialised, research-focused generative AI tools such as Elicit and Consensus.
You can find recordings of all of the sessions on the @digitallyenhancededucation554 YouTube channel.