

Helping Students Develop Critical Thinking Skills When Using Generative AI (Part 2)

Part two of Kent’s Digitally Enhanced Education series looking at how generative AI is affecting critical thinking skills. This week we had stand-out presentations from:

Professor Jess Gregory, of Southern Connecticut State University (nice to see the reach of the network, well, reaching out), who presented on the problem of mastering difficult conversations for teachers in training. These students will often find themselves thrust into difficult situations upon graduation, having to deal with stubborn colleagues, angry parents, etc., and Jess has developed a method of preparing them by using generative AI systems with speech capabilities to simulate difficult conversations. This can be, and has been, done by humans of course, but that is time-consuming, can be expensive, and doesn’t offer the same kind of safe space for students to practise freely.

David Bedford, from Canterbury Christ Church University, presented on how the challenges of critical analysis are not new, and that anything produced as a result of generative AI needs to be evaluated in just the same way as we would the results of an internet search, or a Wikipedia article, or from books and journals. He presented us with the ‘BREAD’ model, first produced in 2016, for analysis (see first screenshot for detail). This asks us to consider Bias, Relevance, Evidence, Author, and Date.

Nicki Clarkson, University of Southampton, talked about co-producing resources about generative AI with students, and noted how they were very good at paring content down to the most relevant parts, and that the final videos were improved by having a student voiceover on them, rather than that of staff.

Dr Sideeq Mohammed, from the University of Kent, presented about his experience of running a session on identifying misleading information, using a combination of true and convincingly false articles and information, and said of the results that students always left far more sceptical and wanting to check the validity of information at the end of sessions. My second screenshot is from this presentation, showing three example articles. Peter Kyle is in fact a completely made-up government minister. Or is he?

Finally, Anders Reagan, from the University of Oxford, compared generative AI tools to the Norse trickster god, Loki. As per my third screenshot, both are powerful, seemingly magical, persuasive and charismatic, and capable of transformation. Anders noted, correctly, that now that this technology is available, we must support its use. If we don’t, students and academics are still going to be using it on their own initiative, the allure being too powerful, so it is better for us as learning technology experts to provide support and guidance. In so doing we can encourage criticality, warn of the dangers, and point people towards more specialised, research-based generative AI tools such as Elicit and Consensus.

You can find recordings of all of the sessions on the @digitallyenhancededucation554 YouTube channel.


Helping Students Develop Critical Thinking Skills When Using Generative AI (Part 1)

From the University of Kent’s Digitally Enhanced Education series, a two-parter on the theme of how generative AI is affecting students’ critical thinking skills, with the second part coming next week. We’ve been living with generative AI for a while now, and I am finding diminishing returns from the various webinars and training sessions I have been attending. Nevertheless, there are always new things to learn and nuggets of wisdom to be found in these events. The Kent webinar series has such a wide reach now that the general chat, as much as the presentations, is a fantastic resource. Phil has done a magnificent job with this initiative, and is a real credit to the TEL community.

Dr Mary Jacob, from Aberystwyth University, presented an overview of their new AI guidance for staff and students, highlighting for students that they shouldn’t rely on AI; for staff to understand what it can and can’t do, and the legal and ethical implications of the technology; and for everyone to be critical of the output – is it true? Complete? Unbiased?

Professor Earle Abrahamson, from the University of Hertfordshire, presented on the importance of using good and relevant prompts to build critical analysis skills. The first screenshot above is from Earle’s presentation, showing different perceptions of generative AI from students and staff. There were some good comments in the chat during Earle’s presentation on how everything we discussed today comes back to information literacy.

Dr Sian Lindsay, from the University of Reading, talked about the risks AI poses to critical thinking, namely that students may be exposed to a narrower range of ideas due to the biases inherent in all existing generative AI systems and the limited range of data they have access to and are trained upon. The second screenshot is from Sian’s presentation, highlighting some of the research in this area.

I can’t remember who shared this, or whether it came from one of the presentations or the chat, but someone shared a great article on Inside Higher Ed about the option to opt out of using generative AI altogether. Yes! Very good, I enjoyed this very much. I don’t agree with all of it. But most of it! My own take in short: there is no ethical use of generative artificial intelligence, and we should only use it when it serves a genuine need.

As always, recordings of all presentations are available on the @digitallyenhancededucation554 YouTube channel.


The End is Not Nigh


Pecuniam populo antepone (“Put money before the people”)

Yesterday I had the dubious pleasure of catching a bit of Rishi Sunak’s chat with Elon Musk about the future of AI, and it was dreadful. Absolutely no criticality whatsoever; Sunak just blindly accepted everything Musk told him. This is something which bothers me so much that over the past few months I sort of accidentally wrote 2,500 words on why the robots will not be taking over anytime soon, but instead of publishing it here I sent it on to the ALTC Blog for consideration, and it was published today; you can read it here. I should think of the ALTC Blog more often and try to get more of my ramblings published there, it’s been a while. They even gave me a badge.

Anyway, the short, short version is that no matter how impressive ChatGPT may seem, it’s not doing anything very new or revolutionary, and that particular kind of artificial intelligence has pretty much gone as far as it can. There is absolutely no path from where we are today to a general artificial intelligence which can rival or surpass human intelligence. None. Whatsoever. The real threat of AI we should be worried about is how it is being used to displace workers, and make work precarious, in certain industries, further increasing the capture of wealth by the top 1%. This is one of the issues SAG-AFTRA are striking over, specifically the practice of replacing background extras in film and TV with AI-generated images. This is the time to be fighting back and supporting campaigns like this, because our politicians are certainly not up to the challenge, even if it does mean you have to wait an extra few months for Dune: Part 2.

ALTC Blog Contributor Digital Badge


Session 5: Emotional Intelligence

Today’s session began with a discussion of organisational culture, using Johnson and Scholes’ Culture Web as a starting point, then breaking us up into groups to explore each of the six factors making up the organisational paradigm and how they are expressed and represented at the university. Those factors were split into two groups, representing the soft and hard aspects of the organisational culture, the soft being:

  • Rituals and Routines;
  • Stories;
  • Symbols;

and hard:

  • Control Systems;
  • Organisational Cultures;
  • Power Structures.

This was followed by a discussion of the problems caused by the discrepancy between how the most senior management wishes an organisation to be perceived and what it is actually like, and how it is perceived by others, both internal and external. Of particular note were the problems that arise when management tries to change or impose a new set of values.

We were then asked to reflect on our personal values and how those link to, or are in conflict with, the values of the university. This wasn’t too difficult for me. I know myself, and there are some pretty core values which came to mind instantly, including inclusivity, openness, honesty and trust, and I’m pleased to be able to say that these fit very well with my team and the culture at the University of Sunderland in general. It’s been almost two years now and I’m still very happy here and glad I made the leap from Northumbria, an institution where they tried to change the organisation’s culture and values from the top, with results that decorum prevents me from commenting on.

This all led into the core topic for today’s session, emotional intelligence. The concept of emotional intelligence, henceforth EI, was popularised in the mid-90s by Daniel Goleman, based on the work of Mayer and Salovey. In his 1996 book, Emotional Intelligence: Why It Can Matter More Than IQ, Goleman defined EI as “… abilities such as being able to motivate oneself and persist in the face of frustrations; to control impulse and delay gratification; to regulate one’s moods and keep distress from swamping the ability to think; to empathize and to hope.” A great deal of research was introduced to us, much of it as post-session reading, going into the history and developments of EI as a concept, and showing that the ability to manage our emotions and relationships has been consistently linked to effective leadership.

A number of self-assessment exercises to measure EI have been developed by psychologists and we were asked to complete one of these, the Schutte Emotional Intelligence Scale, which provided a global EI score and scores in the four individual capabilities:

  • Perception of emotion (self-awareness);
  • Managing emotions in the self (self-management);
  • Managing others’ emotions (social awareness);
  • Utilization of emotions (social skill).

My global EI score was 123, against a mean of 125, but my scores in the capabilities of ‘managing emotions in the self’ and ‘utilization of emotions’ were above the mean, with the remaining two capabilities a little below it. That is a result that rings true to me, and accords with my personality as an introvert and my Insights Discovery profile, which pegged me as a ‘Coordinating Observer’.

EI can be developed and improved upon, and the session gave us some tools and ideas on how to do this, one of which was to keep a reflective journal. That is handy for me, as it is one of the reasons I keep this blog! Although I must admit that, being public, posts here are edited and tailored for an audience, rather than just being for myself and thus completely candid, and while I do also keep a private journal, work-related things rarely make it in there.

The session linked EI back to transformational leadership by introducing us to the Betari Box, showing the cycle of how your attitude affects your behaviour, which affects the attitude of others, which affects their behaviour, which in turn comes back to affect your own attitude; and highlighting the body of research that shows a strong link between high EI and successful managers.

Finally, Goleman’s work on types of leadership was discussed and we performed an exercise to reveal our own leadership styles and preferences. The six leadership styles identified by Goleman are:

  • Coercive – demands immediate compliance;
  • Authoritative – mobilises people towards a vision;
  • Affiliative – creates emotional bonds and harmony;
  • Democratic – builds consensus through participation;
  • Pacesetting – expects excellence and self-direction;
  • Coaching – develops people for the future.

Goleman argues that all of these styles have value in different situations, but cites evidence, in the form of case studies and surveys, showing that some are generally more effective than others, with the coercive style being the least effective and the authoritative style the most. Note that ‘authoritative’ in this context doesn’t mean leading by command or by asserting authority, but leading by example: being able to articulate a clear, achievable goal while giving people the trust and freedom to find their own means of getting there. The self-assessment exercise revealed that I lean towards the affiliative and democratic styles, but showed an interesting gap between how comfortable I feel using an authoritative style and how often I actually use it. Something to work on, I think. Something else I’ll take away from the session is a sense of responsibility to be more proactive in setting the mood of the team on a daily basis, to help make the university a positive and happy place to work.
