Attended the ALT North East user group today at Durham Castle* to network and share practice with learning technologists from universities and colleges across the region.
A representative from Jisc was there to give us an update on their Learning Analytics project, which looks more impressive every time I see it. This was followed by a demonstration and talk about Special iApps, which have been created to help children with special educational needs. Then we had two sessions on the use and value of Microsoft Teams in education, one from a Microsoft representative and one from a colleague at Teesside who has been using it in the wild with good results. Finally, there was a demonstration of the survey tool BluePulse via webinar.
* The castle was nice. Really nice.
Isn’t it nice that the University are letting me get out and about again? In London for two days for the UK HE User Group today and CanvasCon Europe tomorrow.
Today was really useful. Around 40 of us from all over the country gathered at St George’s Medical School in Tooting to share our experience as Canvas users. In the morning we had a demonstration of anonymous and moderated marking from colleagues who are currently piloting it with positive results, though they noted that they have found a limited number of ways to circumvent the anonymisation. However, as these are all quite obscure and difficult, they remain confident in the tool and are rolling it out further. It will be interesting to see how Instructure’s offering here compares with Turnitin’s pending anonymous and moderated marking tool.
Also in the morning we had some group discussions on different ways of using Canvas for assessment and feedback to stimulate discussion and share ideas and best practice.
In the afternoon we were joined by representatives from Instructure who gave us updates on their developments and allowed us to grill them quite freely. This is always an excellent opportunity to use our collective influence to nudge Canvas in a direction that helps to address the needs of the UK sector. The anonymous and moderated marking tool, for example, was proposed by, and has been driven by, this group.
Instructure provided us with a progress report on our Top 10 priority development list from last year, as shown in the photo above, which shows ‘Non-Scoring Rubrics’ and ‘Analytics to include Mobile App Usage’ as complete, and most of the others in the design or development stages. Finally, we voted on the new Top 10 list for 2018-19. From a long list of suggestions collated prior to the User Group, each person at the group was allowed to vote for three issues, and I voted for QuickMark-style functionality in SpeedGrader, improved Group functionality, and the ability to set Notifications by course – all things which I’m being pressed for by our academic community at Sunderland.
Attended the Canvas UK User Group in Birmingham representing the University of Sunderland for the first time. I’m told that when this group started a few years ago it was half a dozen people around a table; now it’s a room of 30 from institutions all over the country. Very useful for networking and getting tips and tricks from established users – little things like the fact that you can open up content pages to allow anyone to edit them, effectively turning them into wikis, and learning about the kinds of problems other users have had, for example that notifications can’t be customised on a per-course basis. An institution that migrated to Canvas a couple of years ago had a lot of complaints about that from staff, but I don’t think it will be an issue for us as we’re moving from a VLE that had no notifications system at all, so it’s an enhancement request for us rather than a loss of functionality.
By far the most useful part of the day was the access we had to technical people from Instructure and the roadmap and plans they shared with us. I knew that Crocodoc was due for replacement, for example, but I didn’t realise it was happening quite so soon (next week!) and I saw the replacement tool for the first time. Looking forward to Quizzes 2, Blueprint courses and the changing functionality around muting assignments. A little disappointed to learn that the QuickMarks functionality from Turnitin’s GradeMark isn’t going to be implemented in SpeedGrader, as we’ve already had academics raising that with us. Also noted an interesting-looking screenshot in the roadmap which showed Mahara loading within Canvas, similar to how the Turnitin LTI displays. We would love to have that kind of deep integration, but there were mixed messages about Mahara, with some people reporting that the latest version of the integration was still broken. The slide was in the roadmap though, so hopefully something that we can look forward to.
The final day of the first semester was a little unusual. The morning was given over to a review of the assignments for this module, which are to complete the UKPSF form, critique a learning session, analyse a learning theory, and write a report on the experience of peer observation, comparing the experience of being the observer and the observee. Drafts are due at the end of semester 2, with final versions by September. All well and good, and all covered in the module guide. This session didn’t add anything, and yet we spent literally the entire morning debating it. Strange things happen when you have academics as students.
The afternoon session was more useful. First there was a short presentation on evaluation in general, why and how to do it, followed by an introduction to nominal group technique. A definition of evaluation was given as ‘assessing the process and practice of a prior learning strategy or event by feedback and trying to make objective summaries of an often subjective interpretation.’ This was followed by a discussion on the different types of evaluation – student, staff, data, and self – and the difference between quality assurance, which is backwards looking and tends to be about accountability, and quality enhancement, which is about how to improve and develop your programme or module.
With quality enhancement in mind, nominal group technique was then introduced followed by actually using it to evaluate this first semester. As a group, and with the programme leader absent, we drew up two lists of ten to twelve points of things that are going well, and things which we think need to be improved. These were written on a board in no particular order, then individually we had ten votes, or points, with which to rank what we thought were the most important points. So for example, if you thought that ‘over-assessment’ and ‘use of VLE’ were the two most important things that needed to be improved upon, then you could give each one five votes. The programme leader was then invited back in and the votes were added up to show what we collectively ranked as the most important things for improvement, and what we felt was going well. The outcome of this evaluation will be actively used in the development of the programme for the second semester.
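The vote tally in nominal group technique is just weighted addition, but it’s easy to see how it works in a few lines of code. This is only an illustrative sketch: the item names and ballots below are made up, not the actual points our group raised.

```python
from collections import Counter

def tally(ballots):
    """Sum each participant's weighted votes into a ranked list.

    ballots: one dict per participant, mapping item -> points,
    where each participant distributes exactly 10 points.
    """
    totals = Counter()
    for ballot in ballots:
        assert sum(ballot.values()) == 10, "each person gets 10 points"
        totals.update(ballot)
    # Highest-scoring items first
    return totals.most_common()

# Hypothetical ballots from three participants
ballots = [
    {"over-assessment": 5, "use of VLE": 5},
    {"over-assessment": 7, "session pacing": 3},
    {"use of VLE": 6, "session pacing": 4},
]

print(tally(ballots))
# 'over-assessment' comes out top with 12 points
```

Because everyone votes independently before the totals are revealed, the ranking reflects the whole group rather than whoever spoke loudest in discussion.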