This morning we had a visit from our account managers at Medial, who demonstrated the new version of Medial to which we will be upgrading imminently and discussed future developments. Version 6 provides new video editing options, the ability to batch import videos and apply metadata to them, improvements to the live streaming part of the system, and various options for adding closed captions to videos – either machine transcription or more accurate, but much more expensive, human services. The player has also been updated to add a variable playback rate, from 0.5x to 2x speed.
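For anyone curious how that works under the hood, variable playback speed in a web player is typically just the standard HTMLMediaElement.playbackRate property. The sketch below shows how a player control might clamp user-selected speeds to the 0.5x–2x range mentioned above; the helper name and bounds are illustrative assumptions, not Medial's actual code.

```javascript
// Supported range, per the 0.5x–2x speeds described above (assumed limits).
const MIN_RATE = 0.5;
const MAX_RATE = 2.0;

// Keep any requested speed within the player's supported range.
function clampRate(requested) {
  return Math.min(MAX_RATE, Math.max(MIN_RATE, requested));
}

// In a browser, a speed control would apply the clamped value like this:
// const video = document.querySelector("video");
// video.playbackRate = clampRate(1.5);
```

Anything outside the range is simply pulled back to the nearest supported speed, so a control slider can never put the player into an unsupported state.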
We also discussed the practicalities of integrating Medial into Canvas, especially now that we also have reVIEW (Panopto), which has overlapping functionality, and some further changes planned for their next release.
The second day of Panopto training covered everything we need to know in order to set up and administer the new system. This included a comprehensive run-through of all of the configuration options and system settings, how to manually manage user accounts and the folder structure if required, and an overview of the various support resources available to us, including the main support site and the process for logging incidents with their service desk.
It’s official. Contracts have been signed, Canvas integration has been tested, and now we’ve had our first batch of training for Panopto, the University’s new lecture capture system which we’re branding internally as reVIEW.
This session covered how the system will be accessed and used by Viewers (students, essentially) and Creators (lecturers). There wasn’t a lot to cover for Viewers. We’re planning on having everything integrated through Canvas, so it’s just a case of navigating to the relevant item or accessing the reVIEW tool in the menu. Playback speed can be varied between 0.5x and 2x, which is nice, caption styles can be customised, and the search functionality is impressive – it doesn’t just work on text, but also on spoken terms thanks to a machine speech-to-text engine.
It is possible for students to be given access to create their own videos using ‘Assignment’ folders, which can be configured for them by module tutors, and simple quizzes can be added at any point throughout a video to check comprehension, with results fed back into the Canvas Gradebook.
There was much more content for Creators, as would be expected, covering recording and editing. Recordings can combine multiple sources: any webcams and mics connected to the computer (including more than one of each), PowerPoint presentations, and the entire computer screen. Recordings are uploaded to Panopto’s servers progressively, which will help in a lecture theatre environment where people need to get out quickly for the next class. Editing and post-production are done through the web using HTML5, with no plug-ins required, and it is possible to edit individual sources in isolation as well as the entire video.
Closed captions can be added automatically based on the speech-to-text engine which Panopto is using to drive the in-video search, but it is also possible for Creators to request a variety of human transcription services which are contracted for separately. We’ll soon discover how well it can handle academic language and the interesting range of accents we have in this neck of the woods.
Joined a web meeting in which a representative from Panopto demonstrated their lecture capture system as this is another area of interest for us currently. I already have some experience with Panopto from a pilot programme at Northumbria University a few years ago.
Using pretty much any standard webcam, Panopto can record lectures or workshops, and the recording can be combined with a presentation in a web-based video editor. It can also be used for recording someone in front of their computer, much like the tool in the content editor of Canvas. Videos are stored in a private, YouTube-style repository which could potentially replace our existing media library, and video feeds can be streamed live, which is also something we use our media library for. One feature I don’t recall from my prior experience is the ability for students to add their own notes at specific time stamps, which I like the idea of, and there is what was claimed to be a universal search function for any word or phrase spoken or shown on screen. I wonder if that has been tested with the unique range of accents we have in these parts.