The second day of Panopto training covered everything we need to know to set up and administer the new system. This included a comprehensive run-through of all of the configuration options and system settings, how to manually manage user accounts and the folder structure if required, and an overview of the various support resources available to us, including the main support site and the process for logging incidents with their service desk.
It’s official. Contracts have been signed, Canvas integration has been tested, and now we’ve had our first batch of training for Panopto, the University’s new lecture capture system which we’re branding internally as reVIEW.
This session covered how the system will be accessed and used by Viewers (students, essentially) and Creators (lecturers). There wasn’t a lot to cover for Viewers. We’re planning on having everything integrated through Canvas, so it’s just a case of navigating to the relevant item or accessing the reVIEW tool in the menu. Playback speed can be varied between half and double speed, which is nice, caption styles can be customised, and the search functionality is impressive – it doesn’t work just on text, but also on spoken terms thanks to a machine speech-to-text engine.
It is possible for students to be given access to create their own videos using ‘Assignment’ folders, which can be configured for them by module tutors, and simple quizzes can be added at any point throughout a video to check comprehension, with results fed back into the Canvas Gradebook.
There was much more content for Creators, as would be expected, covering recording and editing. Recordings can combine multiple sources: any webcams and mics connected to the computer (more than one of each, if needed), PowerPoint presentations, and your entire computer screen. Recordings are uploaded to Panopto’s servers progressively, which will help in a lecture theatre environment where people need to get out quickly for the next class. Editing and post-production are done through the web using HTML5, no plug-ins required, and it is possible to edit individual sources in isolation as well as the entire video.
Closed captions can be added automatically based on the speech-to-text engine which Panopto is using to drive the in-video search, but it is also possible for Creators to request a variety of human transcription services which are contracted for separately. We’ll soon discover how well it can handle academic language and the interesting range of accents we have in this neck of the woods.
Joined a web meeting in which a representative from Panopto demonstrated their lecture capture system, as this is currently another area of interest for us. I already have some experience with Panopto from a pilot programme at Northumbria University a few years ago.
Using pretty much any standard webcam, Panopto can record lectures or workshops, and the recording can be combined with a presentation in a web-based video editor. It can also be used for recording someone in front of their computer, much like the tool in the content editor of Canvas. Videos are stored in a private, YouTube-style repository which could potentially replace our existing media library, and video feeds can be streamed live, which is also something we use our media library for. One feature I don’t recall from my prior experience is the ability for students to add their own notes at specific timestamps, which I like the idea of, and there is what was claimed to be a universal search function for any word or phrase spoken or shown on screen. I wonder if that has been tested with the unique range of accents we have in these parts.
Following on from the webinar preview of Medial version 5 we had back in January, this morning we had a visit from our new account manager who came to introduce themselves and give us some more information about version 5, and Medial’s plans for the future.
Following our recent decision to adopt Canvas, we were pleased to get a demonstration of the Canvas integration, which is functionally identical to the Moodle and Blackboard integrations. It works in a similar manner to the YouTube integration for Canvas, which adds an icon to the textbox editor toolbar, but instead of embedding the video it returns a link to the selected file in Medial. An update to the integration is due which will improve this behaviour by inserting a thumbnail instead.
In addition to all of the work we have to do on the rollout of Canvas, we have one eye on updating our version of Medial too. With this in mind, our account manager discussed the available options: update our hosted instance, switch to a SaaS model, or take a middle-way option which is SaaS for Medial itself but linked to our own cloud platform, e.g. Azure or AWS, for content storage. Either of the SaaS models brings the benefit of scaling to meet demand, whereas our current hosted version of Medial can only transcode one video at a time.
I wrote about their live streaming tool, MEDIALive, before, but today we got a demonstration of it in action using the iOS app. MEDIALive can cast the stream out to YouTube and Facebook Live as well as Medial itself, and makes it easy to add in pre- and post-roll event videos.
Finally, we were privy to some plans for version 6, which include the ability to also push videos added to Medial out to a YouTube channel, and a new closed captioning solution offering a choice between automated speech-to-text captioning and human transcription, which gives better results but is more expensive.