The second day of Panopto training covered everything we need to know in order to set up and administer the new system. This included a comprehensive run-through of all of the configuration options and system settings, how to manually manage user accounts and the folder structure if required, and an overview of the various support resources available to us, including the main support site and the process for logging incidents with their service desk.
It’s official. Contracts have been signed, Canvas integration has been tested, and now we’ve had our first batch of training for Panopto, the University’s new lecture capture system which we’re branding internally as reVIEW.
This session covered how the system will be accessed and used by Viewers (students, essentially) and Creators (lecturers). There wasn’t a lot to cover for Viewers. We’re planning on having everything integrated through Canvas, so it’s just a case of navigating to the relevant item or accessing the reVIEW tool in the menu. Playback speed can be varied between half and double speed, which is nice, caption styles can be customised, and the search functionality is impressive – it doesn’t just work on text, but also on spoken terms thanks to a machine speech-to-text engine.
It is possible for students to be given access to create their own videos using ‘Assignment’ folders, which module tutors can configure for them, and simple quizzes can be added at any point throughout a video to check comprehension, with the results fed back into the Canvas Gradebook.
There was much more content for Creators, as you would expect, covering recording and editing. Recordings can combine multiple sources: any webcams and microphones connected to the computer (more than one at a time), PowerPoint presentations, and the entire computer screen. Recordings are uploaded to Panopto’s servers progressively, which will help in a lecture theatre environment where people need to get out quickly for the next class. Editing and post-production is done through the web using HTML5, with no plug-ins required, and it is possible to edit individual sources in isolation as well as the entire video.
Closed captions can be added automatically by the same speech-to-text engine which Panopto is using to drive the in-video search, but it is also possible for Creators to request a variety of human transcription services, which are contracted for separately. We’ll soon discover how well it can handle academic language and the interesting range of accents we have in this neck of the woods.
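For anyone curious what those automated captions actually are under the hood, platforms like this typically deliver them as WebVTT files alongside the video. Panopto’s own pipeline is its own business, but as a rough sketch, turning timed speech-to-text output into a caption track is mostly a formatting exercise (the transcript segments here are made up for illustration):

```python
# Sketch: converting timed transcript segments into a WebVTT caption track.
# A real speech-to-text engine would produce something similar:
# start/end times in seconds plus the recognised text.

def vtt_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp, e.g. 83.5 -> '00:01:23.500'."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

def to_webvtt(segments) -> str:
    """Build a WebVTT file body from (start, end, text) tuples."""
    cues = [
        f"{vtt_timestamp(start)} --> {vtt_timestamp(end)}\n{text}"
        for start, end, text in segments
    ]
    return "WEBVTT\n\n" + "\n\n".join(cues) + "\n"

segments = [
    (0.0, 2.5, "Welcome to today's lecture."),
    (2.5, 6.0, "We'll start with a quick recap of last week."),
]
print(to_webvtt(segments))
```

The interesting (and hard) part is of course the recognition itself, which is exactly where academic vocabulary and regional accents will test it.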
I was invited along to this event today to contribute to the continuing development of our medical programmes, specifically with regards to the integrations between various systems. Representatives were there from VEO and SMOTS, who provide systems for video-based observation. They gave us updates on their services – VEO have been developing integrations for ePortfolio systems and a bespoke VLE used by one of their clients, and SMOTS can now take any video input as a feed. We will shortly be acquiring an ambulance outfitted with cameras and SMOTS integration to add to our range of training environments.
To provide students with the best possible experience we want to be able to give them a single point of access for all of our systems, including something new, possibly just a web form, for booking the various rooms and equipment which are available to them for practice. That place will be the VLE, Canvas. The representative from VEO couldn’t say how the integrations they have been working on were developed, but knowing the company and having met someone from their development team previously, I would be surprised if they weren’t LTIs. And if they are LTIs, then integrating into Canvas should be pretty straightforward. It’s another case of having the right tool for the job; choosing Canvas was the best decision the University could have made. This wouldn’t even have been a possibility with LearningStudio.
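If it is an LTI (version 1.1 being the common flavour at the time of writing), the mechanics are simple enough to sketch: a launch is just an OAuth 1.0a-signed form POST from the VLE to the tool, which is exactly why plugging such a tool into Canvas is straightforward. The following is a minimal, hypothetical illustration – the URL, key, secret and parameter values are all made up, and this is not VEO’s actual implementation:

```python
# Sketch of how an LTI 1.1 launch is signed (OAuth 1.0a, HMAC-SHA1).
# All identifiers below are illustrative, not any vendor's real values.
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_lti_launch(url: str, params: dict, consumer_secret: str) -> str:
    """Return the OAuth 1.0a HMAC-SHA1 signature for an LTI launch POST."""
    enc = lambda s: quote(str(s), safe="")
    # Parameters are percent-encoded, sorted, and joined into one string.
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base_string = "&".join(["POST", enc(url), enc(param_str)])
    # The signing key is the consumer secret plus an (empty) token secret.
    key = enc(consumer_secret) + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "example-link",
    "oauth_consumer_key": "example-key",
    "oauth_nonce": "abc123",
    "oauth_timestamp": "1234567890",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_version": "1.0",
}
signature = sign_lti_launch("https://tool.example.com/launch", params, "example-secret")
print(signature)  # the tool recomputes this from the same parameters to verify the launch
```

The VLE only needs to be given the tool’s launch URL and a shared key and secret, which is precisely the configuration Canvas exposes for external apps.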
Following on from the webinar preview of Medial version 5 we had back in January, this morning we had a visit from our new account manager who came to introduce themselves and give us some more information about version 5, and Medial’s plans for the future.
Following our recent decision to adopt Canvas, we were pleased to get a demonstration of the Canvas integration, which is functionally identical to the Moodle and Blackboard integrations. This works in a similar manner to the YouTube integration for Canvas, adding an icon to the textbox editor toolbar, but instead of embedding the video it returns a link to the selected file in Medial. An update to the integration is due which will improve this behaviour by inserting a thumbnail instead.
In addition to all of the work we have to do on the rollout of Canvas, we have one eye on updating our version of Medial too. With this in mind our account manager discussed the available options, which are to update our hosted instance, switch to a SaaS model, or take a middle-way option which is SaaS for Medial itself, linked to our own cloud platform, e.g. Azure or AWS, for content storage. Either of the SaaS models brings the benefit of scaling to meet demand, whereas our current hosted version of Medial can only transcode one video at a time.
I wrote about their live streaming tool, MEDIALive, before, but today we got a demonstration of it in action using the iOS app. MEDIALive can cast the stream out to YouTube and Facebook Live as well as Medial itself, and makes it easy to add pre- and post-roll event videos.
Finally, we were privy to some plans for version 6, which include the ability to also push videos added to Medial out to a YouTube channel, and a new closed captioning solution offering a choice between automated speech-to-text captioning and human transcription, which gives better results but is more expensive.
Caught up with the recording of Medial’s preview of version 5 of their product from November on YouTube. It will bring improvements to the quality of video playback, which now defaults to the highest quality your internet connection and device can handle, and the player has switched to HTML5 by default, though Flash remains available to support the live streaming function and users stuck on older devices.
A new feature is the ability to watch videos at 2x speed, something Rob was sceptical about but which people do want and will find useful. Teachers and admins now get more detailed stats on what people have been watching, the ability to set chapters to private or public, improvements to the live streaming and screen recording functions, and integration with Canvas. Live streaming is also now available to all users, not just system admins, and can be done via an app for iOS and Android.
Over the past couple of years our Faculty of Health Sciences and Wellbeing has been very busy redeveloping their buildings and kitting them out with all the latest and greatest facilities and technologies, things like an almost exact replica of a hospital ward, complete with Sim People, and high definition cameras and screens in every room. Remember the Immersive Interactive room I wrote about? They’re getting one of those put in as we speak.
Something else they’ve purchased is VEO, a video annotation tool that lets you tag videos either live, using their iPad app, or in a browser for videos recorded on other devices and uploaded to their system. The Faculty has two scenarios in mind for this tool: having students use it themselves for their own learning by, for example, analysing each other’s performance at a given task, looking for strengths and areas that need improving; and assisting academics conducting OSCEs (objective structured clinical examinations), perhaps even replacing the paper forms altogether, if it works well.
VEO is a fairly new tool, a spin-off from a development at Newcastle University, but it is now being used by a number of universities. Being local we benefited from having one of the people who developed the tool on site with us to explain the background, why it was developed, how it can be used and how we can administer it and help academics to make full use of it. It has a lot of potential, and also with it being a local start-up we have a great opportunity to work closely with VEO and contribute to their product development.
Stumbled upon this today: a tutorial on how to create interactive videos on YouTube, with some use-case examples of how it could be used to enhance teaching and learning. I particularly like the first one, with its potential to reimagine choose-your-own-adventure books for the 21st century!
Further to the video walkthrough, I have now been able to integrate Careers’ PowerPoint presentation into this Storyline item to create a more seamless experience for students. It has been well received and the customer has been impressed by what Storyline can do, so this is almost certainly going to lead to more work in the future enhancing the online material which our Careers and Employment Service provides.
Another quick Storyline presentation, this time a video walkthrough of how to access and use a vacancy search tool provided by our Careers and Employment Service. This will be used in the next couple of weeks as part of their induction for new students and is, presently, just a link on their PowerPoint, though I have suggested to them that their PowerPoint could be imported into Storyline and integrated with this video to make it all seamless.
Organised by Jisc RSC Northern and held at the Stadium of Light in Sunderland, eFest 2014 was a conference bringing together staff from FE and HE institutions across the North East, with an emphasis on learning technologists and people from related fields, with service providers such as Turnitin, OneFile and MoodleRooms.
The whole day was fantastic. I got to meet lots of interesting new people and discovered some new services, many of which I went away and read up on, adding the best to my personal toolkit. The highlight of the day, though, was the presentation by Paula Kilburn from Stockton Riverside College, who presented three case studies on the use of video marking. The first was the simplest: an academic using an iPad to record himself as he annotated a student’s written work. In the second, the academic used the screen and audio recording functions of QuickTime to record himself working through an audio file the student had created, demonstrating in real time the changes which would have got the piece up a grade. In the final example an academic watched a video while recording audio feedback, pausing or going back as required. In all three cases the resulting videos were uploaded to the College’s Planet eStream account with no or minimal editing, the idea being to deliver better, faster feedback, not a polished video. The academics reported that it was faster and easier for them to give better and more comprehensive feedback than would have been possible in writing. The whole pilot was a huge success, with students who received video feedback showing substantial improvement compared to the respective cohorts from previous years.
As always at these kinds of events, there was an open marketplace for tea, coffee and mingling, and for various providers to demonstrate their wares, trying to attract people with the usual games and freebies. Turnitin, however, set the standard to beat with their Rubrics Cubes; very droll.
Finally, I would just like to say that with regards to the ‘Stadium of Light’ Metro station, I would humbly suggest to Nexus that to improve accuracy this station be renamed the ‘Random Tesco car park over a kilometre away from the Stadium of Light, with no clear signposting’ Metro station. My unexpected journey humbly reminded me to be grateful for smartphones, satellite navigation and the company of fellow wayward souls. In all seriousness, to anyone who needs to get to the Stadium of Light on the Metro, get off at St Peter’s station instead as it is actually closer.