


Welcome to IPEP Conference 2020! In these extraordinary times, we have organised the conference to be held online. The theme this year is: "Communication", which covers two very relevant topics - Telehealth and First Nations Health. There are two parts to this conference:

Part 1 - Recorded videos, podcasts, and transcripts of insightful interviews conducted with experts and professors working and researching in Telehealth and/or First Nations Health which are available to you from Monday 14 Sept to Thursday 8 Oct (AEST).
Part 2 - A live Zoom session on Thursday 8 Oct, 7:30-9:30pm (AEST) to delve deeper into case studies with some of these experts.

Please head over to this link to register -

If you have any questions, please email us:



Dr. Jeanette Tamplin PhD, M.Mus, B.Mus (Hons), RMT, is a Senior Lecturer in Music Therapy at The University of Melbourne and a music therapist at the Royal Talbot Rehabilitation Centre - Austin Health. Jeanette has worked as a music therapist in neurorehabilitation over the past 20 years, and her research in this area focuses on the therapeutic effects of singing, speech and language rehabilitation, therapeutic songwriting, and coping and adjustment following neurological injury or illness. She coordinates and collaborates with several different research teams, and thus far has generated AUD$2.5 million in grant funding. She held an NHMRC-ARC Dementia Research Fellowship from 2016-2019 and co-edited the book “Music and Dementia: From Cognition to Therapy” (Oxford University Press). Jeanette is regularly invited to present at national and international fora. She publishes regularly in international and interdisciplinary refereed journals, has contributed chapters to several edited books on music therapy, and co-authored the book ‘Music Therapy Methods in Neurorehabilitation: A Clinician’s Manual’ (Jessica Kingsley Publishers).

Dr. Jeanette Tamplin


Telehealth: Music Therapy and Virtual Reality [Video Transcript]

Presenter: Dr. Jeanette Tamplin, The University of Melbourne


Hey everyone, my name is Dr. Jeanette Tamplin. I’m really grateful for the opportunity to share with you, on behalf of our team, our collaborative process for addressing music therapy access issues using technology. This approach has grown from very humble beginnings back in 2016 and clearly has even greater relevance now in the face of our current global pandemic and the subsequent need to develop optimal online solutions for music therapy service provision.


To explain why we thought group singing in virtual reality was a good idea in the first place, I will give you a little bit of background. I have been working in spinal cord injury rehabilitation for about 15 years now and conducted my PhD research in this area. We conducted a pilot RCT which found that group singing therapy could improve respiratory function and voice projection for people with quadriplegia. Now this is significant because respiratory dysfunction is a major cause of illness and death for people with quadriplegia.


And the rationale for why singing may be used as an alternative form of respiratory muscle training is that when we sing, we take shorter and quicker inspirations and longer and slower expirations, our lung volume excursions are larger, and we generate greater and more sustained respiratory pressures. We also know that singing is more enjoyable for most people, so they are more likely to be motivated to use singing for training, which has implications for compliance, especially in the longer term. They are also more likely to incorporate singing into their lifestyle, which speaks to the sustainability aspect. Furthermore, it is accessible, which is quite relevant for people with a high-level spinal injury: singing allows them to participate independently, even with very limited physical function.


Now in this study, we found it difficult to recruit enough participants, mainly due to difficulties with distance and travel. This is because Australia is a big country and our population is widely spread out. We also know that there are disproportionately higher numbers of people with quadriplegia living in rural and remote areas. So we needed to find an innovative way to bring the intervention to participants, rather than having them come into the hospital.


We also know that social isolation and depression are significant risks for people with quadriplegia. And we wanted to retain the group focus of the singing intervention because we know that many of the motivational, emotional and peer support benefits come from singing with others, rather than by yourself. It is also more time- and cost-effective to do group treatment rather than individual treatment. So online singing groups really seemed to be the obvious solution, but there was an issue with internet latency: as many of you will know now that we are in this pandemic, traditional video conferencing doesn’t work for live group singing or live group music making because of the timing lag.


Now I’m not very tech-minded myself, so I found some wonderful people to work with at the University of Melbourne to help work out how we could do group singing online. This slide shows Ben Loveridge and Yunhan Li, who have been key members of the team developing this prototype. We decided to add virtual reality into the solution as a visual platform, rather than just video conferencing, because we thought it might be more immersive and also because it is super cool.


We did some testing of different VR headsets and off-the-shelf VR applications in our first round of testing in 2016, in the spinal ward at the Royal Talbot Rehabilitation Centre, where I worked. We also wanted to see how this medium was experienced following a spinal cord injury, so we asked participants for feedback about what they liked or didn’t like and for suggestions on how best to deliver an online group singing program. We were particularly looking at altered sensations and perceptions for people with spinal cord injury: whether they could tolerate wearing a headset on their face for long periods, and whether they had any experience of nausea and things like that.


The first thing we did was to find and test a way of getting low-latency audio connections to allow in-time singing. Now this is something that many of you will be interested in, I think. We were able to minimize latency by sending the audio through a low-latency USB audio interface and a software program called JackTrip, which provides a system for high-quality audio network performance over the Internet. Through the setup that you can see on the slide here, we were able to get an audio connection with latency under 25 milliseconds, which allowed in-time singing over the Internet within our state of Victoria. At around 20 milliseconds, it sounds in-time enough to sing a moderately paced song together online.
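For those curious about the plumbing, a setup like the one described above can be sketched as a pair of JackTrip commands running on top of the JACK audio server. This is a minimal illustration, not our exact configuration: the device name, hostname, and buffer settings here are assumptions and would need tuning for your own hardware and network (a wired Ethernet connection matters far more than any individual flag).

```shell
# Illustrative only: a typical two-machine JackTrip session.
# Start the JACK audio server with a small period size for low latency
# (device name and period size depend on your USB audio interface).
jackd -d alsa -d hw:USBAudio -r 48000 -p 64 &

# On the machine acting as the server:
jacktrip -s -n 1 -q 4        # -n 1: mono channel, -q 4: small queue buffer

# On each remote participant's machine (hostname is hypothetical):
jacktrip -c server.example.org -n 1 -q 4
```

Smaller `-q` and `-p` values reduce latency but risk audio dropouts on an unreliable network, which is why this kind of setup is tuned per connection.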


So just to demonstrate normal latency, we tried counting in time over a standard wireless network. The time lag will sound very familiar to you, but I’ll play it for you.


(See video from 05:53 to 06:15)


Now using our JackTrip audio set-up, we can get the latency down until singing pretty much sounds in time. Let’s have a listen to this.


(See video from 06:23 to 06:55)


So the images you see there show what it looks like inside the VR application V-time, an off-the-shelf product that we were testing out. And it looks pretty cool.


Now, I just want to play you a little bit of this clip because it shows one of our collaborators sharing his thoughts on the experience of singing in virtual reality. 


(See video from 07:19 to 08:08)


So that description of feeling less embarrassed or inhibited about singing in virtual reality was something that we hadn’t anticipated, but it became one of our key findings.


Here is some other participant feedback on the VR experience. One person said, “when I was in VR, I wasn’t thinking about pain or anything like that. I wasn’t focusing or concentrating on my disability or on sitting in the chair. It was more like ‘wow, I’m sitting on a rock in front of a campfire’.” And someone else said, “being in a wheelchair, I could access things in VR that I can’t do in real life. I might think, I can’t go outside, so I’ll go and do something in VR.”

Another one of our participants talked about the fact that, when he came out of the VR he said, “that was the first time I’ve been to the bush since my accident”. Even though he hadn’t actually left the room physically, the way he spoke about having been somewhere indicated how immersive the experience was for him. 


Now the main problem with these off-the-shelf VR applications was that we couldn’t see lyrics or chord charts while wearing a VR headset. So in our preliminary testing we had to rely on songs that I could play from memory and that participants knew the words to. The next stage was to develop our own custom MT VR app that could display song lyrics and chords inside the virtual environment.


To make it more accessible for people with limited hand function, and for the music therapist, who is playing an instrument, it was important to build in custom features such as gaze-based controls and scrolling lyrics.


And in our custom VR app, all the controls are now gaze-based, so participants can enter the session and choose their avatar by pointing and clicking with their head. They can also choose the virtual setting for their session: whether they’d like to be by the campfire, in space, or on top of a building. These hands-free controls are very useful for the therapist, who needs to control song selection and lyric scrolling while playing the guitar.
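For readers curious how “pointing and clicking with your head” works mechanically, gaze-based UIs typically cast a ray from the headset’s forward direction and trigger a selection once the gaze has dwelt on a target for a set time. The sketch below is a generic illustration of that dwell-timer logic, not our team’s actual implementation; the class name, API, and the 1.5-second default are all assumptions.

```python
# Minimal sketch of dwell-based gaze selection, as commonly implemented
# in VR user interfaces. Illustrative only, not the MT VR app's code.

class GazeSelector:
    """Fires a selection once the user's gaze has rested on a target
    for `dwell_seconds` of continuous time."""

    def __init__(self, dwell_seconds: float = 1.5):
        self.dwell_seconds = dwell_seconds
        self.elapsed = 0.0

    def update(self, on_target: bool, dt: float) -> bool:
        """Call once per frame. `on_target` says whether the headset's
        forward ray currently hits the UI element; `dt` is the frame time
        in seconds. Returns True on the frame the dwell completes."""
        if not on_target:
            self.elapsed = 0.0      # gaze left the target: reset the timer
            return False
        self.elapsed += dt
        if self.elapsed >= self.dwell_seconds:
            self.elapsed = 0.0      # fire once, then re-arm
            return True
        return False
```

In a real engine, the `on_target` flag would come from a per-frame raycast against the menu geometry, and the dwell progress is usually shown as a filling ring so users know a “click” is about to happen.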


So next is a short clip that demonstrates our custom MT VR platform; you just have to listen to a bit more of me talking at the start. But the background visuals are really good illustrations of what the platform provides.


(See video from 10:54 to 12:00)


Now excuse my blatant call for funding at the end there; this was a clip that was used to promote the research to potential funders. So, as you’ve heard on the video, we’ve made an interesting discovery, and you also heard a participant say earlier, in the other clip, that some people feel less inhibited in virtual reality because the avatar acts as a form of mask and you’re not looking directly at someone. If you’re feeling uncomfortable about singing in a group or in front of others, doing it in VR can be less confronting than doing it physically in the same room.


VR also facilitates the experience of going somewhere else for the group sessions. It can distract people from the difficult reality of physical impairments and reduce pain perceptions; this is what we found from our interviews with participants. But I think even if you just look at the visuals, you’ll get a sense of why people have this sense of immersion when they put on these headsets and are transported to a 3D experience, a completely different environment.


Users report a feeling of being transported to another world during sessions. Sometimes literally: they are out in space. This concept is known as presence, something that video conferencing and other telehealth alternatives have difficulty providing, and something unique to VR.


The system we’ve designed was developed in collaboration with therapists and patients with quadriplegia from the Victorian Spinal Cord Service at Austin Health, and the results from our user experience questionnaires and qualitative interview analysis suggested that the MT VR system was an accessible solution for participants with spinal cord injury.


At the moment, we are still using separate systems for audio and visuals due to technical limitations with running them together in the same setup. So our virtual reality is provided through wireless headsets, currently the Oculus Go. The audio, however, is provided through a wired Ethernet connection from the computer. In other words, there is a small latency between the VR visuals and the audio. We found this to be less important than the audio itself being in sync, because when you’re singing you tend to rely more on audio cues than visual cues to sing in time together. So our brains kind of forgive that slight lag between visuals and audio. Ideally, we’d love to have all these elements together in a single system and package, and this is something we’re continuing to work on as the technology improves, but we have not been able to achieve that yet.


We’ve recently published the results of this two-phase proof-of-concept study, covering the development and feasibility testing of the online VR platform, in the Journal of Telemedicine and Telecare. We’re now about to embark on a feasibility study using the MT VR solution to deliver the full 12-week singing intervention from our previous study remotely into participants’ homes. So far, our testing has just been through single sessions: comparing having people sing together in a room, singing together in a telehealth setting, and singing together in VR, as well as testing out the equipment and the experience of using it. So now we are about to head into the next phase, where people, actually in their own homes, will go through the whole 12-week protocol, testing for feasibility in terms of technology, access, troubleshooting and things like that.


In closing, I just really want to acknowledge all of the participants who have contributed to the design and refinement of this innovative program, and all the members of the research team; you can see some of them on the slide here, from both clinical and technology backgrounds. It really has been a truly collaborative team effort.


Thank you very much for listening. 
