The mystery of how music is perceived in the brain has begun to receive considerable attention in the music neuroscience literature. With the development of new technologies that can measure brain activity in vivo, more and more studies of cortical activation during music listening are appearing. These studies provide interesting insight into what happens when we listen to or produce music. A recent study examined the cortical networks involved in speech and song perception.
According to Schön et al. (2010), participants who heard sung words, spoken words, and vocalise (singing without words) showed activations in cortical areas including the middle and superior temporal gyri and the inferior and middle frontal gyri, bilaterally. These activations (measured with 3-T magnetic resonance imaging) differed in strength depending on the auditory stimulus, with greater activation in the singing condition. Furthermore, the activations for song and speech were more similar to each other than those for song and vocalise, probably because vocalise lacks semantic and phonological information.
So, what on earth does this mean for the music therapy clinician? Well, as a lab-based study it doesn’t translate directly into clinical outcomes. But on a more basic level, this is great information. If listening to speech, song, and vocalise were functionally distinct in the brain (i.e., relied on different areas), then adding music to support speech perception in therapy would probably have little effect. But since these networks overlap (or are shared), we can use music to access the nonmusical areas of the brain. Not only that, but with music there was more activation of these areas. Again, great news!
This doesn’t mean that you should sing everything to your clients; there is no known research showing that structuring a session as a full-blown “musical” will yield better results (plus, we want to use least-to-most accommodations: if the client can respond to verbal instructions, then give verbal instructions). Rather, use music in a systematic way to help access the “speech processing” areas of the brain for functional outcomes (e.g., receptive language skills). Also, get rid of the notion that speech is processed only in the left side of the brain and music only in the right. More and more neuroscience studies show bilateral activation for both speech and music (though there do appear to be hemispheric specializations).
Again, this information does not translate directly into clinical practice. However, it can help us identify future areas of research and provides some initial insight into why music is a unique stimulus for therapy.
Reference:
Schön, D., et al. (2010). Similar cerebral networks in language, music and song perception. NeuroImage. doi:10.1016/j.neuroimage.2010.02.023. PMID: 20156575