Neurology and Music: Brain Rhythms Help Us Process Melodies
Researchers at New York University have discovered how brain rhythms are used to process music. The findings are published in the journal Proceedings of the National Academy of Sciences (PNAS).
In the study, the researchers found that musical training may enhance the functional role of brain rhythms, suggesting that measuring how we perceive notes and melodies could one day serve as a way to better understand the auditory system.
Previous research has shown that brain rhythms closely synchronize with speech--enabling us to isolate syllables, words, and phrases, which are not marked by spaces or punctuation when we hear them. Yet until now, it has not been clear what role cortical brain rhythms, or oscillations, play in processing other types of natural and complex sounds--including music.
During the study, the researchers conducted three experiments involving magnetoencephalography (MEG), which measures the tiny magnetic fields generated by brain activity. The participants were divided into two groups, musicians and non-musicians, and were asked to detect short pitch distortions in 13-second clips of classical piano that varied in tempo--ranging from half a note to eight notes per second.
For music that is faster than one note per second, both musicians and non-musicians showed cortical oscillations that synchronized with the note rate of the clips--in other words, these oscillations were effectively employed by everyone to process the sounds they heard, although musicians' brains synchronized more to the musical rhythms. Only musicians, however, showed oscillations that synchronized with unusually slow clips.
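To make the idea of an oscillation "synchronized with the note rate" concrete, here is a minimal, purely illustrative sketch: it simulates a toy neural signal containing a component at a hypothetical 4 Hz note rate (four notes per second) over a 13-second clip, then recovers that rate as the dominant spectral peak. The signal, note rate, sampling rate, and noise level are all invented for illustration and are not taken from the study.

```python
import numpy as np

fs = 250.0                    # assumed sampling rate (Hz)
t = np.arange(0, 13, 1 / fs)  # 13-second clip, matching the stimulus length
note_rate = 4.0               # hypothetical note rate: 4 notes per second (Hz)

# Toy "neural" signal: an oscillation entrained at the note rate plus noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * note_rate * t) + 0.5 * rng.standard_normal(t.size)

# A signal synchronized to the note rate shows a spectral peak at that frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

print(f"dominant frequency: {peak_hz:.2f} Hz")  # near the 4 Hz note rate
```

In this toy setup, the spectral peak sits at the note rate; by analogy, a cortical oscillation that tracks the notes would produce elevated MEG power at the clip's note rate.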
This difference, the researchers say, may suggest that non-musicians process such slow music as individual notes rather than as a continuous melody. Moreover, musicians detected the pitch distortions much more accurately--as evidenced by corresponding cortical oscillations. Brain rhythms, the researchers add, therefore appear to play a role in parsing and grouping sound streams into 'chunks' that are then analyzed as speech or music.