EMOTIONAL HEARING DECLINES WITH AGE
in a series of studies of emotional processing, older adults were worse than younger adults at discriminating the acted emotions of nonsense speech sounds. this could be a specific deficit, a general deficit, or an artefact.
this story was in episode 32 #ageing #aging #speech #emotion
the error bar says
the Daily Mail explains why 'grandparents don't understand jokes' according to a study which did not involve grandparents or jokes.
instead, the researchers gave one group of people aged about 22 years & another group aged about 68 years a series of nonsense sentences. the sentences had been spoken by an actor & were supposed to sound emotional - happy, sad, angry, disgusted, fearful, pleasantly surprised or neutral. after hearing each nonsense sentence, the volunteers indicated which of the seven possible emotions was being expressed.
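to put any accuracy numbers in context: with seven response options, a volunteer guessing at random would be correct about one time in seven. a quick sketch of that chance level (the seven options are from the study; the calculation is just arithmetic):

```python
# chance level for a seven-alternative forced-choice task,
# as described in the study: guessing yields 1-in-7 correct.
n_options = 7
chance = 1 / n_options
print(f"chance level: {chance:.1%}")  # → chance level: 14.3%
```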
there was also some brain recording & brain stimulation going on, but the Daily Mail & the authors focused mostly on the finding that older adults were worse at discriminating which emotion was being acted out in the nonsense speech sounds. some emotions were also harder to discriminate than others.
are older people worse at discriminating emotion?
the study is relatively straightforward, the authors have collected lots of data, & the conclusion that older adults are worse at something than younger adults is perhaps not surprising. but the error bar wants to say this:
first, older adults were just worse in general than younger adults. we don't know if this is specific to the particular groups of adults recruited, for example young university students versus a community sample of older adults.
second, we also don't know that the deficit is specific to emotional discrimination, since no other tasks were included in the study - perhaps the older adults just had worse hearing? hearing was not tested, but everyone at least reported having no hearing problems.
third, while the newspaper & the authors both draw specific conclusions about differences between the groups for individual emotions, for example about happy versus sad or angry, the data do not support this - in the jargon, there was 'no statistical interaction between age & emotion'. further, any differences between emotions could also be due to the single actor who voiced the speech being better at conveying some emotions than others.
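what 'no interaction' means can be sketched with made-up numbers - the accuracies below are hypothetical, purely for illustration, not the study's data. if the age gap is about the same for every emotion, the data licence 'older adults are worse overall' but not claims about any specific emotion:

```python
# hypothetical mean % correct per emotion, for illustration only.
young = {"happy": 80, "sad": 75, "angry": 78, "disgust": 70,
         "fear": 72, "surprise": 74, "neutral": 85}
older = {"happy": 70, "sad": 66, "angry": 69, "disgust": 60,
         "fear": 63, "surprise": 65, "neutral": 76}

# the age gap for each emotion:
age_gap = {emo: young[emo] - older[emo] for emo in young}
print(age_gap)

# a main effect of age: every gap is positive.
# no interaction: the gaps are all roughly the same size,
# so 'worse at happy specifically' is not supported.
```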
fourth, the brain imaging data suggest that the younger participants were doing something quite different from the older adults. immediately before the speech stimuli were presented, the young adults' data look very clear, but the older adults' data look less clear. & this suggests differences in how the young adults were preparing to listen to the sounds, during the so-called fixation period before the speech was presented.
fifth, the brain imaging data - using electroencephalography, or EEG - are rather vague with respect to what, exactly, was being measured. EEG is known for its ability to distinguish what's going on at different times in the brain, so the timing information is really important. the researchers analysed two time windows, from 180 to 280 milliseconds & from 280 to 400 milliseconds after the speech began - so all these data come from the first half a second of the speech. but, according to a previous paper - & not reported by the current one - the sentences were on average two to three seconds long. the brain imaging data therefore relate only to the onset of the sounds & likely not to their emotional content.
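the arithmetic behind this point, using the numbers above (the window boundaries are from the paper; the 2.5-second sentence duration is a mid-range assumption from the reported 'two to three seconds'):

```python
# the two analysed EEG time windows, in milliseconds after speech onset:
windows_ms = [(180, 280), (280, 400)]
analysed_ms = sum(end - start for start, end in windows_ms)  # 220 ms in total
sentence_ms = 2500  # assumed mid-range of the reported 2-3 second sentences

# the latest analysed sample, as a fraction of a whole sentence:
fraction = 400 / sentence_ms
print(analysed_ms, "ms analysed;", round(fraction * 100), "% of a 2.5 s sentence covered")
```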
sixth, neither of the two brain stimulation experiments had any effect. the error bar celebrates the reporting of negative brain stimulation results.