transcript of episode 27: IS IT BIG ENOUGH? 25th March 2022

[🎶 INTRO: "Spring Swing" by Dee Yan-Kee 🎶]

welcome to the error bar: cracking the brain science news egg wide open

in this episode: how the visual brain controls blind people's feet & how every single brain scanning study ever done needed thousands more participants to be reliable.

here is the brain news on the 25th March 2022:



blindness has a massive effect on the brain. a lot of the brain is responsive to stimulation of the eyes [impression of Brian Cox] what we scientists call light [impression ends]. so when people lose their sight, these large portions of brain either end up doing nothing, or re-organising to do something else.

a long series of studies over many years has found evidence that blind people have better touch, or better hearing, or are better at particular cognitive tasks. these studies suggest that blind people's brains may have reorganised so that the once-visual or potentially-visual parts of the brain now do other things.

reports that blind people have an advantage over the sighted, or that blind people's brains work differently, are, understandably, exciting & interesting to the news media. & that's presumably why the New Scientist thinks that now is the time to trawl BioRXiv, the pre-print server, for an early version of just such a report.

error bar listeners may be aware that a 'pre-print' is a scientific paper that has not yet been peer-reviewed or published in a journal. but we can still read it.

in this new report by scientists in Japan, 12 blind & 12 sighted people were asked to move their two feet rhythmically up and down while sitting in a chair. while their feet were flapping around, the researchers placed a powerful electromagnet on the back of their heads. their brains were stimulated in one of 14 different locations overlying the so-called 'visual cortex' - an important part of the brain involved in vision, at least in sighted people.

by moving the magnet around these 14 locations, the authors found that one, just one location, made the blind people worse at moving their feet rhythmically. their movements became more variable & less-tuned to the once-per-second rhythm that they were trying to keep to.


do the blind use the 'visual' brain for movement?


well, they might do. but this paper provides almost-laughably-weak evidence for it. i shall explain.

first, the positives: the experimental design, procedure & motivation for the study all look good. it seems like the authors had a clear plan & carried it out very well. but the problems started when the data appeared. most good plans do not survive contact with the enemy.

second, the negatives. there are many, but regular listeners will know what's coming.

one: the authors stimulated 14 different locations on the head, but inform us that, really, they were only interested in one of them. so the other 13 locations - 93% of the useful data - were ignored, thrown away to a supplementary file drawer. i could not access these supplementary materials, dear listener.

two: the authors did do a control condition - one in which there was no actual stimulation to the head, but these data have also been discarded because the participants realised that their head wasn't being stimulated. er, ok. so that's another one of the 16 experimental conditions discarded. the entire paper, then, depends on only 2 of the original 16 conditions being proffered to the public for scrutiny.

three: it gets worse - if you can imagine - the authors present a large number of tests - between blind & sighted, between stimulation & no stimulation, between half of the blind group who were athletes - blind footballers - & half who weren't - oh, did i, did i not mention that design factor already? - sorry - & for almost all of these statistical comparisons there were no clear differences found. except one.

four: that one statistical test, ladies & gentlemen, keeping in mind that 14 of the 16 original conditions have been thrown away, was weak.

i suspect that most listeners know about 'p-values' & about 'statistical significance,' but for those who want or need a reminder: there is a convention in many sciences that if there is less than a one-in-twenty chance of the data you've found arising by chance alone, then the results are worth getting interested in. this one-in-twenty level is the classic 'five percent level of statistical significance', in the jargon.
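for listeners who like to see these things in numbers, here is a quick sketch - in Python, with simulated data, nothing taken from the paper itself - of where that one-in-twenty figure comes from: when there is no real effect at all, about five percent of experiments still come out 'significant' purely by chance.

```python
# simulate many experiments where there is NO real effect, & count how
# often the result still comes out 'significant' at the 5% level.
# the sample size of 16 is arbitrary & illustrative.
import math
import random

random.seed(1)

def p_value(z):
    """two-tailed p-value for a z-statistic, via the normal CDF."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_experiments = 20_000
false_positives = 0
for _ in range(n_experiments):
    # 16 observations drawn from a null (no-effect) distribution
    sample = [random.gauss(0, 1) for _ in range(16)]
    z = sum(sample) / len(sample) * math.sqrt(len(sample))
    if p_value(z) < 0.05:
        false_positives += 1

print(false_positives / n_experiments)  # roughly 0.05
```

one in twenty of these no-effect experiments looks 'significant' - which is exactly why a single, barely-significant test should not carry a whole paper.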

& this whole paper depends on the authors reporting a four point nine five nine percent chance. & they reported this number in an unnecessarily-opaque way, almost as if they were trying to hide it.

now the error bar is emphatically not accusing any particular scientist of the forbidden act of 'p-hacking'. i am simply pointing out that about 90% of the data in this study were arbitrarily thrown away & the remaining data are dangling on a statistical thread so thin it would make a spider blush.
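to put a rough number on why all those discarded conditions matter - pure arithmetic, using the 14 & 16 condition counts from above, & assuming independent tests (which they were not, so treat this as a ballpark):

```python
# with k independent tests, each at the 5% level, the chance that at
# least one comes out 'significant' purely by chance is 1 - 0.95**k.
# illustrative arithmetic only, not an analysis of the paper's data.
for k in (1, 14, 16):
    print(f"{k} tests: {1 - 0.95**k:.0%} chance of at least one fluke")
```

with 14 or 16 chances to find something, a lone just-under-five-percent result is closer to a coin flip than a discovery.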




the science was by Ikegami et al. 2022: bioRxiv; reported in The New Scientist by Jason Arunn Murugesu on 16/Mar/22



the New Scientist has done well this episode, finding two error bar worthy stories for me to read on a sunny Saturday morning. thank you New Scientist.

from a possibly p-hacked preprint to a massive, multi-author mega manuscript in a glamour mag, the New Scientist tells us that "brain scanning studies are usually too small to find reliable results". oh dear.

if this is true, undergraduate students up & down the country will be rejoicing. finally - FINALLY - there is evidence that all samples are indeed 'too small' & this standard criticism can be used with abandon next exam season.

in the paper published last week in Nature magazine, 43 scientists studied around 50 thousand brains. they concluded that we in fact need thousands of brains before our statistical analyses are of any use in relating the human brain to the performance of complex cognitive tasks or mental health.

if this mega factoid is true, the error bar should be shut down. standard errors have become a standard error; the confidence interval for confidence intervals has ended.


is there any hope for brain imaging?

well yes, obviously.

this is not an open-access paper & i don't think Nature needs any more encouragement to produce sexy-sounding titles, so here is just a quick error bar opinion.

the report specifically relates to something called 'Brain Wide Association Studies' - this is a kind of approach that takes a large number of brains, looks at how they differ & tests whether these differences can be related to specific cognitive or mental health outcomes - like reaction times in a particular task, or a particular diagnosis or a score on a questionnaire.

it does *not* relate to 'all brain imaging' studies. not!

in the new study, 48,809 brains were taken from three large existing datasets & re-analysed billions of times in different ways. the goal was to see how brain structure & function are related to 41 different demographic, cognitive & mental measurements.

i've not read the details, but the gist is: even the very strongest relationships between brain & behaviour give only very weak statistical results, requiring hundreds or thousands of brains to make reliable conclusions.
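to give a feel for those numbers, here is a back-of-the-envelope power calculation - a standard Fisher z-transformation approximation, with made-up effect sizes, not numbers taken from the paper itself - for how many brains you need to reliably detect a brain-behaviour correlation:

```python
# rough sample size needed to detect a correlation r with 80% power at
# the 5% (two-tailed) level, via the Fisher z-transformation.
# the r values below are illustrative, not from Marek et al.
import math

def n_required(r, z_alpha=1.96, z_beta=0.8416):
    """approximate sample size for detecting correlation r."""
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2) + 3

for r in (0.5, 0.3, 0.1):
    print(f"r = {r}: about {n_required(r)} brains")
```

a strong correlation of 0.5 needs only a few dozen brains; a weak one of 0.1 - closer to what these brain-wide association studies actually find - needs the best part of a thousand. that is the gist of the paper in three lines of arithmetic.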

so is that the end of brain imaging? no. this is a massive study looking - post-hoc - at the brains & behaviours of three other massive brain & behaviour studies. these studies all measured many different things with, presumably, the rather open-ended goal of finding something.

it would be unfair to describe these fantastic research projects as monumental fishing trips.

but as the error bar discovered last episode, just because a study is 'big' & comes to 'big' conclusions - such as that mental processing speed is high & constant until age 60 [newsflash: it really isn't] - this does not mean the underlying data are good.

indeed: small, dedicated studies measuring a few aspects of brain & behaviour using high-quality & low-variability measures may well result in better data. better data give larger effect sizes & require fewer brains.

so if this study heralds the end of 'Big MRI Data' research, that may not be a bad thing.


that massive, unfocussed studies find few strong relationships between brain & behaviour should not be a call to increase the number of participants or studies, but rather to massively improve the quality of our hypotheses & experiments.


the science was by Marek et al. 2022: Nature; reported in The New Scientist by @ClareWilsonMed on 16/Mar/22

[🎶 OUTRO: "Cosmopolitan - Margarita - Bellini" by Dee Yan-Kee 🎶]

it's closing time at the error bar, but do drop in next time for more brain news, fact-checking & neuro-opinions. take care.

the error bar was devised & produced by Dr Nick Holmes from the University of Nottingham. the music by Dee Yan-Kee is available from the free music archive. find us at the error bar dot com, on twitter at bar error, or email talk at the error bar dot com