THE ERROR BAR WAS WRONG

in episode 27, the error bar incorrectly claimed that the New Scientist had said that "all brain studies are too small". the New Scientist did not say that; it was my interpretation of the study & the news reports. i apologise for exaggerating.

this story was in episode 28 #brain #big #data #MRI #error #nitpicking


the error bar says

approximately once per year, the error bar makes a mistake. this year's mistake was made in episode 27, specifically in the story, reported by the New Scientist, about how "Brain scanning studies are usually too small to find reliable results".

i summarised that news article & the scientific paper it was based upon in my headline "ALL BRAIN STUDIES ARE TOO SMALL". because of the way i have set up the error bar website - the database & the code that automatically creates tweets & posts them on the web - this came out on twitter as "ALL BRAIN STUDIES ARE TOO SMALL says the New Scientist."

but they did not say that. this was a twitter bot malfunction & i apologise.
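to make the malfunction concrete, here is a hypothetical sketch of that kind of bug. the real error bar code isn't shown here, so the function & field names below are invented; only the shape of the mistake matters: a template that appends "says <source>" to every headline, quote or not.

```python
# a hypothetical sketch of the bug, not the real error bar code: the
# function & field names are invented. the mistake is a template that
# appends "says <source>" to every headline, whether or not the
# headline is actually a quotation from that source.
def make_tweet(story):
    return f"{story['headline']} says the {story['source']}."

story = {
    "headline": "ALL BRAIN STUDIES ARE TOO SMALL",  # my summary, not their words
    "source": "New Scientist",
}
print(make_tweet(story))
# -> ALL BRAIN STUDIES ARE TOO SMALL says the New Scientist.
# one fix: attribute the source only when the headline really is a
# quotation, e.g. guarded by a separate is_quote flag in the database row.
```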

what they did say is:

first, that "Brain scanning studies are usually too small to find reliable results". this headline is a bit too vague: it doesn't mention the specific kind of study that was relevant. not all kinds of brain scanning study are affected, just some.

the article's sub-title clarified which studies were relevant, but made a statistical mistake. it read: "Most studies that have used MRI machines to find links between the brain's structure or function and complex mental traits had an average of 23 participants, but thousands are needed to find reliable results". the first part of the subtitle doesn't make sense - an average is calculated across all studies, so it says nothing about what most studies had; a mean of 23 participants is perfectly consistent with most studies being much smaller. the second part of the subtitle is also impossible to verify. unless we are certain about the true relationships that exist - something that almost never happens in science - how can we say we don't have enough data, or that the data we do have are unreliable?
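a toy example makes that first slip concrete. the sample sizes below are invented, not taken from the paper; they just show that a mean of 23 says nothing about what most studies had:

```python
# invented sample sizes for illustration only - not data from the paper
from statistics import mean, median

sample_sizes = [8, 10, 10, 12, 12, 14, 15, 15, 18, 116]

print(f"mean n   = {mean(sample_sizes):.0f}")    # mean n   = 23
print(f"median n = {median(sample_sizes):.0f}")  # median n = 13
# the average is 23, yet 9 of the 10 studies are smaller than that:
# "most studies had an average of 23 participants" confuses a property
# of the whole literature with a property of individual studies.
```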

i am picking holes, of course. but these holes matter, at least to me. my impression, from reading both the original article & the news coverage, was that the central claim was that most brain scanning studies of this kind are too small.

i don't agree. re-analysing three large & unfocused brain scanning studies, searching for post-hoc brain & behaviour correlations, tells us only that the data from those three studies were not sufficient: if we repeated exactly those three studies, we would need thousands of MRI scans to find the post-hoc effects that the Nature paper found. what it doesn't tell us is how many MRI scans we would need if we had a better, or even an ideal, dataset. the true effect sizes are out there to be discovered & we should not be dissuaded by their claim that thousands of scans will be needed.
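to see why "thousands are needed" is a claim about the assumed effect sizes, not about brain scanning in general, here is a rough power calculation using the standard Fisher z approximation for detecting a Pearson correlation. the r values are illustrative assumptions, not numbers from the Nature paper:

```python
# approximate sample size needed to detect a true correlation r at 80%
# power, two-sided alpha = 0.05, via the Fisher z approximation.
# the r values below are illustrative assumptions, not the paper's.
from math import atanh, ceil
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)  # critical value for the test
    z_b = norm.ppf(power)          # quantile for the desired power
    return ceil(((z_a + z_b) / atanh(r)) ** 2) + 3

for r in (0.05, 0.10, 0.30, 0.50):
    print(f"true r = {r:.2f} -> n ≈ {n_for_correlation(r):,}")
# true r = 0.05 -> n ≈ 3,138
# true r = 0.10 -> n ≈ 783
# true r = 0.30 -> n ≈ 85
# true r = 0.50 -> n ≈ 30
# if the true effects are tiny, thousands of scans are needed; if a
# better-designed study targets larger effects, far fewer will do.
```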


conclusion

to err is human