the error bar is a podcast which fact‐checks brain science news

subscribe: Spotify Podcasts   Apple Podcasts

latest episode 👇

OPEN SCIENCE & ITS ENEMIES, PART III   ⏱51:32

in this episode i conclude my critique of - some parts of - the open science movement by focusing on the positive reforms that might actually work

episode #41, 1st October 2024 #openscience #reform #critique #commentary   image source

  ⇦ listen here   ⏱51:32


  ⇨ transcript here


our main brain stories this episode...

Part III: The progressives

announcements

the error bar does not have an announcements section, for obvious reasons, so here it is.

first, there will be some live shows; the error bar is going to Wembley Stadium for nine straight days in a row - yeah, that's nine dates, Tay Tay, nine -, then Madison Square Garden on the 3rd, Sydney Opera House later that afternoon, then finishing at the Bird's Nest National Stadium in Beijing the following morning; tickets are on sale... er NOW - yeah, just send bitcoin to the usual address. &... yeah, mm-hmm, good, good, my producer has just confirmed the tour is now sold out. thank you, thank you very much, error bar patreons.

second, the error bar now has digital object identifiers, or DOIs. following the lead of Heathers & Quintana on the Everything Hertz podcast, the audio recordings have been placed - i think - onto the open science framework, or OSF, platform, & each episode now has its own DOI, which appears on the error bar website. thanks to Everything Hertz for this innovation.

third, in a navel-gazing, onanistic act of narcissism, i googled "open science and its enemies" & found two published academic articles, one from a Harvard Law blog, the other in the Journal of Open Innovation: Technology, Market, and Complexity. i don't believe i've ever read these articles, but it is possible that i saw these titles appear somewhere. so, i acknowledge their existence & congratulate them on their brilliance as far as title-writing goes.

responses

in only the second occurrence of its kind, the error bar has had several virtual interactions with listeners to the podcast since the last episode. er, i won't give any names, but i do want to acknowledge their inputs to my thinking in advance of this final part of the trilogy.

Listener Number 1 questioned my claim in Part II that 'EEG was not noisy'. electroencephalography - or EEG - is a method of recording electrical signals from the nervous system without breaking the skin. it's non-invasive, safe, effective & has been extremely valuable for more than a hundred years of human neuroscience. Listener Number 1 pointed out that EEG picks up all sorts of noise - muscles, heartbeats, artefacts from the brain, between-participant errors in electrode placement, recording quality, as well as real differences between people's heads & brains. this is all true & it sounds like this person is an EEG expert, so i can't disagree with any of that. my point about noisy data was really that all methods are noisy - & not that EEG is not noisy - so whether it's behaviour, brain activation, or heart rate, all of our experimental skills, methods & conventions have been tuned over years & decades so that we can use these methods to extract signals that are large enough to overcome the noise. & that noise is just inherent in the system under study. & statistics is all about this signal-to-noise ratio - the t-statistic & other estimates are mean differences or effects, divided by the standard error of those estimates; a Z-score is a difference divided by its standard deviation. & these are signal-to-noise ratios (or effect-to-variability ratios). so the fact that an EEG experiment may require three hours of data collection per participant & 20 participants per sample is because the noise levels in that method are higher than in studies requiring only half an hour's monitoring of brain or behaviour. the level of noise in all our work determines the methods that we use to detect our signals. the alpha level for concluding statistically significant effects - of 0.05 - is a target & a guideline for how much data we need, how large our signals need to be relative to our noise, before we can make any experimental claims. so yes, Listener Number 1, EEG is indeed noisier than other methods & this is why EEG experiments usually take a bit more effort & time to run than other, perhaps simpler & less noisy experiments, so that in the end we have the same signal-to-noise ratio for our studies.
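as a rough illustration of that signal-to-noise logic - made-up numbers, not data from any real EEG study - here is a minimal python sketch (using numpy): the same true effect, measured with more noise, needs a bigger sample to get back to the same average t-statistic.

```python
# minimal sketch: same true effect, different noise levels. the noisier
# measure needs a bigger sample to reach the same signal-to-noise ratio
# (here, the average one-sample t-statistic). numbers are illustrative.
import numpy as np

rng = np.random.default_rng(41)

def mean_t(true_effect, noise_sd, n_participants, n_sims=2000):
    """average t = mean difference / standard error, over simulated experiments."""
    ts = []
    for _ in range(n_sims):
        scores = rng.normal(true_effect, noise_sd, n_participants)
        t = scores.mean() / (scores.std(ddof=1) / np.sqrt(n_participants))
        ts.append(t)
    return np.mean(ts)

# a 'clean' behavioural measure vs a 'noisy' EEG-like measure
print(mean_t(true_effect=1.0, noise_sd=1.0, n_participants=10))  # roughly 3
print(mean_t(true_effect=1.0, noise_sd=3.0, n_participants=10))  # roughly 1
print(mean_t(true_effect=1.0, noise_sd=3.0, n_participants=90))  # back to roughly 3
```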

Listener Number 2 was, i fear, far more qualified than me to opine on the sociology & philosophy of open science, & sent some very thought-provoking thoughts to talk at the error bar dot com. Listener Number 2 described their approach to understanding the information structures & cultural practices of scientists in the open science movement & elsewhere. i really can't do this work justice here, but it struck me that we both came to the same conclusions about the problems in open science, just from very different directions. i came 'bottom-up', finding that the individual, specific claims & arguments of some parts of the open science movement were just empirically wrong, & tended to come with a fair pinch of arrogance & abuse. Listener Number 2 attacked the problems of open science 'top-down', from a much wider theoretical perspective - critiquing our basic concepts of information & data, arguing that the problems of science are not limited simply to making data open & available; the problems are inherently cultural, data is embedded in cultural exchanges between small communities of scientists; data, it seems, can never really be an 'open' or 'free' good to exchange in the market of scientific ideas. rather, it is always bound up in specific, particular social contexts. what i understand of this point, i can thoroughly agree with. & if i have misrepresented this view, Listener Number 2, email talk at the error bar dot com.

thank you, to Listeners Number 1 & 2. the mailbag is empty, the hotlines are open.

open science & its enemies. part III: the progressives

in the third & final part of my mini series about problems in the open science community, i want to turn from focussing on what i have identified as the bad actors in this field, to the good actors. to those who can poke their head even just slightly above the fog of social media war to see even just slightly further, & more clearly, towards a better future for the open sciences. i am calling them the Progressives, mostly because the word, like p-circlers & populists, begins with a p.

in case it is not clear from the last two episodes, or from my social media outbursts, i'm a strong proponent of fully open, transparent science. mostly because scientists' contracts, computers, or corpses will soon die, & their work will die with them. so i advocate helping future scientists by sharing your raw data before it's too late, & your lifetime's work is recycled along with hard drives full of porn & bitcoin keys.

since releasing the epic Part II in this series, i have returned to listen again to what the wider media have said about some of the men i critiqued in the previous parts. this was in part to reassure me that these proponents of open science reform really had broken through into the mainstream, & therefore that they were acceptable targets for my strong attacks. i found three podcasts about Brown, Ritchie, Heathers & others. one was titled 'The Truth Police' which is on the wrong side of Orwellian for my comfort; another was about how science has lost its way. these podcasts are worth listening to, so i encourage you to do that.

at the end of hearing again about these "unsung heroes" fighting for better, more open science, i was left wondering why the journalists - or even the scientists interviewed - did not do more to talk about all the good, reliable, responsible scientists working very hard 'on the inside' to improve science. from the journalists' point of view, i can see the narrative attraction of interviewing these self-declared data-thugs who see themselves - ironically - as the maverick outsiders working hard through the night for long hours & for free, to police the mainstream which has lost its way. this kind of hero probably needs a uniform, of some sort. in other BBC podcasts on misinformation & modern life, these sorts of behaviours are portrayed as those of cranks & crack-pots, conspiracy theorists a long way outside of, & detached from, conventional thought; 'doing their own research,' perhaps in their mom's basement.

for some of the truth police - these proponents of scientific reform - carrots have not been enough, & sticks are now required to marshal scientists' behaviour. in one recent, mildly-dystopian view, scientists should be routinely scanned & tested before publication, perhaps even chosen at random for scientific or transparency audits. if they are found to have any disallowed items or content in their portfolio, they will be refused access.

a few days ago i came across a quotation from Hannah Arendt, the German-American political philosopher. i wasn't looking for it & i don't read political philosophy, i just came across it in my real-life time-line, what older people might call a newspaper. Arendt wrote in the New Yorker on 12 September 1970 that: "The most radical revolutionary will become a conservative on the day after the revolution".

this struck me as providing exactly the pivot required to get me from my negative assessment of some parts of the open science community to the positive, progressive perspective of a brighter future beyond the revolution. the small community of social psychologists who were radicalised in the first half of the last decade, became the radical revolutionaries of the open science movement. many of their reforms have been positive, bringing much-needed ideas & change to science. but there is a small, noisy, authoritarian, conservative right wing of this post-revolutionary scientific community who have discarded the carrots of persuasion & have reached instead for a stick, not to point their students helpfully to some detail on the blackboard, but instead to punish those who don't listen to their demands. this authoritarian right wing of the open science community has been the focus of this podcast mini-series, & it has taken 90 minutes of my time to dismiss them.

let us turn, instead, to what i am calling the progressive, libertarian left of the open science movement. can a kinder, more humanitarian, inclusive approach to open science bring the change we need? can we work with rather than against each other? in the mainstream rather than on the margins? 'insiders' rather than 'outsiders'? that's what i'll try to find out in this episode.

but before i turn outwards to find the best-practices of the genuinely-open parts of the open science movement, i want to give a brief personal history of my exposure to and interactions with open science. how did i get to my current view of this movement?

history

i attribute the first phase of my academic awokening to my favourite undergraduate module, "consciousness & philosophy of mind," which was taught by Martin J Farrell at the University of Manchester in 2000. through that module i realised that philosophers were worth listening to, & perhaps i had chosen the wrong degree.

but i persisted with psychology & neuroscience into my phd, throughout which i was obsessed with a paper i first read for my undergraduate dissertation. the paper - by Iriki & colleagues in 1996 - claimed that a short period of using a tool changed the way a monkey's brain combined information from vision & touch. i've read the paper about 10 times; it's been cited 1661 times - five or six of those from me. i still think it's one of the worst papers ever written. the paper contains a good idea - which explains its popularity - but the theory, experiment, data, anatomy & physiology are all poor. there are no statistics in the paper, because there's also no measurement, despite the ambiguous effects reported.

because i'd spent so much time immersed in that one paper, critiquing it from every angle, i later used other scientists' opinions about the paper to assess their quality as a scientist. 'oh', i thought, 'Professor X cited that paper without criticising it. Professor X must be an idiot. or lazy. or both.' i remember using this paper as a metric to assess how good other scientists were; i kept a sort of mental list of those who agreed with me; & i challenged people at conferences when i met them - 'have you read the paper? i mean: have you actually read it?' i think i found three people who had read it, including myself.

it has taken a large number of years to calm down after i realised that the most-cited paper in my research field was terrible, that almost no-one had read it, that the author was getting awards for his terrible, uncontrolled, confounded science, & that no-one was listening to my arguments. i felt betrayed by science - i had loved that paper during my undergraduate degree; & i'd used it in the discussion of my bsc extended essay.

when i did calm down, in about 2015, i realised that, actually, i had published most of my work on the topic, that lots of scientists were listening to me, that they were citing my articles. my outrage at discovering the bad science at the heart of my discipline seemed instant, at least in retrospect. by contrast, genuine reform, changing minds & changing scientific practice, might take decades more than an instant. & waiting for that change is hard.

at some point between the end of my phd & becoming a lecturer, i started posting the Excel files & other data online - the ones that i used to produce the figures in the papers i published. they're all still on my personal webpages [eg here, & here]. i should really put them on OSF now, but in any debate about open science, i can smugly say that my own data has been freely available since 2004.

in those intervening 20 years, i can recall a single case where a scientist wrote to me to request my data. rather, the request was for clarification about one of the methods in my papers, but i remember sending the raw data along with the email: 'there, take it all, you can check it all if you don't believe me'. i don't think the guy really wanted the data. 20 years. 1 request. is that a big number?

having studied statistics at high school, & again during three university degrees, & having run millions of statistical simulations for several papers during my postdoc, i only really understood statistics when i had to teach it, in my first lectureship position. the t-test really is a wonderful thing, & understanding it was the key to everything else in statistics. at least for me.

i had to leave that job for - reasons - & in the interregnum between learning statistics properly & rekindling my love for the philosophy of science, i immersed myself, like so many colleagues, in twitter & the social mediums. i liked old twitter - you could be as nerdy as you needed & there would usually be someone out there who had just your level of nerdiness to get you, validate you, like you, follow you. & small, supportive, international communities of like-minded scientists blossomed. good times. it feels like BlueSky is right back where twitter was when it was working well. i don't really want to get sucked down those rabbit holes again, but for now, all is well. [incidentally, if you have ever been forced to read some analytic philosophy, here's every joke ever.]

in 2019, much like Dominic Cummings, i knew that a global pandemic was coming, so in Autumn of that year i took a sabbatical to organise & analyse a massive brain imaging dataset that, i can now reveal, remains unpublished. when COVID came, like many other middle-aged men, i started a podcast. before committing my other-critical words to magnetic tape, however, i aired all my dirty science laundry in public. that turned out to be a great move. at work later that year, my punishment for wasting my sabbatical on as-yet-unpublished MRI data was to teach the history & philosophy of psychology to Nottingham's second years. this would be like my own scientific renaissance. better than that, it would be like being born again, but scientifically. i could re-create my favourite undergraduate module from 20 years before.

i think i did a pretty good job covering about half of the history & philosophy of psychology. you can watch my recorded lectures on the Enlightenment, Evolution, the Mind-Body Problem. to update the module, i added a lecture on the Replication Crisis. link in the description. like & subscribe.

it was this last lecture that awokened me to the problems in open science that i have discussed in this mini-series. before writing this last lecture, i had experienced lots of the social media pile-ons of the late 2010s, of post-docs & phd students being mauled by the dogs of twitter war; but since i didn't engage too much, it never struck me that these pile-ons were a symptom of a new way of doing science - a bad way.

to prepare this lecture on the Replication Crisis, i bought & read Stuart Ritchie's book 'Science Fictions', & Chris Chambers' book 'The Seven Deadly Sins of Psychology'. you can buy both of these books for less than £15 in total at my favourite online book store, abebooks.co.uk. the two books pick up the same story, starting in 2011, of the Replication Crisis. Ritchie frames his book around Fraud, Bias, Negligence, & Hype; Chambers around Sin. both of these negative framings are fair enough, given that their aim is to expose the bad aspects of modern science. i very much enjoyed reading both of these books. i feel that Ritchie's was better researched, better written, & covered more ground than Chambers' book. but both had much to offer, & i based one of my lectures on these books. for a month or two, i was very happy to have found Ritchie & Chambers.

towards the end of the lecture series, time was running out, Lockdown Number Three was looming, students & i were getting tired. but i found three articles which began to change my mind on the replication crisis. the first was a review of Chris Chambers' book online, by a writer calling themselves 'Behavioural Scientist'. the reviewer found the book timely & well-written, but mentioned that it had already been written, with more detail & less glitz. in 1976. i've mentioned this book several times on the error bar - it's by Theodore X Barber - & i shall return to cover this book in detail in a later episode. i bought Barber's book while writing the lecture on the Replication Crisis, & i read it shortly after the lecture, probably over Christmas 2020, which i spent, like millions of others, locked down at home.

the second article which corrected my course on the open science movement was published on the 28th September 2020, by Kirstie Whitaker & Olivia Guest. it's deviously titled #bropenscience is broken science. in that article, Whitaker & Guest use the jocular, rhetorical device of #BrOpenScience as - i quote - "a tongue-in-cheek expression [which] also has a serious side, shedding light on the narrow demographics & off-putting behavioural patterns seen in open science." - unquote - in this inspired article, these authors encapsulated everything that i had found a bit icky about the social media pile-ons & the worst excesses of science twitter war; & they pointed us all to better ways of reforming science. i encourage you to look back on this 4th anniversary of the BrOpenScience article & ask: what has really changed? here are a few sections in long quotation from this must-read article:

"Within the open science movement a bro will often be condescending, forthright, aggressive, overpowering, & lacking kindness & self-awareness (Reagle, 2013). Although they solicit debate on important issues, they tend to resist descriptions of the complexities, nuances, & multiple perspectives on their argument.

"You've interacted with a bro if you've ever had the feeling that what they're saying makes sense superficially, but would be hard to implement in your own research practices. In general, bros find it hard to understand - or accept - that others will have a different lived experience.

"At its worst, #bropenscience is the same closed system as before. There may be a little more sharing within a select in-group who have the skills & resources to engage with new initiatives but it doesn't reach out & open science up to those who historically have had little or no access to it (cf. Finley, 2017).

"It creates new breaks within science such as excluding people from participating in open science generally due to the behaviour of a vocal, powerful & privileged minority. It's a type of exclusionary, monolithic, inflexible rhetoric that ignores or even builds on structural power imbalances. It offers brittle & even hostile solutions & chastises those who do not follow them to the letter. As we shall discuss, open science & scholarship are more than that.

"As early career open scientists, neither of us fit neatly into many of the 'broposed' solutions - most researchers don't, & science is not a monolith. We have both dealt with published findings that cannot be reproduced. We are driven by frustration at the inefficiency of current research practices. Our work & philosophies are different & that's a feature, not a bug. A diverse & inclusive definition of open science is necessary to truly reform academic practice."

that's the end of the quotation.

the third eye-opening, course-correcting article that i read at that time was published by Berna Devezer & colleagues in 2021. (in case Dominic Cummings is still listening - i must have added this reference into the next year's lecture, rather than pre-cogitating it - or "Bemming it" - in Autumn 2020.) Devezer & co's paper is titled "The case for formal methodology in scientific reform". it makes the strong case that many open science practices can be criticised on exactly the same grounds as the target articles they attack. this reflexivity has been brought to our attention again in the last few weeks, with much discussion over whether pre-registration & other aspirationally rigour-enhancing methods really do lead to high(er) replicability. a lot has been written about this shark-jumping episode in the history of open science, but for the unalloyed reader, a good place to start might be with an article in Science Magazine. much more important than Devezer et al.'s message that some attempts at reform will go astray is their now blindingly-obvious point that if science is to be rigorous, then scientific reform must be rigorous.

it was after reading Devezer's work that i realised that replicability is a continuous, random variable, just like all the other variables we measure in science. whether it's a mean, a standard deviation, a t-test statistic, a p-value, statistical power, effect size or replication rate, all of these things vary, likely continuously, likely with randomness, likely with error, likely with some mathematical distribution & very likely only partly under our control. to conceive of replication as a binary outcome is to make the same mistake that scientists often make, to conceive of p-values as binary outcomes providing discrete decisions about whether we found or did not find something.
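to make that concrete, here is a small simulation sketch - illustrative numbers only, not any published replication project - showing that the 'replication rate' you observe is itself a random variable: run the same batch of replications again & you get a different number. (python, using numpy & scipy.)

```python
# sketch: 'replicability' as a continuous, noisy quantity rather than a
# binary verdict. for a fixed true effect & sample size, the replication
# rate observed in any batch of replications is itself a random variable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def observed_replication_rate(true_effect, n, n_replications=50):
    """fraction of replications reaching p < .05 (one-sample t-test, right direction)."""
    successes = 0
    for _ in range(n_replications):
        sample = rng.normal(true_effect, 1.0, n)
        t, p = stats.ttest_1samp(sample, 0.0)
        successes += (p < 0.05) and (t > 0)
    return successes / n_replications

# repeat the whole batch of 50 replications five times:
# the 'rate' itself bounces around - it is not a fixed property of the effect
for _ in range(5):
    print(observed_replication_rate(true_effect=0.4, n=30))
```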

to begin to use replication as an outcome variable in science or meta-science, we need a formal methodology. we need a research framework or paradigm in which theories can be specified mathematically, implemented in code, translated into hypotheses & tested with data. & this is what the fourth of my three mind-altering articles makes explicit, in Olivia Guest & Andrea Martin's (2021) "How Computational Modeling Can Force Theory Building in Psychological Science". this framework reminded me of David Marr's three levels of explanation that are required in computational theories of the brain: what computation must be performed, what algorithm performs it, & how is this actually implemented in neural tissue?
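as a toy illustration of that pipeline - an invented example, not one of Guest & Martin's own models - a theory can be written as an equation, implemented as a function, & its quantitative prediction fitted against data:

```python
# toy version of the theory -> code -> hypothesis -> data pipeline.
# theory: performance improves with practice following a power law,
#   accuracy(trial) = 1 - a * trial**(-b)
# the code implements the theory; the fitted curve is the quantitative
# hypothesis we test against (here, simulated) observations. illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def power_law_accuracy(trial, a, b):
    """the theory, written down as a computable function."""
    return 1.0 - a * trial ** (-b)

trials = np.arange(1, 51, dtype=float)
rng = np.random.default_rng(3)
observed = power_law_accuracy(trials, a=0.5, b=0.4) + rng.normal(0, 0.03, trials.size)

params, _ = curve_fit(power_law_accuracy, trials, observed, p0=[0.5, 0.5])
print(params)  # the theory's parameters, estimated from the data
```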

in my final two lectures on the history of psychology, four years ago, i had argued that we should implement a range of solutions to the modern problems of science: critique, systematic review, meta-analysis, occasional retraction, statistical reform, pre-registration, replication, modelling, theory-building, open science & teaching. in general, i feel pretty much the same way now, but i would make one further point: all these solutions & others may be needed; they may be needed at different times, for different fields, & to different extents; none should be forced on any one field or scientist; none will provide the only-possible solution; there should be no organised gatekeepers, auditors, or validators.

four years on, after a rainy summer of biting the hand that feeds me, criticising & discussing problems in the open science movement, i am ready to turn a new corner. i hope that in the first half of this episode i have credited most of the scientists who brought my views to this point. there is one more scientist to credit. it was on twitter, i believe, where i first saw the term 'p-circling' used to describe the behaviour of some open science reformists. i believe that it was Joe Bak-Coleman who coined this term. apologies to the originator if it was not, but i don't want to enter the twixxer dead zone again to find out for sure. i also need to thank Joe for describing Part II of this podcast miniseries as a 'paper'. thank you.

used in the context of science reform & data policing, 'p-circling' gave me the sparky realisation that what some meta-scientists were doing was selective analysis & selective reporting. so many problems in probability, statistics, data analysis - & yes, in p-hacking - are due to the post-hoc selection of data. once you've selected data out of the original, carefully- & randomly-sampled dataset, everything changes. regression to the mean, voodoo correlations, circular analysis, & many other varieties of double-dipping will result. your data are no longer representative; you can no longer make the conclusions you wanted to make with the original dataset. so when Joe Bak-Coleman used the term 'p-circling', i realised then that the meta-scientific practice of searching for & selecting p-values out of the population of all available p-values was yet another variety of double dipping. his pithy phrase 'p-circling' placed that realisation on the other side of a mirror to that of p-hacking. it's exactly right. & p-circling is exactly the wrong way to measure or to reform science.
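here is a toy simulation of that selection problem - my own illustration, not anyone's published analysis - in which every study measures the same small true effect, but summarising only the 'significant' ones inflates the apparent effects; the mirror image of p-hacking:

```python
# toy demonstration of double-dipping by selection: every simulated study
# measures the same small true effect, but if you keep only the ones that
# reached p < .05 & then summarise those, the selected effects look inflated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect, n = 0.2, 20
all_effects, selected_effects = [], []

for _ in range(5000):
    sample = rng.normal(true_effect, 1.0, n)
    t, p = stats.ttest_1samp(sample, 0.0)
    d = sample.mean() / sample.std(ddof=1)  # observed standardised effect size
    all_effects.append(d)
    if p < 0.05 and t > 0:                  # the post-hoc selection step
        selected_effects.append(d)

print(np.mean(all_effects))       # close to the true effect of 0.2
print(np.mean(selected_effects))  # much larger: selection bias at work
```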

but what can i do?

if you listen to the rest is politics podcast, you'll know that Alastair Campbell has published his eighteenth book, aimed at getting people involved in politics. it's called "But what can i do?". i haven't read it, but i've heard a few clips on his podcast. & i am going to use what i think this book is about to tell you what i think i, you, & we can do to make science better. i mean: better than being a data thug, a truth policeman, or a science curator. & like everything else on this podcast, it is my opinion only. i make this podcast so that i no longer feel any need to write papers, BlueSky threads, chapters, or books on meta-science or science reform. i'm far too selfishly-self-interested in my own science to devote much more time to reform.

so, here is a list of seven things i or we can do better. let's call them the SEVEN WHOLESOME VIRTUES OF PSYCHOLOGY, if you like. i know for sure that i am going to be out-of-touch & ill-informed about what's going on, so this list may well disappoint. provide your feedback & corrections & improve my understanding by emailing talk at the error bar dot com.

the seven wholesome virtues of psychology

one. be humble.

if you're hosting a podcast, writing a blog, appearing in the media, or writing something down, practise saying "i don't know" & "i haven't read it" & "i am not an expert". these few, small words are some of the most important that English has to offer. these words should be among the first that we produce when answering difficult scientific questions, especially when the media are asking them. perhaps the media won't invite you back if you tell this truth; perhaps your telephone will stop ringing. well, enjoy the peace of your ignorance, & step back from just creating more bullshit that someone else has to spend taxpayers' money calling out.

two. be academic.

humble scientists should forgo seeking fame through our social media handles & remember that we are academics. we are allowed to be unashamedly intellectual in our pursuits. we do not have to have 50,000 devoted followers parroting our every outburst & buying our merch, or to provoke outrage online, or to generate favourable coverage in the mass media. we are allowed to be voices of calm & considered reason, to weigh up arguments, to take our time, to check all the facts & to limit our generalisations. when we're wrong we need to admit it. we need to take all valid criticism on board, yet refute invalid criticism with vigour. embrace being academic. don't seek to create a large following, appeal to thousands, or lead acolytes into the desert. focus on quality to the exclusion of popularity as required.

i think this is possible. despite occasional & relatively slight institutional pressure to do otherwise, being intellectual about otherwise trifling, arcane & unpopular matters is literally our job description. the third definition of 'academic' in the cambridge dictionary is: "based on ideas & theories & not related to practical effects in real life". other dictionaries are available.

three. offer critique.

while i was listening again to an interview with Dr Stuart Ritchie, he made a good point that almost made me leap up from my bench above platform number 9 at Reading station & shout "Eureka!". Ritchie bemoaned the lack of critical discussion after academic seminars or talks at weekly meetings or conferences. yes! he complained that most questioners congratulate the speaker on an excellent talk, & then there's only time for one or two questions, so we usually only hear backslapping, superficial questions, & answers that leave no time for us to be academic, to engage in the critical discussion that we're paid for. yes! conference talks & seminars too often feel like the commercial break. & the real programme is the hidden agenda & it takes place in the poster hall, the bar & in shadowy corner gatherings of the academic elites.

last month i attended a meeting in which two of the principal speakers presented preliminary data from uncontrolled pilot experiments. they both had 30 minute talking slots in front of the whole audience. neither had any data from control conditions to share. i wanted to speak up at the end of each one & ask the same question: "do you think you would get the same result if you included a control group?" but of course i didn't do that. i feel like i would have been the only one asking that critical question. so i left the meeting thinking very little of these two scientists, vowing never to read their work, or to attend that meeting again. i doubt that was the outcome the organisers were looking for.

there must be a way to allow speakers & audience to actually interact on an academic level, in plain view for all to see. to challenge weak evidence as it's presented. how can we make critique more open, clearer, & less confrontational? how can we normalise critique?

my favourite-ever seminars - in retrospect, at least - have been in philosophy departments, & mostly in France, sometimes with wine. assuming i'm not misremembering, there is a wholesome tradition of philosophy seminars having two speakers - one to present a thesis, the second an antithesis, & perhaps there will also be a synthesis. to bring critical discussion back as a normal, everyday part of science, we need to normalise & soften critique. perhaps a new format for seminars could help - a 30 minute talk, followed by a 15 minute reply & then a 15 minute discussion. i am going to pilot this approach at a workshop i'm running in April 2025. if you have any tips about how to make this work, please email talk at the error bar dot com.

another format for offering light-touch, low-impact critique is on pubpeer.com. you can post your comments a-nonymously or nonymously; you can post anything from correcting a typo through to a fundamental critique or even an accusation of fraud; you don't have to write a formal paper, get past a gatekeeping editor, or require anyone's permission. it's the academic equivalent of a fire-and-forget missile & i like it. i just hope that it doesn't go the same way as Publons, swallowed-up by the Impact Factor daemon, Clarivate Analytics.

four. embrace innovation.

this one's going to sound bland & corporate, so i'll move fast. in a talk given a few weeks ago, Brian Nosek of the Center for Open Science offered his diagnosis of the cause of many of the modern problems in science. it is the scientific paper; & i strongly agree. for most of the history of science, journal submissions, correspondence, reviews & publications have been handwritten or printed, then posted over land or sea. this was still happening within our careers. after my first few publications - & as late as 2009 - i received several postcard requests for a mailed copy of my paper from libraries around the world. along with my first few publications, i received 10 or 20 'off-prints' for free, once the article was published. i still have most of these offprints, somewhere on a shelf. a few i posted off to far-flung libraries around the world. it was a romantic time to be a scientist.

very quickly, things changed. papers are still called papers but paper is rarely used. computers were too small to hold our papers, & there was no internet; now everyone's computer can hold all the papers ever published, & send them all to someone else in a short time. but science itself - & particularly scientific careers, incentives & rewards - has changed much more slowly. will we get employed or promoted for curating a database, building a website, writing analysis code or software? perhaps some people will, or already are, but my experience so far is that only papers & money talk. i feel this is changing, & fast, so what innovations can we embrace?

my first thought here was that the open science framework has to be one of the most important, useful & forward-thinking innovations out there. you don't have to drink the Open Science Kool-Aid to admire the vision, dedication & effort that has gone into setting this up, funding it & keeping it alive after all these years. so hats off to you, OSF.

likewise, i'm now moving most of my code & documentation - slowly - onto GitHub. i'm sure there are many other similar services; GitHub allows you to develop & deploy software code across multiple sites; you can have a website, a wiki & documentation too. & it's all for free.

about a year ago, Google Scholar provided a nudge to make all your publicly-funded articles available, according to the mandates of the funding bodies that supported the work. i didn't see any announcement, articles, incentives, rewards or punishments. there just appeared a small bar under my citation histogram, highlighting which articles were not available. & i quickly turned the red bars to green by uploading a preprint. this was a welcome, unobtrusive & simple nudge & it worked immediately on me.

i once read about a journal that allowed users of their website to re-create graphs from the data & code included as a part of their publication. i can't find it now, & don't remember much more about it - so if you know of a journal doing live interactive data publication, email talk at the error bar dot com.

closely connected to this is the concept of living systematic reviews & meta-analyses. most systematic reviews & meta-analyses take a snapshot of the available literature, say, in summer 2024. they spend 6 months curating & analysing, 6 months writing, another 6 months under review. & by the time it's published, the review is 2 years out of date & needs to be done again. instead, once established, a living systematic review & meta-analysis will - in theory - not depend on a particular paper, journal, or author; it will live its own life. & some exist already - COVID was a strong driving factor for their development. there must be problems in implementation & maintenance, but i feel that this is part of the future, & i have started developing one myself.
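as a sketch of the core idea - hypothetical numbers & a deliberately simple fixed-effect model, not any particular living review's pipeline - updating a meta-analytic estimate when a new study arrives is just an incremental inverse-variance calculation:

```python
# minimal fixed-effect, inverse-variance meta-analysis that can be 'fed'
# new studies as they appear - the core calculation behind a living
# meta-analysis (real pipelines use random-effects models & much more).

def combine(studies):
    """studies: list of (effect, variance) pairs. returns pooled effect & its variance."""
    weights = [1.0 / variance for _, variance in studies]
    pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
    return pooled, 1.0 / sum(weights)

studies = [(0.30, 0.04), (0.15, 0.02)]  # the review as first published
print(combine(studies))

studies.append((0.10, 0.01))            # a new study appears
print(combine(studies))                 # the living estimate updates immediately
```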

five. reform from inside.

one obvious problem with the self-styled 'data-sleuths' that i've spent so much time criticising in this podcast is that they seem to pride themselves on being 'outsiders'; many of them have left academia & we are often reminded of that fact; the journalists who interview them seem to love the narrative of the lone maverick outsider blowing the whistle on an institution that has rotted from the inside out & lost its way. that these otherwise great scientists had to leave science in order to criticise it fits with that narrative. but this narrative overlooks all the great scientists that remain in science & work tirelessly to improve it. the fifth & sixth virtues in my list pay respect to these 'insiders,' the unsung maverick heroes working against the odds, just within science.

academic journals are often blamed for anti-scientific practices, price-gouging, market monopolization, & profiteering; these are all fair criticisms, perhaps. but those journals are mostly edited, associate-edited, reviewed & filled with content by scientists like us. if we stopped working for or submitting our work to them, they would quickly fail. so, within the complex web of conflicted incentives that bind us to them, we must value these journals just as we complain about them. so instead let's join the journals. more open science-friendly editors & reviewers working for these journals will change them, in time.

i've given a small amount of support over a few years to three new, start-up journals run in the full spirit of open science. each one has - so far - failed. like most small businesses, starting a journal in your 'spare' time as an academic is hard work, for which the founders will receive little credit & zero profit. it's not something i would invest my time in. so my current approach is not to compete with the established journals, but rather to join them & reform them from the inside - we'll see how that goes!

one outstanding success at reforming journals from the inside must be credited - to Chris Chambers & the registered reports movement. the vision, time & effort required is truly herculean. i've only done half a registered report, for which i blame COVID. i don't really think it's the right publishing model for the science in my lab, but creating a new publication format & convincing more than 300 journals to use it deserves a proper science medal.

six. join societies

if you are working in science & find that science isn't working in the way you want it to, then you can either join or start a society. there are plenty of organisations running podcasts, workshops, platforms, providing guidance & generally working from the inside to improve science.

in the UK & many other countries you can join your local reproducibility network. you can rise to be chairperson of the board, or keep it local & work in your own institution. [you can find events on the Open Research Calendar.] to be honest, i'm out of my depth, i know little about this network or what it does. perhaps i should join?

if you like your podcast hosts to be supportive, respectful & more representative of the breadth of scientific opinion at the grass roots, then subscribe to the reproducibili-tea podcast. & if you like that, then join your local reproducibili-tea Journal Club - there are more than 100 clubs in 29 countries.

perhaps the most successful, influential & important society in this field is SIPS - the Society for Improving Psychological Science. i may be wrong, but this society is perhaps the most obvious consequence of the replication crisis in psychology, the Phoenix rising from the ashes of Daryl Bem's dumpster fire. you can hear more about this society in little nuggets spattered across 86 episodes of the Black Goat Podcast.

my final stop on the tour of nicer, non-authoritarian academic societies is FORRT - the Framework for Open and Reproducible Research Training. the final point i made, on my penultimate slide of teaching before leaving the University of Nottingham, was how weird it was that neither Chambers nor Ritchie, in their books on the failures of science, gave much if any space to the role of teaching. that was a missed opportunity that FORRT is taking great advantage of.

if there is not yet a society to join in your discipline or area, then you might just have to start it yourself. if any listeners are using brain stimulation in their research - & especially transcranial magnetic stimulation - then please get in touch via our GitHub pages at TMSMultiLab - i want to hear from you.

seven. read philosophy

apart from Immanuel Kant's 'Critique of Pure Reason', & Merleau-Ponty's 'Phenomenology of Perception,' there are few philosophical works that i have tried to read, but failed to learn something from. even from these two impenetrable doorstops i could probably learn something if only i tried a bit harder. i have almost always found that taking time out to read a bit on the history, philosophy or sociology of science - especially if it uses examples from research topics far from your own - can provide a distant, god-like perspective on the everyday research problems in your own field.

after reading my favourite - Kuhn's book on the history & sociology of scientific revolutions - i came to appreciate, for example, that the entire history of psychology research that i like & have worked in for decades seems to be part of a single paradigm. to study the brain & behaviour, we sit people down at a computer, we attach them to a stimulator, or lay them down inside a magnet; we present a series of discrete stimuli to them; they or their brains make a series of discrete responses; we average the responses within participants; we do a general linear model analysis & report the average differences between conditions & across participants. this paradigm covers an awful lot of the brain & behavioural sciences. in reading Kuhn's book, i find it very hard even to think what new paradigm we might forcibly be shifted into one day.
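that paradigm is so familiar that it can be written down in a few lines; here is a schematic sketch with made-up numbers - average each participant's trials per condition, then test the condition difference across participants with the simplest possible general linear model:

```python
# schematic of the standard paradigm: discrete trials, averaged within each
# participant per condition, then a group-level test of the condition
# difference (a paired t-test, the simplest general linear model). fake data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
n_participants, n_trials = 20, 100

a_means, b_means = [], []
for _ in range(n_participants):
    a_means.append(rng.normal(500, 50, n_trials).mean())  # e.g. reaction times, condition A
    b_means.append(rng.normal(520, 50, n_trials).mean())  # condition B

t, p = stats.ttest_rel(a_means, b_means)  # average difference across participants
print(t, p)
```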

my final suggestion is to give your brain some space this autumn, away from social media & read or listen to some history or philosophy of science.

epilogue

i hope this three-part mini-series has been interesting to listen to & even thought-provoking. i have certainly learnt a lot. this summertime indulgence has allowed me to put into words a range of worries, doubts & concerns that i have had over several years, since discovering the open science movement & some of its problems on coin-hub, formerly known as X, formerly known as xitter, formerly known as twixxer, formerly known as twitter, formerly good. the authoritarian right wing of the open science movement is small, getting smaller & must now be ignored. instead of trolling & being trolled, we need to get our heads down & work hard on the real, workable, scalable solutions to scientific problems. the libertarian left wing of open, collaborative, progressive science exists, it's pressing ahead, & it's leaving the social media wars behind; my aim now is just to join in. & if i find that i don't want to join in, but just want instead to do my science unbothered by the froth of opinion on BlueSky, well that will be awesome too.

future episodes of the error bar will focus mostly on statistical, historical, or philosophical topics - to the extent that i am capable - & they will appear sporadically, perhaps every month. this will depend on how much time i have, in the second half of my scientific career, to spend on the podcast. the next six months will be a very busy period for me & the error bar - i've dropped a few hints & links to what i'm doing during this episode. so let's see how episodes 42 & beyond turn out.

it is unlikely that the error bar will spend any more time covering the brain news. the first 38 episodes did a fair bit of that & i'm not sure i saw much in the way of personal growth in doing it. but let's take just one last check of the UK newspapers' brain science headlines over the last 6 months.

here is the brain news on the 1st October, 2024.

there are fourteen articles about dogs & cats, thirty eight on dementia & the ageing brain; elon musk, artificial intelligence, robots & infomercials about wearables take up 94% of the remaining neuroscience news coverage. there's one more definitive article on the real differences between men's & women's brains. in conclusion, dear listeners: there is no more brain news.