Valentine Delrue, Ghent University, Belgium
Delrue is a student of the history of Italian & French meteorology & editor of rejected grant applications at the journal of trial & error
Valentine Delrue had some 🥤 at the error bar in episode 16 #trial #error #journal #failure #rejection
our discussion
welcome to the error bar. would you like to introduce yourself?
[VDR] er yes, thank you for having us. i am Valentine Delrue. i am a phd student at Ghent University & Ca' Foscari University of Venice. i work on the history of 18th-century French and Italian meteorology. and at the journal of trial and error i am finishing up my work as a rejected grant applications editor. and i am also the history editor.
excellent, thank you. and our second guest, would you like to introduce yourself?
[SD] yeah, er, my name is Sean Devine. i'm a phd student at McGill University in Montreal, and i'm the editor of psychology and interim editor-in-chief of the journal of trial and error.
it's great to have you here. can i get you a drink?
[VDR] yes, thanks. um, i'd like a ginger beer, if you have that?
yes, of course, and Sean?
[SD] i'll go with a chocolate milk, if you have that?
we have everything at the error bar.
when i first came across this new journal called the journal of trial and error, i was really surprised, and then i was impressed, and then intrigued about what this journal is doing and what it's for. it seems to me a really unique project. so can you describe your new journal and what it aims to do?
[SD] the journal of trial and error, or JOTE for short, is a peer-reviewed journal that publishes mixed, null, or unexpected results from a variety of scientific disciplines. our goal is to fill the gap between what's researched and what's published, and we're trying to highlight the important work that falls through the gaps of traditional publishing.
[SD] so, we all know within the scientific community that there are mountains of negative or non-confirmatory science that we rarely see, and that gets swept under the rug when we're talking about traditional publishing methods. and at the journal of trial and error what we try and do is give trial and error in science the platform it deserves by publishing and critiquing studies that demonstrate theoretically important but nonetheless ambiguous or negative results.
it sounds like a fantastic mission for the journal. what specific problems do you think the journal can help to solve, or at least address?
[SD] on the whole today, science is very fragmented. and what i mean by that is the relationship between the science that gets done and the science that gets consumed is not one-to-one. so, we published an introductory editorial in our first issue where we sort of go into that, but, to make a long story short: we specify three important gaps in modern science that we hope to address in some way with the journal and the project.
[SD] the first is that there's a gap between the image of science as practised and science itself. so, publicly we tend to think of science as the set of specialised methodologies that allow us to categorise and systematise the blooming, buzzing confusion of experience. er, and we tend to think of this as sort of solved and finished with. we just apply the method and get the result and we become smarter. but in practice science is really, really messy. and experiments rarely work on the first try, results are rarely falsified by a single study, and knowledge is gained in small incremental steps instead of great leaps. so this process of fine-tuning and messiness is at odds with the public image of science, and so we hope to give a bit of a platform to show what's really going on, er, when we talk about doing science.
[SD] the second is that there's a gap between what's researched and what's published. so there's a mindset in academia of 'impact or perish' - the idea that if you're not publishing these ground-breaking studies every second week, you're not worth much and you won't get tenure and you won't be able to succeed much as a researcher. and this is a huge problem because it pushes people to prioritise impact over whatever they might find in their actual research, which creates this gap between what people are actually researching - the results they're actually obtaining - and what's actually out there in the literature.
[SD] and i guess relatedly, the third gap we identify is between what's popular and what's replicable. so, i'm sure your listeners will know about the replication crisis. i think we diagnosed this partly as a result of the pressures of big discovery science, pushing scientists to engage in these questionable research practices, to publish, you know, big stories that maybe aren't totally supported by the data that are included in those studies, er, and that ultimately don't replicate. and that's a huge problem, as i'm sure everybody knows, because no individual study can really prove anything, er, and we really need replication to be confident in, like, the magnitude of the effects and the effects themselves. er, but this sort of aggregation, this sort of meta-analytic thinking in science only works if the published science is reflective of what is actually being researched, is reflective of all the positive and negative results in the literature.
[SD] our journal has a way of sort of answering all of these by publishing research that is mixed or negative - sort of naked science, or science as it is - but also including meta-research articles, so articles about science that help us think and problematise current research that's already out there, and also reflection articles that take these failed studies - or these studies with unexpected results - and contextualise them in the broader literature and say: 'well, you know, maybe we're only getting one side of the story and here's a separate example'.
[SD] so, i could go on about this for a long time, but i think that, that generally characterises what we're trying to do and what we think the problem is.
and when you were starting the journal, did you find there was nothing else available?
[SD] in the sciences it's not as if there weren't other platforms that have had the idea: 'hey, what if we had this repository, or what if we published negative results?' i think it's a pretty intuitive idea if you're, sort of, clued in to the open science movements or what-not. what we found was missing, or at least what i found was missing and what the journal of trial and error spoke to me on, is somewhere that treats failure, and trial and error, as something important, and not as just an after-thought to be quantified and collected.
[SD] so i sort of alluded to this: in our journal, every time an empirical study is published - meaning a study that collected some kind of data, went through some analysis and came to some conclusion that was unexpected or negative - it's accompanied by a reflection piece by a separate expert or scholar in the field. and the goal of doing this is not to say: 'well, here's just a collection, a sort of art gallery of failures in academia,' but to treat it seriously, to say: 'no, these failures mean something, they have impact,' that if you don't replicate, or if you find an unexpected result based on this wealth of literature that you have reviewed, presumably in an introduction section or something like that, this can mean something. or at the very least it can help other researchers know what to avoid in, say, experimental design or something.
[SD] and to me that was what was missing, like er, not only a recognition that you need both sides of the story to er glean truth in a sort of meta-analytic framework, but that you need both sides of the story, because both sides of the story are interesting and important and deserve to be interpreted and analysed.
Valentine did you have something to add?
[VDR] we also really just wanted to build a platform to discuss error, trial, and failure in science.
[VDR] another type of publication that we have is rejected grant applications. and there have been past projects to publish these online. er, some scholars decide to just take their past applications and put them on their own website, for example. or, and this happens more often, successful grant applications are published after a while. but we originally wanted to publish rejected grant applications also as a way to get submissions from all disciplines, as the difficulty of getting a successful grant is something that all academics share. and we believe that a tremendous amount of researchers' time and effort is being lost in this whole process.
[VDR] we wanted to create a platform where these proposals could then be published, actually as pieces of, er, preliminary research, and as ways to continue a discussion about them - both about the ideas in the grant applications and also as a sort of collection of, um, meta-data, so that they could later be studied, and hopefully we could see - we can see - trends and biases in the process of awarding grants.
so on the, on the website of your journal you list the editorial board. maybe you can tell me a bit more about who are the editorial board and, and how did the team come together?
[VDR] the editorial board is made up of specific editors for each broad discipline. so, for example, Sean, er, is the editor for psychology and i'm the editor for history. and then we have our editor-in-chief Stefan Gaillard. but actually our journal is a lot broader than just the editorial board, so we have our team leader Jobke Visser and Martijn van der Meer, and we also have a great IT department who actually make the submissions into articles on our journal website, and also keep our main website running. then we have lovely team members involved with outreach, events, education, and directors for creative and sustainable development.
[VDR] so, in total we are a team of 20 people now. i think every member of the team has an important role that just transcends their title. you can see this also on our about page, of course. we also have very diverse backgrounds, coming from fields such as history, philosophy, psychology, English, medical sciences, physics, biology, and even more. and then together we have regular meetings where we discuss and reflect on new ideas regarding academic publishing and academic culture more broadly. and yes, i believe we all have our own interesting and important ideas to contribute to those discussions.
Sean do you want to add anything about, about the team, and how do you, yeah how do you all know each other or how did you get together?
[SD] i don't have much to add, i think Valentine covered it well. i guess i would just say that it's great that we're also an international team. so, we're touching almost all the continents, or we will get there, one day. even though it started as a relatively small project in the Netherlands, it's really grown into something much more substantial. i've never been to the Netherlands and i hope to visit one day, but i still feel like i'm part of this project, and i think i can speak for all the team members, hopefully, that they feel the same way.
so, i'm really impressed with the journal and the project. i think it's absolutely what we need. and i'm also impressed at how early in your careers you've managed to do this. when i was a phd student, i remember campaigning against some charges at our accommodation, and maybe other phd students might, i don't know, campaign to change the kind of sandwiches they have at the cafe, but you guys started a journal. it's an amazing achievement. so can you tell us more about how that came to be?
[VDR] well, like any good idea, of course, the journal began over drinks after a lecture. so here, more specifically, it was a colloquium on open science in the fall of 2018, and Stefan, Martijn and Max, who were my fellow classmates from the master's degree in history and philosophy of science at Utrecht University, realised that open science is about more than making publications just openly available. so, it aims to decrease pressure on academics, for example, and critically look at our whole model of scientific publishing. so, the movement of open science also talks about a replication crisis, which we have mentioned, and, er, positive publication bias.
[VDR] so our co-founders thought about practical ways of incorporating these reflections on science into an idea that would actually make science more open, honest, and transparent. so, someone joked: 'how funny would it be if we made a whole journal just about failure?' this idea then got rebranded into, er, trial and error, rather than failure, which does not sound so marketable. so JOTE was born out of reflections on current scientific practices in conversation with scientists, and mainly a wish to apply these insights and take practical actions.
[VDR] i consider the project applied history, philosophy, and sociology of science, one could say, with a combination of idealism and not taking ourselves too seriously at the beginning. we discussed all the different aspects of our dreamed-of journal. we were constantly asking: 'how do we set up a journal in a way that we can bring the change that we want?' from there the ball kept on rolling and more and more people joined the project, as did i in the winter of 2020. and now i would say we are even evolving towards what we now call 'the centre of trial and error'. we also want to do education projects, be, as mentioned, a broader place of discussion through debates, and we also have plans of making our own podcast.
[VDR] so, er, we would also like, and we have the opportunity, to do our own research. i am particularly interested in what JOTE can mean for the humanities, as our current model of empirical and reflection articles might not be the best fit. instead, i want to ask our initial question: 'what is the gap between what is researched and what is published?' and specifically look at the case of the humanities. so, there are, probably, some similarities. replication studies in history are something that is starting to be done, for example by Pim Huijnen and Pieter Huistra [see here and listen here]. but we also expect that there are differences between disciplines, and how error, failure, and success, are being seen.
Sean, do you have anything to add about the origins of JOTE?
[SD] no, that was great. i'd just add that i always like hearing the inception story, no matter how many times i hear it. as our journal becomes more and more like a religion, it's like our Genesis, and so it's always cathartic to hear it over and over again.
excellent. you're leading your people away from closed science and into the open future. and, erm, maybe you can say something about what the, like, start-up costs are? how much money do you need, and do you need real IT professionals, and do you need journalists, or can you just do it in your attic?
[SD] for the most part we've been fortunate enough to receive funding from various funding bodies, erm, mostly at Utrecht University. but everyone who was part of the project, except for one intern that we hired recently, has been doing this out of a genuine interest in the mission of the journal. erm, and so we have a really amazing IT department, we have really amazing people who dedicate a lot of time, but i'm doing it for free, er, most of us are doing it for free. um, a lot of the start-up costs are, i guess you would say, bureaucratic or practical, but they don't actually, to my knowledge, go to, erm, many of the people involved - it's more just keeping everything afloat.
do you think it's fair to say that in an ideal world we wouldn't need JOTE because everyone would be reflecting on their errors and failures within the main articles?
[SD] you're absolutely right. in fact, in our first issue editorial we say that JOTE's ultimate goal is to become obsolete. so we're pretty adamant that JOTE can only exist in a scientific climate in which trial and error is systematically ignored. to put it shortly, it makes just as much sense to talk about a journal dedicated to trial and error as it does to have a journal about trial and success. that is, it makes little sense. yet, despite the fact that these should be held on even ground, we are still really only getting one side of science - and that is the success side - and so long as that is the case, there will be a need for projects like ours to give a voice to the voiceless of science, and to stay true to the transparency that the scientific method demands of us.
[SD] so, in that sense, our goal is relatively modest, which is just to live up to the ideals that we set ourselves up for as scientists. but, in practice, we're talking about a radical departure from business as usual. we're talking about a full paradigm shift of the kinds that you alluded to in your question, where, if we could achieve this ideal world where everybody talks about errors, and errors are treated on the same level as success, then there would be no need for the journal of trial and error. so if we do succeed in this, all the better. if we fail, well we hope our failure is just instructive to others.
and if you fail you'll have somewhere to write it up.
[SD] there you go. it's all, er, it's all er, the long game, the long game until we have something to write at the end.
Valentine, how do you feel about the future of your journal?
[VDR] i hope, um, that the journal can be a platform for discussion in different disciplines but also between different disciplines. i also hope that what i call a taxonomy of trial and error offers up the opportunity to think about similarities and differences between the academic disciplines.
one service we offer here at the error bar is absolution for your science sins. are there any scientific errors or research sins that you would like to be forgiven for?
[VDR] i am a historian of science, so of course i have a great interest in science, but i do not consider myself a scientist in the English way of thinking about the term. but i can, i guess, always confess to a certain sin. so, i think that what i could do more of, probably - and this is at the basics of open science, and something that does not happen a lot in historical research - is publishing open data, for example, and giving more insight into the research process. this could then be a way to do more replication studies, or to learn more from other historians.
and Sean do you have any confessions to make?
[SD] yeah, i think i am certainly guilty of being that person who reads the abstract and says 'i read the paper'. erm, yeah, that's just fatigue from being in front of a screen too much - especially this past year. in the future i hope i can actually read the papers in more detail rather than just be like 'oh yeah, i get it'.