about

policies

with at least 25 brain science news stories & ~1000 new research articles published every fortnight (e.g., search for "brain AND neur* AND psych*" here), the error bar needs some filters. here is how articles are selected for the podcast:

process

we're often asked here at the error bar: "just how do you create a fortnightly brain science news podcast with a few dozen regular listeners?". wonder no more:

  1. Monday 15-17h: check for news articles published since last episode & select stories
  2. Tuesday 08-13h: read, review, summarise; create story pages on website
  3. Tuesday 13-14h: lunch, very often a cheese & pickle sandwich
  4. Tuesday 14-15h: record with audacity
  5. Tuesday 16-18h: edit & mix; adjust level with auphonic
  6. Wednesday-Thursday: listen, edit, upload to anchor.fm
  7. Thursday 18h: check websites, RSS feed, relax
  8. Friday 08-09h: create wavve trailer & Tweet
  9. ...wait 10 days...

article rating systems

media reports

for each media report covered in our stories, we assign a 4-point rating, representing our opinion of how well the media source has reported the science. we are not saying whether the general claims are true or false (e.g., 'red wine causes cancer') - we are assessing whether the media report adequately reflects the science.

the four levels are:

fiction

stories classed as 'fiction' are those which have no scientific basis at all, do not relate to a specific piece of scientific research, or completely fail to represent the science accurately or adequately. there is no scientific story here.

fudge

stories which 'fudge' are those where there is a genuine piece of scientific research being reported, but the report is highly selective, distorts the scientific methods, results or conclusions, or seems to have non-scientific motivations. there is a real scientific story here, but the report fails to tell it.

fair

stories judged 'fair' report a genuine piece of scientific research, and on the whole they do so fairly. there may be some selection, bias or non-scientific framing, but there is a real scientific story here, and the report does a fair job of telling it.

fact

stories earning 'fact' status report a genuine piece of scientific research without distorting the scientific methods, results or conclusions. the report may have non-scientific motivations or context, but this is clear and separate from the science. there is a real scientific story here, and the report tells it well.

scientific articles

for each brain science article that we read in full, we assign a 3-point rating, representing our opinion of how well the science has been done and/or reported. this is the kind of scientific opinion that we would give during the 'peer review' process. we are not saying whether the general claims are true or false (e.g., 'red wine causes cancer') - we are assessing whether the science has been done and reported adequately.

the three levels are:

reject

articles that we 'reject' are those which, in our view, have a fundamental flaw in the design or conduct of the study. this flaw means that the article is unable to provide good evidence for the claims made. an example could be the lack of a control condition or group.

revise

articles rated 'revise' are those without obvious fundamental flaws in the design or conduct, but aspects of the analysis or reporting of the study make it difficult to assess the claims made. an example could be missing details of methods, or poor statistical analysis.

accept

articles we 'accept' are those without obvious fundamental flaws in the design or conduct, and which analyse and report the data correctly, fairly, and with appropriate uncertainty.