the ability to communicate with researchers while asleep is an exciting prospect for brain science. 1 in 12 of the volunteers in this study might have been able to do it. sometimes. a bit  

original article: Konkoly et al., 2021 (Current Biology), reported in: Scientific American by Diana Kwon on 18th February 2021 & The Independent by Adam Smith on 23rd February 2021 & The Daily Mail by Johnathan Chadwick on 18th February 2021

this story was in episode 4 #sleep #EEG #lucid #stats #selection

the error bar says

Scientific American, The Independent, and The Daily Mail all reported on a remarkable study of apparent two-way communication between researchers and volunteers who were asleep. the volunteers were experiencing 'lucid dreams', in which they have some control over their dreams

in the study - which was actually four independent studies from four countries, all with slightly different methods and participants - 36 volunteers were trained to give simple eye movements or facial twitches in response to (very) simple maths questions. three left-right eye movements, for example, is the correct answer to 'one plus two'; five eyebrow twitches is the wrong answer to 'two plus two'

the volunteers then went to sleep in the laboratory, connected to electrodes recording activity from their brains and facial muscles. the remarkable bit of the story is that when the volunteers started showing signs of the 'rapid eye movement' (REM) sleep phase that indicates dreaming, they were asked the maths questions again. and some of them responded - sometimes - with eye movements or facial twitches. and for a not-insignificant proportion of those questions, the movements corresponded with the correct answer

the journalists concluded that two-way communication is possible during sleep - that we can break into people's dreams and leave them messages. while the number of successful communications remained quite small, the scientists hailed this as a proof-of-concept study

can we really communicate with the sleeping?

remarkable claims require remarkable evidence. this is a remarkable report, so the error bar is raising its evidence threshold to 'remarkable'

i want to believe it. it should be possible to communicate with lucid dreamers; it fits with my expectations. but i'm sorry listeners - i'm not convinced. i'm not saying the evidence isn't there in the data. i just don't think the reported analyses demonstrate it sufficiently - or correctly

first, the overall study is actually four independent studies from four groups that have been somewhat cobbled together post-hoc. multi-site studies can be fantastically powerful ways to do prospective science, but this seems to be a retrospective collaboration. was the evidence from the individual groups insufficient? perhaps this paper is the pilot study for a prospective collaboration?

second, each study tests a different population, recruited in a different way, using different methods. the experimental training, task, interventions & analysis are all different, yet the data are all pooled

third - and this is my main problem with the paper - all the data are pooled together as if they are equivalent. but they're not. the data come from different numbers of people & different numbers of sleeping & dreaming events: for example, 80% of the data come from just 27% of the participants. yet the statistics ignore this hierarchical, clustered structure in the data
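a toy example of that pooling problem, with made-up numbers (nothing here comes from the paper): when a minority of participants contribute most of the trials, pooled accuracy and participant-level accuracy can tell quite different stories

```python
# illustrative only: hypothetical (correct, attempted) counts for four
# 'participants', where one person contributes most of the trials
trials = {"p1": (20, 40), "p2": (2, 5), "p3": (1, 5), "p4": (0, 4)}

# pooled analysis: every trial counts equally, so prolific participants dominate
pooled = sum(c for c, n in trials.values()) / sum(n for c, n in trials.values())

# participant-level analysis: every person counts equally
per_person = [c / n for c, n in trials.values()]
participant_mean = sum(per_person) / len(per_person)

print(f"pooled accuracy:  {pooled:.3f}")        # driven largely by p1
print(f"participant mean: {participant_mean:.3f}")
```

here the pooled figure sits well above the average person's accuracy, purely because the best responder supplied most of the data - the kind of distortion that hierarchical (multi-level) statistics are designed to handle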

22 participants in the US were trained to dream lucidly; 10 Germans were recruited from an (expert) lucid dreamers' forum; 37 Dutch volunteers had had at least one lucid dream; & there was a single French narcolepsy patient. these 70 participants were reduced in various slightly opaque ways to the 36 mentioned in the report

from these 36 dreamers, only 6 produced one or more correct responses to questions asked during sleep. statistically, then, should we be analysing the 36 tested participants, or the 6 selected 'responders'? we're then asked to focus on 158 communication attempts, themselves selected from a total of 850. (there were also 802 control attempts.) and of these 158 selected attempts, only 29 resulted in a correct answer. some of these 29 events were presented graphically in the paper
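that selection funnel can be made concrete - this is just arithmetic on the counts quoted above, nothing more

```python
# the selection funnel, using the counts quoted above
participants_tested = 36
responders = 6
attempts_total = 850
attempts_selected = 158
correct = 29

print(f"responders: {responders / participants_tested:.0%} of tested participants")
print(f"selected attempts: {attempts_selected / attempts_total:.0%} of all attempts")
print(f"correct answers: {correct / attempts_selected:.0%} of selected attempts")
print(f"correct answers: {correct / attempts_total:.1%} of all attempts")
```

viewed against all 850 attempts rather than the selected 158, the correct answers are a few percent of the data - which is why the selection steps matter so much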

much of the analysis was done by independent experts' eyes, blind to condition - which is great. but why was there no attempt to analyse the data computationally? surely such signals - if they exist in the data - can be detected automatically?

finally, the statistical approach here is a mess: sometimes the stats use the participant as the 'unit of analysis' - an important concept about what, exactly, is being measured - sometimes it's the experimental session being analysed, sometimes the data epoch, & sometimes the individual response. what you want to be able to say at the end of a study like this is that 'people can answer questions when they're asleep'. but we can't say that. what we can say is that 'some questions can be answered while people are asleep' - the unit of analysis is the questions, not the people
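to see why the choice of unit matters statistically, here is a sketch - emphatically not the paper's analysis, and the 10% chance level is a made-up illustration - comparing an exact binomial tail probability at the question level with one at the person level

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): exact upper-tail probability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# hypothetical chance level for a correct signalled answer; the paper's real
# chance level depends on how responses were coded and is not assumed here
p_chance = 0.1

# unit = questions: 29 correct out of 158 selected attempts
print(f"question-level tail: {binom_tail(29, 158, p_chance):.4f}")

# unit = people: 6 'responders' out of 36 tested participants
print(f"person-level tail:   {binom_tail(6, 36, p_chance):.4f}")
```

the same dataset looks far more convincing when every question counts as an independent observation than when every person does - which is exactly the difference between the two claims above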

in summary, there is so much whittling & selection of data here that it is very hard to tell what's wheat and what's chaff


a highly-selective analysis of highly selected participants shows that a small minority of them seem able to respond to simple questions while they sleep. the stats here are a mess


Scientific American: fact - scientific story reported well

The Independent: fair - scientific story mostly intact

The Daily Mail: fact - scientific story reported well