#digitalnatives and #edtech and #wollongong – The ‘digital natives’ debate: A critical review of the evidence. British Journal of Educational Technology, Bennett et al., February 2008

I blogged the other day on a recent paper on the myth of the digital native. Here is another paper, by Sue Bennett, Karl Maton and Lisa Kervin, from nearly a decade ago, on the same theme – and equally trenchant:

The idea that a new generation of students is entering the education system has excited recent attention among educators and education commentators. Termed ‘digital natives’ or the ‘Net generation’, these young people are said to have been immersed in technology all their lives, imbuing them with sophisticated technical skills and learning preferences for which traditional education is unprepared. Grand claims are being made about the nature of this generational change and about the urgent necessity for educational reform in response. A sense of impending crisis pervades this debate. However, the actual situation is far from clear. In this paper, the authors draw on the fields of education and sociology to analyse the digital natives debate. The paper presents and questions the main claims made about digital natives and analyses the nature of the debate itself. We argue that rather than being empirically and theoretically informed, the debate can be likened to an academic form of a ‘moral panic’. We propose that a more measured and disinterested approach is now required to investigate ‘digital natives’ and their implications for education.

On an entirely different note, the authors are/were affiliated with the University of Wollongong. Recent days have seen the death of Geoff Mack, who wrote the song “I’ve Been Everywhere”, originally a list of Australian placenames:

The song inspired versions internationally – the best known being Johnny Cash’s and The Simpsons’ – but the wittiest alternative version is this (NB – Dapto is a few miles from Wollongong).

Anyway, back to the digital natives. Bennett et al begin with a quote from Marcel Proust:

The one thing that does not change is that at any and every time it appears that there have been ‘great changes’.
Marcel Proust, Within a Budding Grove

The authors summarise what a digital native is supposed to be like – and the not exactly extensive evidence base for their existence:

The claim made for the existence of a generation of ‘digital natives’ is based on two main assumptions in the literature, which can be summarised as follows:

1. Young people of the digital native generation possess sophisticated knowledge of and skills with information technologies.
2. As a result of their upbringing and experiences with technology, digital natives have particular learning preferences or styles that differ from earlier generations of students.

In the seminal literature on digital natives, these assertions are put forward with limited empirical evidence (eg, Tapscott, 1998), or supported by anecdotes and appeals to common-sense beliefs (eg, Prensky, 2001a). Furthermore, this literature has been referenced, often uncritically, in a host of later publications (Gaston, 2006; Gros, 2003; Long, 2005; McHale, 2005; Skiba, 2005). There is, however, an emerging body of research that is beginning to reveal some of the complexity of young people’s computer use and skills.

No one denies that a lot of young people use a lot of technology – but not all:

In summary, though limited in scope and focus, the research evidence to date indicates that a proportion of young people are highly adept with technology and rely on it for a range of information gathering and communication activities. However, there also appears to be a significant proportion of young people who do not have the levels of access or technology skills predicted by proponents of the digital native idea. Such generalisations about a whole generation of young people thereby focus attention on technically adept students. With this comes the danger that those less interested and less able will be neglected, and that the potential impact of socio-economic and cultural factors will be overlooked. It may be that there is as much variation within the digital native generation as between the generations.

It is often suggested that children who are merrily exploring the digital world are ground down with frustration by not having the same access to computers in school. This is part of a more general demand for transformation, with rhetoric familiar from the health IT world (the word “disruptive” in its modern usage had not quite caught on in 2008). As is often the case, the empirical evidence (and also, I would say, a certain degree of common sense) is not with the disrupters:

The claim we will now examine is that current educational systems must change in response to a new generation of technically adept young people. Current students have been variously described as disappointed (Oblinger, 2003), dissatisfied (Levin & Arafeh, 2002) and disengaged (Prensky, 2005a). It is also argued that educational institutions at all levels are rapidly becoming outdated and irrelevant, and that there is an urgent need to change what is taught and how (Prensky, 2001a; Tapscott, 1998). For example, Tapscott (1999) urges educators and authorities to ‘[g]ive students the tools, and they will be the single most important source of guidance on how to make their schools relevant and effective places to learn’ (p. 11). Without such a transformation, commentators warn, we risk failing a generation of students and our institutions face imminent obsolescence.

However, there is little evidence of the serious disaffection and alienation among students claimed by commentators. Downes’ (2002) study of primary school children (5–12 years old) found that home computer use was more varied than school use and enabled children greater freedom and opportunity to learn by doing. The participants did report feeling limited in the time they were allocated to use computers at school and in the way their use was constrained by teacher-directed learning activities. Similarly, Levin and Arafeh’s (2002) study revealed students’ frustrations at their school Internet use being restricted, but crucially also their recognition of the school’s in loco parentis role in protecting them from inappropriate material. Selwyn’s (2006) student participants were also frustrated that their freedom of use was curtailed at school and ‘were well aware of a digital disconnect but displayed a pragmatic acceptance rather than the outright alienation from the school that some commentators would suggest’ (p. 5).

In 2008, Bennett et al summarised issues relating to students’ actual, rather than perceived, technical adeptness and net savviness similar to those raised by the 2016 authors:

Furthermore, questions must be asked about the relevance to education of the everyday ICTs skills possessed by technically adept young people. For example, it cannot be assumed that knowing how to look up ‘cheats’ for computer games on the Internet bears any relation to the skills required to assess a website’s relevance for a school project. Indeed, existing research suggests otherwise. When observing students interacting with text obtained from an Internet search, Sutherland-Smith (2002) reported that many were easily frustrated when not instantly gratified in their search for immediate answers and appeared to adopt a ‘snatch and grab philosophy’ (p. 664). Similarly, Eagleton, Guinee and Langlais (2003) observed middle-school students often making ‘hasty, random choices with little thought and evaluation’ (p. 30).

Such research observes shallow, random and often passive interactions with text, which raise significant questions about what digital natives can actually do as they engage with and make meaning from such technology. As noted by Lorenzo and Dziuban (2006), concerns over students’ lack of critical thinking when using Internet-based information sources imply that ‘students aren’t as net savvy as we might have assumed’ (p. 2). This suggests that students’ everyday technology practices may not be directly applicable to academic tasks, and so education has a vitally important role in fostering information literacies that will support learning.

Again, this is a paper I could quote bits from all day – so here are a couple of paragraphs from towards the end that summarise their (and my) take on the digital natives:

Neither dismissive scepticism nor uncritical advocacy enable understanding of whether the phenomenon of digital natives is significant and in what ways education might need to change to accommodate it. As we have discussed in this paper, research is beginning to expose arguments about digital natives to critical enquiry, but much more needs to be done. Close scrutiny of the assumptions underlying the digital natives notion reveals avenues of inquiry that will inform the debate. Such understanding and evidence are necessary precursors to change.

The claim that there is a distinctive new generation of students in possession of sophisticated technology skills and with learning preferences for which education is not equipped to support has excited much recent attention. Proponents arguing that education must change dramatically to cater for the needs of these digital natives have sparked an academic form of a ‘moral panic’ using extreme arguments that have lacked empirical evidence.

Finally, after posting the prior summary of Kirschner and De Bruyckere’s paper, I searched the hashtag #digitalnatives on Twitter and – self-promotingly – replied to some of the original tweeters with a link to the paper (interestingly, quite a few #digitalnatives tweets were links to discussions of the Kirschner/De Bruyckere paper). Some were very receptive, but others were markedly defensive. Obviously, a total stranger coming along and pedantically pointing out that your hashtag is about something that doesn’t exist may not be the most polite way of interacting on Twitter – but it is also true that quite a lot of us are quite attached to the myth of the digital native.

Can fMRI solve the mind-body problem? Tim Crane, “How We Can Be”, TLS, 24/05/17

In the current TLS, an excellent article by Tim Crane on neuroimaging, consciousness, and the mind-body problem. Many of my previous posts here on related themes have endorsed a kind of mild neuro-scepticism. Crane begins his article by describing an experiment which shows the literally expansive nature of neuroscience:

In 2006, Science published a remarkable piece of research by neuroscientists from Addenbrooke’s Hospital in Cambridge. By scanning the brain of a patient in a vegetative state, Adrian Owen and his colleagues found evidence of conscious awareness. Unlike a coma, the vegetative state is usually defined as one in which patients are awake – they can open their eyes and exhibit sleep-wake cycles – but lack any consciousness or awareness. To discover consciousness in the vegetative state would challenge, therefore, the basic understanding of the phenomenon.

The Addenbrooke’s patient was a twenty-three-year-old woman who had suffered traumatic brain injury in a traffic accident. Owen and his team set her various mental imagery tasks while she was in an MRI scanner. They asked her to imagine playing a game of tennis, and to imagine moving through her house, starting from the front door. When she was given the first task, significant neural activity was observed in one of the motor areas of the brain. When she was given the second, there was significant activity in the parahippocampal gyrus (a brain area responsible for scene recognition), the posterior parietal cortex (which represents planned movements and spatial reasoning) and the lateral premotor cortex (another area responsible for bodily motion). Amazingly, these patterns of neural responses were indistinguishable from those observed in healthy volunteers asked to perform exactly the same tasks in the scanner. Owen considered this to be strong evidence that the patient was, in some way, conscious. More specifically, he concluded that the patient’s “decision to cooperate with the authors by imagining particular tasks when asked to do so represents a clear act of intention, which confirmed beyond any doubt that she was consciously aware of herself and her surroundings”.

Owen’s discovery has an emotional force that one rarely finds in scientific research. The patients in the vegetative state resemble those with locked-in syndrome, a result of total (or near-total) paralysis. But locked-in patients can sometimes demonstrate their consciousness by moving (say) their eyelids to communicate (as described in Jean-Dominique Bauby’s harrowing and lyrical memoir, The Diving Bell and the Butterfly, 1997). But the vegetative state was considered, by contrast, to be a condition of complete unconsciousness. So to discover that someone in such a terrible condition might actually be consciously aware of what is going on around them, thinking and imagining things, is staggering. I have been at academic conferences where these results were described and the audience was visibly moved. One can only imagine the effect of the discovery on the families and loved ones of the patient.

Crane’s article is very far from a piece of messianic neurohype, but he also acknowledges the sheer power of this technology to expand our awareness of what it means to be conscious and human, along with its clinical benefit, which is not something to be sniffed at. But it doesn’t solve the mind-body problem – it actually accentuates it:

Does the knowledge given by fMRI help us to answer Julie Powell’s question [essentially a restatement of the mind-body problem by a food writer]? The answer is clearly no. There is a piece of your brain that lights up when you talk and a piece that lights up when you walk: that is something we already knew, in broad outline. Of course it is of great theoretical significance for cognitive neuroscience to find out which bits do what; and as Owen’s work illustrates, it is also of massive clinical importance. But it doesn’t tell us anything about “how we can be”. The fact that different parts of your brain are responsible for different mental functions is something that scientists have known for decades, using evidence from lesions and other forms of brain damage, and in any case the very idea should not be surprising. FMRI technology does not solve the mind–body problem; if anything, it only brings it more clearly into relief.

Read the whole thing, as they say. It is a highly stimulating read, and one which, while it points out the limits of neuroimaging as a way of solving the difficult problems of philosophy, gives the technology and the discipline behind it their due.

Less-than-busy doctors: “The Beetle Hunter”, Arthur Conan Doyle

S J Perelman wrote a series of New Yorker articles titled “Cloudland Revisited”, wherein he re-read or re-watched various books and movies of his youth. In what now seems a slightly grating way, he invariably finds them ludicrous pulp. Anyhow, in “Doctor, What Big Green Eyes You Have”, Sax Rohmer’s Fu Manchu stories come in for the treatment. In it, Perelman writes:

“Petrie, I have travelled from Burma not in the interests of the British Government merely, but in the interest of the entire white race, and I honestly believe – though I pray I may be wrong – that its survival depends largely on the success of my mission.” Can Petrie, demands Smith, spare a few days from his medical duties for “the strangest business, I can promise you, that ever was recorded in fact or fiction”? He gets the expected answer: “I agreed readily enough for, unfortunately, my professional duties were not onerous.” The alacrity with which doctors of that epoch deserted their practice has never ceased to impress me. Holmes had only to crook his finger and Watson went bowling away in a four wheeler, leaving his patients to fend for themselves. If the foregoing is at all indicative, the mortality rate of London in the nineteen-hundreds must have been appalling.

My understanding is that Arthur Conan Doyle had a quiet career as a private ophthalmologist before literary work overtook his medical efforts. Of course, the structure of medicine as a career was very different then. The medical student and junior doctor of popular and popular-ish fiction tend to have more free time than is the norm nowadays.

Conan Doyle’s short story “The Beetle Hunter” is very much in this mould. Perhaps this paragraph says more about Conan Doyle’s own view of the medical profession than it does about social history as such, but there you go:

I had just become a medical man, but I had not started in practice, and I lived in rooms in Gower Street. The street has been renumbered since then, but it was in the only house which has a bow-window, upon the left-hand side as you go down from the Metropolitan Station. A widow named Murchison kept the house at that time, and she had three medical students and one engineer as lodgers. I occupied the top room, which was the cheapest, but cheap as it was it was more than I could afford. My small resources were dwindling away, and every week it became more necessary that I should find something to do. Yet I was very unwilling to go into general practice, for my tastes were all in the direction of science, and especially of zoology, towards which I had always a strong leaning. I had almost given the fight up and resigned myself to being a medical drudge for life, when the turning-point of my struggles came in a very extraordinary way.

A story in which a recent medical graduate is immersed in idleness would now be seen as fatally implausible. He or she would be doing pro bono work down the lab, sequencing some beetle genome or other. Of course, all this striving means we are Much Better People than those of long ago. Doesn’t it?

Leandro Herrero: “A team is not a meeting”

Another wonderful reflection from Leandro Herrero; this time I am being more selective in my quoting:

One of the most toxic practices in organisational life is equating ‘team’ and ‘team meeting’. You could start a true transformation by simply splitting them as far apart as you can and by switching on the team permanently. In a perfect team, ‘stuff happens’ all the time without the need to meet. Try the disruptive idea ‘Team 365’ to start a small revolution.

In our minds, the idea that teams are something to do with meetings is well embedded. And indeed, teams do meet… But ‘the meeting’ has become synonymous with ‘the team’. Think of the language we often use. If there is an issue or something that requires a decision and this is discussed amongst people who belong to a team, we often hear things such as, “let’s bring it to the team”. In fact, what people mean really is, “let’s bring it to the meeting. Put it on the agenda.” By default, we have progressively concentrated most of the ‘team time’ in ‘meeting time’. The conceptual borders of these two very different things have become blurred. We have created a culture where team equals meetings equals team. And this is disastrous.

As a consequence of the mental model and practice that reads ‘teams = meeting = teams’, the team member merely becomes an event traveller (from a few doors down or another country?). These team travellers bring packaged information, all prepared for the disclosure or discussion at ‘the event’.

Leandro Herrero: “An enlightened top leadership is sometimes a fantastic alibi for a non-enlightened management to do whatever they want”

From Leandro Herrero’s website, a “Daily Thought” which I am going to take the liberty of quoting in full:

Nothing is more rewarding than having a CEO who says world-changing things in the news, and who produces bold, enlightened and progressive quotes for all admirers to see. That organization is lucky to have one of these. The logic says that all those enlightened statements about trust, empowerment, humanity and purpose will be percolated down the system, and will inform and shape behaviours in the millefeuille of management layers below.

I take a view, observed many times, that this is wishful thinking. In fact, quite the opposite, I have seen more than once how management below devolves all greatness to the top, happily, whilst ignoring it and playing games in very opposite directions. Having the very good and clever and enlightened people at the top is a relief for them. They don’t have to pretend that they are as well, so they can exercise their ‘practical power’ with more freedom. That enlightened department is covered in the system, and the corporate showcase guaranteed.

The distance between the top and the next layer down may not be great in organizational chart terms, yet the top may not have a clue that there is a behavioural fabric mismatch just a few centimeters down in the organization chart.

I used to think years ago, when I was older, that a front page top notch leader stressing human values provided a safe shelter against inhuman values for his/her organization below. I am not so sure today. In fact, my alarm bell system goes mad when I see too much charismatic, purpose driven, top leadership talk. I simply smell lots of alibis below. And I often find them. After all, there is usually not much room for many Good Cops.

Yet, I very much welcome the headline grabbing by powerful business people who stress human values, and purpose, and a quest for a decent world. The alternative would be sad. I don’t want them to stop that. But let’s not fool ourselves about how much of that truly represents their organizations. In many cases it represents them.

I guess it all goes back, again, to the grossly overrated Role Model Power attributed to the leadership of organizations, a relic of traditional thinking, well linked to the Big Man Theory of history. Years of Edelman’s Trust Barometer, never attributing the CEO more than 30% of the trust stock in the organization, have not convinced people that the ‘looking up’ is just a small part of the story. What happens in organizations has a far more powerful ‘looking sideways’ traction: manager to manager, employee to employee. Lots of ritualistic dis-empowering management practices can sit very nicely under the umbrella of a high empowerment narrative at the top, and nobody would care much. The top floor music and the music coming from the floor below, and below, are parallel universes.

Traditional management and MBA thinking has told us that if this is the case, the dysfunctionality of the system will force it to break down. My view is the opposite. The system survives nicely under those contradictions. In fact it needs them.

I found this reflection, especially the final three paragraphs, particularly striking. Health care organisations are getting better and better at talking the talk at the highest levels about empowerment and respect and [insert Good Thing here] – but how much of an impact does that really have on the daily management practices that are the day-to-day reality of working within those organisations?

I also like the scepticism about the Role Model Power of the Big Man (or Woman) on top. Dr Herrero, described on his Twitter profile as an “organisational architect”, clearly has a healthy view of the reality that underlies much rhetoric. I look forward to the HSE’s Values in Action project, which is very much following the lines of his work.

Brief thoughts on biases

Cognitive biases are all the rage in intellectual discourse, especially since the publication of Thinking, Fast and Slow.

Recently on Twitter I came across this tweet:

(the image isn’t with the embedded tweet so you will have to follow the link)

Not only is the diagram “beautiful if terrifying”, but the accompanying article at the link is a terrific overview of biases. It also makes the point very clearly that biases are tools – responses to problems. Much of the discourse around bias makes biases sound like unmixed evils; realising that they are in fact approaches to the world that help us survive and (possibly) thrive, albeit with the potential to mislead, is important.

I have been contemplating a longer piece on bias, and on the role of bias-discourse in contemporary debates (especially online). Bias-hunting has become a bit like the Popperian view of Marxism and Freudianism – an approach that explains everything. There are so many biases that everyone and every assertion can be accused of possessing at least one.

This is something I wish to expand on at some point. Bias discourse is very prevalent in the medical literature – this is broadly to be welcomed. Yet I am suspicious (this is perhaps a bias of some kind) that bias discourse can be misused to shut down debates and dialogues, and that some of the proposed solutions – “metacognition”, the scientific method (reified to an uncomfortable degree) – are themselves prone to bias.

“The Wild West of health care”: mental health apps, evidence, and clinical credibility

We read and hear much about the promise of mobile health. Crucial in the acceptance of mobile health by the clinical community is clinical credibility. And now, clinical credibility is synonymous with evidence – and not just “evidence”, but reliable, solid evidence. I’ve blogged before about studies of the quality of mental health smartphone apps. I missed this piece from Nature which, slightly predictably, is titled “Mental Health: There’s an app for that.” (isn’t “there’s an app for that” a little 2011-ish, though?) It begins by surveying the immense range of mental health-focused apps out there:

Type ‘depression’ into the Apple App Store and a list of at least a hundred programs will pop up on the screen. There are apps that diagnose depression (Depression Test), track moods (Optimism) and help people to “think more positive” (Affirmations!). There’s Depression Cure Hypnosis (“The #1 Depression Cure Hypnosis App in the App Store”), Gratitude Journal (“the easiest and most effective way to rewire your brain in just five minutes a day”), and dozens more. And that’s just for depression. There are apps pitched at people struggling with anxiety, schizophrenia, post-traumatic stress disorder (PTSD), eating disorders and addiction.

The article also has a snazzy infographic illustrating both the lack of mental health services and the size of the market.

The meat of the article, however, focuses on the lack of evidence and evaluation of these apps. There is a cultural narrative which states that Technology = Good and Efficient, Healthcare = Bad and Broken, and which can give the invocation of Tech the status of a god-term, pre-empting critical thought. The Nature piece, however, starkly illustrates the evidence gap:

But the technology is moving a lot faster than the science. Although there is some evidence that empirically based, well-designed mental-health apps can improve outcomes for patients, the vast majority remain unstudied. They may or may not be effective, and some may even be harmful. Scientists and health officials are now beginning to investigate their potential benefits and pitfalls more thoroughly, but there is still a lot left to learn and little guidance for consumers.

“If you type in ‘depression’, it’s hard to know if the apps that you get back are high quality, if they work, if they’re even safe to use,” says John Torous, a psychiatrist at Harvard Medical School in Boston, Massachusetts, who chairs the American Psychiatric Association’s Smartphone App Evaluation Task Force. “Right now it almost feels like the Wild West of health care.”

There isn’t an absolute lack of evidence, but there are issues with much of the evidence that is out there:

Much of the research has been limited to pilot studies, and randomized trials tend to be small and unreplicated. Many studies have been conducted by the apps’ own developers, rather than by independent researchers. Placebo-controlled trials are rare, raising the possibility that a ‘digital placebo effect’ may explain some of the positive outcomes that researchers have documented, says Torous. “We know that people have very strong relationships with their smartphones,” and receiving messages and advice through a familiar, personal device may be enough to make some people feel better, he explains.

And even saying that (in passing, I would note that in any branch of medical practice a placebo effect is something to be harnessed, not denigrated – but in evaluation and study, rigorously minimising it is crucial), there is a considerable lack of evidence:

But the bare fact is that most apps haven’t been tested at all. A 2013 review [8] identified more than 1,500 depression-related apps in commercial app stores but just 32 published research papers on the subject. In another study published that year [9], Australian researchers applied even more stringent criteria, searching the scientific literature for papers that assessed how commercially available apps affected mental-health symptoms or disorders. They found eight papers on five different apps.

The same year, the NHS launched a library of “safe and trusted” health apps that included 14 devoted to treating depression or anxiety. But when two researchers took a close look at these apps last year, they found that only 4 of the 14 provided any evidence to support their claims [10]. Simon Leigh, a health economist at Lifecode Solutions in Liverpool, UK, who conducted the analysis, says he wasn’t shocked by the finding because efficacy research is costly and may mean that app developers have less to spend on marketing their products.

Like any healthcare intervention, an app can have adverse effects:

When a team of Australian researchers reviewed 82 commercially available smartphone apps for people with bipolar disorder [12], they found that some presented information that was “critically wrong”. One, called iBipolar, advised people in the middle of a manic episode to drink hard liquor to help them to sleep, and another, called What is Biopolar Disorder, suggested that bipolar disorder could be contagious. Neither app seems to be available any more.

And, even more fundamentally, in some situations the app concept itself and its close relationship with gamification can backfire:

Even well-intentioned apps can produce unpredictable outcomes. Take Promillekoll, a smartphone app created by Sweden’s government-owned liquor retailer, designed to help curb risky drinking. While out at a pub or a party, users enter each drink they consume and the app spits out an approximate blood-alcohol concentration.

When Swedish researchers tested the app on college students, they found that men who were randomly assigned to use the app ended up drinking more frequently than before, although their total alcohol consumption did not increase. “We can only speculate that app users may have felt more confident that they could rely on the app to reduce negative effects of drinking and therefore felt able to drink more often,” the researchers wrote in their 2014 paper [13].

It’s also possible, the scientists say, that the app spurred male students to turn drinking into a game. “I think that these apps are kind of playthings,” says Anne Berman, a clinical psychologist at the Karolinska Institute in Stockholm and one of the study’s authors. There are other risks too. In early trials of ClinTouch, researchers found that the symptom-monitoring app actually exacerbated symptoms for a small number of patients with psychotic disorders, says John Ainsworth at the University of Manchester, who helped to develop the app. “We need to very carefully manage the initial phases of somebody using this kind of technology and make sure they’re well monitored,” he says.

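As an aside, it is worth seeing how simple the arithmetic behind an app like Promillekoll is, because that simplicity is part of the problem. Blood-alcohol estimates of this kind are generally variants of the Widmark formula; below is a minimal sketch, assuming a standard-drink size and average population constants (the figures, names and interface are my illustrative assumptions, not Promillekoll’s actual code):

```python
# A minimal Widmark-style blood-alcohol estimate, the kind of calculation
# a drink-counting app plausibly performs. All constants are population
# averages, given here as illustrative assumptions rather than the app's
# real values.

GRAMS_PER_STANDARD_DRINK = 12.0  # grams of ethanol per drink; varies by country
ELIMINATION_RATE = 0.15          # g/kg eliminated per hour, a commonly cited average
WIDMARK_R = {"male": 0.68, "female": 0.55}  # body-water distribution factors


def estimate_bac_per_mille(drinks: int, weight_kg: float, sex: str,
                           hours_drinking: float) -> float:
    """Rough blood-alcohol concentration in grams per kilogram (per mille)."""
    grams_consumed = drinks * GRAMS_PER_STANDARD_DRINK
    bac = (grams_consumed / (weight_kg * WIDMARK_R[sex])
           - ELIMINATION_RATE * hours_drinking)
    return max(bac, 0.0)  # the estimate cannot usefully go below zero


# Example: four standard drinks over three hours for a 70 kg man
print(f"{estimate_bac_per_mille(4, 70.0, 'male', 3.0):.2f} per mille")
```

Every constant in that sketch is an average concealing wide individual variation, which is precisely why a confident-looking number on a screen can breed the false reassurance the Swedish researchers speculate about.
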
I am very glad to read that one of the mHealth apps singled out as a model of evidence-based practice is one that I have both used and recommended myself – Sleepio:

One digital health company that has earned praise from experts is Big Health, co-founded by Colin Espie, a sleep scientist at the University of Oxford, UK, and entrepreneur Peter Hames. The London-based company’s first product is Sleepio, a digital treatment for insomnia that can be accessed online or as a smartphone app. The app teaches users a variety of evidence-based strategies for tackling insomnia, including techniques for managing anxious and intrusive thoughts, boosting relaxation, and establishing a sleep-friendly environment and routine.

Before putting Sleepio to the test, Espie insisted on creating a placebo version of the app, which had the same look and feel as the real app, but led users through a set of sham visualization exercises with no known clinical benefits. In a randomized trial, published in 2012, Espie and his colleagues found that insomniacs using Sleepio reported greater gains in sleep efficiency — the percentage of time someone is asleep, out of the total time he or she spends in bed — and slightly larger improvements in daytime functioning than those using the placebo app [15]. In a follow-up 2014 paper [16], they reported that Sleepio also reduced the racing, intrusive thoughts that can often interfere with sleep.

The Sleepio team is currently recruiting participants for a large, international trial and has provided vouchers for the app to several groups of independent researchers so that patients who enrol in their studies can access Sleepio for free.

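Sleep efficiency, the outcome measure defined in the quote above, is a simple ratio, and stating it as code makes the definition concrete; this is my own illustration of that definition, not anything from Sleepio itself:

```python
def sleep_efficiency_percent(time_asleep_min: float, time_in_bed_min: float) -> float:
    """Sleep efficiency: the percentage of time in bed actually spent asleep."""
    return 100.0 * time_asleep_min / time_in_bed_min


# Example: asleep for six of the eight hours spent in bed
print(f"{sleep_efficiency_percent(360, 480):.0f}%")  # 75%
```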

This is extremely heartening – and as stated above, clinical credibility is key in the success of any eHealth / mHealth approach. And what does clinical credibility really mean? That something works, and works well.