The curse of the quick fix

I’ve been reading Simon Garfield’s wonderful book Timekeepers: How The World Became Obsessed With Time. It is a fascinating set of narratives on the modern relationship with time. Towards the end it turns slightly into a series of lists of conceptual art pieces that sound less Deeply Meaningful than Garfield makes out (oddly reminiscent of Evgeny Morozov’s To Save Everything, Click Here in this regard), and occasionally some of his more jokey passages grate, but most of the time (ho ho) it is a book that makes one see the taken-for-granted features of the modern world for what they are. There are very funny passages on time management self-help books and on the world of haute horlogerie, and extremely thought-provoking ones on our time-poor age (or is it a perception? One of the time management gurus is actually wisest on this…)

Anyway, a passage which struck me as especially germane to medicine, health care in general, and health IT in particular was the following – which is actually Garfield citing another author, but there you go:

And can any of these books really help us in these decisions? Can even the most cogently aligned bullet point and quadrant matrix transform a hard-wired mind? The notion of saving four hours every ten minutes is challenged by The Slow Fix: Why Quick Fixes Don’t Work by Carl Honoré. The book set its tone with an epigram from Othello: ‘How poor are they who have not patience! What wound did ever heal but by degrees?’

The quick fix has its place, Honoré argues – the Heimlich manoeuvre, the duct tape and cardboard solution from Houston that gets the astronauts home in Apollo 13 – but the temporal management of one’s life is not one of them. He reasons that too much of our world runs on unrealistic ambitions and shabby behaviour: a bikini body within a fortnight, a TED talk that will change the world, the football manager sacked after two months of bad results. [Honoré himself has nevertheless done a TED Talk: https://www.ted.com/talks/carl_honore_praises_slowness – SS]

He cites examples of rushed and dismal failings from manufacturing (Toyota’s failure to deal with a problem with a proper solution that might have prevented the recall of 10 million cars) and from war and diplomacy (military involvement in Iraq). And then there is medicine and healthcare, and the mistaken belief – held too often by the media and initially the Bill and Melinda Gates Foundation – that a magic bullet could cure the big diseases if only we worked faster and smarter and pumped in more cash. Honoré mentions malaria, and the vague but quaint story of a phalanx of IT wizards showing up at the Geneva headquarters of the World Health Organisation with a mission to eradicate malaria and other tropical diseases. When he visited he found the offices somewhat at odds with those of Palo Alto (ceiling fans and grey filing cabinets, no one on a Segway). ‘The tech guys arrived with their laptops and said, “Give us the data and the maps and we’ll fix this for you,”’ Honoré quotes one long-term WHO researcher, Pierre Boucher, as saying. ‘And I just thought, “Will you now?” Tropical diseases are an immensely complex problem . . . Eventually they left and we never heard from them again.’

As my own practice has developed over the years, I have come to the realisation that quick fixes tend to unfix themselves, and that the quick-fix mentality carries a huge cost over time.

Here is Honoré’s TED Talk. Garfield has a very entertaining passage in the book where he speaks at a rival of TED’s, which has a 17-minute limit (TED has an 18-minute one).


#digitalnatives and #edtech and #wollongong – The ‘digital natives’ debate: A critical review of the evidence. British Journal of Educational Technology, Bennett et al, Feb 2008

I blogged the other day on a recent paper on the myth of the digital native. Here is another paper, by Sue Bennett, Karl Maton and Lisa Kervin, from nearly a decade ago, on the same theme – and equally trenchant:

The idea that a new generation of students is entering the education system has excited recent attention among educators and education commentators. Termed ‘digital natives’ or the ‘Net generation’, these young people are said to have been immersed in technology all their lives, imbuing them with sophisticated technical skills and learning preferences for which traditional education is unprepared. Grand claims are being made about the nature of this generational change and about the urgent necessity for educational reform in response. A sense of impending crisis pervades this debate. However, the actual situation is far from clear. In this paper, the authors draw on the fields of education and sociology to analyse the digital natives debate. The paper presents and questions the main claims made about digital natives and analyses the nature of the debate itself. We argue that rather than being empirically and theoretically informed, the debate can be likened to an academic form of a ‘moral panic’. We propose that a more measured and disinterested approach is now required to investigate ‘digital natives’ and their implications for education.

On an entirely different note, the authors are/were affiliated with the University of Wollongong. Recent days have seen the death of Geoff Mack, who wrote the song “I’ve Been Everywhere”, originally a list of Australian place names:

The song inspired versions internationally – the best known being Johnny Cash’s and The Simpsons’ – but the wittiest alternative version is this (NB – Dapto is a few miles from Wollongong).

Anyway, back to the digital natives. Bennett et al begin with a quote from Marcel Proust:

The one thing that does not change is that at any and every time it appears that there have been ‘great changes’.
Marcel Proust, Within a Budding Grove

The authors summarise what a digital native is supposed to be like – and the not exactly extensive evidence base for their existence:

The claim made for the existence of a generation of ‘digital natives’ is based on two main assumptions in the literature, which can be summarised as follows:

1. Young people of the digital native generation possess sophisticated knowledge of and skills with information technologies.
2. As a result of their upbringing and experiences with technology, digital natives have particular learning preferences or styles that differ from earlier generations of students.

In the seminal literature on digital natives, these assertions are put forward with limited empirical evidence (eg, Tapscott, 1998), or supported by anecdotes and appeals to common-sense beliefs (eg, Prensky, 2001a). Furthermore, this literature has been referenced, often uncritically, in a host of later publications (Gaston, 2006; Gros, 2003; Long, 2005; McHale, 2005; Skiba, 2005). There is, however, an emerging body of research that is beginning to reveal some of the complexity of young people’s computer use and skills.

No one denies that a lot of young people use a lot of technology – but not all:

In summary, though limited in scope and focus, the research evidence to date indicates that a proportion of young people are highly adept with technology and rely on it for a range of information gathering and communication activities. However, there also appears to be a significant proportion of young people who do not have the levels of access or technology skills predicted by proponents of the digital native idea. Such generalisations about a whole generation of young people thereby focus attention on technically adept students. With this comes the danger that those less interested and less able will be neglected, and that the potential impact of socio-economic and cultural factors will be overlooked. It may be that there is as much variation within the digital native generation as between the generations.

It is often suggested that children who are merrily exploring the digital world are ground down with frustration by not having the same access to computers in school. This is part of a more general demand (with familiar rhetoric for the health IT world) for transformation (the word “disruptive” in its modern usage had not quite caught on in 2008). As is often the case, the empirical evidence (and also, I would say, a certain degree of common sense) is not with the disrupters:

The claim we will now examine is that current educational systems must change in response to a new generation of technically adept young people. Current students have been variously described as disappointed (Oblinger, 2003), dissatisfied (Levin & Arafeh, 2002) and disengaged (Prensky, 2005a). It is also argued that educational institutions at all levels are rapidly becoming outdated and irrelevant, and that there is an urgent need to change what is taught and how (Prensky, 2001a; Tapscott, 1998). For example, Tapscott (1999) urges educators and authorities to ‘[g]ive students the tools, and they will be the single most important source of guidance on how to make their schools relevant and effective places to learn’ (p. 11). Without such a transformation, commentators warn, we risk failing a generation of students and our institutions face imminent obsolescence.

However, there is little evidence of the serious disaffection and alienation among students claimed by commentators. Downes’ (2002) study of primary school children (5–12 years old) found that home computer use was more varied than school use and enabled children greater freedom and opportunity to learn by doing. The participants did report feeling limited in the time they were allocated to use computers at school and in the way their use was constrained by teacher-directed learning activities. Similarly, Levin and Arafeh’s (2002) study revealed students’ frustrations at their school Internet use being restricted, but crucially also their recognition of the school’s in loco parentis role in protecting them from inappropriate material. Selwyn’s (2006) student participants were also frustrated that their freedom of use was curtailed at school and ‘were well aware of a digital disconnect but displayed a pragmatic acceptance rather than the outright alienation from the school that some commentators would suggest’ (p. 5).

In 2008, Bennett et al were summarising much the same issues relating to students’ actual rather than perceived technical adeptness and net-savviness as Kirschner and De Bruyckere would nearly a decade later:

Furthermore, questions must be asked about the relevance to education of the everyday ICTs skills possessed by technically adept young people. For example, it cannot be assumed that knowing how to look up ‘cheats’ for computer games on the Internet bears any relation to the skills required to assess a website’s relevance for a school project. Indeed, existing research suggests otherwise. When observing students interacting with text obtained from an Internet search, Sutherland-Smith (2002) reported that many were easily frustrated when not instantly gratified in their search for immediate answers and appeared to adopt a ‘snatch and grab philosophy’ (p. 664). Similarly, Eagleton, Guinee and Langlais (2003) observed middle-school students often making ‘hasty, random choices with little thought and evaluation’ (p. 30).

Such research observes shallow, random and often passive interactions with text, which raise significant questions about what digital natives can actually do as they engage with and make meaning from such technology. As noted by Lorenzo and Dziuban (2006), concerns over students’ lack of critical thinking when using Internet-based information sources imply that ‘students aren’t as net savvy as we might have assumed’ (p. 2). This suggests that students’ everyday technology practices may not be directly applicable to academic tasks, and so education has a vitally important role in fostering information literacies that will support learning.

Again, this is a paper I could quote bits from all day – so here are a couple of paragraphs from towards the end that summarise their (and my) take on the digital natives:

Neither dismissive scepticism nor uncritical advocacy enable understanding of whether the phenomenon of digital natives is significant and in what ways education might need to change to accommodate it. As we have discussed in this paper, research is beginning to expose arguments about digital natives to critical enquiry, but much more needs to be done. Close scrutiny of the assumptions underlying the digital natives notion reveals avenues of inquiry that will inform the debate. Such understanding and evidence are necessary precursors to change.

The claim that there is a distinctive new generation of students in possession of sophisticated technology skills and with learning preferences for which education is not equipped to support has excited much recent attention. Proponents arguing that education must change dramatically to cater for the needs of these digital natives have sparked an academic form of a ‘moral panic’ using extreme arguments that have lacked empirical evidence.

Finally, after posting the prior summary of Kirschner and De Bruyckere’s paper, I searched the hashtag #digitalnatives on Twitter and – self-promotingly – replied to some of the original tweeters with a link to the paper (interestingly, quite a few #digitalnatives tweets were links to discussions of the Kirschner/De Bruyckere paper). Some were very receptive, but others were markedly defensive. Obviously a total stranger coming along and pedantically pointing out that your hashtag refers to something that doesn’t exist may not be the most polite way of interacting on Twitter – but it also seems that quite a lot of us are quite attached to the myth of the digital native.

“The myths of the digital native and the multitasker”

One common rhetorical device heard in technology circles – including eHealth circles – is the idea that those born after 1980, or maybe 1984, or maybe 1993, or maybe 2000, or maybe 2010 (you get the picture) are “digital natives” and everyone else is a “digital immigrant”. In the current edition of Teaching and Teacher Education, Kirschner and De Bruyckere have an excellent paper on this myth, and on the related myth of multitasking.

The “highlights” of the paper (I am not sure if these are selected by the authors or by the editors – UPDATE: see comment by Paul Kirschner below!) are pretty much to the point:

Highlights

Information-savvy digital natives do not exist.

Learners cannot multitask; they task switch which negatively impacts learning.

Educational design assuming these myths hinders rather than helps learning.

The full article is available online via subscription or library access, and this recent post on the blog of Nature discusses this paper and others on this myth. This is Kirschner and De Bruyckere’s abstract:

Current discussions about educational policy and practice are often embedded in a mind-set that considers students who were born in an age of omnipresent digital media to be fundamentally different from previous generations of students. These students have been labelled digital natives and have been ascribed the ability to cognitively process multiple sources of information simultaneously (i.e., they can multitask). As a result of this thinking, they are seen by teachers, educational administrators, politicians/policy makers, and the media to require an educational approach radically different from that of previous generations. This article presents scientific evidence showing that there is no such thing as a digital native who is information-skilled simply because (s)he has never known a world that was not digital. It then proceeds to present evidence that one of the alleged abilities of students in this generation, the ability to multitask, does not exist and that designing education that assumes the presence of this ability hinders rather than helps learning. The article concludes by elaborating on possible implications of this for education/educational policy.

The paper is one of those trenchantly entertaining ones academia throws up every so often. For instance, here are the authors on the origins of the “digital native” terminology (and “homo zappiens”, a new one on me):

According to Prensky (2001), who coined the term, digital natives constitute an ever-growing group of children, adolescents, and nowadays young adults (i.e., those born after 1984; the official beginning of this generation) who have been immersed in digital technologies all their lives. The mere fact that they have been exposed to these digital technologies has, according to him, endowed this growing group with specific and even unique characteristics that make its members completely different from those growing up in previous generations. The name given to those born before 1984 – the year that the 8-bit video game saw the light of day, though others use 1980 – is digital immigrant. Digital natives are assumed to have sophisticated technical digital skills and learning preferences for which traditional education is unprepared and unfit. Prensky coined the term, not based upon extensive research into this generation and/or the careful study of those belonging to it, but rather upon a rationalisation of phenomena and behaviours that he had observed. In his own words, he saw children “surrounded by and using computers, videogames, digital music players, video cams, cell phones, and all the other toys and tools of the digital age” (2001, p.1). Based only upon these observations, he assumed that these children understood what they were doing, were using their devices effectively and efficiently, and based upon this that it would be good to design education that allows them to do this. Prensky was not alone in this. Veen and Vrakking (2006), for example, went a step further coining the catchy name homo zappiens to refer to a new breed of learners that has developed – without either help from or instruction by others – those metacognitive skills necessary for enquiry-based learning, discovery-based learning, networked learning, experiential learning, collaborative learning, active learning, self-organisation and self-regulation, problem solving, and making their own implicit (i.e., tacit) and explicit knowledge explicit to others.

The old saw that children are invariably more tech-savvy than their parents is also a myth:

Looking at pupils younger than university students, the large-scale EU Kids Online report (Livingstone, Haddon, Görzig, & Ólafsson, 2011), placed the term ‘digital native’ in first place on its list of the ten biggest myths about young people and technology. They state: “Children knowing more than their parents has been exaggerated … Talk of digital natives obscures children’s need for support in developing digital skills” and that “… only one in five [children studied] used a file-sharing site or created a pet/avatar and half that number wrote a blog … While social networking makes it easier to upload content, most children use the internet for ready-made, mass produced content” (p. 42). While the concept of the digital native explicitly and/or implicitly assumes that the current generation of children is highly digitally literate, it is then rather strange to note that many curricula in many countries on many continents (e.g., North America, Europe) see information and technology literacy as 21st century skills that are core curriculum goals at the end of the educational process and that need to be acquired.

Two more recent studies show that the supposed digital divide is a myth in itself. A study carried out by Romero, Guitert, Sangrà, and Bullen (2013) found that it was, in fact, older students (>30 years and thus born before 1984) who exhibited the characteristics attributed to digital natives more than their younger counterparts. In their research, 58% of their students were older than 30 years who “show the characteristics of this [Net Generation profile] claimed by the literature because, on analysing their habits, they can be labelled as ICT users more than digital immigrants” (p. 176). In a study on whether digital natives are more ‘technology savvy’ than their middle school science teachers, Wang, Hsu, Campbell, Coster, and Longhurst (2014) conclude that this is not the case.

The authors are not arguing that curricula and teaching methods do not need to change and evolve, but that the myth of the digital native should not be the reason for doing so:

Finally, this non-existence of digital natives makes clear that one should be wary about claims to change education because this generation of young people is fundamentally different from previous generations of learners in how they learn/can learn because of their media usage (De Bruyckere, Hulshof, & Kirschner, 2015). The claim of the existence of a generation of digital natives, thus, cannot be used as either a motive or an excuse to implement pedagogies such as enquiry-based learning, discovery-based learning, networked learning, experiential learning, collaborative learning, active learning, self-organisation and self-regulation or problem solving as Veen and Vrakking (2006) argued. This does not mean education should neither evolve nor change, but rather that proposed changes should be evidence informed both in the reasons for the change and the proposed changes themselves, something that ‘digital natives’ is not.

The non-existence of digital natives is definitely not the ‘reason’ why students today are disinterested at and even ‘alienated’ by school. This lack of interest and alienation may be the case, but the causes stem from quite different things such as the fact that diminished concentration and the loss of the ability to ignore irrelevant stimuli may be attributed to constant task switching between different devices (Loh & Kanai, 2016; Ophir, Nass, & Wagner, 2009; Sampasa-Kanyinga & Lewis, 2015). This, however, is the topic of a different article.

The paper also deals with multi-tasking. Firstly, they examine the nature of attention: “multi-tasking” is an impossibility from this point of view, unless the tasks are automatic behaviours. They cite a range of research which, unsurprisingly enough, links heavy social media usage (especially with the user instantly replying to stimuli) with poorer educational outcomes:

Ophir et al. (2009), in a study in which university students who identified themselves as proficient multitaskers were asked to concentrate on rectangular stimuli of one colour on a computer monitor and ignore irrelevant stimuli entering their screen of a different colour, observed that

heavy media multitaskers are more susceptible to interference from irrelevant environmental stimuli and from irrelevant representations in memory. This led to the surprising result that heavy media multitaskers performed worse on a test of task-switching ability, likely because of reduced ability to filter out interference from the irrelevant task set (p. 15583).

Ophir et al. (2009) concluded that faced with distractors, heavy multitaskers were slower in detecting changes in visual patterns, were more susceptible to false recollections of the distractors during a memory task, and were slower in task-switching. Heavy multitaskers were less able than light/occasional multitaskers to volitionally restrain their attention only to task relevant information.

The authors specifically urge caution about the drive for students to bring their own devices to school.

Why is this paper so important? As the authors show (and as the author of the Nature blog post linked to above also observes), this is not a new finding. There are many pieces out there, both academic and journalistic, on the myth of the digital native. This paper specifically locates the discussion in education and in teacher training (they say much also on the issue of supposedly “digital native” teachers) and is a trenchant warning about the magical thinking that has grown up around technology.

There are obvious parallels with health and technology. The messianic, evangelical approach to healthtech is replete with its own assumptions about digital natives, and magical thinking about how easily they navigate online worlds. Using a handful of social media tools or apps with visual interactive systems does not translate into a deep knowledge of the online world, or indeed a wisdom about it (or anything else).

Helmholtz and the ophthalmoscope, Eurotimes, 2008

Recently I rediscovered some articles for EuroTimes, the magazine of the European Society of Cataract and Refractive Surgeons, that I had forgotten I had written. I have posted here before some of my book reviews for EuroTimes. I also wrote some pieces on historical ophthalmological figures – the first on Goethe and his work in optics, the second on Hermann von Helmholtz, who was one of those towering, foundational figures in modern physics but who also invented the ophthalmoscope.


In the last article, I considered one of the towering geniuses of world culture, Johann Wolfgang von Goethe. Goethe made enormous contributions to world literature and philosophy, and significant contributions to the nascent sciences of visual perception, linguistics and plant morphology; he felt he would be remembered most of all for his work on optics. Goethe perhaps epitomises the “natural philosopher”, the original term for “scientist” – an individual of boundless curiosity and enthusiasm, a gifted amateur in the true sense. Science owes much to the activities of men and women who operated outside the dynamic of universities and in an age before the research institute or the grant.

Hermann Ludwig Ferdinand von Helmholtz (1821-1894) is a less towering cultural presence than Goethe, but his scientific activities have had a more lasting influence. He bridges the worlds of “natural philosophy” and organised, university-based science – both in terms of his lifespan (eleven when Goethe died, he lived to directly influence Einstein and Maxwell) and in his professional life (originally training under paternal pressure as a doctor, he was appointed Professor of Physics in Berlin in 1871). Much of his work attacked the speculative tendencies of the natural philosophers, and was grounded firmly in observation and experiment.

Yet such was the breadth of his activity that he reminds one of the multi-talented natural philosopher as much as of a contemporary, specialised physicist or physiologist. The Oxford Companion to the History of Modern Science describes him in summary as “physiologist, physicist, philosopher and statesman of science”, which begins to capture the breadth and diversity of his interests and involvements. We will discuss his work on perception and on ophthalmic optics below, but it is important to recall that he was simultaneously working on the conservation of energy, thermodynamics and electrodynamics, and contributing to the philosophy of science itself. His writings ranged from the age of the earth to the origin and fate of the solar system.


One of the more humbling characteristics of the scientists of the past was their seeming mastery of measurement. We are so used to highly accurate, precise computerised measuring apparatus that we can forget that until relatively recently, researchers often had to build and calibrate their own equipment. And going back only a little further, they had to invent it as well. Most readers of EuroTimes probably use one of Helmholtz’s inventions every day – the ophthalmoscope.

Invented in 1851, the ophthalmoscope is a perfect illustration of Helmholtz’s combination of experimental and inventive skill. The invention made him world famous overnight. Helmholtz was actually independently reinventing a device of Charles Babbage’s from 1847. As so often in science, it was the reinventor who recognised the usefulness and applicability of the invention, rather than the first inventor (Babbage, of course, also managed to invent but not complete the first computer). The handheld ophthalmoscope was developed by the Greek ophthalmologist Andreas Anagnostakis later in the 1850s, and in 1915 William Noah Allyn and Frederick Welch invented the self-illuminating ophthalmoscope (and founded Welch Allyn) that is the direct precursor of the modern device.

Who was Helmholtz, this man of so many talents and interests and such lasting influence? He was born in Potsdam on 31 August 1821 into a lower-middle-class family that emphasised the importance of education and cultural activities; his father Ferdinand was a teacher of philosophy and psychology in the local secondary school. His mother was a descendant of William Penn, the founder of Pennsylvania, and her maiden name was Penne. Ferdinand Helmholtz was also a close friend of the philosopher Fichte. The scientific and philosophical worlds of the nineteenth century often seem amazingly small and parochial.


Helmholtz’s natural inclination as a student was to pursue studies in physics – however, his father observed the financial support available for medical students and the lack thereof for physics students, and persuaded him into medical studies. He enrolled in the Friedrich-Wilhelms-Institut in Berlin, the Prussian military’s medical training college. After this, he served as a medical officer in the Prussian military for a time, simultaneously publishing articles on heat and muscle physiology. In 1847 he published his treatise On The Conservation of Force, which was the clearest and ultimately most influential account of what would become known as the principle of the conservation of energy. From his observations of muscle physiology and activity, he tried to demonstrate that there is no energy loss in muscle movement, and that no “life force” is necessary to move a muscle.

In 1848 he left military service and embarked on an academic career. In 1849, he became an associate professor of physiology in Königsberg. Shortly afterwards he announced the invention of the ophthalmoscope and also made another discovery that would seal his fame – measuring the rate of conduction of signals in nerves. It had been believed that sensory signals arrived at the brain instantaneously, and it was considered beyond the capabilities of experimental science to measure the rate of nerve conduction. Using a new invention, the chronograph, Helmholtz measured the difference between stimulus and reaction times at different parts of the body, and found the speed of neural conduction to be comparable to that of sound, not light.
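
The arithmetic behind that measurement is worth sketching. Stimulating at two sites a known distance apart and timing the response from each turns an “instantaneous” signal into a simple difference calculation (the figures below are purely illustrative assumptions of mine, not Helmholtz’s published values):

\[
v = \frac{\Delta d}{\Delta t} \approx \frac{0.3\ \text{m}}{0.010\ \text{s}} = 30\ \text{m/s}
\]

That is, if the response from a site roughly 0.3 metres further from the muscle arrives about 10 milliseconds later, conduction is proceeding at a few tens of metres per second – a very finite, very measurable speed, and nothing like the speed of light.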

A full account of all Helmholtz’s discoveries and scientific achievements would take volumes. He had an intense interest in visual perception, especially visual illusions. This interest was based on his philosophical position that we are separate from the world of objects, and isolated from external physical events, except for perceptual signals which, not unlike language, must be learned and read according to various assumptions. These assumptions may or may not be appropriate. This philosophy underlay many of his research activities and interests, and also his idea that perceptions are “unconscious inferences.”

Most of what goes on in the nervous system, according to Helmholtz, is not represented in consciousness. Psychological and physiological experimental findings often surprise us for this reason, because we cannot discover by introspection how we see or how we think. We derive a perception from incomplete data, hence “unconscious inference.” This idea influenced Freud’s conception of the unconscious, and was taken further by Helmholtz’s student Wilhelm Wundt. Another of his students, Heinrich Hertz, further developed Helmholtz’s work on energy and electrodynamics.

Helmholtz had a huge impact on all areas of perceptual science, and many areas of physics. His name lives on in a variety of laws and concepts (the Helmholtz illusion, Helmholtz free energy, Helmholtz-Kelvin contraction) and in that of an association of research institutes in Germany. And of course, for the humble working ophthalmologist, every day, almost without thinking, Helmholtz’s influence as the originator of the modern ophthalmoscope is literally palpable.

Information underload – Mike Caulfield on the limits of #Watson, #AI and #BigData

From Mike Caulfield, a piece that reminds me of the adage Garbage In, Garbage Out:

For many years, the underlying thesis of the tech world has been that there is too much information and therefore we need technology to surface the best information. In the mid 2000s, that technology was pitched as Web 2.0. Nowadays, the solution is supposedly AI.

I’m increasingly convinced, however, that our problem is not information overload but information underload. We suffer not because there is just too much good information out there to process, but because most information out there is low quality slapdash takes on low quality research, endlessly pinging around the spin-o-sphere.

Take, for instance, the latest news on Watson. Watson, you might remember, was IBM’s former AI-based Jeopardy winner that was going to go from “Who is David McCullough?” to curing cancer.

So how has this worked out? Four years later, Watson has yet to treat a patient. It’s hit a roadblock with some changes in backend records systems. And most importantly, it can’t figure out how to treat cancer because we don’t currently have enough good information on how to treat cancer:

“IBM spun a story about how Watson could improve cancer treatment that was superficially plausible – there are thousands of research papers published every year and no doctor can read them all,” said David Howard, a faculty member in the Department of Health Policy and Management at Emory University, via email. “However, the problem is not that there is too much information, but rather there is too little. Only a handful of published articles are high-quality, randomized trials. In many cases, oncologists have to choose between drugs that have never been directly compared in a randomized trial.”
This is not just the case with cancer, of course. You’ve heard about the reproducibility crisis, right? Most published research findings are false. And they are false for a number of reasons, but primary reasons include that there are no incentives for researchers to check the research, that data is not shared, and that publications aren’t particularly interested in publishing boring findings. The push to commercialize university research has also corrupted expertise, putting a thumb on the scale for anything universities can license or monetize.

In other words, there’s not enough information out there, and what’s out there is generally worse than it should be.

You can find this pattern in less dramatic areas as well — in fact, almost any place that you’re told big data and analytics will save us. Take Netflix as an example. Endless thinkpieces have been written about the Netflix matching algorithm, but for many years that algorithm could only match you with the equivalent of the films in the Walmart bargain bin, because Netflix had a matching algorithm but nothing worth watching. (Are you starting to see the pattern here?)

In this case at least, the story has a happy ending. Since Netflix is a business and needs to survive, they decided not to pour the majority of their money into newer algorithms to better match people with the version of Big Momma’s House they would hate the least. Instead, they poured their money into making and obtaining things people actually wanted to watch, and as a result Netflix is actually useful now. But if you stick with Netflix or Amazon Prime today it’s more likely because you are hooked on something they created than that you are sold on the strength of their recommendation engine.

Let’s belabor the point: let’s talk about Big Data in education. It’s easy to pick on MOOCs, but remember that the big value proposition of MOOCs was that with millions of students we would finally spot patterns that would allow us to supercharge learning. Recommendation engines would parse these patterns, and… well, what? Do we have a bunch of superb educational content just waiting in the wings that I don’t know about? Do we even have decent educational research that can conclusively direct people to solutions? If the world of cancer research is compromised, the world of educational research is a control group wasteland.

Hype, The Life Study and trying to do too much

A while back I reviewed Helen Pearson’s “The Life Project” in the TLS. I had previously blogged on the perils of trying to do too much and on mission creep and overload.

From the original draft of the review (published version differed slightly):

Pearson is laudably clear that the story of the birth cohorts is also a study of failure: the failure of the NHS to improve the inequality of health outcomes between social classes, the failure of educational reforms and re-reforms to bridge the similar academic achievement gap. Indeed, the book culminates in a failure which introduces a darker tone to the story of the birth cohort studies.

Launched in January 2015, the Life Study was supposed to follow 80,000 babies born in 2015 and was intended to be a birth cohort for the “Olympic Children.” It had a government patron in David Willetts, whose departure from politics in May 2015 perhaps set the stage for its collapse. Overstuffed antenatal clinics and a lack of health visitors meant that the Life Study’s participants would have to self-select. The optimistic scenario had 16,000 women signing up in the first eighteen months; in the first six months, 249 women did. By October 2015, just as Pearson was completing five years of work on this book, the study had officially been abandoned.

Along with the cancellation of the National Institutes of Health’s National Children’s Study in December 2014, this made it clear that birth cohorts have been victims of their own success. An understandable tendency to include as much potentially useful information as possible seemed to have created massive, and ultimately unworkable, cohorts. The Life Study would have generated vast data sets: “80,000 babies, warehouses of stool samples of placentas, gigabytes of video clips, several hundred thousand questionnaires and much more” (the history of the 1982 study repeated itself, perhaps). Then there is the recruitment issue. Pregnant women volunteering for the Life Study would “travel to special recruitment centres set up for the study and then spend two hours there, answering questions and giving their samples of urine and blood.” Perhaps the surprise is that 249 pregnant women actually did volunteer for this.

Pearson’s book illustrates how tempting mission creep is. She recounts how birth cohorts went from obscure beginnings, through official neglect and perpetual funding issues, to suddenly becoming a crown jewel of British research. Indeed, as I observe in the review, while relatively few countries have emulated the NHS’s structure and funding model, very many have tried to get on the birth cohort train.

This situation of an understandable enthusiasm and sudden fascination has parallels across health services and research. It is particularly a risk in eHealth and connected health, especially as the systems are inherently complex, and there is a great deal of fashionability to using technology more effectively in healthcare. It is one of those mom-and-apple-pie things, a god term, that can shut down critical thinking at times.

Megaprojects are seductive also in an age where the politics of funding research loom large. The big, “transformative” projects can squeeze out the less ambitious, less hype-y, more human-scale approaches. It can be another version of the Big Man theory of leadership.

Whatever we do, it is made up of a collection of tiny, often implicit actions, attitudes and near-reflexes, and is embedded in some kind of system beyond ourselves that is ultimately made up of other people performing and enacting their own collection of tiny, often implicit actions, attitudes and near-reflexes.

 

“Happy Organisations and Happy Workers” – blog post by Maria Quinlan

On the ARCH (Applied Research in Connected Health) website, research lead Dr Maria Quinlan has a blog post entitled “Happy Organisations and Happy Workers – a key factor in implementing digital health”.

The whole post is worth a read. Of course, having a happy organisation made up of happy workers is inherently important in itself, as well as from the point of view of implementing digital health. As Dr Quinlan writes in the first paragraph:

To paraphrase Tolstoy, “all happy organisations are alike; each unhappy organisation is unhappy in its own way.” The ability for healthcare organisations to innovate is a fundamental requirement for adopting and sustainably scaling digital health solutions.  If an organisation is unhappy, for example if it is failing to communicate openly and honestly, if staff feel overworked and that their opinion isn’t valued, it stands to reason that it will have trouble innovating and handling major complex transitions.

Reading this, I am struck by how important it is to make time for reflection in a day filled with an accumulation of pressing demands:

 

What these factors combine to achieve is happy, engaged workers – and happy workers are more effective, compassionate, and less likely to suffer burnout [2]. Clear objectives, praise, a sense that your voice matters – these can seem like fluffy ‘soft’ concepts and yet they are found over and over to be central to providing the right context within which new digital health innovations can flourish. Classic ‘high involvement’ management techniques – for example empowering team members to make decisions and not punishing them for every misstep are found to be key [1].  As Don Berwick of the Institute of Healthcare Improvement (IHI) says, people who feel joy in work are “not scared of data”, rather “joy is a resource for excellence” [3]

Managing what Sigal Barsade, Professor of Management at Wharton calls the ‘emotional’ culture of an organisation is a very important concept – especially in the healthcare environment which expects so much of staff [4]. Healthcare workers face pressures which many of us working in other fields can’t really comprehend, a recent systematic review found that clinicians have higher rates of suicidal ideation than the general population, with a high prevalence of burnout, psychiatric morbidity and depression linked to excessive workload [5].  Attempting to introduce innovative new ways of working within such constrained environments can be challenging to say the least. Exhausted workers, those with little time in their day for reflection, or those who work in organisations which fear failure are less likely to innovate [6].

Much of the rhetoric around healthcare innovation tends to be messianic in tone. A gap between this rhetoric and the messy, pressured reality of healthcare can diminish the credibility of innovators.

The concept of “adaptive reserve” is an important one, especially in the context of reforms and innovations being introduced into already pressured environments:

Drawing from their work researching healthcare organisations’ ability to handle complex transitions in the US, Jaen et al (2010) developed a 23-item scale measure for what they term ‘adaptive reserve’. Adaptive reserve is an internal capability for change which includes being agile; capable of continuous learning; and being adept at self-assessment, reflection and improvisation. The Adaptive Reserve questionnaire asks staff to rate their organisation according to a variety of statements such as: ‘we regularly take time to consider ways to improve how we do things’ and ‘this organisation is a place of joy and hope’.

Overall, this is a fascinating blog post on an issue which is close to my heart. I intend to post some more on this topic over the next while.