Theranos, hype, fraud, solutionism, and eHealth


Anyone who has had to either give or take a blood sample has surely thought "there must be a better way." The promise of replacing the pain of the needle and the seeming waste of multiple blood vials has an immediate appeal. If there were a technology that could deliver on that promise, it would find an eager audience.

Theranos was one of the hottest health tech startups of the last decade. Indeed, its USP – that existing blood testing could be replaced by a pinprick – would have been a genuinely disruptive one.

Theranos was founded in 2003 by Elizabeth Holmes, then 19 years old, who dropped out of studying engineering at Stanford in order to start the company. In 2015 she was named by Forbes magazine as the youngest self-made female billionaire in history, with an estimated worth of $4.5 billion. In June 2016, Forbes revised its estimate to zero. What happened?

At the time of writing, Holmes has been charged with "massive fraud" by the US Securities and Exchange Commission, and has agreed to pay a $500,000 fine and accept a ban from serving as a company director or officer for ten years. It is unclear if a criminal investigation is also proceeding.
At its height, Theranos had a seemingly stellar team of advisors. The board of directors has included such figures as Henry Kissinger, current US Secretary of Defence James "Mad Dog" Mattis, and various former US Senators and business figures. In early 2016, in response to criticism that, whatever their other qualities, the clinical expertise of Mad Dog Mattis et al. was perhaps light, it announced a medical advisory board including four medical doctors and six professors.

 

Elizabeth Holmes' fall began in October 2015, when the Wall Street Journal's John Carreyrou published an article detailing discrepancies between Theranos' claims and the actual performance of their technology. This followed a Fortune cover story by Roger Parloff, who subsequently wrote a thoughtful piece on how he had been misled, but also on how he had missed a hint that all was not as it seemed.

 

Theranos' claims to be able to perform over 200 different investigations on a pinprick of blood were not borne out, and it turned out that other companies' products were used for the analysis of many samples.

 

The fall of Theranos has led to some soul-searching among the health tech start-up community. Bill Rader, an entrepreneur and Forbes columnist, wrote on "What Entrepreneurs Can Learn From Theranos":

 

     I have been watching first in awe of perceived accomplishments, and then feeling burned, then later vindicated, when the actual facts were disclosed. Don’t get me wrong, I really wanted their efforts to have been both real and successful – they would have changed healthcare for the better. Now, that seems unlikely to be the case.

 

By now, almost everyone has heard of Holmes and her company, and how she built Theranos on hype and secrecy, and pushed investors into a huge, $9 billion valuation. Now everyone in the industry is talking about this and lawsuits are flying.

Just a couple months ago, a Silicon Valley venture capitalist appeared on CNBC’s “Closing Bell” and instead of talking about the elephant in the room, he diverted to a defense strategy for the Theranos CEO.

 

He claimed Elizabeth Holmes had been "totally attacked," and that she is "a great example of maybe why the women are so frustrated."

He also went on to say, “This is a great entrepreneur who wants to change health care as we know it.”

 

The last statement was the strangest thing he said. Wouldn’t we all like to change things for the better? But “wanting” and “doing” are two different things.

 

 

 

Rader’s piece is worth reading for clinicians and IT professionals involved in health technology. The major lesson he draws is the need for transparency. He describes being put under pressure by his own board; why wasn’t he able to raise as much money as Theranos? It transpires that Theranos’ methods may make life more difficult for start-ups in the future, and Rader fears that legitimate health tech may suffer:

 

Nothing good has come of the mess created by Theranos' secrecy, or as some have characterized, deception. The investor has been burned, the patient has been left with unfilled promises (yet again) and life science industry start-ups, like my company, have been left with even more challenges in raising much needed investment. And worst of all, diagnostic start-ups in general are carrying an unearned stigma.

 

In this interesting piece, Christina Farr notes that the biggest biotech and health care venture capital firms did not invest in Theranos, nor did Silicon Valley firms with actual clinical practices attached. As Farr writes, the Theranos story reflects systemic issues in the funding of innovation, and the nature of hype. And one unfortunate consequence may be an excessive focus on Elizabeth Holmes; a charismatic figure lauded unrealistically one moment is ripe to become a scapegoat for all the ills of an industry the next.

 

The "stealth mode" in which Theranos operated for the first ten years of its existence is incompatible with the values of healthcare and of the science on which it is based. Farr points out how unlikely it would be that a biotech firm vetting Theranos would let its lack of peer-reviewed studies pass. The process of peer review and building evidence is key to the modern practice of medicine.

Another lesson is simply to beware of what one wants to be true. As noted above, the idea of Theranos' technology is highly appealing. The company, and Holmes, sailed on an ocean of hype and admiring magazine covers. The rhetoric of disruption and of revolutionising healthcare featured prominently, as the 2014 Fortune magazine cover story reveals.


 

Perhaps one consequence of the Theranos affair will be a healthy scepticism of claims to "revolutionise" health care, and a more robustly questioning attitude to the solutionism that plagues technology discourse in general.

Clinicians and health IT professionals should be open to innovation and new ideas, but they should also hold on to their professional duty to be confident that new technologies will actually benefit the patient.

“They should teach that in school….”

One of the academic studies I haven't had time to pursue (so only blog about) is a thematic analysis of editorials in medical journals – with a focus on the many many "musts", "need tos", "shoulds" and "have tos" imposed on doctors, "policymakers", and so on.

Education is more prone to this, and from a wider group of people. Everyone has their idea of what “they” should teach, ascribing to schools magical powers to end social ills by simply putting something on the curriculum.

Much of this is very worthy and well-intentioned. People want their children to be prepared for life. That the things suggested may not lend themselves to "being on the curriculum" with any degree of effectiveness is rarely considered; nor is the fact that curricula are pretty overloaded anyway.

Anyway, the UK organisation "Parents and Teachers for Excellence" has been keeping track of these "X should be taught in schools" calls so far in 2018:

How often do you hear the phrase “Schools should teach…” in the media?
We’ve noticed that barely a week goes by without a well-meaning person or organisation insisting that something else is added to the curriculum, often without any consideration as to how it could be fitted into an already-squeezed school day. Obviously the curriculum needs to be updated and improved upon over time, and some of the topics proposed are incredibly important. However, there are only so many hours in the school week, and we believe that teachers and schools are the ones best placed to decide what their students need to know, and not have loads of additional things forced on them by government because of lobbying by others.

So far, as of today, this is the list:

So far this year we count 22 suggestions for what schools should do with pupils:
Why We Should Teach School Aged Children About Baby Loss
Make schools colder to improve learning
Schools ‘should help children with social media risk’
Pupils should stand or squat at their desks, celebrity GP says
MP’s call for national anthem teaching in schools to unite country
It’s up to us: heads and teachers must model principled, appropriate and ethical online behaviour
Primary school children need to learn about intellectual property, Government agency says
Call for more sarcasm at school is no joke
Schools should teach more ‘nuanced’ view of feminism, Girls’ School Association president says
Schools ‘should teach children about the dangers of online sexual content’
Schools should teach children resilience to help them in the workplace, new Education Secretary says
Government launches pack to teach pupils ‘importance of the Commonwealth’
Schools must not become like prisons in fight against knife crime, headteacher warns
Schools should teach all pupils first aid, MPs say
Call for agriculture GCSE to be introduced as UK prepares to leave the EU
Councils call for compulsory mental health counselling in all secondary schools
Set aside 15 minutes of dedicated reading time, secondary schools told
Pupils must be taught about architecture, says Gokay Deveci
A serious education on the consequences of obesity is needed for our most overweight generation

Teach girls how to get pregnant, say doctors
Start teaching children the real facts of life

I am confident there are a lot more out there that haven't been brought to PTE's attention. From sarcasm to "how to get pregnant" to first aid to intellectual property to resilience.

I do wish someone would do my study on medical journals’ imperatives for me!
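In the meantime, purely as an illustration of how such a study might begin (and nothing more), here is a minimal sketch in Python. The scrap of "editorial" text and the little list of imperative phrases are both invented for the example; a real study would need a proper corpus and a much fuller lexicon, and probably qualitative coding rather than crude counting.

import re
from collections import Counter

# A made-up scrap of text in the style of a medical journal editorial (purely illustrative).
editorial = (
    "Clinicians must embrace the new guidelines. Policymakers need to act now. "
    "Doctors should routinely screen for burnout, and educators have to adapt."
)

# A toy list of imperative phrases; a real study would need a far fuller lexicon.
patterns = {
    "must": r"\bmust\b",
    "should": r"\bshould\b",
    "need to": r"\bneeds? to\b",
    "have to": r"\bha(?:ve|s) to\b",
}

# Count how often each imperative phrase appears, ignoring case.
counts = Counter({label: len(re.findall(rx, editorial, flags=re.IGNORECASE))
                  for label, rx in patterns.items()})

print(counts)  # e.g. Counter({'must': 1, 'should': 1, 'need to': 1, 'have to': 1})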

Why isn’t William C Campbell more famous in Ireland?

There have been only two Irish winners of Nobel Prizes other than Literature and Peace – Dungarvan-born Ernest Walton for physics in 1951 and Ramelton-born William C Campbell for Physiology or Medicine in 2015.

My memory of being in school in the 1990s was that Ernest Walton loomed fairly large in science popularisation at the time. I recall quite vividly coverage of his death in 1995, but also recall his being quoted and profiled fairly extensively. Of course, I could be a victim of a recall bias – I probably am. Yet it does seem that William C Campbell has not had nearly as much coverage, especially when you consider how media-saturated we are now.

Or perhaps that is the whole point. It feels like a silly comparison, but it may be like the Eurovision; once we cared deeply about winning this competition and getting recognition, now there is a flurry of excitement if we get to the final. Having said that, it isn't like we have had any other science Nobels to get excited about since 1995.

Of course there is a reasonable amount of coverage of Campbell, in the Irish Times in particular, some of it quite recent. A fair percentage of online coverage seems to be from the Donegal papers of the hail-the-local-hero variety, which is fair enough.

A search for 'William Campbell "Irish Independent"' starts with two articles from the Independent on Campbell, then has this, then a range of articles about unrelated topics.

I came across this excellent piece on “the fragile culture of Irish journalism” by Declan Fahy – the fragility exemplified by the coverage of Campbell’s prize:

The reporting of Campbell’s Nobel win illuminated several more general features of Irish media coverage of science. The story originated outside Ireland, yet its local dimension was stressed. Its tone was celebratory. It was not covered by specialist science journalists. Only The Irish Times probed deeper into the background of the scientist and his work.

The story was interesting also because of the aspects of Campbell’s story that were not developed. Reporters did not use the announcement as a jumping-off point to explore some of the novel dimensions of Campbell’s story, such as the rights and wrongs of pharmaceutical companies’ ownership of drugs that could help millions of the world’s poorest people, the unseen research work of an industry-based scientist, and the complex case of a scientist of faith with an admitted “complicated sense of religion”.

The superficial reporting of the Campbell story is not an isolated case. It reflects more generally the state of Irish science journalism, where there are few dedicated science journalists, a shortfall of science coverage compared to other countries, a neglect of science policy coverage, a reliance on one outlet for sustained coverage, a dependence on subsidies for the production of some forms of journalistic content, and a dominant style of reporting that lacks a critical edge.

(In passing, Walton was also a scientist of faith, although perhaps with a less "complicated sense of religion" than Campbell.)

Fahy goes on, in what is an extract from a book he co-edited, "Little Country, Big Talk", to enumerate some of the issues, both within the structure of media institutions and within Irish society and culture overall, which contribute to this relative neglect. While there is an Irish Science and Technology Journalists Association, there is not a critical mass of science journalists. Writing in 2017, Fahy observes:

Compared to the US and UK, Ireland has a far less developed culture of science journalism. There are currently no full-time science journalists in mainstream Irish newspapers and broadcasters. The Irish Times had a dedicated science editor in Dick Ahlstrom, who has now retired (and, during his tenure, he had other significant editorial duties at the news organisation).

The Irish Times also had a longtime environmental correspondent, Frank McDonald, who retired in recent years. Earlier this year, former editor Kevin O’Sullivan combined these two roles, becoming environment and science editor. The paper also has a health correspondent and a specialist medical writer. The Irish Independent has an environment editor, Paul Melia.

The public service broadcaster, RTÉ, has had specialists in science or technology, but its correspondents have usually had dual briefs, reporting on education or health as well as science, and tending to cover education or health more so than science. That tendency, identified by Brian Trench in 2007’s Mapping Irish Media, has continued. In 2016, the incumbent in the role is responsible for science and technology, and tends to cover technology more than science.

Fahy also discusses the wider place of science in Irish culture and society. There are many many fascinating stories to tell about science in Ireland, such as Erwin Schrodinger’s time here (perhaps illustrative of Fahy’s point is that the very first Google result for “Schrodinger in Ireland” is this) and the many many stories collected by Mary Mulvihill in Ingenious Ireland. As I have just posted on Seamus Sweeney, I only learnt while researching this post that Mary Mulvihill died in 2015.

Of course, some of these stories can be told with a celebratory, or I-can't-believe-this-happened-in-little-auld-Ireland focus, which again illustrates Fahy's point. My own perception is that in 1995 the situation was actually a little better than it is now – that Irish science journalism is not in stasis but actually in reverse.

One striking point made by Fahy is that the science beat is often combined with health or technology – and these tend to win out in terms of focus. And the hard, critical questions don't tend to get asked – often there is a strong bang of barely rewritten press release about articles on science topics.

Another thought – the retirement of Dick Ahlstrom and the death of Mary Mulvihill alone robbed the already small pool of Irish science writers of some of its finest practitioners. Irish journalism – like Irish anything – is pretty much a small world, and a couple of such losses can have a huge impact.

The myth of digital natives and health IT 

I have a post on the CCIO website on the Digital Native myth and Health IT

The opening paragraph: 

We hear a lot about digital natives. They are related to the similarly much-mentioned millennials; possibly they are exactly the same people (although as I am going to argue that digital natives do not exist, perhaps millennials will also disappear in a puff of logic). Born after 1980, or maybe after 1984, or maybe after 1993, or maybe after 2007, or maybe after 2010, the digital native grew up with IT, or maybe grew up with the internet, or grew up with social media, or at any rate grew up with something that the prior generation – the "digital immigrants" (born a couple of years before the first cut off above, that's where I am too) – didn't.

A Way Out of Burnout: Cultivating Differentiated Leadership Through Lament

Some interesting (and provocative) thoughts from the world of church leadership. “Lament” is not prominent in our culture anymore, at least not in our official culture… and one could wonder how to translate these ideas into a secular setting. Nevertheless, there is much to ponder here and I would feel that all in leadership positions – or roles susceptible to burnout – could benefit from reading this, whether they have religious faith or not.

I found that the following paragraphs (of what is a long paper) especially resonated:

 

Leaders who are most likely to function poorly physically or emotionally are those who have failed to maintain a well-differentiated position. Either they have accepted the blame owing to irresponsibility and constant criticism of others, or they have gotten themselves into an overfunctioning position (that is, they tried too hard) and rushed in where angels and fools both fear to tread.[12]

Many programs often aim to cure clergy burnout by offering retreats that focus on rest and relaxation. However, Friedman asserts, “Resting and refreshment do not change triangles. Furthermore, because these programs focus on the burned-out ‘family’ member, they can actually add to his or her burden if such individuals are inclined to be soul searchers to begin with.”[13] These same soul-searching and empathetic clergy are vulnerable to seeing the overwhelming burdens that they carry for others as crosses that they ought to bear. Friedman calls this way of thinking “sheer theological camouflage for an ineffective immune system.”[14] When clergy bear other people’s burdens, they are encouraging others not to take personal responsibility. And often in bearing other people’s burdens, clergy easily tend to ignore their own “burdens” (ie. marriage issues, financial problems, etc.) and thus fail to be personally responsible for themselves.

 

London also discusses how "lament" – and in some ways "passing the buck onto God" – has Biblical roots:

God responds with sympathy to Jesus' ad deum accusation and lament. Furthermore, one may easily interpret the empty tomb at the end of the Gospel as a sign of God's ultimate response to Jesus' lament: the resurrection (Mark 16:4-7). In the psalms of lament and in the cry of dereliction, we see that God does not respond with hostility but with a sympathetic openness to our struggle, our need for someone to blame and, in the words of Walter Brueggemann, our "genuine covenant interaction."[34] God responds with sympathetic openness to Jesus' ad deum accusation and then dispels the blame and emotional burden that no human could ever bear. Jesus receives the blame that humans cast upon him and then gives it to God who receives it, absorbs it and dispels it. Jesus let go of the blame by giving it to God. His cry of dereliction became his cry for differentiation. In this way, Jesus serves as a role model for leaders who receive blame from others and then need to differentiate in order to not take accusations personally. By practicing lament, leaders can turn the ad hominem accusations against themselves into ad deum accusations against God, who responds with sympathetic openness while receiving and dispelling the blame. Moreover, leaders can respond with empathy to the suffering of others, knowing that they will not have to bear the emotional burden that they have taken on, indefinitely. They can let go of the emotional burden by passing it on to God through the practice of lament.

This "passing of the buck" to God does not encourage irresponsibility. Rather, it gives the emotional baggage away to the only One who can truly bear it, thus freeing the other to take personal responsibility, without feeling weighed down by unbearable burdens. With this practice, a pastor can therefore receive blame and emotional baggage from parishioners in a pastoral setting because they can differentiate through lament. They can take the blame like Jesus because they, like Jesus, can also pass the buck to God through ad deum accusation. Eventually, the pastor will want to teach the parishioners to redirect their human need to blame onto God as well so as to occlude the cycle of scapegoating in the community.

 

DANIEL DeFOREST LONDON

This is the final paper I wrote for the class “Leading Through Lament” with Dr. Donn Morgan at the Church Divinity School of the Pacific.

INTRODUCTION

On August 1, 2010, New York Times published an article titled “Taking a Break From the Lord’s Work,” which began with the following statements:  “Members of the clergy now suffer from obesity, hypertension and depression at rates higher than most Americans. In the last decade, their use of antidepressants has risen, while their life expectancy has fallen. Many would change jobs if they could.”[1] Although these are troubling reports, some of the statistics that came out of a study conducted by Fuller Theological Seminary in the late 1980s prove more disturbing: “80 percent [of pastors] believe that pastoral ministry is affecting their families negatively, 90 percent felt they were not adequately trained to cope with the ministry demands placed upon them, 70 percent…


“Working here makes us better humans”

A daily thought from Leandro Herrero:

I have had a brilliant two day meeting with a brilliant client. One aspect of my work with organizations that I truly enjoy is to help craft the 'Behavioural DNA' that shapes the culture of the company. This is a set of actionable behaviours that must be universal, from the CEO to the MRO (Mail Room Officer). They also need to pass the 'new hire test': would you put that list in front of a prospect employee and say 'This is us'?

There was one ‘aspirational’ sentence that I put to the test: ‘Working here makes us better human beings’.

It was met with scepticism by the large group in the meeting, initially mainly manifested through body language including the, difficult to describe, cynical smiles. The rationalists in the group jumped in hard to ‘corporatize’ the sentence. ‘Do you mean better professionals?’ The long discussion had started. Or, perhaps, ‘do you mean…’ – and here the full blown corporate Academy of Language – from anything to do with skills, talent management, empowerment to being better managers, being better leaders, and so on.

‘No, I mean better human beings. Period!’- I pushed back. Silence.

Next stage was the litany of adjectives coming from the collective mental thesaurus: fluffy, fuzzy, soft, vague…

I felt compelled to reframe the question: 'OK, so who is against working in a place that makes you inhuman?' Everybody. 'So who is against working in a place that makes you more human?' Nobody. But still the defensive smiling.

It went on for a while until the group, 'organically', by the collective hearing of pros and cons, turned 180 degrees until everybody agreed that 'Working in a place that makes you a better human being' was actually very neat. But – there was a but – 'Our leadership team won't like it. They will say that it's fluffy, fuzzy, soft etc…' In the words of the group, it was not 'them' anymore who had a problem, it was the infamous 'they'.

The “difficult to describe” cynical smiles are familiar…. indeed I am sure I have perpetrated such smiles more than once myself!

Medicine can be a dehumanising profession, sometimes literally. Dehumanising in both directions – of patients, especially some categories of patient, and of colleagues, but also of ourselves. Of course, the rationalist part of us can pick apart what "better humans" means…

“a tendency to overhype fixes that later turn out to be complete turkeys”

An interesting passage on the contemporary dynamics of the quick fix, from “The Slow Fix: Solve Problems, Work Smarter and Live Better in a Fast World” by Carl Honore:

“The media add fuel to that fire. When anything goes wrong – in politics, business, a celebrity relationship – journalists pounce, dissecting the crisis with glee and demanding an instant remedy. When golfer Tiger Woods was outed as a serial philanderer, he vanished from the public eye for three months before finally breaking his silence to issue a mea culpa and announce he was in therapy for sex addiction. How did the media react to being made to wait that long? With fury and indignation. The worst sin for a public figure on the ropes is to fail to serve up an instant exit strategy.

“That impatience fuels a tendency to overhype fixes that later turn out to be complete turkeys. An engineer by training, Marco Petruzzi worked as a globetrotting management consultant for 15 years before abandoning the corporate world to build better schools for the poor in the United States. We will meet him again later in the book, but for now consider his attack on our culture of hot air. ‘In the past, hard-working entrepreneurs developed amazing stuff over time, and they did it, they didn’t just talk about it, they did it,’ he says. ‘We live in a world now where talk is cheap and bold ideas can create massive wealth without ever having to deliver. There are multi-billionaires out there who never did anything but capture the investment cycle and the spin cycle at the right moment, which just reinforces a culture where people don’t want to put in the time and effort to come up with real and lasting solutions to problems. Because if they play their cards right, and don’t worry about the future, they can get instant financial returns’

“The myths of the digital native and the multitasker”

One common rhetorical device heard in technology circles – including eHealth circles – is the idea that those born after 1980, or maybe 1984, or maybe 1993, or maybe 2000, or maybe 2010 (you get the picture) are "digital natives" – everyone else is a "digital immigrant". In the current edition of Teaching and Teacher Education, Kirschner and De Bruyckere have an excellent paper on this myth, and the related myth of multitasking.

The "highlights" of the paper (I am not sure if these are selected by the authors or by the editors – UPDATE: see comment by Paul Kirschner below!) are very much to the point:

Highlights

Information-savvy digital natives do not exist.

Learners cannot multitask; they task switch which negatively impacts learning.

Educational design assuming these myths hinders rather than helps learning.

The full article is online via subscription or library access, and this recent post on the Nature blog discusses this paper and others on this myth. This is Kirschner and De Bruyckere's abstract:

Current discussions about educational policy and practice are often embedded in a mind-set that considers students who were born in an age of omnipresent digital media to be fundamentally different from previous generations of students. These students have been labelled digital natives and have been ascribed the ability to cognitively process multiple sources of information simultaneously (i.e., they can multitask). As a result of this thinking, they are seen by teachers, educational administrators, politicians/policy makers, and the media to require an educational approach radically different from that of previous generations. This article presents scientific evidence showing that there is no such thing as a digital native who is information-skilled simply because (s)he has never known a world that was not digital. It then proceeds to present evidence that one of the alleged abilities of students in this generation, the ability to multitask, does not exist and that designing education that assumes the presence of this ability hinders rather than helps learning. The article concludes by elaborating on possible implications of this for education/educational policy.

The paper is one of those trenchantly entertaining ones academia throws up every so often. For instance, here are the authors on the origins of the "digital native" terminology (and "homo zappiens", a new one on me):

According to Prensky (2001), who coined the term, digital natives constitute an ever-growing group of children, adolescents, and nowadays young adults (i.e., those born after 1984; the official beginning of this generation) who have been immersed in digital technologies all their lives. The mere fact that they have been exposed to these digital technologies has, according to him, endowed this growing group with specific and even unique characteristics that make its members completely different from those growing up in previous generations. The name given to those born before 1984 – the year that the 8-bit video game saw the light of day, though others use 1980 – is digital immigrant. Digital natives are assumed to have sophisticated technical digital skills and learning preferences for which traditional education is unprepared and unfit. Prensky coined the term, not based upon extensive research into this generation and/or the careful study of those belonging to it, but rather upon a rationalisation of phenomena and behaviours that he had observed. In his own words, he saw children "surrounded by and using computers, videogames, digital music players, video cams, cell phones, and all the other toys and tools of the digital age" (2001, p.1). Based only upon these observations, he assumed that these children understood what they were doing, were using their devices effectively and efficiently, and based upon this that it would be good to design education that allows them to do this. Prensky was not alone in this. Veen and Vrakking (2006), for example, went a step further, coining the catchy name homo zappiens to refer to a new breed of learners that has developed – without either help from or instruction by others – those metacognitive skills necessary for enquiry-based learning, discovery-based learning, networked learning, experiential learning, collaborative learning, active learning, self-organisation and self-regulation, problem solving, and making their own implicit (i.e., tacit) and explicit knowledge explicit to others.

The saw that children are invariably more tech savvy than their parents is also a myth:

Looking at pupils younger than university students, the large-scale EU Kids Online report (Livingstone, Haddon, Görzig, & Ólafsson, 2011) placed the term 'digital native' in first place on its list of the ten biggest myths about young people and technology. They state: "Children knowing more than their parents has been exaggerated … Talk of digital natives obscures children's need for support in developing digital skills" and that "… only one in five [children studied] used a file-sharing site or created a pet/avatar and half that number wrote a blog … While social networking makes it easier to upload content, most children use the internet for ready-made, mass produced content" (p. 42). While the concept of the digital native explicitly and/or implicitly assumes that the current generation of children is highly digitally literate, it is then rather strange to note that many curricula in many countries on many continents (e.g., North America, Europe) see information and technology literacy as 21st century skills that are core curriculum goals at the end of the educational process and that need to be acquired.

Two more recent studies show that the supposed digital divide is a myth in itself. A study carried out by Romero, Guitert, Sangrà, and Bullen (2013) found that it was, in fact, older students (>30 years and thus born before 1984) who exhibited the characteristics attributed to digital natives more than their younger counterparts. In their research, 58% of their students were older than 30 years who "show the characteristics of this [Net Generation profile] claimed by the literature because, on analysing their habits, they can be labelled as ICT users more than digital immigrants" (p. 176). In a study on whether digital natives are more 'technology savvy' than their middle school science teachers, Wang, Hsu, Campbell, Coster, and Longhurst (2014) conclude that this is not the case.

The authors are not arguing that curricula and teaching methods do not need to change and evolve, but that the myth of the digital native should not be the reason for doing so:

Finally, this non-existence of digital natives makes clear that one should be wary about claims to change education because this generation of young people is fundamentally different from previous generations of learners in how they learn/can learn because of their media usage (De Bruyckere, Hulshof, & Kirschner, 2015). The claim of the existence of a generation of digital natives, thus, cannot be used as either a motive or an excuse to implement pedagogies such as enquiry-based learning, discovery-based learning, networked learning, experiential learning, collaborative learning, active learning, self-organisation and self-regulation or problem solving as Veen and Vrakking (2006) argued. This does not mean education should neither evolve nor change, but rather that proposed changes should be evidence informed both in the reasons for the change and the proposed changes themselves, something that 'digital natives' is not.

The non-existence of digital natives is definitely not the 'reason' why students today are disinterested at and even 'alienated' by school. This lack of interest and alienation may be the case, but the causes stem from quite different things such as the fact that diminished concentration and the loss of the ability to ignore irrelevant stimuli may be attributed to constant task switching between different devices (Loh & Kanai, 2016; Ophir, Nass, & Wagner, 2009; Sampasa-Kanyinga & Lewis, 2015). This, however, is the topic of a different article.

The paper also deals with multitasking. Firstly, they examine the nature of attention. "Multitasking" is an impossibility from this point of view, unless the tasks are automatic behaviours. They cite a range of research which, unsurprisingly enough, links heavy social media usage (especially with the user instantly replying to stimuli) with poorer educational outcomes:

Ophir et al. (2009), in a study in which university students who identified themselves as proficient multitaskers were asked to concentrate on rectangular stimuli of one colour on a computer monitor and ignore irrelevant stimuli entering their screen of a different colour, observed that

heavy media multitaskers are more susceptible to interference from irrelevant environmental stimuli and from irrelevant representations in memory. This led to the surprising result that heavy media multitaskers performed worse on a test of task-switching ability, likely because of reduced ability to filter out interference from the irrelevant task set (p. 15583).

Ophir et al. (2009) concluded that, faced with distractors, heavy multitaskers were slower in detecting changes in visual patterns, were more susceptible to false recollections of the distractors during a memory task, and were slower in task-switching. Heavy multitaskers were less able than light/occasional multitaskers to volitionally restrain their attention only to task-relevant information.

The authors specifically urge caution about the drive for students to bring their own devices to school.

Why is this paper so important? As the authors show (and the author of the Nature blog post linked to above also observes), this is not a new finding. There are many pieces out there, both academic and journalistic, on the myth of the digital native. This paper specifically locates the discussion in education and in teacher training (they say much also on the issue of supposedly "digital native" teachers) and is a trenchant warning about the magical thinking that has grown up around technology.

There are obvious parallels with health and technology. The messianic, evangelical approach to healthtech is replete with its own assumptions about digital natives, and magical thinking about how easily they navigate online worlds. Using a handful of social media tools or apps with visual interactive systems does not translate into a deep knowledge of the online world, or indeed a wisdom about it (or anything else).

What's Love Got to Do with It? A Longitudinal Study of the Culture of Companionate Love and Employee and Client Outcomes in a Long-term Care Setting, Barsade and O'Neill, 2014

I have blogged before about the relationship between morale and clinical outcomes. From 2014 in Administrative Science Quarterly, a paper which links this with another interest of mine, workplace friendships.


Here is the abstract:

In this longitudinal study, we build a theory of a culture of companionate love—feelings of affection, compassion, caring, and tenderness for others—at work, examining the culture’s influence on outcomes for employees and the clients they serve in a long-term care setting. Using measures derived from outside observers, employees, family members, and cultural artifacts, we find that an emotional culture of companionate love at work positively relates to employees’ satisfaction and teamwork and negatively relates to their absenteeism and emotional exhaustion. Employees’ trait positive affectivity (trait PA)—one’s tendency to have a pleasant emotional engagement with one’s environment—moderates the influence of the culture of companionate love, amplifying its positive influence for employees higher in trait PA. We also find a positive association between a culture of companionate love and clients’ outcomes, specifically, better patient mood, quality of life, satisfaction, and fewer trips to the emergency room. The study finds some association between a culture of love and families’ satisfaction with the long-term care facility. We discuss the implications of a culture of companionate love for both cognitive and emotional theories of organizational culture. We also consider the relevance of a culture of companionate love in other industries and explore its managerial implications for the healthcare industry and beyond.

Few outcomes are as "hard" – or as appealing to a certain strand of management – as "fewer trips to the emergency room." The authors squarely and unashamedly go beyond the often euphemistic language of this kind of paper to focus on love:

"Love" is a word rarely found in the modern management literature, yet for more than half a century, psychologists have studied companionate love—defined as feelings of affection, compassion, caring, and tenderness for others—as a basic emotion fundamental to the human experience (Walster and Walster, 1978; Reis and Aron, 2008). Companionate love is a far less intense emotion than romantic love (Hatfield and Rapson, 1993, 2000); instead of being based on passion, it is based on warmth, connection (Fehr, 1988; Sternberg, 1988), and the "affection we feel for those with whom our lives are deeply intertwined" (Berscheid and Walster, 1978: 177). Unlike self-focused positive emotions (such as pride or joy), which center on independence and self-orientation, companionate love is an other-focused emotion, promoting interdependence and sensitivity toward other people (Markus and Kitayama, 1991; Gonzaga et al., 2001).

Companionate love is therefore distinct from the romantic love which so dominates our thinking about love. As is often the case, we moderns are not nearly as new in our thinking as we would like to believe:

Considering the large proportion of our lives we spend with others at work (U.S. Bureau of Labor Statistics, 2011), the influence of companionate love in other varied life domains (Shaver et al., 1987), and the growing field of positive organizational scholarship, which focuses on human connections at work (Rynes et al., 2012), it is reasonable to expect that this basic human emotion will not only exist at work but that it will also influence workplace outcomes. Although the term "companionate love" had not yet been coined, the work of early twentieth-century organizational scholars revealed rich evidence of deep connections between workers involving the feelings of affection, caring, and compassion that comprise companionate love. Hersey's (1932) daily experience sampling study of Pennsylvania Railroad System employees, for example, recorded the importance of caring, affection, compassion, and tenderness, as well as highlighting the negative effects when these emotions were absent, particularly in relationships with foremen. Similarly, Roethlisberger and Dickson's (1939) detailed study of factory life provided crisp observations of companionate love in descriptions of workers' interactions, describing supervisors who showed genuine affection, care, compassion, and tenderness toward their employees.

There is nothing new under the sun. In subsequent decades this kind of research was abandoned.  The authors go on to describe the distinctions between strong and weak cultures of companionate love:

Like the concept of cognitive organizational culture, a culture of companionate love can be characterized as strong or weak. To picture a strong culture of companionate love, first imagine a pair of coworkers collaborating side by side, each day expressing caring and affection toward one another, safeguarding each other's feelings, showing tenderness and compassion when things don't go well, and supporting each other in work and non-work matters. Then expand this image to an entire network of dyadic and group interactions so that this type of caring, affection, tenderness, and compassion occurs frequently within most of the dyads and groups throughout the entire social unit: a clear picture emerges of a culture of companionate love. Such a culture involves high "crystallization," that is, pervasiveness or consensus among employees in enacting the culture (Jackson, 1966).

An example of high crystallization appears in a qualitative study of social workers (Kahn, 1993) in which compassion spreads through the network of employees in a "flow and reverse flow" of the emotion from employees to one another and to supervisors and back. This crystallization of companionate love can cross organizational levels; for example, an employee at a medical center described the pervasiveness of companionate love throughout the unit: "We are a family. When you walk in the door, you can feel it. Everyone cares for each other regardless of whatever level you are in. We all watch out for each other" (http://auroramed.dotcms.org/careers/employee_voices.htm). Words like "all" and "everyone" in conjunction with affection, caring, and compassion are hallmarks of a high crystallization culture of companionate love.

Another characteristic of a strong culture of companionate love is a high degree of displayed intensity (Jackson, 1966) of emotional expression of affection, caring, compassion, and tenderness. This can be seen in the example of an employee diagnosed with multiple sclerosis who described a work group whose members treated her with tremendous companionate love during her daily struggles with the condition. "My coworkers showed me more love and compassion than I would ever have imagined. Do I wish that I didn't have MS? Of course. But would I give up the opportunity to witness and receive so much love? No way" (Lilius et al., 2003: 23).

In weak cultures of companionate love, expressions of affection, caring, compassion, or tenderness among employees are minimal or non-existent, showing both low intensity and low crystallization. Employees in cultures low in companionate love show indifference or even callousness toward each other, do not offer or expect the emotions that companionate love comprises when things are going well, and do not allow room to deal with distress in the workplace when things are not going well. In a recent hospital case study, when a nurse with 30 years of tenure told her supervisor that her mother-in-law had died, her supervisor responded not with compassion or even sympathy, but by saying, "I have staff that handles this. I don't want to deal with it" (Lilius et al., 2008: 209). Contrast this reaction with one from the billing unit of a health services organization in which an employee described her coworkers' reactions following the death of her mother: "I did not expect any of the compassion and sympathy and the love, the actual love that I got from co-workers" (Lilius et al., 2011: 880).

This is obviously a paper I could simply post extracts from all day, but at this point I will desist. Perhaps rather than "What's Love Got to Do With It?" the authors could have invoked "All You Need is Love"?

Information underload – Mike Caulfield on the limits of #Watson, #AI and #BigData

From Mike Caulfield, a piece that reminds me of the adage Garbage In, Garbage Out:

For many years, the underlying thesis of the tech world has been that there is too much information and therefore we need technology to surface the best information. In the mid 2000s, that technology was pitched as Web 2.0. Nowadays, the solution is supposedly AI.

I’m increasingly convinced, however, that our problem is not information overload but information underload. We suffer not because there is just too much good information out there to process, but because most information out there is low quality slapdash takes on low quality research, endlessly pinging around the spin-o-sphere.

Take, for instance, the latest news on Watson. Watson, you might remember, was IBM’s former AI-based Jeopardy winner that was going to go from “Who is David McCullough?” to curing cancer.

So how has this worked out? Four years later, Watson has yet to treat a patient. It’s hit a roadblock with some changes in backend records systems. And most importantly, it can’t figure out how to treat cancer because we don’t currently have enough good information on how to treat cancer:

"IBM spun a story about how Watson could improve cancer treatment that was superficially plausible – there are thousands of research papers published every year and no doctor can read them all," said David Howard, a faculty member in the Department of Health Policy and Management at Emory University, via email. "However, the problem is not that there is too much information, but rather there is too little. Only a handful of published articles are high-quality, randomized trials. In many cases, oncologists have to choose between drugs that have never been directly compared in a randomized trial."

This is not just the case with cancer, of course. You've heard about the reproducibility crisis, right? Most published research findings are false. And they are false for a number of reasons, but primary reasons include that there are no incentives for researchers to check the research, that data is not shared, and that publications aren't particularly interested in publishing boring findings. The push to commercialize university research has also corrupted expertise, putting a thumb on the scale for anything universities can license or monetize.

In other words, there’s not enough information out there, and what’s out there is generally worse than it should be.

You can find this pattern in less dramatic areas as well — in fact, almost any place that you’re told big data and analytics will save us. Take Netflix as an example. Endless thinkpieces have been written about the Netflix matching algorithm, but for many years that algorithm could only match you with the equivalent of the films in the Walmart bargain bin, because Netflix had a matching algorithm but nothing worth watching. (Are you starting to see the pattern here?)

In this case at least, the story has a happy ending. Since Netflix is a business and needs to survive, they decided not to pour the majority of their money into newer algorithms to better match people with the version of Big Momma’s House they would hate the least. Instead, they poured their money into making and obtaining things people actually wanted to watch, and as a result Netflix is actually useful now. But if you stick with Netflix or Amazon Prime today it’s more likely because you are hooked on something they created than that you are sold on the strength of their recommendation engine.

Let’s belabor the point: let’s talk about Big Data in education. It’s easy to pick on MOOCs, but remember that the big value proposition of MOOCs was that with millions of students we would finally spot patterns that would allow us to supercharge learning. Recommendation engines would parse these patterns, and… well, what? Do we have a bunch of superb educational content just waiting in the wings that I don’t know about? Do we even have decent educational research that can conclusively direct people to solutions? If the world of cancer research is compromised, the world of educational research is a control group wasteland.