Theranos, hype, fraud, solutionism, and eHealth


Anyone who has had to either give or take a blood sample has surely thought “there must be a better way.” The promise of replacing the pain of the needle and the seeming waste of multiple blood vials has an immediate appeal. If there were a technology that could deliver this, who wouldn’t want it to be real?

Theranos was one of the hottest health tech startups of the last decade. Indeed, its USP – that existing blood testing could be replaced by a pinprick – would have been a genuinely disruptive one.

Theranos was founded in 2003 by Elizabeth Holmes, then 19 years old, who dropped out of studying engineering at Stanford in order to start the company. In 2015 she was named by Forbes magazine as the youngest self-made female billionaire in history, with an estimated worth of $4.5 billion. In June 2016, Forbes revised its estimate to zero. What happened?

At the time of writing, Holmes has been charged with “massive fraud” by the US Securities and Exchange Commission, and has agreed to pay a $500,000 fine and accept a ten-year ban from serving as a company director or officer. It is unclear whether a criminal investigation is also under way.
At its height, Theranos had a seemingly stellar team of advisors. The board of directors has included such figures as Henry Kissinger, current US Secretary of Defence James “Mad Dog” Mattis, various former US Senators and business figures. In early 2016, in response to criticism that, whatever their other qualities, the clinical expertise of Mad Dog Mattis et al was perhaps light, the company announced a medical advisory board including four medical doctors and six professors.


Elizabeth Holmes’ fall began in October 2015, when the Wall Street Journal’s John Carreyrou published an article detailing discrepancies between Theranos’ claims and the actual performance of its technology. Carreyrou’s reporting contrasted sharply with an earlier Fortune cover story by Roger Parloff, who subsequently wrote a thoughtful piece on how he had been misled, and on how he had missed a hint that all was not as it seemed.


Theranos’ claims to be able to perform over 200 different investigations on a pinprick of blood were not borne out; and it turned out that other companies’ products were used for the analysis of many samples.


The fall of Theranos has led to some soul-searching among the health tech start-up community. Bill Rader, an entrepreneur and columnist at Forbes, wrote on What Entrepreneurs Can Learn From Theranos:


     I have been watching first in awe of perceived accomplishments, and then feeling burned, then later vindicated, when the actual facts were disclosed. Don’t get me wrong, I really wanted their efforts to have been both real and successful – they would have changed healthcare for the better. Now, that seems unlikely to be the case.


     By now, almost everyone has heard of Holmes and her company, and how she built Theranos on hype and secrecy, and pushed investors into a huge, $9 billion valuation. Now everyone in the industry is talking about this and lawsuits are flying.

     Just a couple months ago, a Silicon Valley venture capitalist appeared on CNBC’s “Closing Bell” and instead of talking about the elephant in the room, he diverted to a defense strategy for the Theranos CEO.

     He claimed Elizabeth Holmes had been “totally attacked,” and that she is “a great example of maybe why the women are so frustrated.”

     He also went on to say, “This is a great entrepreneur who wants to change health care as we know it.”

     The last statement was the strangest thing he said. Wouldn’t we all like to change things for the better? But “wanting” and “doing” are two different things.


Rader’s piece is worth reading for clinicians and IT professionals involved in health technology. The major lesson he draws is the need for transparency. He describes being put under pressure by his own board: why wasn’t he able to raise as much money as Theranos? It transpires that Theranos’ methods may make life more difficult for start-ups in the future, and Rader fears that legitimate health tech may suffer:


Nothing good has come of the mess created by Theranos secrecy, or as some have characterized, deception. The investor has been burned, the patient has been left with unfilled promises (yet again) and life science industry start-ups, like my company, have been left with even more challenges in raising much needed investment. And worst of all, diagnostic start-ups in general are carrying an unearned stigma.


In this interesting piece, Christina Farr notes that the biggest biotech and health care venture capital firms did not invest in Theranos, nor did the Silicon Valley firms with actual clinical practices attached. As Farr writes, the Theranos story reflects systemic issues in the funding of innovation, and the nature of hype. One unfortunate consequence may be an excessive focus on Elizabeth Holmes herself; a charismatic figure lauded unrealistically at one moment is ripe to become a scapegoat for all the ills of an industry the next.


The “stealth mode” in which Theranos operated for the first ten years of its existence is incompatible with the values of healthcare and of the science on which it is based. Farr points out how unlikely it is that a biotech firm vetting Theranos would have let its lack of peer-reviewed studies pass. The process of peer review and of building evidence is key to the modern practice of medicine.

Another lesson is simply to beware of what one wants to be true. As noted above, the idea behind Theranos’ technology is highly appealing. The company, and Holmes, sailed on an ocean of hype and admiring magazine covers. The rhetoric of disrupting and revolutionising healthcare featured prominently, as the 2014 Fortune magazine cover story reveals.



Perhaps a healthy scepticism of claims to “revolutionise” health care will be one consequence of the Theranos affair, and a more robustly questioning attitude to the solutionism that plagues technology discourse in general.

Clinicians and health IT professionals should be open to innovation and new ideas, but should also hold on to their professional duty to be confident that new technologies will actually benefit the patient.

“The myths of the digital native and the multitasker”

One common rhetorical device heard in technology circles – including eHealth circles – is the idea that those born after 1980, or maybe 1984, or maybe 1993, or maybe 2000, or maybe 2010 (you get the picture) are “digital natives”, while everyone else is a “digital immigrant”. In the current edition of Teaching and Teacher Education, Kirschner and de Bruyckere have an excellent paper on this myth, and on the related myth of multitasking.

The “highlights” of the paper (I am not sure if these are selected by the authors or by the editors – UPDATE: see comment by Paul Kirschner below!) are pretty much to the point:

Highlights

Information-savvy digital natives do not exist.

Learners cannot multitask; they task switch which negatively impacts learning.

Educational design assuming these myths hinders rather than helps learning.

The full article is available online via subscription or library access, and this recent post on the blog of Nature discusses this paper and others on this myth. This is Kirschner and de Bruyckere’s abstract:

Current discussions about educational policy and practice are often embedded in a mind-set that considers students who were born in an age of omnipresent digital media to be fundamentally different from previous generations of students. These students have been labelled digital natives and have been ascribed the ability to cognitively process multiple sources of information simultaneously (i.e., they can multitask). As a result of this thinking, they are seen by teachers, educational administrators, politicians/policy makers, and the media to require an educational approach radically different from that of previous generations. This article presents scientific evidence showing that there is no such thing as a digital native who is information-skilled simply because (s)he has never known a world that was not digital. It then proceeds to present evidence that one of the alleged abilities of students in this generation, the ability to multitask, does not exist and that designing education that assumes the presence of this ability hinders rather than helps learning. The article concludes by elaborating on possible implications of this for education/educational policy.

The paper is one of those trenchantly entertaining ones academia throws up every so often. For instance, here are the authors on the origins of the “digital native” terminology (and of “homo zappiens”, a new one on me):

According to Prensky (2001), who coined the term, digital natives constitute an ever-growing group of children, adolescents, and nowadays young adults (i.e., those born after 1984; the official beginning of this generation) who have been immersed in digital technologies all their lives. The mere fact that they have been exposed to these digital technologies has, according to him, endowed this growing group with specific and even unique characteristics that make its members completely different from those growing up in previous generations. The name given to those born before 1984 – the year that the 8-bit video game saw the light of day, though others use 1980 – is digital immigrant. Digital natives are assumed to have sophisticated technical digital skills and learning preferences for which traditional education is unprepared and unfit. Prensky coined the term, not based upon extensive research into this generation and/or the careful study of those belonging to it, but rather upon a rationalisation of phenomena and behaviours that he had observed. In his own words, he saw children “surrounded by and using computers, videogames, digital music players, video cams, cell phones, and all the other toys and tools of the digital age” (2001, p.1). Based only upon these observations, he assumed that these children understood what they were doing, were using their devices effectively and efficiently, and based upon this that it would be good to design education that allows them to do this. Prensky was not alone in this. Veen and Vrakking (2006), for example, went a step further coining the catchy name homo zappiens to refer to a new breed of learners that has developed – without either help from or instruction by others – those metacognitive skills necessary for enquiry-based learning, discovery-based learning, networked learning, experiential learning, collaborative learning, active learning, self-organisation and self-regulation, problem solving, and making their own implicit (i.e., tacit) and explicit knowledge explicit to others.

The saw that children are invariably more tech-savvy than their parents is also a myth:

Looking at pupils younger than university students, the large-scale EU Kids Online report (Livingstone, Haddon, Görzig, & Ólafsson, 2011), placed the term ‘digital native’ in first place on its list of the ten biggest myths about young people and technology. They state: “Children knowing more than their parents has been exaggerated … Talk of digital natives obscures children’s need for support in developing digital skills” and that “… only one in five [children studied] used a file-sharing site or created a pet/avatar and half that number wrote a blog … While social networking makes it easier to upload content, most children use the internet for ready-made, mass produced content” (p. 42). While the concept of the digital native explicitly and/or implicitly assumes that the current generation of children is highly digitally literate, it is then rather strange to note that many curricula in many countries on many continents (e.g., North America, Europe) see information and technology literacy as 21st century skills that are core curriculum goals at the end of the educational process and that need to be acquired.

Two more recent studies show that the supposed digital divide is a myth in itself. A study carried out by Romero, Guitert, Sangrà, and Bullen (2013) found that it was, in fact, older students (>30 years and thus born before 1984) who exhibited the characteristics attributed to digital natives more than their younger counterparts. In their research, 58% of their students were older than 30 years who “show the characteristics of this [Net Generation profile] claimed by the literature because, on analysing their habits, they can be labelled as ICT users more than digital immigrants” (p. 176). In a study on whether digital natives are more ‘technology savvy’ than their middle school science teachers, Wang, Hsu, Campbell, Coster, and Longhurst (2014) conclude that this is not the case.

The authors are not arguing that curricula and teaching methods do not need to change and evolve, but that the myth of the digital native should not be the reason for doing so:

Finally, this non-existence of digital natives makes clear that one should be wary about claims to change education because this generation of young people is fundamentally different from previous generations of learners in how they learn/can learn because of their media usage (De Bruyckere, Hulshof, & Kirschner, 2015). The claim of the existence of a generation of digital natives, thus, cannot be used as either a motive or an excuse to implement pedagogies such as enquiry-based learning, discovery-based learning, networked learning, experiential learning, collaborative learning, active learning, self-organisation and self-regulation or problem solving as Veen and Vrakking (2006) argued. This does not mean education should neither evolve nor change, but rather that proposed changes should be evidence informed both in the reasons for the change and the proposed changes themselves, something that ‘digital natives’ is not.

The non-existence of digital natives is definitely not the ‘reason’ why students today are disinterested at and even ‘alienated’ by school. This lack of interest and alienation may be the case, but the causes stem from quite different things such as the fact that diminished concentration and the loss of the ability to ignore irrelevant stimuli may be attributed to constant task switching between different devices (Loh & Kanai, 2016; Ophir, Nass, & Wagner, 2009; Sampasa-Kanyinga & Lewis, 2015). This, however, is the topic of a different article.

The paper also deals with multitasking. Firstly, the authors examine the nature of attention: “multitasking” is an impossibility from this point of view, unless the tasks are automatic behaviours. They cite a range of research which, unsurprisingly enough, links heavy social media usage (especially with the user instantly replying to stimuli) with poorer educational outcomes:

Ophir et al. (2009) in a study in which university students who identified themselves as proficient multitaskers were asked to concentrate on rectangular stimuli of one colour on a computer monitor and ignore irrelevant stimuli entering their screen of a different colour observed that

heavy media multitaskers are more susceptible to interference from irrelevant environmental stimuli and from irrelevant representations in memory. This led to the surprising result that heavy media multitaskers performed worse on a test of task-switching ability, likely because of reduced ability to filter out interference from the irrelevant task set (p. 15583).

Ophir et al. (2009) concluded that faced with distractors, heavy multitaskers were slower in detecting changes in visual patterns, were more susceptible to false recollections of the distractors during a memory task, and were slower in task-switching. Heavy multitaskers were less able than light/occasional multitaskers to volitionally restrain their attention only to task-relevant information.

The authors specifically urge caution about the drive for students to bring their own devices to school.

Why is this paper so important? As the authors show (and the author of the Nature blog post linked to above also observes) this is not a new finding. There are many pieces out there, both academic and journalistic, on the myth of the digital native. This paper specifically locates the discussion in education and in teacher training (the authors also say much on the issue of supposedly “digital native” teachers) and is a trenchant warning about the magical thinking that has grown up around technology.

There are obvious parallels with health and technology. The messianic, evangelical approach to healthtech is replete with its own assumptions about digital natives, and magical thinking about how easily they navigate online worlds. Using a handful of social media tools or apps with visual interactive systems does not translate into a deep knowledge of the online world, or indeed a wisdom about it (or anything else).

Can fMRI solve the mind-body problem? Tim Crane, “How We Can Be”, TLS, 24/05/17

In the current TLS, there is an excellent article by Tim Crane on neuroimaging, consciousness, and the mind-body problem. Many of my previous posts here related to this have endorsed a kind of mild neuro-scepticism. Crane begins his article by describing an experiment which shows the literally expansive nature of neuroscience:

In 2006, Science published a remarkable piece of research by neuroscientists from Addenbrooke’s Hospital in Cambridge. By scanning the brain of a patient in a vegetative state, Adrian Owen and his colleagues found evidence of conscious awareness. Unlike a coma, the vegetative state is usually defined as one in which patients are awake – they can open their eyes and exhibit sleep-wake cycles – but lack any consciousness or awareness. To discover consciousness in the vegetative state would challenge, therefore, the basic understanding of the phenomenon.

The Addenbrooke’s patient was a twenty-three-year-old woman who had suffered traumatic brain injury in a traffic accident. Owen and his team set her various mental imagery tasks while she was in an MRI scanner. They asked her to imagine playing a game of tennis, and to imagine moving through her house, starting from the front door. When she was given the first task, significant neural activity was observed in one of the motor areas of the brain. When she was given the second, there was significant activity in the parahippocampal gyrus (a brain area responsible for scene recognition), the posterior parietal cortex (which represents planned movements and spatial reasoning) and the lateral premotor cortex (another area responsible for bodily motion). Amazingly, these patterns of neural responses were indistinguishable from those observed in healthy volunteers asked to perform exactly the same tasks in the scanner. Owen considered this to be strong evidence that the patient was, in some way, conscious. More specifically, he concluded that the patient’s “decision to cooperate with the authors by imagining particular tasks when asked to do so represents a clear act of intention, which confirmed beyond any doubt that she was consciously aware of herself and her surroundings”.

Owen’s discovery has an emotional force that one rarely finds in scientific research. The patients in the vegetative state resemble those with locked-in syndrome, a result of total (or near-total) paralysis. But locked-in patients can sometimes demonstrate their consciousness by moving (say) their eyelids to communicate (as described in Jean-Dominique Bauby’s harrowing and lyrical memoir, The Diving Bell and the Butterfly, 1997). But the vegetative state was considered, by contrast, to be a condition of complete unconsciousness. So to discover that someone in such a terrible condition might actually be consciously aware of what is going on around them, thinking and imagining things, is staggering. I have been at academic conferences where these results were described and the audience was visibly moved. One can only imagine the effect of the discovery on the families and loved ones of the patient.

Crane’s article is very far from a piece of messianic neurohype, but he also acknowledges the sheer power of this technology to expand our awareness of what it means to be conscious and human, and a clinical benefit that is not to be sniffed at. But it doesn’t solve the mind-body problem – it actually accentuates it:

Does the knowledge given by fMRI help us to answer Julie Powell’s question [essentially a restatement of the mind-body problem by a food writer]? The answer is clearly no. There is a piece of your brain that lights up when you talk and a piece that lights up when you walk: that is something we already knew, in broad outline. Of course it is of great theoretical significance for cognitive neuroscience to find out which bits do what; and as Owen’s work illustrates, it is also of massive clinical importance. But it doesn’t tell us anything about “how we can be”. The fact that different parts of your brain are responsible for different mental functions is something that scientists have known for decades, using evidence from lesions and other forms of brain damage, and in any case the very idea should not be surprising. FMRI technology does not solve the mind–body problem; if anything, it only brings it more clearly into relief.

Read the whole thing, as they say. It is a highly stimulating read, and also one which, while it points out the limits of neuroimaging as a way of solving the difficult problems of philosophy, gives the technology and the discipline behind it its due.

Once again, it isn’t about the tech

From MobiHealthNews:

West Virginia hospital system sees readmission reductions from patient education initiative
A telehealth initiative at Charleston Area Medical Center led to reduced readmission rates for several chronic conditions, the health system reported today.

What led to the reductions wasn’t the advent of video consultations with specialists or sophisticated biometric sensor monitoring, but health information for patients and workflow integration for hospital staff via SmarTigr, TeleHealth Services’s interactive patient education and engagement platform that offers videos designed to educate patients about their care and medication.

Technology is an enabler of improved patient self-management and improved clinician performance – not an end in itself.

More on the health education elements of this project:

As only 12 percent of US adults have the proficient health literacy required to self-manage their health, the four-hospital West Virginia system launched the initiative in 2015 to see what they could do to improve that statistic. With SmarTigr, they developed condition-specific curriculums – which are available in multiple languages – and then “prescribed” the videos, which are integrated into smart TVs, hospital software platforms and mobile applications. Patients then complete quizzes, and the hospital staff review reports of patient compliance and comprehension, and all measurements become part of the patient’s medical record.
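
As an aside for the IT-minded, the data capture behind such a workflow can be pictured quite simply. The following is a minimal, purely hypothetical sketch in Python of a “video prescription” and a quiz result being filed against a patient record – the class and field names are my own invention and bear no relation to SmarTigr’s actual data model.

    # Hypothetical sketch only: illustrative names, not SmarTigr's real data model.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VideoPrescription:
        patient_id: str
        condition: str            # e.g. "CHF" or "COPD"
        video_ids: List[str]      # curriculum videos to watch before discharge
        language: str = "en"

    @dataclass
    class QuizResult:
        patient_id: str
        video_id: str
        score: float              # proportion of quiz questions answered correctly

    @dataclass
    class PatientRecord:
        patient_id: str
        prescriptions: List[VideoPrescription] = field(default_factory=list)
        quiz_results: List[QuizResult] = field(default_factory=list)

        def comprehension_report(self) -> float:
            """Average quiz score: the kind of figure staff might review."""
            if not self.quiz_results:
                return 0.0
            return sum(r.score for r in self.quiz_results) / len(self.quiz_results)

    record = PatientRecord("pt-001")
    record.prescriptions.append(VideoPrescription("pt-001", "CHF", ["chf-01", "chf-02"]))
    record.quiz_results.append(QuizResult("pt-001", "chf-01", 0.8))
    print(record.comprehension_report())  # 0.8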

“Self-management” can be a godterm, shutting down debate, but the sad reality is that health literacy (and, I would argue, overall literacy) in the general population is such that self-management will remain a chimera.

Finally, this project involved frontline clinicians via a mechanism I hadn’t heard of before – the “nurse navigator”:

Lilly developed a standard educational approach by working with registered nurse Beverly Thornton, CAMC’s Health Education and Research Institute education director, as well as two “nurse navigators,” who work directly with the front-line nurses. They developed disease-specific video prescriptions for CHF and COPD that give a detailed list of educational content videos patients are to watch before they are discharged, followed by quizzes.

“actual clinic services with real doctors”

Again, from MobiHealthNews:

A new kind of doctor’s office opened in San Francisco this week: Forward, a membership-based healthcare startup founded by former Googler Adrian Aoun that infuses a brick-and-mortar office with data-driven technology and artificial intelligence.

For $149 per month, Forward members can come to the flagship office that features six examination rooms – equipped with interactive personalized displays – and doctors from some of the Bay Area’s top medical systems. Members are given wearable sensors that work with Forward’s proprietary AI for proactive monitoring that can alert members and their doctors of any abnormalities as well as capture, store and analyze data to develop personalized treatment plans. Members also have 24-7 mobile access to their data, rounding out what Aoun believes is a new type of preventative care.
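
The “proactive monitoring” described here need not be anything mystical. Below is a toy sketch, in Python and assuming nothing about Forward’s actual proprietary system, of the general idea: flag a wearable reading that drifts well outside a member’s own recent baseline.

    # Toy illustration only: not Forward's system, just a baseline-outlier check.
    from statistics import mean, stdev
    from typing import List, Optional

    def check_reading(history: List[float], new_value: float,
                      z_threshold: float = 3.0) -> Optional[str]:
        """Return an alert message if new_value is an outlier against the member's history."""
        if len(history) < 10:      # not enough baseline data yet
            return None
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return None
        z = (new_value - mu) / sigma
        if abs(z) > z_threshold:
            return f"Alert: reading {new_value} is {z:.1f} standard deviations from baseline"
        return None

    resting_hr = [62, 64, 61, 63, 65, 62, 60, 64, 63, 62]   # beats per minute
    print(check_reading(resting_hr, 95))   # flags an alert
    print(check_reading(resting_hr, 63))   # None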

What is interesting about this piece is that there are various other start-ups whose vision is not based on telemedicine or on “empowering consumers”, but on what is at its core the traditional doctor’s surgery, except with much slicker tech. It is also interesting that Forward’s approach is based on a personal experience:

The impetus for Forward came from a personal experience of Aoun’s. When one of his close relatives had a heart attack, he found himself sitting in the ICU and realizing healthcare wasn’t quite what he thought it was. Seeing doctors having to obtain health records from multiple sources and wait days or weeks for test results and suffering from all-around communication breakdowns within their health system, he was inspired to create an alternative model – one focused on prevention, efficiency and connected tools to create increasingly smart healthcare plans based on each individual’s needs and goals.

I took the title of this post from what I found a rather amusing aside in a later paragraph:

It also isn’t the first company to offer a hybrid of physical and digital services. In September 2016, startup Carbon Health opened its first clinic, also in San Francisco, that offers actual clinic services with real doctors

“actual clinic services with real doctors”! – sounds truly revolutionary – and quite a difference from the techno-utopian slant of the Financial Times piece I blogged about earlier in the week. At times readers may detect a certain weariness with the hype that surrounds digital health, the overuse of “revolutionary” and “transformative” and so on, the goes-without-saying presumption that healthcare is bloated and inefficient while tech is gleaming and slick and frictionless.  This is far from saying that healthcare doesn’t need change, and can’t learn from other fields – I look forward to hearing more about Forward.

The perils of trying to do too much: data, the Life Study, and Mission Overload

One interesting moment at the CCIO Network Summer School came in a panel discussion. A speaker was talking about the vast amount of data that can be collected and how impractical this can be. He gave the example – while acknowledging that he completely understood why this particular data might be interesting – of the postcode of the patient’s most frequent visitor. As someone pointed out from the audience, the person best placed to collect this data is probably the patient themselves.

When I heard this discussion, the part of me that still harbours research ambitions thought “that is a very interesting data point.” And working in a mixed urban/rural catchment area, in a service which has experienced unit closures and admission bed centralisation, I thought of how illustrative that data point would be of the personal experience behind these decisions.

However, the principle that was being stated – that clinical data is that which is generated in clinical activity – seems to be one of the only ways of keeping the potentially vast amount of data that could go into an EHR manageable. Recently I have been reading Helen Pearson’s “The Life Project”, a review of which will appear here shortly. Pearson tells the story of the UK Birth Cohort Studies. Most of this story is an account of these studies surviving against the institutional odds and becoming key cornerstones of British research. Pearson explicitly tries to create a sense of civic pride about these studies, akin to that felt about the NHS and the BBC. However, in late 2015 the most recent birth cohort study, the Life Study, was cancelled for sheer lack of volunteers. The reasons for this are complex, and to my mind suggest something changing in British society in general (in the 1946 study it was assumed that mothers would simply comply with the request to participate as a sort of extension of wartime duty) – but one factor was surely the number of questions to be answered and samples to be given:

But the Life Study aims to distinguish itself, in particular by collecting detailed information on pregnancy and the first year of the children’s lives — a period that is considered crucial in shaping later development.

The scientists plan to squirrel away freezer-fulls of tissue samples, including urine, blood, faeces and pieces of placenta, as well as reams of data, ranging from parents’ income to records of their mobile-phone use and videos of the babies interacting with their parents. (from Feb 2015 article in Nature by Pearson)

All very worthy, but it seems to me that the birth cohort studies were victims of their own success. Pearson describes how, almost from the start, they were torn between a more medical outlook and a more sociological one. Often this tension was fruitful, but in the case of the Life Study it seems to have led to Mission Overload.

I have often felt that there is a commonality of interest between the Health IT community, the research methodology community, and the medical education community; the potential of EHRs for epidemiological research, dissemination of best evidence at the point of care, and realistic “virtual patient” construction is vast. I will come back to these areas of commonality again. However, there is also a need to remember the different ways a clinician, an IT professional, an epidemiologist, an administrator, and an educationalist might look at data. The Life Study perhaps serves as a warning.

Unintended consequences and Health IT

Last week, along with other members of the Irish CCIO group, I attended the UK CCIO Network Summer School. Among many thought-provoking presentations and a wonderful sense of collegiality (and of the scale of the challenges ahead), one which stood out was actually a video presentation by Dr Robert Wachter, whose review of IT in the NHS (in England) is due in the coming weeks and who is also the author of “The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age”.


Amongst many other things, Dr Wachter discussed the unintended consequences of Health IT. He discussed how, pretty much overnight, radiology imaging systems destroyed “radiology rounds” and a certain kind of discussion of cases. He discussed how hospital doctors using eHealth systems sit in computer suites with other doctors, rather than being on the wards. Perhaps most strikingly, he showed a child’s picture of her visit to the doctor, in which the doctor is turned away from the patient and her mother, hunched over a keyboard.


This reminded me a little of Cecil Helman’s vision of the emergence of a “technodoctor”, which I suspected was something of a straw man:

Like many other doctors of his generation – though fortunately still only a minority – Dr A prefers to see people and their diseases mainly as digital data, which can be stored, analysed, and then, if necessary, transmitted – whether by internet, telephone or radio – from one computer to another. He is one of those helping to create a new type of patient, and a new type of patient’s body – one much less human and tangible than those cared for by his medical predecessors. It is one stage further than reducing the body down to a damaged heart valve, an enlarged spleen or a diseased pair of lungs. For this ‘post-human’ body is one that exists mainly in an abstract, immaterial form. It is a body that has become pure information.

I still suspect this is overall a straw man, and Helman admits this “technodoctor” is “still only [part of] a minority” – but perhaps the picture above shows this is less of a straw man than we might be comfortable with.

Is there a way out of the trap of unintended consequences? On my other blog I have posted on Evgeny Morozov’s “To Save Everything, Click Here”, a book which, while I had many issues with Morozov’s style and approach (the post ended up being over 2,000 words, which is another unintended consequence), is extremely thought-provoking. Morozov positions himself against “epochalism” – the belief that because of technology (or other factors) we live in a unique era. He also decries “solutionism”, a more complex phenomenon, of which he writes:

I call the ideology that legitimizes and sanctions such aspirations “solutionism.” I borrow this unabashedly pejorative term from the world of architecture and urban planning – where it has come to refer to an unhealthy preoccupation with sexy, monumental and narrow-minded solutions – the kind of stuff that wows audiences at TED Conferences – to problems that are extremely complex, fluid and contentious. These are the kind of problems that, on careful examination, do not have to be defined in the singular and all-encompassing ways that “solutionists” have defined them; what’s contentious then, is not their proposed solution but their very definition of the problem itself. Design theorist Michael Dobbins has it right: solutionism presumes rather than investigates the problems that it is trying to solve, reaching “for the answer before the questions have been fully asked.” How problems are composed matters every bit as much as how problems are resolved.

As will be very clear from my other article, I don’t quite buy everything Morozov is selling (and definitely not the way he sells it!), but in this passage I believe we are close to something that can help us avoid some of the traps that lead to unintended consequences. Of course, these are by definition unintended, and so perhaps not that predictable, but by investigating rather than presuming the problems we are trying to solve, and by not reaching for the answer before the questions have been fully asked, perhaps future children’s pictures of their trip to the hospital won’t feature a doctor turning their back on them to commune with the computer.

#irishmed, Telemedicine and “Technodoctors”

This evening (all going well) I will participate in the Twitter #irishmed discussion, which is on telemedicine.

On one level, telemedicine does not apply all that much to me in the clinical area of psychiatry. It seems most appropriate for more data-driven specialties, or ones which have a much greater role for interpreting (and conveying the results of!) lab tests. Having said that, in the full sense of the term telemedicine does not just refer to video consultations but to any remote medical interaction. I spend a lot of time on the phone.

I do have a nagging worry about the loss of the richness of the clinical encounter in telemedicine. I am looking forward to having some interesting discussions on this topic this evening. I do worry that this is an area in which the technology can drive the process to a degree that may crowd out the clinical need.

The following quotes are ones I don’t necessarily agree with, but they are worth pondering. The late GP/anthropologist Cecil Helman wrote quite scathingly of the “technodoctor.” In his posthumously published “An Amazing Murmur of the Heart”, he wrote:


Young Dr A, keen and intelligent, is an example of a new breed of doctor – the ones I call ‘techno-doctors’. He is an avid computer fan, as well as a physician. He likes nothing better than to sit in front of his computer screen, hour after hour, peering at it through his horn-rimmed spectacles, tap-tapping away at his keyboard. It’s a magic machine, for it contains within itself its own small, finite, rectangular world, a brightly coloured abstract landscape of signs and symbols. It seems to be a world that is much easier for Dr A to understand, and much easier for him to control, than the real world – one largely without ambiguity and emotion.

Later in the same chapter he writes:


Like many other doctors of his generation – though fortunately still only a minority – Dr A prefers to see people and their diseases mainly as digital data, which can be stored, analysed, and then, if necessary, transmitted – whether by internet, telephone or radio – from one computer to another. He is one of those helping to create a new type of patient, and a new type of patient’s body – one much less human and tangible than those cared for by his medical predecessors. It is one stage further than reducing the body down to a damaged heart valve, an enlarged spleen or a diseased pair of lungs. For this ‘post-human’ body is one that exists mainly in an abstract, immaterial form. It is a body that has become pure information.

Now, as I have previously written:

One suspects that Dr A is something of a straw man, and by putting listening to the patient in opposition to other aspects of practice, I fear that Dr Helman may have been stretching things to make a rhetorical point (surely one can make use of technology in practice, even be something of a “techno-doctor”, and nevertheless put the patient’s story at the heart of practice?) Furthermore, in its own way a recourse to anthropology or literature to “explain” a patient’s story can be as distancing, as intellectualizing, as invoking physiology, biochemistry or the genome. At times the anthropological explanations seem pat, all too convenient – even reductionist.

… and re-reading this passage from Helman today, involved as I am with the CCIO, Dr A seems even more of a straw man (“horn-rimmed spectacles” indeed!) – I haven’t seen much evidence that the CCIO, which it is fair to say includes a fair few “technodoctors” as well as technonurses, technophysios and technoAHPs in general, is devoted to reducing the human to pure information. Indeed, the aim is to put the person at the centre of care.


And yet… Helman’s critique is an important one. The essential point he makes is valid and reminds us of a besetting temptation when it comes to introducing technology into care. It is very easy for the technology to drive the process, rather than clinical need. Building robust ways of preventing this is one of the challenges of the eHealth agenda. And at the core, keeping the richness of human experience at the centre of the interaction is key. Telemedicine is a tool which has some fairly strong advantages, especially in bringing specialty expertise to remoter areas. However, there would be a considerable loss if it became the dominant mode of clinical interaction. Again from my review of An Amazing Murmur of the Heart:


In increasingly overloaded medical curricula, where an ever-expanding amount of physiological knowledge vies for attention with fields such as health economics and statistics, the fact that medicine is ultimately an enterprise about a single relationship with one other person – the patient – can get lost. Helman discusses the wounded healer archetype, relating it to the shamanic tradition. He is eloquent on the accumulated impact of so many experiences, even at a professional remove, of disease and death: “as a doctor you can never forget. Over the years you become a palimpsest of thousands of painful, shocking memories, old and new, and they remain with you for as long as you live. Just out of sight, but ready to burst out again at any moment”.

Every sufficiently advanced little thing she does is indistinguishable from magic

This has been the longest hiatus on this blog so far, and my last post on November 19th wasn’t exactly a deep meditation on anything.

I am hoping to re-invigorate things a little by successively blogging about three events I attended in the recent past – one last week, one the week before that, and one way back in October. Thinking about it, I expect this blog will increasingly become a platform for me to work out my thoughts on various matters relating to the intersection of technology and healthcare, medical education, and evidence-based practice/methodology questions. More general writing and “curation” of my old writing will appear on my other blog.

On November 25th I attended another meeting of the CCIO, following on from the last one in September. The same caveat (“not only are these opinions not those of the CCIO, the HSE, or any other institution I may have links with, they are barely even those of myself.”) applies.

Unfortunately I couldn’t make the entire day, so I missed some of the morning session. I was fortunate enough to catch the talk by Robert Cooke, IT Delivery Director for Community Health, which encompasses my own professional area of mental health. As Robert said in his presentation, mental health is starting from a low base for eHealth, particularly infrastructure-wise – and therefore infrastructure development is an important place to start.

As with pretty much all of the presentations I have seen at the CCIO, Robert’s was particularly impressive in its blend of enthusiasm and a tough-minded realism about the size of the challenge. No one at these meetings is getting up and announcing that tech will magically sort out what ails healthcare. Indeed, Robert strongly made the point that systems and processes need to be addressed before technology is applied, rather than waiting for it to be a magic bullet.

There were other very interesting presentations, but the highlight was the breakaway group. In a relatively small group, three other CCIO members and I were facilitated in addressing a) our vision for what eHealth could make the healthcare system look like in five years’ time, b) what barriers and enablers exist relating to this vision, and c) what would need to change. This exercise was part of the work UCD’s Applied Research in Connected Health team are doing on Ireland’s eHealth journey. As often happens, the discussion was so stimulating that we didn’t get to c) (and barely covered b) in time).

During the discussions about “the vision thing”, the famous Arthur C Clarke quote “Any sufficiently advanced technology is indistinguishable from magic” kept coming into my mind, along with a memory of a point about Assisted Living Technologies made by Jeffrey Soar at the International Psychogeriatric Association congress in Berlin (which I drafted a blog post on and hope to actually complete very shortly) – those assisted living technologies that are successful are unobtrusive, in the background, invisible.

So much was the Arthur C Clarke quote going round my mind that I was impelled to tweet it:

It turned out when I tweeted this that an extremely witty twist on the quote had already been minted:

So my vision for the future of healthcare is sitting in a room talking to someone, without a table or a barrier between us, with the appropriate information about that person in front of me (but not a bulky set of notes, or desktop computer, or distracting handheld device) in whatever form is more convivial to communication between us. We discuss whatever it is that has that person with me on that day, what they want from the interaction, what they want in the long term as well as the short term. In conversation we agree on a plan, if a “plan” is what emerges (perhaps, after all, the plan will be no plan) – perhaps referral onto others, perhaps certain investigations, perhaps changes to treatment. At the end, I am presented with a summary of this interaction and of the plan, prepared by a sufficiently advanced technology invisible during the interaction, which myself and the other person can agree on. And if so, the referrals happen, the investigations are ordered, and all the other things that now involve filling out carbon-copy forms and in one healthcare future will involve clicking through drop-down menus, just happen.


That’s it.


Psychiatry and Society blog – 2008-2011

This is far from my first effort at blogging. There was a blog about classical music concerts in Dublin which may still exist out there. There was a now-defunct blog on the University of Warwick site entitled “Philosophy as a Way of Life.” There was a blog called “Taytoman Agonistes” which still exists – it was basically a commonplace book. There has been the Scarface Project, which I tried to get people interested in, and Alarm Logos of Dublin, which I have also tried to get people interested in.

And there was Psychiatry and Society, which was linked with a series of lectures of the same name that I organised for UCD undergraduate medical students. The blog was the subject of academic research, as you can read here. To quote that abstract in full:

Blogs have achieved immense popularity in recent years. The interactive nature of blogs and other web-based tools seem consonant with contemporary pedagogical theories regarding student engagement, learner-centred teaching and deep learning. The literature on the use of blogs in education and in particular medical education has focused largely on their potential use rather than the practical experience of medical educators.

We designed a series of teaching sessions designed to explore the interface between psychiatry, mental health, and wider social issues.  To complement this course, a blog specifically designed to provide extra information on the material covered was produced, and to act as a forum for discussion. A widely available, free-to-access web based tool was used to create and design the blog. One of the course tutors was the administrator, and invited the other tutors and lecturers from the course to write on the blog. The blog was publicised at the students’ lectures, at which all the students were present, and via the students’ eLearning platform.

To fully assess the effectiveness of the blog in helping students achieve the learning objectives, quantitative measurements are required. A focus group of students was formed to explore medical students’ use of blogs for educational purposes in general, and the use of this blog in particular. These findings, and reflections on the use of the blog from the lecturer’s point of view, are presented.

And that’s more or less what we did. The main “reflection” that has stuck with me in the years since was a comment from a participant that she preferred books as they were more interactive than online resources; you can simply underline, highlight and generally write on a book. This has stayed with me as an example of the paradox that “interactive” technology is “interactive” in very specific, designed ways.

The blog is still there in all its Blogspot glory. There isn’t all that much evidence of student interactivity, except here, predictably enough in a post about faith and delusion. I hadn’t realised that comments have been left in more recent years. I am not sure if any of them make all that much sense, even the ones which aren’t spam (and which are written in the patented Mr Angry YOU ARE JUST WRONG style so common in internet discourse).

Looking through the blog overall, I don’t find much that deserves to survive the inevitable disappearance of Blogspot in a few years. I did come across this amusing story again, which reminds me of something else entirely that I will (probably) post here. Looking back, there is a tension between the blog as a sort of electronic notice board (i.e., lecture A will be on date B) and my attempts to post content that would evoke comment. This never really panned out. I deliberately kept a lid on prolixity and looked for topics that I thought would be interesting for a diverse group of medical students. Of course, in retrospect, it would have been best to enlist a group of medical students to actually blog themselves. Those days have come and gone, and Web 2.0 is rather old hat now, but it was an interesting experiment.