#flicishere, the #IoT and invisible health IT

[Image: a pack of Flic buttons]

#Hereisflic! Flic is a wireless smart button “for your smartphone, smarthome and smartlife” as the website puts it. While I am rather deficient in the smarthome and smartlife departments, I do have a smartphone and had an enjoyable evening playing around with Flics. A Flic is a little button – the pack above contained 4:

[Image: the four Flic buttons]

Each is a pleasingly solid little artefact. Put very simply, there are three ways of pressing the Flic – single click, double click, and hold. Each of these can be linked with an action on your smartphone (or smarthome devices/system) or, using If This Then That (IFTTT), a whole range of other apps and devices.
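The press-type-to-action mapping can be pictured as a tiny dispatch table. A minimal sketch in Python – the action names and event plumbing here are invented for illustration, and Flic's real SDK callbacks look different:

```python
# Toy dispatch table mapping Flic press types to actions.
# Action names and wiring are illustrative only, not Flic's actual API.

def send_text(msg):
    return f"text sent: {msg}"

def toggle_lights():
    return "lights toggled"

def trigger_ifttt(event):
    return f"IFTTT event fired: {event}"

ACTIONS = {
    "single_click": lambda: send_text("On my way home"),
    "double_click": toggle_lights,
    "hold": lambda: trigger_ifttt("flic_hold"),
}

def handle_event(press_type):
    """Look up and run the action bound to a press type."""
    action = ACTIONS.get(press_type)
    if action is None:
        return "unbound press type"
    return action()
```

The appeal of the device is exactly this simplicity: three physical gestures, each bound to one arbitrary action.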

Playing around with Flic was great fun and had that you-can-do-that? factor which I don’t get all that much with technology any more. Indeed, messing around with Flic got me thinking of my grandiose, utopian vision of healthcare (I suspect some of my aversion to grandiose, utopian visions of technology and healthcare is pure reaction formation. And obviously my grandiose, utopian vision is better than everyone else’s grandiose, utopian vision) – which, to recap, was:

So my vision for the future of healthcare is sitting in a room talking to someone, without a table or a barrier between us, with the appropriate information about that person in front of me (but not a bulky set of notes, or desktop computer, or distracting handheld device) in whatever form is most convivial to communication between us. We discuss whatever it is that has brought that person to me that day, what they want from the interaction, what they want in the long term as well as the short term. In conversation we agree on a plan, if a “plan” is what emerges (perhaps, after all, the plan will be no plan) – perhaps referral on to others, perhaps certain investigations, perhaps changes to treatment. At the end, I am presented with a summary of this interaction and of the plan, prepared by a sufficiently advanced technology invisible during the interaction, which the other person and I can agree on. And if so, the referrals happen, the investigations are ordered, and all the other things that now involve filling out carbon-copy forms and in one healthcare future will involve clicking through drop-down menus, just happen.

That’s it.

I suppose putting flesh on those bones would involve a speech-to-text system that would convert the clinical encounter into a summary form “for the notes” (and for a summary letter for the person themselves, the GP letter, and the referrals) – perhaps some key phrases would be linked with certain formulations and phrases (to a great degree medical notes, even in psychiatry, are rather formulaic) – with, of course, capacity for editing and adding in free text. While clicking Flic-type devices during a consultation would be distracting, a set of different Flic-type buttons with different clinical actions – e.g. contact the psychologist to request a discussion of this patient, make a provisional referral to the dietitian, text the community nurse to arrange a phone call – would certainly smooth things considerably compared with the carbon-copy world I currently live in.
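The “key phrases linked with certain formulations” idea could be sketched as a phrase-to-boilerplate lookup that expands dictated triggers into formulaic note text, with free text layered on top. The trigger phrases and template sentences below are entirely invented examples, not any real clinical vocabulary:

```python
# Hypothetical phrase-to-formulation expansion for clinical notes.
# Trigger phrases and template text are invented for illustration.

TEMPLATES = {
    "no acute risk": "No current thoughts of self-harm or harm to others elicited.",
    "sleep poor": "Reports ongoing initial insomnia with frequent waking.",
    "refer dietitian": "Plan: provisional referral to dietitian.",
}

def expand_note(dictated_phrases, free_text=""):
    """Expand recognised trigger phrases into formulaic sentences,
    appending any free text the clinician adds or edits."""
    lines = [TEMPLATES[p] for p in dictated_phrases if p in TEMPLATES]
    if free_text:
        lines.append(free_text)
    return "\n".join(lines)
```

The point is not the lookup itself but that, because notes are formulaic, a small vocabulary of triggers could cover a surprising amount of routine documentation, leaving the clinician to edit rather than transcribe.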

When I wrote the above vision I was not familiar with the illustration Bob Wachter uses in his talks: a young girl’s picture of her trip to the doctor:

[Image: a child’s drawing of her trip to the doctor]

Turned away, tapping at a keyboard, disengaged from the family. That is what technology should not facilitate. Perhaps the internet of things could be a way of realising my particular grandiose vision of invisible Health IT.

“The Wild West of Health Care”: mental health apps, evidence, and clinical credibility

We read and hear much about the promise of mobile health. Crucial to the acceptance of mobile health by the clinical community is clinical credibility. And clinical credibility is synonymous with evidence – not just “evidence” but reliable, solid evidence. I’ve blogged before about studies of the quality of mental health smartphone apps. I missed this piece from Nature which, slightly predictably, is titled “Mental Health: There’s an app for that” (isn’t “there’s an app for that” a little 2011-ish, though?). It begins by surveying the immense range of mental health-focused apps out there:


Type ‘depression’ into the Apple App Store and a list of at least a hundred programs will pop up on the screen. There are apps that diagnose depression (Depression Test), track moods (Optimism) and help people to “think more positive” (Affirmations!). There’s Depression Cure Hypnosis (“The #1 Depression Cure Hypnosis App in the App Store”), Gratitude Journal (“the easiest and most effective way to rewire your brain in just five minutes a day”), and dozens more. And that’s just for depression. There are apps pitched at people struggling with anxiety, schizophrenia, post-traumatic stress disorder (PTSD), eating disorders and addiction.

The article also has a snazzy infographic illustrating both the lack of mental health services and the size of the market:

[Image: Nature infographic on mental health services and the size of the app market]

The meat of the article, however, focuses on the lack of evidence and evaluation of these apps. There is a cultural narrative which states that Technology = Good and Efficient, Healthcare = Bad and Broken, and which can give the invocation of Tech the status of a god-term, pre-empting critical thought. The Nature piece, however, starkly illustrates the evidence gap:

But the technology is moving a lot faster than the science. Although there is some evidence that empirically based, well-designed mental-health apps can improve outcomes for patients, the vast majority remain unstudied. They may or may not be effective, and some may even be harmful. Scientists and health officials are now beginning to investigate their potential benefits and pitfalls more thoroughly, but there is still a lot left to learn and little guidance for consumers.

“If you type in ‘depression’, it’s hard to know if the apps that you get back are high quality, if they work, if they’re even safe to use,” says John Torous, a psychiatrist at Harvard Medical School in Boston, Massachusetts, who chairs the American Psychiatric Association’s Smartphone App Evaluation Task Force. “Right now it almost feels like the Wild West of health care.”

There isn’t an absolute lack of evidence, but there are issues with much of the evidence that is out there:

Much of the research has been limited to pilot studies, and randomized trials tend to be small and unreplicated. Many studies have been conducted by the apps’ own developers, rather than by independent researchers. Placebo-controlled trials are rare, raising the possibility that a ‘digital placebo effect’ may explain some of the positive outcomes that researchers have documented, says Torous. “We know that people have very strong relationships with their smartphones,” and receiving messages and advice through a familiar, personal device may be enough to make some people feel better, he explains.

And even saying that (and, in passing, I would note that in any branch of medical practice a placebo effect is something to be harnessed, not denigrated – but in evaluation and study, rigorously minimising it is crucial), there is a considerable lack of evidence:

But the bare fact is that most apps haven’t been tested at all. A 2013 review identified more than 1,500 depression-related apps in commercial app stores but just 32 published research papers on the subject. In another study published that year, Australian researchers applied even more stringent criteria, searching the scientific literature for papers that assessed how commercially available apps affected mental-health symptoms or disorders. They found eight papers on five different apps.

The same year, the NHS launched a library of “safe and trusted” health apps that included 14 devoted to treating depression or anxiety. But when two researchers took a close look at these apps last year, they found that only 4 of the 14 provided any evidence to support their claims. Simon Leigh, a health economist at Lifecode Solutions in Liverpool, UK, who conducted the analysis, says he wasn’t shocked by the finding because efficacy research is costly and may mean that app developers have less to spend on marketing their products.

Like any healthcare intervention, an app can have adverse effects:

When a team of Australian researchers reviewed 82 commercially available smartphone apps for people with bipolar disorder, they found that some presented information that was “critically wrong”. One, called iBipolar, advised people in the middle of a manic episode to drink hard liquor to help them to sleep, and another, called What is Biopolar Disorder, suggested that bipolar disorder could be contagious. Neither app seems to be available any more.

And even more fundamentally, in some situations the app concept itself, and its close relationship with gamification, can backfire:

Even well-intentioned apps can produce unpredictable outcomes. Take Promillekoll, a smartphone app created by Sweden’s government-owned liquor retailer, designed to help curb risky drinking. While out at a pub or a party, users enter each drink they consume and the app spits out an approximate blood-alcohol concentration.

When Swedish researchers tested the app on college students, they found that men who were randomly assigned to use the app ended up drinking more frequently than before, although their total alcohol consumption did not increase. “We can only speculate that app users may have felt more confident that they could rely on the app to reduce negative effects of drinking and therefore felt able to drink more often,” the researchers wrote in their 2014 paper.

It’s also possible, the scientists say, that the app spurred male students to turn drinking into a game. “I think that these apps are kind of playthings,” says Anne Berman, a clinical psychologist at the Karolinska Institute in Stockholm and one of the study’s authors. There are other risks too. In early trials of ClinTouch, researchers found that the symptom-monitoring app actually exacerbated symptoms for a small number of patients with psychotic disorders, says John Ainsworth at the University of Manchester, who helped to develop the app. “We need to very carefully manage the initial phases of somebody using this kind of technology and make sure they’re well monitored,” he says.
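An approximate blood-alcohol concentration of the kind Promillekoll reports is commonly estimated with the Widmark formula. Whether the app itself uses it I don’t know, so treat this as a generic sketch of the standard textbook calculation, with rough population-average constants:

```python
# Widmark estimate of blood-alcohol concentration, in grams of alcohol
# per litre of blood (roughly "per mille"):
#   BAC ~ grams_alcohol / (weight_kg * r) - elimination * hours
# where r is the Widmark distribution ratio (~0.68 for men, ~0.55 for women)
# and elimination is roughly 0.15 per mille per hour. Averages only;
# individual values vary widely, which is part of why such apps mislead.

def widmark_bac(grams_alcohol, weight_kg, hours, r=0.68, elimination=0.15):
    bac = grams_alcohol / (weight_kg * r) - elimination * hours
    return max(bac, 0.0)  # BAC cannot be negative

# A "standard drink" contains roughly 10-14 g of pure alcohol,
# depending on the country's definition.
```

The formula's crudeness is worth noting: it takes no account of food, drinking rate, or individual metabolism, which may be one reason an in-pocket estimate encourages gaming the number rather than drinking less.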

I am very glad to read that one of the mHealth apps which is a model of evidence-based practice is one that I have both used and recommended myself – Sleepio:

[Image: Sleepio logo]

One digital health company that has earned praise from experts is Big Health, co-founded by Colin Espie, a sleep scientist at the University of Oxford, UK, and entrepreneur Peter Hames. The London-based company’s first product is Sleepio, a digital treatment for insomnia that can be accessed online or as a smartphone app. The app teaches users a variety of evidence-based strategies for tackling insomnia, including techniques for managing anxious and intrusive thoughts, boosting relaxation, and establishing a sleep-friendly environment and routine.

Before putting Sleepio to the test, Espie insisted on creating a placebo version of the app, which had the same look and feel as the real app, but led users through a set of sham visualization exercises with no known clinical benefits. In a randomized trial, published in 2012, Espie and his colleagues found that insomniacs using Sleepio reported greater gains in sleep efficiency — the percentage of time someone is asleep, out of the total time he or she spends in bed — and slightly larger improvements in daytime functioning than those using the placebo app. In a follow-up 2014 paper, they reported that Sleepio also reduced the racing, intrusive thoughts that can often interfere with sleep.

The Sleepio team is currently recruiting participants for a large, international trial and has provided vouchers for the app to several groups of independent researchers so that patients who enrol in their studies can access Sleepio for free.
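Sleep efficiency, as defined in the quoted passage, is a simple ratio; a one-line worked example:

```python
def sleep_efficiency(minutes_asleep, minutes_in_bed):
    """Percentage of time in bed actually spent asleep."""
    return 100.0 * minutes_asleep / minutes_in_bed

# e.g. six hours asleep out of eight hours in bed gives 75%;
# CBT-I programmes like Sleepio typically aim to raise this figure.
example = sleep_efficiency(360, 480)
```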

[Image: Sleepio programme screenshot]

This is extremely heartening – and, as stated above, clinical credibility is key to the success of any eHealth/mHealth approach. And what does clinical credibility really mean? That something works, and works well.


Engaging clinicians and the evidence for informatics innovations

A few weeks ago Richard Gibson from Gartner spoke to members of the CCIO group. It was a fascinating, wide-ranging talk – managing the time effectively was a challenge. Dr Gibson talked about the implications of technological innovations for acute care and long-term care – as might be obvious from my previous post here, I have a concern that much of the focus on empowerment via wearables and consumer technology misses the point that the vast bulk of healthcare is acute care and long-term care. As Dr Gibson pointed out, at the rate things are going healthcare will be the only economic, social, indeed human activity in years to come.

One long-term concern I have about connected health approaches is engaging the wider group of clinicians. Groups like the CCIO network do a good job (in my experience!) of engaging the already interested, more than likely unabashedly enthusiastic. At the other extreme, there is always going to be some resistance to innovation almost on principle. In between, there is a larger group that is interested but perhaps sceptical.

One occasional response from peers to what I will call “informatics innovations” (to emphasise that this is not only about ICT but also about care planning and various other approaches that do not depend on “tech” for implementation) is to ask “where is the evidence?” And often this is not a call for empirical studies as such, but for an impossible standard – RCTs!

Now, I advocate for empirical studies of any innovation, and a willingness to admit when things are going wrong based on actual experience rather than theory. In education, I strongly support the concept of Best Evidence Medical Education; indeed, in following public debates and media coverage about education, I personally find it frustrating that educational practice is so often treated as purely opinion-based.

With innovation, the demand for RCT-based evidence is something of a category error. There is also a wider issue of how “evidence-based” has migrated from healthcare to politics. In Helen Pearson’s Life Project we read how birth cohorts went from ignored, chronically underfunded studies run by a few eccentrics to celebrated, slightly less underfunded, flagship projects of British epidemiology and sociology. Since the 1990s, they have enjoyed a policy vogue in tandem with a political emphasis on “evidence-based policy.” My own thought on this is that it is one thing to have an evidence base for a specific therapy in medical practice, quite another for a specific intervention in society itself.

I am also reminded of a passage in the closing chapters of Donald Berwick’s Escape Fire (I don’t have a copy of the book to hand, so bear with me) which essentially consists of a dialogue between a younger, reforming doctor and an older, traditionally focused doctor. Somewhat in the manner of the Socratic dialogues, in which (despite the meaning now ascribed to “Socratic”) Socrates turns out to be correct and his interlocutors wrong, the younger doctor has ready counters for the grumpy arguments of the older one. That is, until towards the very end, when in a heartfelt speech the older doctor reveals his concerns not only about the changes in practice but about what they mean for his own patients. It is easy to fall into a false dichotomy between doctors open to change and those closed to it; often what eager reformers perceive as resistance to change is based on legitimate concern about patient care. There are also concerns about an impersonal approach to medicine. Perhaps ensuring that colleagues know, to as robust a level as innovation allows, that patient care will be improved is one way through this impasse.


“Huge ($$), broken, and therefore easily fixed” : re-reading Neil Versel’s Feb 2013 column “Rewards for watching TV vs rewards for healthy behavior”

Ok, it may seem somewhat arbitrary to bring up a column on MobiHealthNews, a website which promises the latest in digital health news direct to your inbox. However this particular column, and also some of the responses which Versel provoked (collected here), struck a chord with me at the time and indeed largely inspired my presentation at this workshop at the 2013 eChallenges conference.

In 2012 I had beta tested a couple of apps in the general health field (I won’t go into any more specifics) – none of which seemed clinically useful. My interest in healthcare technology had flowed largely from my interest in technology in medical education. Versel’s column, and the comments attributed to “Cynical” in the follow-up column by Brian Dolan, struck a chord. I also found they transcended the often labyrinthine structures of US healthcare.

The key paragraph of Versel’s original column was this:

What those projects all have in common is that they never figured out some of the basic realities of healthcare. Fitness and healthcare are distinct markets. The vast majority of healthcare spending comes not from workout freaks and the worried well, but from chronic diseases and acute care. Sure, you can prevent a lot of future ailments by promoting active lifestyles today, but you might not see a return on investment for decades.

…but an awful lot of it is worth quoting:

Pardon my skepticism, but hasn’t everyone peddling a DTC health tool focused on user engagement? Isn’t that the point of all the gamification apps, widgets and gizmos?

I never was able to find anything unique about Massive Health, other than its Massive Hype. It had a high-minded business name, a Silicon Valley rock star on board — namely former Mozilla Firefox creative lead Asa Raskin — and a lot of buzz. But no real breakthroughs or much in the way of actual products.

….

Another problem is that Massive Health, Google Health, Revolution Health and Keas never came to grips with the fact that healthcare is unlike any other industry.

In the case of Google and every other “untethered” personal health record out there, it didn’t fit physician workflow. That’s why I was disheartened to learn this week that one of the first two development partners for Walgreens’ new API for prescription refills is a PHR startup called Healthspek. I hate to say it, but that is bound to fail unless Walgreens finds a way to populate Healthspek records with pharmacy and Take Care Health System clinic data.

Predictably enough, there was a strong response to Versel’s column. Here is Dr Betsy Bennet:

As a health psychologist with a lot of years in pharma and healthcare, I am continually frustrated with the hype that accompanies most “health apps”. Not everyone enjoys computer games, not everyone wants to “share” the issues they’re ashamed of with their “social network”, not everyone is interested in being a “quantified self”. This is not to say that digital health is futile or a bad idea. But if we took the time to understand why so many doctors hate EHRs and patients are not interested in paying to “manage their health information” (What does that mean, anyway?) we would come a long way towards finding digital interventions that people actually want to use.


The most trenchant comment (particularly point 1) came from “Cynical”:

Well written. This is one of the few columns (or rants) that actually understands the reality of healthcare and digital health (attending any health care conference will also highlight this divide). What I am finding is twofold:

1. The vast majority of these DTC products are created by people who have had success in other areas of “digital” – and therefore they build what they know – consumer facing apps / websites that just happen to be focused in health. They think that healthcare is huge ($$), broken, and therefore easily fixed using the same principles applied to music, banking, or finding a movie. But they have zero understanding of the “business of healthcare”, and as a result have no ability to actually sell their products into the health care industry – one of the slowest moving, convoluted, and cumbersome industries in the world.

2. Almost none of these products have any clinical knowledge closely integrated — many have a doctor (entrepreneur) on the “advisory board”, but in most cases there are no actual practicing physicians involved (physician founders are often still in med school, only practiced for a limited time, or never at all). This results in two problems – one of which the author notes – no understanding of workflow; the other being no real clinical efficacy for the product — meaning, they do not actually improve health, improve efficiency, or lower cost. Any physician will be able to lament the issues of self-reported data…

Instead of hanging out at gyms or restaurants building apps for diets or food I would recommend digital health entrepreneurs hang out in any casino in America around 1pm any day of the week – that is your audience. And until your product tests well with that group, you have no real shot.

This perspective from Jim Bloedau is also worth quoting, given how much of the rhetoric on healthcare and technology is focused on the dysfunctionality of the current system:

Who likes consuming healthcare? Nobody. How many providers have you heard say they wish they could spend more time in the office? Never. Because of this, the industry’s growth has been predicated on the idea that somebody else will do it all for me – employers will provide insurance and pay for it, doctors will provide care. This is also the driver of the traditional business model for healthcare that many pundits label as a “dysfunctional healthcare system.” Actually, the business of healthcare has been optimized as it has been designed – as a volume based business and is working very well.

Coming up to four years on, and from my own point of view having had further immersion in the health IT world, how does it stack up? Well, for one thing I seem not to hear the word “gamification” quite so much. There seems to be a realisation that having “clinical knowledge closely integrated” is not a nice-to-have but an absolute sine qua non. Within the CCIO group, and from my experience of the CCIO Summer School, there certainly isn’t a sense that healthcare is going to be “easily fixed” by technology. Bob Wachter’s book and report also seem to have tempered much hype.

Yet an awful lot of Versel’s original critique, and of the responses he provoked, still rings true about the wider culture and discussion of healthcare and technology – not in CCIO circles, in my experience, but elsewhere. There is still often a rather inchoate assumption that the likes of the FitBit will in some sense transform things. As Cynical states above, in the majority of cases self-reported data is something there are issues with (there are exceptions, such as mood and sleep diaries and early-warning-signs systems in bipolar disorder, but there too simplicity and judiciousness are key).

Re-reading his blog post I am also struck by his lede, which was that mobile tech has enabled what could be described as the Axis of Sedentary to a far greater degree than it has enabled the forces of exercise and healthy eating. Versel graciously spent some time on the phone with me prior to the eChallenges workshop linked to above and provided me with very many further insights. I would be interested to know what he makes of the scene outlined in his column now.

Risk and innovation: reflections post-#IrishMed tweetchat on Innovation in Health Care

[Image: risk game]

Last night there was an #IrishMed tweetchat on Innovation and Healthcare. For those unfamiliar with the format: for an hour (from 10 pm Irish time) there is a co-ordinated tweetchat curated by Dr Liam Farrell and various guests. Every ten minutes or so a new theme/topic is introduced. There’s a little background here to last night’s chat. The themes were:


T1 – What does the term ‘Innovation in healthcare’ mean to you?

T2- What are the main challenges faced by healthcare organisations to be innovative and how do we overcome them?

T3 -What role does IT play in the innovation process?

T4 – How can innovations in health technology empower patients to own and manage their own care?

T5 – How can we encourage collaboration to ensure innovation across specialties & care settings?

I’ve blogged before about some of my ambivalence about social media, especially for discussing complex issues. However I was favourably impressed – again – by the quality of discussion and the willingness to recognise nuance and complexity. The themes which tended to emerge were the importance of prioritising the person at the heart of healthcare, and that innovation in healthcare should not be for its own sake but for improving outcomes and quality of care.

One aspect I ended up tweeting about myself was the issue of risk. In the innovation world, “risk-averse” is an insult. We can see this in the wider culture, with terms like “disruptive” becoming almost entirely positive, and a change in the public rhetoric around failure (whether this is actually leading to a deeper culture change is another question). In healthcare, for understandable reasons, risk is not something one simply tolerates blithely. It seems to me rather easy to decry this as an organisational failing – would you go to a hospital that wasn’t “risk-averse”? The other side of this is that pretending an organisation is innovative when it has very little risk tolerance is absurd. Innovation involves the unknown, and the unknown inherently involves risk and unintended consequences. You can’t have innovation in a rigorously planned, predictable way, in healthcare or anywhere else.

I don’t have time to write about this in much detail, but it does strike me that risk and risk tolerance are key to this issue. It is easy to talk broadly about “culture”, but in the end we are dealing not only with systems but with individuals within those systems, each with different views and experiences of risk. I have in the past found the writings of John Adams and the Douglas-Wildavsky model of risk helpful in this regard (disclaimer: I am not endorsing all of the above authors’ views) and perhaps will return to this topic over the coming weeks. Find below an image of a “risk thermostat”: one of Adams’ ideas is that individuals and systems have a certain level of risk tolerance, and reducing risk exposure in one area may lead to more risky behaviour in another (his example is drivers driving carefully by speed traps/black-spot signs and more recklessly elsewhere).
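Adams’ thermostat idea – that people adjust their behaviour to keep perceived risk near a personal set-point, so that reducing risk in one place shifts risk-taking elsewhere – can be caricatured in a few lines. This is my own toy illustration of risk compensation, not Adams’ actual model, which is qualitative:

```python
# Toy "risk thermostat": behaviour adjusts so perceived risk tracks a set-point.
# Purely illustrative; the gain and units are arbitrary.

def adjust_caution(perceived_risk, target_risk, caution, gain=0.5):
    """If perceived risk falls below the personal target (e.g. a new safety
    measure), caution is relaxed; if it rises above, caution increases."""
    return caution + gain * (perceived_risk - target_risk)

# A speed trap halves the driver's perceived risk; the thermostat responds
# by easing off caution, i.e. driving more recklessly elsewhere.
before = adjust_caution(perceived_risk=1.0, target_risk=1.0, caution=1.0)
after = adjust_caution(perceived_risk=0.5, target_risk=1.0, caution=1.0)
```

The implication for healthcare organisations is the uncomfortable one in the text: clamping down on risk in one visible place does not abolish the system’s overall appetite for risk, it relocates it.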

[Image: John Adams’ “risk thermostat” diagram]

The perils of trying to do too much: data, the Life Study, and Mission Overload

One interesting moment at the CCIO Network Summer School came in a panel discussion. A speaker was talking about the vast amount of data that can be collected and how impractical this can be. He gave the example – while acknowledging that he completely understood why this particular data might be interesting – of the postcode of the patient’s most frequent visitor. As someone pointed out from the audience, the person in the best position to collect this data is probably the patient themselves.

When I heard this discussion, the part of me that still harbours research ambitions thought “that is a very interesting data point.” And working in a mixed urban/rural catchment area, in a service which has experienced unit closures and the centralisation of admission beds, I thought of how illustrative that data would be of the personal experience behind those decisions.

However, the principle that was being stated – that clinical data is that which is generated in clinical activity – seems to be one of the only ways of keeping manageable the potentially vast amount of data that could go into an EHR. Recently I have been reading Helen Pearson’s “The Life Project”, a review of which will appear here soon enough. Pearson tells the story of the UK birth cohort studies. Most of this story is an account of these studies surviving against the institutional odds and becoming key cornerstones of British research. Pearson explicitly tries to create a sense of civic pride about these studies, akin to that felt about the NHS and BBC. However, in late 2015 the most recent birth cohort study, the Life Study, was cancelled for sheer lack of volunteers. The reasons for this are complex, and to my mind suggest something changing in British society in general (in the 1946 study it was assumed that mothers would simply comply with the request to participate as a sort of extension of wartime duty) – but one factor was surely the number of questions to be answered and samples to be given:

But the Life Study aims to distinguish itself, in particular by collecting detailed information on pregnancy and the first year of the children’s lives — a period that is considered crucial in shaping later development.

The scientists plan to squirrel away freezer-fulls of tissue samples, including urine, blood, faeces and pieces of placenta, as well as reams of data, ranging from parents’ income to records of their mobile-phone use and videos of the babies interacting with their parents. (from Feb 2015 article in Nature by Pearson)

All very worthy, but it seems to me that the birth cohort studies were victims of their own success. Pearson describes how, almost from the start, they were torn between a more medical outlook and a more sociological one. Often this tension was fruitful, but in the case of the Life Study it seems to have led to a Mission Overload.

I have often felt that there is a commonality of interest between the health IT community, the research methodology community, and the medical education community; the potential of EHRs for epidemiological research, dissemination of best evidence at the point of care, and realistic “virtual patient” construction is vast. I will come back to these areas of commonality again. However, there is also a need to remember the different ways a clinician, an IT professional, an epidemiologist, an administrator, and an educationalist might look at data. The Life Study perhaps serves as a warning.

Unintended consequences and Health IT

Last week, along with other members of the Irish CCIO group, I attended the UK CCIO Network Summer School. Among many thought-provoking presentations and a wonderful sense of collegiality (and of the scale of the challenges ahead), one which stood out was actually a video presentation by Dr Robert Wachter, whose review of IT in the NHS (in England) is due in the coming weeks, and who is also the author of “The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age”

[Image: cover of “The Digital Doctor”]

Amongst many other things, Dr Wachter discussed the unintended consequences of health IT. He discussed how, pretty much overnight, radiology imaging systems destroyed “radiology rounds” and a certain kind of discussion of cases. He discussed how hospital doctors using eHealth systems sit in computer suites with other doctors, rather than being on the wards. Perhaps most strikingly, he showed a child’s picture of her visit to the doctor, in which the doctor is turned away from the patient and her mother, hunched over a keyboard:

[Image: a child’s drawing of her visit to the doctor]

This reminded me a little of Cecil Helman’s vision of the emergence of a “technodoctor”, which I suspected was something of a straw man:

Like many other doctors of his generation – though fortunately still only a minority – Dr A prefers to see people and their diseases mainly as digital data, which can be stored, analysed, and then, if necessary, transmitted – whether by internet, telephone or radio – from one computer to another. He is one of those helping to create a new type of patient, and a new type of patient’s body – one much less human and tangible than those cared for by his medical predecessors. It is one stage further than reducing the body down to a damaged heart valve, an enlarged spleen or a diseased pair of lungs. For this ‘post-human’ body is one that exists mainly in an abstract, immaterial form. It is a body that has become pure information.

I still suspect this is overall a straw man, and Helman admits this “technodoctor” is “still only [part of] a minority” – but perhaps the picture above shows this is less of a straw man than we might be comfortable with.

Is there a way out of the trap of unintended consequences? On my other blog I have posted on Evgeny Morozov’s “To Save Everything, Click Here”, a book which, while I had many issues with Morozov’s style and approach (the post ended up being over 2,000 words, which is another unintended consequence), is extremely thought-provoking. Morozov positions himself against “epochalism” – the belief that, because of technology (or other factors), we live in a unique era. He also decries “solutionism”, a more complex phenomenon, of which he writes:

I call the ideology that legitimizes and sanctions such aspirations “solutionism.” I borrow this unabashedly pejorative term from the world of architecture and urban planning – where it has come to refer to an unhealthy preoccupation with sexy, monumental and narrow-minded solutions – the kind of stuff that wows audiences at TED Conferences – to problems that are extremely complex, fluid and contentious. These are the kind of problems that, on careful examination, do not have to be defined in the singular and all-encompassing ways that “solutionists” have defined them; what’s contentious then, is not their proposed solution but their very definition of the problem itself. Design theorist Michael Dobbins has it right: solutionism presumes rather than investigates the problems that it is trying to solve, reaching “for the answer before the questions have been fully asked.” How problems are composed matters every bit as much as how problems are resolved.

As will be very clear from my other article, I don’t quite buy everything Morozov is selling (and definitely not the way he sells it!), but in this passage I believe we are close to something that can help us avoid some of the traps that lead to unintended consequences. Of course, these are by definition unintended, and so perhaps not that predictable, but by investigating rather than presuming the problems we are trying to solve, and by not reaching for the answer before the questions have been fully asked, perhaps future children’s pictures of their trip to the hospital won’t feature a doctor turning their back on them to commune with the computer.

Anthony Burgess on decimalisation, the cashless society, and cognitive reserve

This quote made me wonder about the cognitive impact of decimalisation. There seems to be a consensus that cognitively challenging activities help to reduce and/or delay dementia, and I wonder: aside from the poetic and cultural losses Burgess enumerates, could the change from the rich arithmetic complexity of l.s.d. to the simplicity of the decimal system have had some kind of epidemiological effect? And now, with the abolition of cash openly mooted, and the corresponding loss of the calculation of change – which I assume is one of the commonest conscious arithmetic calculations we make – well, who knows what will happen?

Probably not all that much. Or possibly a lot. I haven’t been able to find solid empirical research or much theoretical discussion of the topic.
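To make the contrast concrete, here is a toy sketch (my own illustration, not anything from Burgess or from the research literature) of what calculating change involved under the old £sd system, with its mixed bases of twelve and twenty, compared with a single decimal base:

```python
# Toy illustration of pre-decimal British change-making:
# 12 pence (d) = 1 shilling (s); 20 shillings = 1 pound (£),
# so £1 = 240 old pence.

def lsd_to_pence(pounds, shillings, pence):
    """Convert a £sd amount to old pence."""
    return pounds * 240 + shillings * 12 + pence

def pence_to_lsd(total_pence):
    """Convert old pence back to (pounds, shillings, pence)."""
    pounds, remainder = divmod(total_pence, 240)
    shillings, pence = divmod(remainder, 12)
    return pounds, shillings, pence

def lsd_change(tendered, price):
    """Change due; both arguments are (pounds, shillings, pence) tuples."""
    return pence_to_lsd(lsd_to_pence(*tendered) - lsd_to_pence(*price))

# A bill of 2s 6d (a half-crown), paid with a £1 note:
print(lsd_change((1, 0, 0), (0, 2, 6)))  # (0, 17, 6) -> 17s 6d change
```

Decimal change is a single subtraction; the £sd version obliges the shopkeeper to carry remainders across two different bases in their head. Whether that daily mental workout amounted to anything epidemiologically is, as I say, an open question.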

Anyhow, here is Anthony Burgess, from his 1990 autobiography You’ve Had Your Time, on decimalisation:

 

“Before the shameful liquidation of the British penny into a p, there had been an ancient and eminently rational coinage, with twelve pence to the shilling and twenty shillings to the pound. This meant divisibility of the shilling by all the even integers up to twelve. Time and money went together: only in Fritz Lang’s Metropolis is there a ten-hour clock. Money could be divided according to time, and for the seven-day week it was only necessary to add a shilling to a pound and create a guinea. A guinea was not only divisible by seven, it could be split ninefold and produce a Straits dollar. By brutal government fiat, at a time when computer engineers were protesting that decimal system was out of date and the octal principle was the only valid one for cybernetics, this beautiful and venerable monetary complex was abolished in favour of a demented abstraction that was a remnant of the French revolutionary nightmare. The first unit to go was the half-crown or tosheroon, the loveliest and most rational coin of all. It was a piece of eight, a genuine dollar though termed a half one (the dollar sign was originally an eight with a bar through it). It does not even survive as an American bit or an East Coast Malayan kupang. Britain’s troubles began with this jettisoning of a traditional solidity, rendering Falstaff’s tavern bill and ‘Sing a song of sixpence’ unintelligible. I have never been able to forgive this.”

Entertainingly enough, while searching for this quote online to save me having to type it out, I came across this page on the Royal Mint Museum’s website – which quotes the “beautiful and venerable monetary complex” passage and nothing else!

#irishmed, Telemedicine and “Technodoctors”

This evening (all going well) I will participate in the Twitter #irishmed discussion, which is on telemedicine.

On one level, telemedicine does not apply all that much to me in my clinical area of psychiatry. It seems most appropriate for more data-driven specialties, or ones which have a much greater role for interpreting (and conveying the results of!) lab tests. Having said that, in the full sense of the term, telemedicine does not just refer to video consultations but to any remote medical interaction. I spend a lot of time on the phone.

I do have a nagging worry about the loss of the richness of the clinical encounter in telemedicine, and I am looking forward to some interesting discussions on this point this evening. I do worry that this is an area in which the technology can drive the process to a degree that crowds out the clinical need.

The following quotes are ones I don’t necessarily agree with, but they are worth pondering. The late GP and anthropologist Cecil Helman wrote quite scathingly of the “technodoctor”. In his posthumously published “An Amazing Murmur of the Heart”, he wrote:

 

Young Dr A, keen and intelligent, is an example of a new breed of doctor – the ones I call ‘techno-doctors’. He is an avid computer fan, as well as a physician. He likes nothing better than to sit in front of his computer screen, hour after hour, peering at it through his horn-rimmed spectacles, tap-tapping away at his keyboard. It’s a magic machine, for it contains within itself its own small, finite, rectangular world, a brightly coloured abstract landscape of signs and symbols. It seems to be a world that is much easier for Dr A to understand, and much easier for him to control, than the real world – one largely without ambiguity and emotion.

Later in the same chapter he writes:

 

Like many other doctors of his generation – though fortunately still only a minority – Dr A prefers to see people and their diseases mainly as digital data, which can be stored, analysed, and then, if necessary, transmitted – whether by internet, telephone or radio – from one computer to another. He is one of those helping to create a new type of patient, and a new type of patient’s body – one much less human and tangible than those cared for by his medical predecessors. It is one stage further than reducing the body down to a damaged heart valve, an enlarged spleen or a diseased pair of lungs. For this ‘post-human’ body is one that exists mainly in an abstract, immaterial form. It is a body that has become pure information.

Now, as I have previously written:

One suspects that Dr A is something of a straw man, and by putting listening to the patient in opposition to other aspects of practice, I fear that Dr Helman may have been stretching things to make a rhetorical point (surely one can make use of technology in practice, even be something of a “techno-doctor”, and nevertheless put the patient’s story at the heart of practice?) Furthermore, in its own way a recourse to anthropology or literature to “explain” a patient’s story can be as distancing, as intellectualizing, as invoking physiology, biochemistry or the genome. At times the anthropological explanations seem pat, all too convenient – even reductionist.

… and re-reading this passage from Helman today, involved as I am with the CCIO, Dr A seems even more of a straw man (“horn-rimmed spectacles” indeed!) – I haven’t seen much evidence that the CCIO group, which it is fair to say includes a fair few “technodoctors”, as well as technonurses, technophysios and technoAHPs in general, is devoted to reducing the human to pure information. Indeed, the aim is to put the person at the centre of care.

 

And yet… Helman’s critique is an important one. The essential point he makes is valid, and reminds us of a besetting temptation when it comes to introducing technology into care: it is very easy for the technology to drive the process, rather than clinical need. Building robust ways of preventing this is one of the challenges of the eHealth agenda, and at its core, keeping the richness of human experience at the centre of the interaction is key. Telemedicine is a tool with some fairly strong advantages, especially in bringing specialty expertise to remoter areas. However, there would be a considerable loss if it became the dominant mode of clinical interaction. Again from my review of An Amazing Murmur of the Heart:

 

In increasingly overloaded medical curricula, where an ever-expanding amount of physiological knowledge vies for attention with fields such as health economics and statistics, the fact that medicine is ultimately an enterprise about a single relationship with one other person – the patient – can get lost. Helman discusses the wounded healer archetype, relating it to the shamanic tradition. He is eloquent on the accumulated impact of so many experiences, even at a professional remove, of disease and death: “as a doctor you can never forget. Over the years you become a palimpsest of thousands of painful, shocking memories, old and new, and they remain with you for as long as you live. Just out of sight, but ready to burst out again at any moment”.

A Medical Informatics Education, 1996.

Today I walked to UCD, much as I did nearly twenty years ago on 21st September 1996 to begin college. This time I was walking not to Belfield itself but to UCD Nexus, located a little further on in Belfield Office Park, for a meeting in my new role as CCIO liaison to ARCH (if that’s too many acronyms, don’t ask).

Various nostalgic impressions mingled. Cyclists seem more aggressive than they were. UCD is a slicker operation, and more given to self-promotion, than it was. It had been a while since I had actually walked through campus; the last few times I had driven in, found parking near-impossible, gone to a meeting, and left. Belfield seemed to have become a bit like Docklands, a rather alienating landscape dominated by massive buildings without human scale.

 

Walking through, however, I find Belfield reassuringly unchanged at its core. The Science Block has greatly expanded, but the central lecture theatre structure is unchanged. The Arts Block, the fundamental library structure, the lake, the restaurant – all are different only superficially. The cafe that was officially known as “Finnegan’s Break” and was always called “Hilpers” is now gone.

I was also a little taken aback by how much human interaction there was. I expected serried ranks of screen-focused students, but in the restaurant I saw only one person texting while talking to her friends – and while that wouldn’t have happened in 1996, it would have in 2000. A few years ago there were PC terminals all over the place; these seem to have largely disappeared.

Given the nature of the meeting I was going to, I thought about one of the academic highlights of that first year of medicine: medical informatics. This was a subject which, frankly, was much derided. Why? Because it seemed irrelevant, I think – somewhat beneath those who knew anything much about computers, and somewhat irksome to those who didn’t. Crucially, I can’t recall anything specifically medical about medical informatics.

We had lectures on what a CPU was and so forth (more of which anon) and workshops on the use of Word, Excel, Access and the other Microsoft biggies of the time. The undoubted high point was the lecturer, Mel Ó Cinnéide, suddenly pulling a mouse out of his pocket with the immortal words: “for those who haven’t seen one, this is a mouse.”

Now the wheel has come full circle: one wonders how many of today’s laptop- and tablet-focused cohort of students have ever seen a mouse. UCD Netsoc was, for a few years, the only way to get internet access as a student, and the enthusiastic queued up from early morning to get an account.

As with many other pre-clinical subjects at the time, Medical Informatics teaching was delivered by academics in their own discipline, who no doubt found the prospect of teaching medical students even less enticing than teaching students who were at least pursuing the subject at greater length.

In subsequent years, Medical Informatics was revamped and, I gather, made more clinically relevant. And now, as Ireland slouches towards eHealth, the relevance of IT to medicine is much more obvious. I am sure that Medical Informatics in UCD, and equivalent courses in other medical schools, is now taught in a clinically relevant, pedagogically sound manner, with defined learning objectives and so forth. Nevertheless, I have my doubts that in twenty years anyone will recall a moment from this teaching as vividly as the class of ’02 (mostly) recall Mel whipping out the mouse.