Anthropologizing Environmentalism – review by E. Donald Elliott of “Risk and Culture” by Mary Douglas and Aaron Wildavsky, Yale Law Journal, 1983

Recently I have been posting on the cultural theory of risk developed by Mary Douglas and Aaron Wildavsky. This is a PDF of a review of Douglas and Wildavsky’s 1982 book “Risk and Culture” by E. Donald Elliott, adjunct professor of Law at Yale.

The review summarises Wildavsky and Douglas’ thought very well, and gets to the heart of one issue I struggle with in their writing: their often dismissive approach to environmental risk:

Most readers will be struck not by the abstract theory but by its application to the rise of environmentalism. This emphasis is unfortunate. The attempt to “explain” environmentalism makes a few good points, but on the whole this part of the book is crude, shortsighted, and snide. On the other hand, the sections that consider the relationship between risk and culture on a more fundamental level are sensitive and thoughtful.

I think it unfortunate that the cultural theory of risk has ended up so overshadowed by this “crude, shortsighted, and snide” discussion of environmental risk (Wildavsky, if I recall correctly, was revealed to have taken undisclosed payments from the chemical industry). It remains a powerful explanatory tool, and in clinical practice and team working one finds that different approaches to risk are rooted in cultural practices.

Elliott’s review focuses on the environmental realm, but serves as a good and sceptical discussion of the more general focus of cultural theory of risk – and an introduction to what is sometimes a less than lucidly explained theory.

Engaging clinicians and the evidence for informatics innovations

A few weeks ago Richard Gibson from Gartner spoke to members of the CCIO group. It was a fascinating, wide-ranging talk – managing the time effectively was a challenge. Dr Gibson talked about the implications of technological innovations for acute care and long term care – as might be obvious from my previous post here, I have a concern that much of the focus on empowerment via wearables and consumer technology misses the point that the vast bulk of healthcare is acute care and long term care. As Dr Gibson pointed out, at the rate things are going healthcare will be the only economic, social, indeed human activity in years to come.

One long term concern I have about connected health approaches is engaging the wider group of clinicians. Groups like the CCIO group do a good job (in my experience!) of engaging those who are already interested, and more than likely unabashedly enthusiastic. At the other extreme, there is always going to be some resistance to innovation almost on principle. In between, there is a larger group who are interested but perhaps sceptical.

One occasional response from peers to what I will call “informatics innovations” (to emphasise that this is not only about ICT but also about care planning and various other approaches that do not depend on “tech” for implementation) is to ask “where is the evidence?” And often this is not a call for empirical studies as such, but for an impossible standard – RCTs!

Now, I advocate for empirical studies of any innovation, and a willingness to admit when things are going wrong based on actual experience rather than theoretical evidence. In education, I strongly support the concept of Best Evidence Medical Education and indeed in following public debates and media coverage about education I personally find it frustrating that there is a sense that educational practice is purely opinion-based.

With innovation, the demand for RCT-based evidence is something of a category error. There is also a wider issue of how “evidence-based” has migrated from healthcare to politics. In Helen Pearson’s Life Project we read how birth cohorts went from ignored, chronically underfunded studies run by a few eccentrics to celebrated, slightly less underfunded, flagship projects of British epidemiology and sociology. Since the 1990s, they have enjoyed a policy vogue in tandem with a political emphasis on “evidence-based policy.” My own thought on this is that it is one thing to have an evidence base for a specific therapy in medical practice, quite another for a specific intervention in society itself.

I am also reminded of a passage in the closing chapters of Donald Berwick’s Escape Fire (I don’t have a copy of the book to hand so bear with me) which essentially consists of a dialogue between a younger, reforming doctor and an older, traditionally focused doctor. Somewhat in the manner of the Socratic dialogues in which (despite the meaning ascribed now to “Socratic”) Socrates turns out to be correct and his interlocutors wrong, the younger doctor has ready counters for the grumpy arguments of the older one. That is until towards the very end, when in a heartfelt speech the older doctor reveals his concerns not only about the changes in practice but about what they mean for his own patients. It is easy to fall into a false dichotomy between doctors open to change and those closed to change; often what eager reformers perceive as resistance to change is based on legitimate concern about patient care. There are also concerns about an impersonal approach to medicine. Perhaps ensuring that colleagues know, to as robust a level as innovation allows, that patient care will be improved is one way through this impasse.

 

Risk and innovation: reflections after the #IrishMed tweetchat on Innovation in Health Care


Last night there was an #IrishMed tweetchat on Innovation and Healthcare. For those unfamiliar with this format: for an hour (from 10 pm Irish time) there is a co-ordinated tweet chat curated by Dr Liam Farrell and various guests. Every ten minutes or so a new theme/topic is introduced. There’s a little background here to last night’s chat. The themes were:

 

T1 – What does the term ‘Innovation in healthcare’ mean to you?

T2 – What are the main challenges faced by healthcare organisations to be innovative and how do we overcome them?

T3 – What role does IT play in the innovation process?

T4 – How can innovations in health technology empower patients to manage their own care?

T5 – How can we encourage collaboration to ensure innovation across specialties & care settings?

I’ve blogged before about some of my social media ambivalence, especially when discussing complex issues. However I was favourably impressed – again – by the quality of discussion and a willingness to recognise nuance and complexity. The themes which tended to emerge were the importance of prioritising the person at the heart of healthcare, and that innovation in healthcare should not be for its own sake but for improving outcomes and quality of care.

One aspect I ended up tweeting about myself was the issue of risk. In the innovation world, “risk-averse” is an insult. We can see this in the wider culture, with terms like “disruptive” becoming almost entirely positive, and a change in the public rhetoric around failure (whether this is actually leading to a deeper culture change is another question). In healthcare, for understandable reasons, risk is not something one simply tolerates blithely. It seems to me rather too easy to decry this as an organisational failing – would you go to a hospital that wasn’t “risk-averse”? The other side of this is that pretending an organisation is innovative when it has very little risk tolerance is absurd. Innovation involves the unknown, and the unknown inherently involves risk and unintended consequences. You can’t have innovation in a rigorously planned, predictable way, in healthcare or anywhere else.

I don’t have time to write about this in much detail, but it does strike me that risk and risk tolerance are key here. It is easy to talk broadly about “culture”, but in the end we are dealing not only with systems, but with individuals within those systems with different views and experiences of risk. I have in the past found the writings of John Adams and the Douglas-Wildavsky model of risk helpful in this regard (disclaimer: I am not endorsing all of the above authors’ views) and perhaps will return to this topic over the coming weeks. Find below an image of a “risk thermostat”: one of Adams’ ideas is that individuals and systems have a certain level of risk tolerance, and reducing risk exposure in one area may lead to more risky behaviour in another (his example is drivers driving carefully near speed traps/black spot signs and more recklessly elsewhere).

[Image: John Adams’ “risk thermostat” diagram]
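Adams’ idea can be sketched in a few lines of code. This is my own toy illustration, not Adams’ formalisation: an agent adjusts its own risk-taking so that total perceived risk tracks a personal setpoint, so that when external risk falls, the agent’s own risk-taking rises to compensate. All names and the update rule are assumptions for illustration only.

```python
# Toy sketch of a "risk thermostat" (illustrative only, not Adams' model):
# an agent nudges its risk-taking behaviour so that total perceived risk
# (own behaviour + risk imposed by the environment) settles at a setpoint.

def thermostat_step(behaviour, perceived_risk, setpoint, gain=0.5):
    """Move behaviour a fraction of the way toward closing the gap
    between the agent's risk setpoint and the risk it perceives."""
    return behaviour + gain * (setpoint - perceived_risk)

def simulate(setpoint, external_risk, steps=50, gain=0.5):
    """Iterate the thermostat until behaviour settles; returns the
    agent's own contribution to risk in the given environment."""
    behaviour = 0.0
    for _ in range(steps):
        perceived = behaviour + external_risk
        behaviour = thermostat_step(behaviour, perceived, setpoint, gain)
    return behaviour

# Same agent (same setpoint), two environments: one risky, one made safer.
risky_env_behaviour = simulate(setpoint=1.0, external_risk=0.8)
safe_env_behaviour = simulate(setpoint=1.0, external_risk=0.2)
```

In this sketch the agent ends up taking more risk itself in the safer environment (behaviour settles near setpoint minus external risk), which is the speed-trap example in miniature: reduce risk in one place and the thermostat pushes risk-taking up elsewhere, leaving total perceived risk roughly where the agent wanted it.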