A bit of a change of speed from my usual fretting over the myth of digital natives and whether we can learn about leadership from the bits of the Bible that are Angry at God….
I am aware of the irony of posting based on the slides alone and not on the context of the presentation as a whole! This, from Christian Bokhove of the University of Southampton, is excellent on the various myths that can arise in science, education and technology … but also on their at times equally mythical rebuttals! For instance, the persistent belief that spinach is an excellent source of iron is a myth … but so is the persistent claim that the myth arose because of a misplaced decimal point. There is also a slide on the claim that papers/articles featuring neuroimages are judged more favourably than those without … a myth (or rather a selective selection of it-seems-true evidence?) I am afraid I may have helped perpetuate:
In 2007, Colorado State University’s McCabe and Castel published research indicating that undergraduates, presented with brief articles summarising fictional neuroscience research (articles which made claims unsupported by the fictional evidence presented), rated articles illustrated by brain imaging as more scientifically credible than those illustrated by bar graphs, a topographical map of brain activation, or no image at all. Taken with the Bennett paper, this illustrates one of the perils of neuroimaging research, especially when it enters the wider media: the social credibility is high, despite the methodological challenges.
This is yet another stimulating blog post from Finding Nature. I know where the author is coming from in the structure of the post – pointing out the falseness of the dichotomy between the affective attraction to nature Gove discusses and science.
However, I do wonder if an emphasis on “what science tells us about connection, beauty, emotion, meaning and compassion” does run some risk of unweaving the rainbow a little. One of the findings of Miles Richardson and others is that factual knowledge about nature – identification of species and so on – is not correlated with emotional connection. As knowledge-based activity often underpins nature education, this can mean opportunities for connection are missed. But could something similar happen if we only value nature connection Because These Peer Reviewed Papers tell us it’s OK to do so?
In ‘The Unfrozen Moment – Delivering A Green Brexit,’ Secretary of State Michael Gove sets out his vision on the future of our natural environment. In this speech, and at the Green Alliance event a week earlier, I was struck by the recurring themes of beauty, emotion, meaning and compassion. Four aspects of our relationship with the natural world that our recent research has linked to improving our connection with nature – see my blog and the open access paper for more detail. It is great to hear the Secretary of State speaking from the heart. However, the speech (see excerpt below) implies a distinction between such themes and science. Having evidence-based policy makes sense. This blog points out that there is science of beauty, emotion, meaning and compassion, and this should also form part of the evidence base that informs environmental policy.
“I grew up with an emotional attachment…
View original post 1,136 more words
From my other blog, some thoughts on technologies that initially seemed cool and impressive and very much fun/useful … but months later haven’t really become part of life. Think of all those unused wearables and the graveyards of cool-seeming but underused technology … and wonder why is it underused? Is the fault in our stars, or in ourselves?
A while back, I blogged enthusiastically about Edo blocks. These are Lego-ish blocks made of cardboard. At the time, they had proved great fun to make. They seemed to be a wonderful addition to play. And yet, months later, they moulder unused by actual children, taking up space.
Similarly, I blogged about Flic, a “wireless smart button” which again seemed just wonderful initially. And yet, again months later, Flic is largely unused. In this case, the Flics were all too attractive to small children, who rapidly disassembled them. The user interface of the Flic app was very easy to use, and as my blog post seemed to indicate, there were all sorts of exciting potential uses. And yet, and yet …
In the initial assembly of the Edo blocks, it was rather slow going, and my children were more attracted by the cardboard box the Edo blocks came in than…
View original post 202 more words
From Mike Caulfield, a piece that reminds me of the adage Garbage In, Garbage Out:
For many years, the underlying thesis of the tech world has been that there is too much information and therefore we need technology to surface the best information. In the mid 2000s, that technology was pitched as Web 2.0. Nowadays, the solution is supposedly AI.
I’m increasingly convinced, however, that our problem is not information overload but information underload. We suffer not because there is just too much good information out there to process, but because most information out there is low quality slapdash takes on low quality research, endlessly pinging around the spin-o-sphere.
Take, for instance, the latest news on Watson. Watson, you might remember, was IBM’s former AI-based Jeopardy winner that was going to go from “Who is David McCullough?” to curing cancer.
So how has this worked out? Four years later, Watson has yet to treat a patient. It’s hit a roadblock with some changes in backend records systems. And most importantly, it can’t figure out how to treat cancer because we don’t currently have enough good information on how to treat cancer:
“IBM spun a story about how Watson could improve cancer treatment that was superficially plausible – there are thousands of research papers published every year and no doctor can read them all,” said David Howard, a faculty member in the Department of Health Policy and Management at Emory University, via email. “However, the problem is not that there is too much information, but rather there is too little. Only a handful of published articles are high-quality, randomized trials. In many cases, oncologists have to choose between drugs that have never been directly compared in a randomized trial.”
This is not just the case with cancer, of course. You’ve heard about the reproducibility crisis, right? Most published research findings are false. And they are false for a number of reasons, but primary reasons include that there are no incentives for researchers to check the research, that data is not shared, and that publications aren’t particularly interested in publishing boring findings. The push to commercialize university research has also corrupted expertise, putting a thumb on the scale for anything universities can license or monetize.
In other words, there’s not enough information out there, and what’s out there is generally worse than it should be.
You can find this pattern in less dramatic areas as well — in fact, almost any place that you’re told big data and analytics will save us. Take Netflix as an example. Endless thinkpieces have been written about the Netflix matching algorithm, but for many years that algorithm could only match you with the equivalent of the films in the Walmart bargain bin, because Netflix had a matching algorithm but nothing worth watching. (Are you starting to see the pattern here?)
In this case at least, the story has a happy ending. Since Netflix is a business and needs to survive, they decided not to pour the majority of their money into newer algorithms to better match people with the version of Big Momma’s House they would hate the least. Instead, they poured their money into making and obtaining things people actually wanted to watch, and as a result Netflix is actually useful now. But if you stick with Netflix or Amazon Prime today it’s more likely because you are hooked on something they created than that you are sold on the strength of their recommendation engine.
Let’s belabor the point: let’s talk about Big Data in education. It’s easy to pick on MOOCs, but remember that the big value proposition of MOOCs was that with millions of students we would finally spot patterns that would allow us to supercharge learning. Recommendation engines would parse these patterns, and… well, what? Do we have a bunch of superb educational content just waiting in the wings that I don’t know about? Do we even have decent educational research that can conclusively direct people to solutions? If the world of cancer research is compromised, the world of educational research is a control group wasteland.
Recently I have been posting on the cultural theory of risk developed by Mary Douglas and Aaron Wildavsky. This is a PDF of a review of Douglas and Wildavsky’s 1982 book “Risk and Culture” by E. Donald Elliott, adjunct professor of law at Yale.
The review summarises Douglas and Wildavsky’s thought very well, and gets to the heart of one issue I struggle with in their writing: their oft dismissive approach to environmental risk:
Most readers will be struck not by the abstract theory but by its application to the rise of environmentalism. This emphasis is unfortunate. The attempt to “explain” environmentalism makes a few good points, but on the whole this part of the book is crude, shortsighted, and snide. On the other hand, the sections that consider the relationship between risk and culture on a more fundamental level are sensitive and thoughtful.
I think it unfortunate that the cultural theory of risk has ended up so much overshadowed by this “crude, shortsighted, and snide” discussion of environmental risk (Wildavsky, if I recall correctly, was revealed to have taken undisclosed payments from the chemical industry). It remains a powerful explanatory tool, and in clinical practice and team working one finds that different approaches to risk are rooted in cultural practices.
Elliott’s review focuses on the environmental realm, but serves as a good and sceptical discussion of the more general focus of cultural theory of risk – and an introduction to what is sometimes a less than lucidly explained theory.
So this is something I posted on my other blog. During a busy day it occurred to me that there are parallels between this story and what can happen in medicine, and healthcare generally. I would like to think I am helping people and doing what I can to practise safely. And I imagine that, if such were possible, the greenfinches would have given me pretty good feedback … but in the end, rather than helping them live, I killed them.
It made me think particularly of polypharmacy and the need to consider the overall system you are intervening in when you are suggesting or making even the smallest change in a patient’s life.
I have used this blog as a sort of journal of various observations on bird feeding. Unfortunately, and humblingly, I have realised that my bird feeding activity has in fact been doing the precise opposite of what I hoped. Killing, not preserving life.
I was familiar with Trichomonas infections – a condition which especially affects greenfinches – and had washed and even replaced my feeders fairly regularly, I had thought (but far from regularly enough).
A few weeks ago I saw some definite cat / hawk kills in the garden with evident wounds. There were also a couple of less evidently predator-related deaths. Foolishly I put these down to cat activity also, based on dim memories of cats killing birds but not eating them. I also wondered if there was some dehydration going on, given the recent hot weather, and redoubled my efforts to put out water.
I had noticed also that…
View original post 298 more words