Education is more prone to this, and from a wider group of people. Everyone has their idea of what “they” should teach, ascribing to schools magical powers to end social ills by simply putting something on the curriculum.
Much of this is very worthy and well-intentioned. People want their children to be prepared for life. That the things suggested may not lend themselves to “being on the curriculum” with any degree of effectiveness is rarely considered; nor is the fact that curricula are pretty overloaded anyway.
How often do you hear the phrase “Schools should teach…” in the media?
We’ve noticed that barely a week goes by without a well-meaning person or organisation insisting that something else is added to the curriculum, often without any consideration as to how it could be fitted into an already-squeezed school day. Obviously the curriculum needs to be updated and improved upon over time, and some of the topics proposed are incredibly important. However, there are only so many hours in the school week, and we believe that teachers and schools are the ones best placed to decide what their students need to know, and not have loads of additional things forced on them by government because of lobbying by others.
So far this year we count 22 suggestions for what schools should do with pupils; as of today, this is the list:
Why We Should Teach School Aged Children About Baby Loss
Make schools colder to improve learning
Schools ‘should help children with social media risk’
Pupils should stand or squat at their desks, celebrity GP says
MP’s call for national anthem teaching in schools to unite country
It’s up to us: heads and teachers must model principled, appropriate and ethical online behaviour
Primary school children need to learn about intellectual property, Government agency says
Call for more sarcasm at school is no joke
Schools should teach more ‘nuanced’ view of feminism, Girls’ School Association president says
Schools ‘should teach children about the dangers of online sexual content’
Schools should teach children resilience to help them in the workplace, new Education Secretary says
Government launches pack to teach pupils ‘importance of the Commonwealth’
Schools must not become like prisons in fight against knife crime, headteacher warns
Schools should teach all pupils first aid, MPs say
Call for agriculture GCSE to be introduced as UK prepares to leave the EU
Councils call for compulsory mental health counselling in all secondary schools
Set aside 15 minutes of dedicated reading time, secondary schools told
Pupils must be taught about architecture, says Gokay Deveci
A serious education on the consequences of obesity is needed for our most overweight generation
Teach girls how to get pregnant, say doctors
Start teaching children the real facts of life
I am confident there are a lot more out there that PTE haven’t been linked with – from sarcasm to “how to get pregnant” to first aid to intellectual property to resilience.
I do wish someone would do my study on medical journals’ imperatives for me!
We hear a lot about digital natives. They are related to the similarly much-mentioned millennials; possibly they are exactly the same people (although as I am going to argue that digital natives do not exist, perhaps millennials will also disappear in a puff of logic). Born after 1980, or maybe after 1984, or maybe after 1993, or maybe after 2007, or maybe after 2010, the digital native grew up with IT, or maybe grew up with the internet, or grew up with social media, or at any rate grew up with something that the prior generation – the “digital immigrants” (born a couple of years before the first cut-off above, which is where I am too) – didn’t.
“The media add fuel to that fire. When anything goes wrong – in politics, business, a celebrity relationship – journalists pounce, dissecting the crisis with glee and demanding an instant remedy. When golfer Tiger Woods was outed as a serial philanderer, he vanished from the public eye for three months before finally breaking his silence to issue a mea culpa and announce he was in therapy for sex addiction. How did the media react to being made to wait that long? With fury and indignation. The worst sin for a public figure on the ropes is to fail to serve up an instant exit strategy.
“That impatience fuels a tendency to overhype fixes that later turn out to be complete turkeys. An engineer by training, Marco Petruzzi worked as a globetrotting management consultant for 15 years before abandoning the corporate world to build better schools for the poor in the United States. We will meet him again later in the book, but for now consider his attack on our culture of hot air. ‘In the past, hard-working entrepreneurs developed amazing stuff over time, and they did it, they didn’t just talk about it, they did it,’ he says. ‘We live in a world now where talk is cheap and bold ideas can create massive wealth without ever having to deliver. There are multi-billionaires out there who never did anything but capture the investment cycle and the spin cycle at the right moment, which just reinforces a culture where people don’t want to put in the time and effort to come up with real and lasting solutions to problems. Because if they play their cards right, and don’t worry about the future, they can get instant financial returns’
The idea that a new generation of students is entering the education system has excited recent attention among educators and education commentators. Termed ‘digital natives’ or the ‘Net generation’, these young people are said to have been immersed in technology all their lives, imbuing them with sophisticated technical skills and learning preferences for which traditional education is unprepared. Grand claims are being made about the nature of this generational change and about the urgent necessity for educational reform in response. A sense of impending crisis pervades this debate. However, the actual situation is far from clear. In this paper, the authors draw on the fields of education and sociology to analyse the digital natives debate. The paper presents and questions the main claims made about digital natives and analyses the nature of the debate itself. We argue that rather than being empirically and theoretically informed, the debate can be likened to an academic form of a ‘moral panic’. We propose that a more measured and disinterested approach is now required to investigate ‘digital natives’ and their implications for education.
Anyway, back to the digital natives. Bennett et al begin with a quote from Marcel Proust:
The one thing that does not change is that at any and every time it appears that there have been ‘great changes’.
Marcel Proust, Within a Budding Grove
The authors summarise what a digital native is supposed to be like – and the not exactly extensive evidence base for their existence:
The claim made for the existence of a generation of ‘digital natives’ is based on two main assumptions in the literature, which can be summarised as follows:
1. Young people of the digital native generation possess sophisticated knowledge of and skills with information technologies.
2. As a result of their upbringing and experiences with technology, digital natives have particular learning preferences or styles that differ from earlier generations of students.
In the seminal literature on digital natives, these assertions are put forward with limited empirical evidence (eg, Tapscott, 1998), or supported by anecdotes and appeals to common-sense beliefs (eg, Prensky, 2001a). Furthermore, this literature has been referenced, often uncritically, in a host of later publications (Gaston, 2006; Gros, 2003; Long, 2005; McHale, 2005; Skiba, 2005). There is, however, an emerging body of research that is beginning to reveal some of the complexity of young people’s computer use and skills.
No one denies that a lot of young people use a lot of technology – but not all:
In summary, though limited in scope and focus, the research evidence to date indicates that a proportion of young people are highly adept with technology and rely on it for a range of information gathering and communication activities. However, there also appears to be a significant proportion of young people who do not have the levels of access or technology skills predicted by proponents of the digital native idea. Such generalisations about a whole generation of young people thereby focus attention on technically adept students. With this comes the danger that those less interested and less able will be neglected, and that the potential impact of socio-economic and cultural factors will be overlooked. It may be that there is as much variation within the digital native generation as between the generations.
It is often suggested that children who are merrily exploring the digital world are ground down with frustration by not having the same access to computers in school. This is part of a more general demand for transformation (with rhetoric familiar to the health IT world; the word “disruptive” in its modern usage had not quite caught on in 2008). As is often the case, the empirical evidence (and also, I would say, a certain degree of common sense) is not with the disrupters:
The claim we will now examine is that current educational systems must change in response to a new generation of technically adept young people. Current students have been variously described as disappointed (Oblinger, 2003), dissatisfied (Levin & Arafeh, 2002) and disengaged (Prensky, 2005a). It is also argued that educational institutions at all levels are rapidly becoming outdated and irrelevant, and that there is an urgent need to change what is taught and how (Prensky, 2001a; Tapscott, 1998). For example, Tapscott (1999) urges educators and authorities to ‘[g]ive students the tools, and they will be the single most important source of guidance on how to make their schools relevant and effective places to learn’ (p. 11). Without such a transformation, commentators warn, we risk failing a generation of students and our institutions face imminent obsolescence.
However, there is little evidence of the serious disaffection and alienation among students claimed by commentators. Downes’ (2002) study of primary school children (5–12 years old) found that home computer use was more varied than school use and enabled children greater freedom and opportunity to learn by doing. The participants did report feeling limited in the time they were allocated to use computers at school and in the way their use was constrained by teacher-directed learning activities. Similarly, Levin and Arafeh’s (2002) study revealed students’ frustrations at their school Internet use being restricted, but crucially also their recognition of the school’s in loco parentis role in protecting them from inappropriate material. Selwyn’s (2006) student participants were also frustrated that their freedom of use was curtailed at school and ‘were well aware of a digital disconnect but displayed a pragmatic acceptance rather than the outright alienation from the school that some commentators would suggest’ (p. 5).
In 2008, Bennett et al summarised issues relating to students’ actual (rather than perceived) technical adeptness and net savviness very similar to those raised by the 2016 authors:
Furthermore, questions must be asked about the relevance to education of the everyday ICTs skills possessed by technically adept young people. For example, it cannot be assumed that knowing how to look up ‘cheats’ for computer games on the Internet bears any relation to the skills required to assess a website’s relevance for a school project. Indeed, existing research suggests otherwise. When observing students interacting with text obtained from an Internet search, Sutherland-Smith (2002) reported that many were easily frustrated when not instantly gratified in their search for immediate answers and appeared to adopt a ‘snatch and grab philosophy’ (p. 664). Similarly, Eagleton, Guinee and Langlais (2003) observed middle-school students often making ‘hasty, random choices with little thought and evaluation’ (p. 30).
Such research observes shallow, random and often passive interactions with text, which raise significant questions about what digital natives can actually do as they engage with and make meaning from such technology. As noted by Lorenzo and Dziuban (2006), concerns over students’ lack of critical thinking when using Internet-based information sources imply that ‘students aren’t as net savvy as we might have assumed’ (p. 2). This suggests that students’ everyday technology practices may not be directly applicable to academic tasks, and so education has a vitally important role in fostering information literacies that will support learning.
Again, this is a paper I could quote bits from all day – so here are a couple of paragraphs from towards the end that summarise their (and my) take on the digital natives:
Neither dismissive scepticism nor uncritical advocacy enable understanding of whether the phenomenon of digital natives is significant and in what ways education might need to change to accommodate it. As we have discussed in this paper, research is beginning to expose arguments about digital natives to critical enquiry, but much more needs to be done. Close scrutiny of the assumptions underlying the digital natives notion reveals avenues of inquiry that will inform the debate. Such understanding and evidence are necessary precursors to change.
The claim that there is a distinctive new generation of students in possession of sophisticated technology skills and with learning preferences for which education is not equipped to support has excited much recent attention. Proponents arguing that education must change dramatically to cater for the needs of these digital natives have sparked an academic form of a ‘moral panic’ using extreme arguments that have lacked empirical evidence.
Finally, after posting the prior summary of Kirschner and De Bruyckere’s paper, I searched the hashtag #digitalnatives on Twitter and – self-promotingly – replied to some of the original tweeters with a link to the paper (interestingly, quite a few #digitalnatives tweets were links to discussions of the Kirschner/De Bruyckere paper). Some were very receptive, but others were markedly defensive. Obviously a total stranger coming along and pedantically pointing out that your hashtag is about something that doesn’t exist may not be the most polite way of interacting on Twitter – but also quite a lot of us are quite attached to the myth of the digital native.
From Mike Caulfield, a piece that reminds me of the adage Garbage In, Garbage Out:
For many years, the underlying thesis of the tech world has been that there is too much information and therefore we need technology to surface the best information. In the mid 2000s, that technology was pitched as Web 2.0. Nowadays, the solution is supposedly AI.
I’m increasingly convinced, however, that our problem is not information overload but information underload. We suffer not because there is just too much good information out there to process, but because most information out there is low quality slapdash takes on low quality research, endlessly pinging around the spin-o-sphere.
Take, for instance, the latest news on Watson. Watson, you might remember, was IBM’s former AI-based Jeopardy winner that was going to go from “Who is David McCullough?” to curing cancer.
So how has this worked out? Four years later, Watson has yet to treat a patient. It’s hit a roadblock with some changes in backend records systems. And most importantly, it can’t figure out how to treat cancer because we don’t currently have enough good information on how to treat cancer:
“IBM spun a story about how Watson could improve cancer treatment that was superficially plausible – there are thousands of research papers published every year and no doctor can read them all,” said David Howard, a faculty member in the Department of Health Policy and Management at Emory University, via email. “However, the problem is not that there is too much information, but rather there is too little. Only a handful of published articles are high-quality, randomized trials. In many cases, oncologists have to choose between drugs that have never been directly compared in a randomized trial.”
This is not just the case with cancer, of course. You’ve heard about the reproducibility crisis, right? Most published research findings are false. And they are false for a number of reasons, but primary reasons include that there are no incentives for researchers to check the research, that data is not shared, and that publications aren’t particularly interested in publishing boring findings. The push to commercialize university research has also corrupted expertise, putting a thumb on the scale for anything universities can license or monetize.
In other words, there’s not enough information out there, and what’s out there is generally worse than it should be.
You can find this pattern in less dramatic areas as well — in fact, almost any place that you’re told big data and analytics will save us. Take Netflix as an example. Endless thinkpieces have been written about the Netflix matching algorithm, but for many years that algorithm could only match you with the equivalent of the films in the Walmart bargain bin, because Netflix had a matching algorithm but nothing worth watching. (Are you starting to see the pattern here?)
In this case at least, the story has a happy ending. Since Netflix is a business and needs to survive, they decided not to pour the majority of their money into newer algorithms to better match people with the version of Big Momma’s House they would hate the least. Instead, they poured their money into making and obtaining things people actually wanted to watch, and as a result Netflix is actually useful now. But if you stick with Netflix or Amazon Prime today it’s more likely because you are hooked on something they created than that you are sold on the strength of their recommendation engine.
Let’s belabor the point: let’s talk about Big Data in education. It’s easy to pick on MOOCs, but remember that the big value proposition of MOOCs was that with millions of students we would finally spot patterns that would allow us to supercharge learning. Recommendation engines would parse these patterns, and… well, what? Do we have a bunch of superb educational content just waiting in the wings that I don’t know about? Do we even have decent educational research that can conclusively direct people to solutions? If the world of cancer research is compromised, the world of educational research is a control group wasteland.
In 2010, Dartmouth College neuroscientist Craig Bennett and his colleagues subjected an experimental subject to functional magnetic resonance imaging. The subject was shown ‘a series of photographs with human individuals in social situations with a specified emotional valence, either socially inclusive or socially exclusive’. The subject was asked to determine which emotion the individuals in the photographs were experiencing. The subject was found to have engaged in perspective-taking at the p<0.001 level of significance. This is perhaps surprising, as the subject was a dead salmon.
…
In 2007, Colorado State University’s McCabe and Castel published research indicating that undergraduates, presented with brief articles summarising fictional neuroscience research (which made claims unsupported by the fictional evidence presented), rated articles that were illustrated by brain imaging as more scientifically credible than those illustrated by bar graphs, a topographical map of brain activation, or no image at all. Taken with the Bennett paper, this illustrates one of the perils of neuroimaging research, especially when it enters the wider media; the social credibility is high, despite the methodological challenges.
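The methodological challenge at the heart of the salmon study is the multiple comparisons problem: a whole-brain fMRI analysis runs a separate statistical test at each of tens of thousands of voxels, so an uncorrected p<0.001 threshold will flag some voxels even in pure noise. Here is a minimal sketch of that effect in Python – toy numbers of my own, not Bennett’s data or analysis:

```python
import numpy as np

# A toy illustration (my own numbers, not Bennett et al.'s data or analysis):
# give every "voxel" a test statistic drawn from pure noise, then apply an
# uncorrected p < 0.001 threshold and count how many look "active".
rng = np.random.default_rng(42)

n_voxels = 60_000    # roughly the order of magnitude of a whole-brain analysis
z_threshold = 3.29   # |z| corresponding to a two-tailed p of about 0.001

# Null scenario: no real activation anywhere, every statistic is pure noise
z_stats = rng.standard_normal(n_voxels)

false_positives = int(np.sum(np.abs(z_stats) > z_threshold))
expected = 0.001 * n_voxels

print(f"'Active' voxels in pure noise: {false_positives} (expected ~{expected:.0f})")
# Even with no signal at all, dozens of voxels clear the uncorrected threshold -
# the dead salmon effect in miniature, and the reason corrected thresholds
# (family-wise error or false discovery rate) matter in neuroimaging.
```

Run it and you should get something in the region of sixty ‘significant’ voxels from nothing at all; correct for multiple comparisons, as Bennett and colleagues argued one should, and the salmon goes quiet.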
I am becoming quite addicted to Leandro Herrero’s Daily Thoughts and here is another. One could not accuse Herrero of pulling his punches here:
I have talked a lot in the past about the Neurobabble Fallacy. I know this makes many people uncomfortable. I have friends and family in the Neuro-something business. There is neuro-marketing, neuro-leadership and neuro-lots-of-things. Some of that stuff is legitimate. For example, understanding how cognitive systems react to signals and applying this to advertising. If you want to call that neuro-marketing, so be it. But beyond those prosaic aims, there is a whole industry of neuro-anything that aggressively attempts to legitimize itself by bringing in pop-neurosciences to dinner every day.
In case anyone doubts his credentials:
Do I have any qualifications to have an opinion on these bridges too far? In my previous professional life I was a clinical psychiatrist with special interest in psychopharmacology. I used to teach that stuff in the University. I then did a few years in R&D in pharmaceuticals. I then left those territories to run our Organizational Architecture company, The Chalfont Project. I have some ideas about brains, and some about leadership and organizations. I insist, let both sides have a good cup of tea together, but when the cup of tea is done, go back to work to your separate offices.
It is ironic that otherwise hard-headed sceptics tend to be transfixed by anything “neuro-” – and Leandro Herrero’s trenchant words are just what the world of neurobabble needs. In these days of occasionally blind celebration of trans-, multi- and poly-disciplinary approaches, the “separate offices” approach is bracingly counter-cultural…
In 2012 I had beta tested a couple of apps in the general health field (I won’t go into any more specifics), none of which seemed clinically useful. My interest in healthcare technology had flowed largely from my interest in technology in medical education. Versel’s column, and the comments attributed to “Cynical” in the follow-up column by Brian Dolan, struck a chord. I also found they transcended the often labyrinthine structures of US healthcare.
The key paragraph of Versel’s original column was this:
What those projects all have in common is that they never figured out some of the basic realities of healthcare. Fitness and healthcare are distinct markets. The vast majority of healthcare spending comes not from workout freaks and the worried well, but from chronic diseases and acute care. Sure, you can prevent a lot of future ailments by promoting active lifestyles today, but you might not see a return on investment for decades.
…but an awful lot of it is worth quoting:
Pardon my skepticism, but hasn’t everyone peddling a DTC health tool focused on user engagement? Isn’t that the point of all the gamification apps, widgets and gizmos?
I never was able to find anything unique about Massive Health, other than its Massive Hype. It had a high-minded business name, a Silicon Valley rock star on board – namely former Mozilla Firefox creative lead Aza Raskin – and a lot of buzz. But no real breakthroughs or much in the way of actual products.
….
Another problem is that Massive Health, Google Health, Revolution Health and Keas never came to grips with the fact that healthcare is unlike any other industry.
In the case of Google and every other “untethered” personal health record out there, it didn’t fit physician workflow. That’s why I was disheartened to learn this week that one of the first two development partners for Walgreens’ new API for prescription refills is a PHR startup called Healthspek. I hate to say it, but that is bound to fail unless Walgreens finds a way to populate Healthspek records with pharmacy and Take Care Health System clinic data.
Predictably enough, there was a strong response to Versel’s column. Here is Dr Betsy Bennet:
As a health psychologist with a lot of years in pharma and healthcare, I am continually frustrated with the hype that accompanies most “health apps”. Not everyone enjoys computer games, not everyone wants to “share” the issues they’re ashamed of with their “social network”, not everyone is interested in being a “quantified self”. This is not to say that digital health is futile or a bad idea. But if we took the time to understand why so many doctors hate EHRs and patients are not interested in paying to “manage their health information” (What does that mean, anyway?) we would come a long way towards finding digital interventions that people actually want to use.
The most trenchant comment (particularly point 1) was from “Cynical”:
Well written. This is one of the few columns (or rants) that actually understands the reality of healthcare and digital health (attending any health care conference will also highlight this divide). What I am finding is twofold:
1. The vast majority of these DTC products are created by people who have had success in other areas of “digital” – and therefore they build what they know – consumer facing apps / websites that just happen to be focused in health. They think that healthcare is huge ($$), broken, and therefore easily fixed using the same principles applied to music, banking, or finding a movie. But they have zero understanding of the “business of healthcare”, and as a result have no ability to actually sell their products into the health care industry – one of the slowest moving, convoluted, and cumbersome industries in the world.
2. Almost none of these products have any clinical knowledge closely integrated — many have a doctor (entrepreneur) on the “advisory board”, but in most cases there are no actual practicing physicians involved (physician founders are often still in med school, only practiced for a limited time, or never at all). This results in two problems – one of which the author notes – no understanding of workflow; the other being no real clinical efficacy for the product — meaning, they do not actually improve health, improve efficiency, or lower cost. Any physician will be able to lament the issues of self-reported data…
Instead of hanging out at gyms or restaurants building apps for diets or food I would recommend digital health entrepreneurs hang out in any casino in America around 1pm any day of the week – that is your audience. And until your product tests well with that group, you have no real shot.
This perspective from Jim Bloedau is also worth quoting, given how much of the rhetoric on healthcare and technology is focused on the dysfunctionality of the current system:
Who likes consuming healthcare? Nobody. How many providers have you heard say they wish they could spend more time in the office? Never. Because of this, the industry’s growth has been predicated on the idea that somebody else will do it all for me – employers will provide insurance and pay for it, doctors will provide care. This is also the driver of the traditional business model for healthcare that many pundits label as a “dysfunctional healthcare system.” Actually, the business of healthcare has been optimized as it has been designed – as a volume based business and is working very well.
Coming up to four years on, and from my own point of view having had further immersion in the health IT world, how does it stack up? Well, for one thing I seem not to hear the word “gamification” quite as much. There seems to be a realisation that having “clinical knowledge closely integrated” is not a nice-to-have but an absolute sine qua non. Within the CCIO group, and from my experience of the CCIO Summer school, there certainly isn’t a sense that healthcare is going to be “easily fixed” by technology. Bob Wachter’s book and report also seem to have tempered much hype.
Yet an awful lot of Versel’s original critique, and of the responses he provoked, still rings true about the wider culture and discussion of healthcare and technology – not in CCIO circles, in my experience, but elsewhere. There is still often a rather inchoate assumption that the likes of the Fitbit will in some sense transform things. As Cynical states above, self-reported data is in the majority of cases something there are issues with (there are exceptions, such as mood and sleep diaries and Early Warning Signals systems in bipolar disorder, but there too simplicity and judiciousness are key).
Re-reading his blog post I am also struck by his lede, which was that mobile tech has enabled what could be described as the Axis of Sedentary to a far greater degree than it has enabled the forces of exercise and healthy eating. Versel graciously spent some time on the phone with me prior to the EuroChallenges workshop linked to above and provided me with very many further insights. I would be interested to know what he makes of the scene outlined in his column now.