The Lure of "Common Sense" Nudges: Blood Donor Edition

By Allison Daminger and Jamie Kimmel (ideas42)

Quick: what happened to the blood you donated at last month’s office blood drive? If you’re like most people, this question is impossible to answer with any specificity. It went to a blood bank, you likely assume, and from there to a hospital where someone awaited a transfusion. But how long it took to get there, or even whether your donation was accepted in the first place, is anyone’s guess.

Or is it? A blood donation center in Stockholm, Sweden, recently began a novel campaign to let people know when their blood has been used. The program sends two text messages to donors: the first comes when their blood has been accepted, and the second when it has been given to an individual recipient. (See below for an image of what this looks like, in Swedish, of course!)

This example has gotten a lot of traction on social media in the weeks since it was announced. And from what we’ve seen, most people’s reaction has more or less boiled down to, “What a smart idea!” Some of that excitement has even come from within the behavioral science community.

The presumed effects are extrapolated from well-known behavioral principles. For instance, we know that making the consequences of an action salient tends to change behavior, whether that means buying fewer sodas or recycling more waste. We know, too, that people have trouble remembering to do things in the future (see: prospective memory) and often respond well to timely reminders.

But when we began looking for experimental evidence to estimate the likely effects of the Swedish text message campaign on donation rates, we came up short. There simply haven’t been many evaluations of similar programs. And as a result, we can’t say for sure that Sweden’s new program will increase rates of blood donation.

But that’s not an argument for pulling the program. Instead, it’s an argument for systematically measuring its effects, ideally by comparing blood donation rates among those who received the texts to rates among those who didn’t.
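To make the comparison concrete: one common way to evaluate a program like this is to test whether the return-donation rate differs between donors who received the texts and donors who did not. Below is a minimal sketch of such a comparison in Python, a standard two-proportion z-test. The donor counts are entirely hypothetical, chosen only to illustrate the calculation, and are not from the Swedish program.

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two rates."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled rate under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 1,200 of 5,000 texted donors returned to donate,
# versus 1,050 of 5,000 donors in a no-text comparison group.
z, p = two_proportion_z_test(1200, 5000, 1050, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A randomized assignment of donors to the two groups would make this comparison causal; comparing self-selected groups would only be suggestive.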

We advocate for evaluation because “common sense” initiatives like this one often end up having more complicated results than we initially expect. If it turns out that the amount of time between donation and use of blood is a few weeks or more, for instance, donors might conclude that the supply of blood is much higher than they previously believed (and feel less motivated to donate in the future). If a donor never receives a notification that her blood was used (because it was rejected, for example), she may be similarly disinclined to donate again: why go through the hassle and pain if the blood will just end up in the trash?

Neither of these obstacles would be insurmountable. The program designers might, for example, correct misperceptions about the blood supply by adding a text about current supply levels whenever the donation-to-use time exceeds a certain threshold. Or the program could drop the acceptance text altogether.

Of course, just as we are uncertain about the effects of the current texting system, we are also unsure about the effects of these proposals. The bottom line is that evidence-based programs and policies depend on an evidence base, and building that base takes time. So while the blood donation texts appear to be a common-sense (and behaviorally informed) solution to the problem of low blood donation rates, we won’t know for sure until the results are in.